America’s tax system is based on taxpayers voluntarily filing tax returns that report the full amount of tax owed and paying any taxes that are due. IRS has four operating divisions.

The Wage and Investment Division (W&I) serves the vast number of individual taxpayers, including those who file jointly and have only wage and investment income.

The Small Business/Self-Employed Division (SB/SE) serves about 45 million small businesses, individual taxpayers with rental properties and farming businesses, and individuals investing in businesses, such as partnerships. SB/SE also serves corporations and partnerships with less than $10 million in assets and provides field collection services for the other three IRS divisions.

The Large and Mid-Size Business Division (LMSB) serves corporations, subchapter S corporations, and partnerships with assets greater than $10 million. These businesses have large numbers of employees, have complicated tax and accounting issues, and often conduct business globally.

The Tax-Exempt and Government Entities Division (TE/GE) serves three very distinct customer segments. Employee Plans serves private and public retirement plan customers. Exempt Organizations serves customers that are exempt from income taxes, such as charities, civic organizations, and business leagues. Government Entities serves customers from federal, state, and local governments; Indian tribal governments; and tax-exempt bond issuers.

These divisions are responsible for providing a full range of services to these taxpayers. Typically, these services include assisting taxpayers with filing returns, processing those returns and maintaining taxpayers’ accounts, and examining suspected inaccurate returns. Taxpayers who are assessed additional tax and penalties or who have a pending enforcement action to collect delinquent taxes, such as a proposed levy or lien, have the right to request a hearing through an administrative appeal before the assessment or collection actions are final. 
IRS notifies the taxpayers in writing of these pending actions and explains their appeal rights. Generally, the taxpayer has 30 days from this notification to request an appeal. Appeals’ mission is to independently resolve tax disputes prior to litigation on a basis which is fair and impartial to both the government and the taxpayer. To assure their independence, Appeals’ staff cannot discuss substantive case issues with compliance staff unless taxpayers or their representative are present. Generally, compliance staff does not directly participate in an appeal or learn about the resulting decision. To identify whether the proposed compliance action should be sustained, Appeals staff review the case file prepared by IRS’s compliance program and determine whether that evidence demonstrates that the taxpayer and compliance staff have followed the applicable law, regulation, and IRS procedure. If requested, the taxpayer may meet with Appeals staff and provide additional evidence to support their appeal. To close an examination case, Appeals may (1) agree with the examination program and fully sustain its recommended assessment, (2) disagree and reduce the recommended assessment to partially sustain the assessment, or (3) fully concede to the taxpayer’s position and not sustain the assessment. For a collection case, Appeals may (1) agree with and sustain the proposed enforcement action or (2) not sustain the proposed enforcement action by modifying the proposed action (e.g., propose an installment agreement rather than a levy), deferring collection, or fully conceding to the taxpayer’s position. If the taxpayer and IRS cannot reach agreement on the outcome of the case through the Appeals process, the taxpayer may have the case reviewed by the U.S. Tax Court, U.S. Court of Federal Claims, or a U.S. district court. 
In line with its mission to resolve cases prior to litigation, Appeals is also authorized to review the facts of the case in light of the hazards that would exist if the case were litigated. Appeals is the only IRS organization authorized to consider hazards of litigation when deciding whether to allow taxes and penalties. This means that Appeals may recommend a fair and impartial resolution somewhere between fully sustaining or fully conceding the examiner’s proposal that reflects the probable result in the event of litigation. If taxpayers do not reach agreement with IRS examiners on the proposed deficiency, or if they choose not to contact Appeals, IRS will issue a notice of deficiency. This notice describes the deficiency and states that the taxpayer has 90 days to file a petition with the court for a redetermination of the deficiency. However, even though Appeals may be initially bypassed, it still has an opportunity to settle these cases. Under IRS procedures designed to encourage resolution of cases at the lowest possible level, the attorney from the local IRS District Counsel’s office handling the court case is required to refer the case to Appeals for possible settlement before it is scheduled for trial. Figure 1 summarizes IRS’s appeals system. Appeals’ workload is organized into eight “workstreams” that reflect similarities in the case workload rather than which of IRS’s four operating divisions initiated the case. Two of the eight workstreams relate to collection issues and generally originate in two of IRS’s four operating divisions responsible for collection issues (Collection Due Process and Offer-in-Compromise workstreams). Three of the eight workstreams include a wide range of generally smaller examination and returns-processing-related penalty cases (Innocent Spouse, Penalty Appeals, and Exam/TEGE). 
The three other workstreams (Coordinated Industry Case, Industry Case, and Other) cover a small number of complex examinations from IRS’s LMSB programs as well as cases that do not fit into other workstreams. Appendix II includes definitions of Appeals workstreams, identifies the related operating divisions for each workstream, and lists the number of cases closed in each workstream during fiscal year 2004. Results-oriented organizations consistently strive to improve their performance through strategic planning. As part of this approach, agencies set objectives and measure performance to evaluate whether performance has improved. Specifically, goals or objectives are the results that a program is expected to achieve; performance measures are selected after goals or objectives are developed, are logically related to them, and are used to gauge progress toward them. Other federal agencies have previously decided that developing and sharing information on the results of appeals may help them measure performance or at least serve as an indicator of whether their decisions are legally correct. For example, the Merit Systems Protection Board, an independent quasi-judicial agency established to protect merit systems in the federal workplace, has set a performance goal of maintaining or reducing its low percentage of appealed decisions that are reversed or sent back to board judges for a new decision. The board’s performance plan for fiscal year 2005 contains an array of case-specific data to measure this performance goal. Appeals overturned about 41 percent of the fiscal year 2004 cases we reviewed, and in about half of those cases Appeals disagreed with the way compliance programs applied the law or regulations. This suggests that providing information on Appeals decisions could help compliance program managers improve case results by fostering more proper and consistent case decisions. 
However, finding the source of possible inconsistencies will require gathering additional information and analyzing it systematically. Such improved decision making can benefit compliance programs, Appeals, and taxpayers. Based on our case review, for cases closed in fiscal year 2004, we estimate that Appeals did not sustain about 41 percent of compliance cases (about 42,075 of the 102,623) that year. We identified six principal reasons for those nonsustentions. As shown in table 1, we estimate that Appeals did not sustain compliance decisions in 52 percent of the cases not sustained (21,879 cases) at least in part because Appeals disagreed with compliance staff’s application of tax law or IRS regulations. Providing feedback on such disagreements could help compliance managers improve case results by taking action to foster the proper and consistent application of tax laws and regulations. For example, compliance managers could assess whether guidance or manuals, supervision, quality control, or other management tools should be revised to ensure that cases are properly closed. Identifying more specifically which laws or regulations were applied differently by the compliance programs would require an investment to gather and analyze additional data. For instance, in table 1 we identified the handling of a state tax refund as an example of differing applications of tax laws and regulations. To determine whether this is a common problem or an isolated instance, officials would have to investigate the issue by, for example, drawing a random sample of cases or questioning first-line managers and staff. Because these differing applications span a host of laws and regulations across IRS’s compliance programs, any single corrective action may affect only a relatively small number of cases. In complex cases, Appeals and the compliance managers may need to work together to develop a mutual understanding of how laws and regulations should be applied. 
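The population arithmetic behind these estimates can be sketched as follows. This is an illustrative reconstruction using the rounded figures cited in the report, not GAO's actual estimation methodology, which rested on a statistical sample of closed cases.

```python
# Illustrative sketch, not GAO's methodology: projecting nonsustention
# estimates onto the fiscal year 2004 case population, using the rounded
# figures cited in the report.

POPULATION = 102_623       # compliance cases closed in fiscal year 2004
NONSUSTAINED_RATE = 0.41   # estimated share of cases Appeals did not sustain

# Estimated reason shares among nonsustained cases; a case can have more
# than one reason, so shares need not sum to 1.
reason_shares = {
    "disagreed with application of tax law or regulations": 0.52,
    "taxpayer provided additional information to Appeals": 0.44,
}

nonsustained = POPULATION * NONSUSTAINED_RATE
print(f"estimated nonsustained cases: ~{round(nonsustained):,}")
for reason, share in reason_shares.items():
    print(f"  {reason}: ~{round(nonsustained * share):,}")
```

Applying the 41 percent rate to the 102,623-case population reproduces the report's estimate of about 42,075 nonsustained cases, and the 52 percent share reproduces the 21,879-case figure for law-or-regulation disagreements.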
As also shown in table 1, we estimate that Appeals did not sustain compliance decisions in 44 percent of the cases because the taxpayer provided additional information to Appeals. For cases in this category, officials would need to investigate whether compliance staff could have done more to obtain the information needed to resolve the tax issue before the case was appealed. For example, compliance managers might assess whether staff clearly articulated the type and extent of information needed, gave the taxpayer sufficient time to respond, or received the information but did not use it appropriately to resolve the case. Similar data gathering and analysis would be needed for the other reasons we identified for Appeals not sustaining cases in order for the information to be useful in improving compliance’s decision making. For example, for cases where Appeals had to perform original audit work or significant rework, compliance managers would need to identify why their staff did not perform the necessary work while the case was still their responsibility. For cases where Appeals accepted a collection alternative, compliance managers might assess whether it was because the taxpayer had not requested an alternative, the taxpayer’s financial circumstances had changed since compliance worked on the case, or a request for an alternative was inappropriately rejected by compliance staff. For cases where taxpayers did not respond to compliance, compliance managers might assess whether staff had made sufficient attempts to contact the taxpayer. As shown in table 2, the appeal rate--the percentage of cases appealed--varies across Appeals’ workstreams from 29 percent for LMSB’s Coordinated Industry Case (CIC) program cases to one-tenth of 1 percent for cases in the Penalty Appeals and Other workstreams. Managers of programs with high appeal rates told us that they would benefit from Appeals feedback information in improving decision making. 
For example, with relatively high appeal rates and complex tax issues frequently considered in both the Industry Case (IC) and CIC programs, LMSB managers believe that Appeals case result information is important for managing their programs to update policies and procedures, modify or assess new training needs, or identify needed changes in the tax law. Similarly, Offer-in-Compromise program managers say they have benefited from working with Appeals staff on studies analyzing why cases were not sustained by Appeals. One study indicated that compliance managers and Appeals needed to reevaluate or reinforce some of their policies as well as be more consistent in following established procedures for assessing financial information, such as calculating transportation expenses, establishing the value of cars, and estimating future income. Managers of programs with low appeal rates may not see as much benefit in obtaining feedback from Appeals. With an appeal rate of less than one-half of 1 percent, managers in W&I, the source of many cases in the Exam/TEGE workstream, explained that they had limited interest in devoting resources to analyzing Appeals feedback information, although they would review any analysis Appeals provided to them. Managers told us that, given the challenges facing W&I, they needed to focus resources on other issues. However, although managers might not see much direct benefit for their programs, reducing the appeal rate for compliance programs could benefit Appeals. As shown in table 2, the Exam/TEGE, Penalty Appeals, and Other workstreams have appeal rates of less than 1 percent, but the cases from these three workstreams make up about half of Appeals’ workload. In addition, the cases that are appealed are generally not fully sustained. The percentage of cases in our sample that were not sustained ranged from 73 percent for Exam/TEGE to 56 percent for the Other workstream. 
From Appeals’ perspective, improving case results in these workstreams could represent a target of opportunity for reducing its case load and increasing its efficiency. Analysis of our sample found that across all workstreams, Appeals cases that are fully sustained require about half of the staff hours of cases that are not fully sustained. If providing feedback to compliance programs improved their decision making, taxpayers would benefit as well. For example, if compliance programs used feedback information to improve their understanding of how to apply tax laws and regulations, they could reduce the number of taxpayers requesting an appeal and therefore resolve cases more quickly and with more uniform decisions. Further, since Appeals managers said some taxpayers decide not to pursue an appeal even though they disagree with a compliance decision, more consistent application of the tax laws or regulations could also improve the fairness and accuracy of their outcomes. The Exam/TEGE workstream can be used as a hypothetical example of the potential effect of these benefits. If the quality of compliance case decisions were to improve and as a result the percentage of cases fully sustained in Appeals were to increase from 28 percent to 38 percent, Appeals would save an estimated 7 staff years. Another potential cost saving would result if fewer taxpayers appealed because the quality of compliance case decisions improved. For example, if the number of cases from the Exam/TEGE workstream that are appealed fell by 10 percent, Appeals would save an estimated 17 staff years. Identifying which compliance programs would benefit most from feedback is important given that Appeals hears a wide variety of cases, the cases are spread across the operating divisions, and Appeals does not fully sustain cases for a variety of reasons. 
This dispersion of cases means that in some situations the costs IRS would incur to analyze Appeals data and devise and implement improvements in operations may not be justified given how few cases could be affected. When analyzing our case sample, we found that overall (1) about half of all not fully sustained cases cited either the application of laws and regulations or additional information as the reason for nonsustention and (2) certain workstreams have significantly higher nonsustention rates than others. As shown in table 3, by considering these two facts in combination, we found that two workstreams, Penalty Appeal and Exam/TEGE, had a large percentage of cases that were not sustained for these two reasons. Other information already available might also be used to identify the most promising areas in which to conduct feedback projects. For example, those cases that are most costly for Appeals to work on, measured for instance by staff hours per case, may yield the most savings to Appeals if the cases could be resolved in the compliance programs without an appeal being made. Appeals and compliance programs have been selecting their projects more on the basis of manager judgment than through data analysis of the kind described above. As discussed earlier, Appeals and compliance program managers will need to sort through possible reasons why some areas appear to have high levels of nonsustained cases. This may require several iterations of data analysis, discussion, and manager judgment. Once officials have identified the areas with the greatest potential for improvement, Appeals and compliance programs can explore low-cost avenues for using feedback information. For example, in two of the three workstreams with the highest percentage of appealed cases, Appeals and compliance programs have completed some projects based on Appeals case results. 
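The staff-year arithmetic discussed above can be sketched in a few lines. Because fully sustained cases take roughly half the staff hours of other cases, better compliance decisions save Appeals hours in two ways: by shifting the case mix toward fully sustained cases and by shrinking case volume. Every parameter value below is a hypothetical assumption for illustration, not GAO's actual input, so the outputs will not match the report's 7- and 17-staff-year estimates.

```python
# Hypothetical illustration of the two savings effects described in the
# report; all numeric parameters are assumptions, not GAO's figures.

APPEALED_CASES = 50_000        # assumed annual appealed cases in a workstream
HOURS_SUSTAINED = 5.0          # assumed hours for a fully sustained case
HOURS_NOT_SUSTAINED = 10.0     # about twice as many, per the report's finding
HOURS_PER_STAFF_YEAR = 2_080   # conventional full-time figure; an assumption

def total_hours(cases: float, sustained_rate: float) -> float:
    """Total Appeals staff hours for a caseload at a given fully-sustained rate."""
    sustained = cases * sustained_rate
    return sustained * HOURS_SUSTAINED + (cases - sustained) * HOURS_NOT_SUSTAINED

# Effect 1 (case mix): the fully-sustained rate rises from 28% to 38%.
mix_saving = total_hours(APPEALED_CASES, 0.28) - total_hours(APPEALED_CASES, 0.38)

# Effect 2 (case volume): 10 percent fewer cases are appealed at all.
volume_saving = total_hours(APPEALED_CASES, 0.28) - total_hours(APPEALED_CASES * 0.9, 0.28)

print(f"mix effect:    ~{mix_saving / HOURS_PER_STAFF_YEAR:.1f} staff years")
print(f"volume effect: ~{volume_saving / HOURS_PER_STAFF_YEAR:.1f} staff years")
```

The same skeleton could be rerun per workstream with actual caseload and hours-per-case data to rank workstreams by potential savings, which is the kind of data-driven targeting the report suggests.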
The joint study on Offer-in-Compromise cases not sustained by Appeals was conducted by a small team of compliance and Appeals staff and involved the review of 113 cases in 1 week. In contrast, Appeals and LMSB concluded that jointly reviewing fully conceded issues in the CIC program was too expensive because these cases can involve numerous complex issues. Rather, Appeals has started to provide to LMSB all Appeals Case Memorandums (ACM) as a low-cost solution for providing the information to target possible areas needing improvement. However, Appeals has not yet explored potential avenues for using feedback information in other large workstreams, such as Penalty Appeals. Appeals has taken several steps to launch and begin expanding the feedback project. As Appeals and compliance managers gain experience in analyzing and using feedback information, Appeals, in partnership with compliance managers, can build upon those efforts by identifying the additional feedback information that needs to be shared and further developing results-oriented objectives. In addition, potentially useful feedback data contain errors that undermine their usefulness. Appeals has taken several steps to launch and expand the feedback project. For example, officials are sending ACMs to certain compliance programs. During 2005, Appeals started to send ACMs to LMSB Industry Case and Coordinated Industry Case programs, W&I for the Innocent Spouse and EITC programs, SB/SE for the Collection Due Process, Offer-in-Compromise, and International Examination programs, and TE/GE for some Exempt Organizations cases. The Collection Due Process program also receives some summary-level information on whether the taxpayer and Appeals agreed on the outcome of an appeal as well as the Appeals inventory level. Appeals and the compliance programs are working together to determine which additional programs should receive specific feedback information. 
In addition, Appeals and compliance programs’ staff meet regularly through advisory board meetings. The advisory boards were created to focus on important cross-functional issues, solve problems, identify new issues arising in the compliance programs, and generally maintain close working relationships. For example, as previously discussed, Appeals and the SB/SE collection staff jointly worked on a review of Offer-in-Compromise cases to determine why Appeals accepted some offers that the compliance program rejected and are considering similar efforts with other compliance programs. Generally, decisions on what information to initially share with the compliance programs have grown out of discussions between Appeals and compliance staff and reflect their best judgment about the information that likely will help the compliance programs improve case results. Appeals and the compliance programs are still determining what additional feedback information should be shared. Appeals, in coordination with the compliance programs, is revising its case closing documents to provide additional information describing the basis for the resolution of a case. For example, Appeals is working with the compliance programs to provide information on whether additional information was considered by Appeals, cases were closed based on hazards of litigation, or taxpayers did not respond or delayed their responses. Compliance program managers told us that providing more detailed information tailored to their needs will help them to improve their results. Appeals plans to implement a revised Appeals case closing system during 2006. Appeals managers believe, and we agree, that the compliance programs will likely identify additional information needs in the future as they begin to analyze and use the information. For example, some compliance managers have told us that information on sustention rates would be useful. However, Appeals has no plans to develop this information. 
Although Appeals has worked with the compliance programs on many aspects of the feedback project, Appeals developed the objectives and performance measures for the feedback project with relatively little input from the compliance programs. After initially developing the objectives and measures, Appeals distributed them to compliance program representatives for comment but received little response. Appeals officials therefore concluded the compliance programs agreed with the objectives and measures. Best practices in strategic planning, of which setting objectives is a part, call for the involvement of stakeholders. In the case of the feedback project, involving the compliance programs in establishing the project objectives is particularly important because the programs themselves must play active roles in the project to make any changes that will improve their case results. Appeals officials acknowledge that mutually agreed upon objectives and measures would increase the likelihood that compliance programs would use the feedback information provided. However, gaining consensus may not be easy. The following illustrates the importance of involving the compliance programs in these decisions and the potential difficulties that could arise. Appeals could adopt an objective for the feedback project of improving the sustention rate for compliance program cases that go to Appeals. That is, if the quality of compliance decisions is improved through feedback of Appeals case results, more appealed cases should be upheld. However, some of the compliance program officials we interviewed do not want improvement in the sustention rate to be an objective for a variety of reasons. For example, officials note that Appeals can change a case result for reasons that are out of their control. As discussed before, Appeals is authorized to close cases based on hazards of litigation and compliance programs are not. 
As a result, compliance managers are concerned that including cases closed based on hazards of litigation as part of a sustention rate would be unfair. Other managers do not agree that Appeals always makes the correct decisions on compliance cases. Thus, the active involvement of compliance program officials in the selection of objectives would be important to determining the strengths and weaknesses of potential objectives. Further, the involvement of compliance programs in establishing objectives and project measures may better ensure that the feedback project is focusing on desired results. As defined by Appeals in April 2005, the objectives of the case feedback project are to (1) build strong relationships between Appeals and the operating divisions, (2) capture and share trend data, (3) analyze trend data and provide meaningful commentary to the operating divisions and functions, and (4) influence operating division policy and procedure. Although these objectives indicate some of the activities that are integral to feedback sharing and a desired outcome--influence on operating divisions’ policies and procedures--involving the operating divisions in considering program objectives would provide an opportunity to build on these objectives to more fully define the results intended for the feedback project. The Commissioner did not specify the benefits that he thought should result from sharing of Appeals case information with the operating divisions and their compliance programs. However, as discussed earlier, sharing this information has the potential to improve the operations of the divisions and, consequently, the quality of their case decisions, potentially increasing the case sustention rate and taxpayer satisfaction with the Appeals process while also decreasing the time to complete an appeal. 
In addition, sharing information may also improve Appeals’ decision making by, for example, clarifying IRS’s interpretation of new or particularly complex tax laws so that both Appeals and compliance managers apply them consistently. By working with the compliance programs, Appeals would have the opportunity to further refine the project objectives to more specifically identify which of these possible results-oriented improvements are being sought by the project. For example, as mentioned earlier, the Merit Systems Protection Board has set a performance goal of maintaining or reducing its low percentage of appealed decisions that are reversed or sent back to the board. To the extent that new objectives are identified, Appeals and the compliance programs would need to ensure that appropriate performance measures are developed to track progress toward those objectives. When we compared data in Appeals’ Centralized Database System (ACDS) to documentation in closed Appeals case files, we found significant error rates related to data that would be used for a case feedback project. The highest error rates were in fields related to the results of an appeal, such as the revised tax, revised penalty, and case-closing code fields. For example, 14 percent of the cases contained errors in the revised tax field. These errors related to the outcome data that likely would be included as part of any feedback information provided to the compliance programs and would diminish the information’s usefulness to compliance program managers. Further, 12 of 165 cases (7 percent) could not be analyzed because the files could not be located or essential Appeals documents were not available. On the basis of the error rates identified, we reviewed internal controls for processing case results data and identified several internal control weaknesses that may have contributed to inaccurate data in ACDS. 
For example, we were informed by Appeals that some appeals officers, who are responsible for working the taxpayers’ case, did not verify that ACDS data, such as the amount of tax or penalty owed by the taxpayer, were entered into ACDS accurately. Appeals policy requires that the appeals officers verify the key data in ACDS, such as the statute of limitations date, when a case is received. When a case is completed, Appeals procedures require the case manager, who supervises the appeals officer, to review and sign case-closing documents, which include data such as the amount of proposed tax or penalty and case-closing code. The closed-case data are then entered into the ACDS information system by the Appeals Processing Services staff. According to Appeals, once the case is sent to Processing Services for data entry, the appeals officer and case manager generally do not see the case again and do not know whether the closing data have been entered into ACDS accurately. Appeals guidance does not require that the appeals officer or Processing Services staff verify whether the data were accurately entered. According to Processing Services staff, appeals officers may not ensure that case-closing documents are complete. For example, data, such as the amount of revised tax or penalty (the amount of tax or penalty as determined by Appeals) or the closing code, may not be entered on the closing document by the appeals officers. Processing Services staff said that in these cases, they must review the case file to determine the correct closing data and enter that data into ACDS. The staff stated that identifying the correct data may be difficult in complex cases. Other internal controls only partly compensate for the lack of data entry verification. Appeals performs an annual Inventory Validation Listing process for open cases where critical fields in ACDS are verified and errors identified are corrected in ACDS. 
Since only open cases are reviewed, fields with closing data, such as revised tax, revised penalty, and closing code, are not reviewed. These closing fields are critical to the feedback loop process, and without verification inaccurate data could be sent to the compliance programs. Appeals is making efforts to improve the accuracy of the data in ACDS. Appeals, for the first time, completed a data reliability study of ACDS in 2005. This study consisted of a random probability sample of 1,568 Appeals cases where data fields that were considered critical or were used daily were tested. From the study, Appeals identified data accuracy and internal control issues that were consistent with our findings. Appeals found that some fields in ACDS had lower than expected accuracy rates. For example, the revised tax field for the Innocent Spouse workstream had an accuracy rate of 71.9 percent, while the revised penalty field for the Other workstream was 78.1 percent. Appeals also identified that improvements were needed in (1) internal controls including training of Processing Services staff on ACDS input procedures, (2) ACDS data fields with lower than expected accuracy rates, and (3) Appeals’ section of the Internal Revenue Manual, which includes guidelines for standard data accuracy reviews. Appeals has been revising its database and related data entry procedures to improve the accuracy of the data in ACDS. Case-closing documents are being redesigned in a computer-based format so that only data that are appropriate to the case under appeal can be selected, thus reducing the potential for errors. Although Appeals is making efforts to improve the accuracy of the data in ACDS, it has not completed plans to address all of the identified data accuracy issues. Appeals will likely continue to experience data accuracy issues unless it improves its internal controls to verify, on an ongoing basis, the accuracy of case data entered into ACDS. 
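An ongoing verification control of the kind described above could work roughly as follows: periodically draw a sample of closed cases and compare the key ACDS closing fields against the values on the signed case-closing documents. The field names and records below are hypothetical stand-ins, not the actual ACDS schema.

```python
# Hypothetical sketch of an ongoing closed-case verification control.
# Field names and sample records are illustrative, not actual ACDS data.

KEY_CLOSING_FIELDS = ("revised_tax", "revised_penalty", "closing_code")

def field_accuracy_rates(acds_records, closing_documents):
    """For each key closing field, the share of sampled cases where the
    ACDS value matches the value on the signed case-closing document."""
    rates = {}
    for field in KEY_CLOSING_FIELDS:
        matches = sum(
            1
            for case_id, record in acds_records.items()
            if record.get(field) == closing_documents[case_id].get(field)
        )
        rates[field] = matches / len(acds_records)
    return rates

# Two sampled cases; case 2 has a transcription error in revised_tax.
documents = {
    1: {"revised_tax": 500, "revised_penalty": 50, "closing_code": "03"},
    2: {"revised_tax": 0, "revised_penalty": 0, "closing_code": "14"},
}
acds = {
    1: {"revised_tax": 500, "revised_penalty": 50, "closing_code": "03"},
    2: {"revised_tax": 900, "revised_penalty": 0, "closing_code": "14"},
}

print(field_accuracy_rates(acds, documents))
```

Fields whose measured accuracy falls below an agreed threshold could then trigger corrective actions such as staff retraining or document redesign, mirroring the improvements Appeals identified in its 2005 study.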
Using the results of Appeals case outcomes has the potential to improve compliance programs’ case results and service to taxpayers with benefits that could accrue to the divisions, Appeals, and taxpayers. Nevertheless, given the scope of Appeals’ work, careful targeting of investments to use Appeals information is needed to ensure that the benefits will be significant enough to justify the costs IRS incurs to collect and analyze Appeals data and make changes in policies, procedures, or practices based on those analyses. Because relatively few compliance program cases may be affected by the use of some Appeals feedback information, officials need to be judicious in selecting topical areas to study. Opportunities exist to move beyond professional judgment in selecting these areas to a more data-driven approach. However, to maximize the benefits of sharing Appeals information, as intended by the Commissioner, the officials need to better define what the program is intended to achieve and how results will be measured. Appeals and the compliance programs need to enter into an active partnership to develop results-oriented objectives and associated performance measures for the feedback project. Finally, the feedback project must be built on reliable data, which requires that better internal controls be instituted to drive down the error rates in key data that will be provided to the compliance programs. We are making recommendations to the Commissioner of Internal Revenue to ensure that the feedback project reaches its maximum potential in improving case results. 
Specifically, we recommend that the Commissioner direct Appeals (1) in partnership with the compliance programs, to analyze Appeals case-results data, such as the workstream sustention rates, reasons for nonsustention, or staff hours spent per case, to identify areas in which improvements are likely to generate the greatest benefits to the compliance programs, Appeals, and taxpayers; (2) in partnership with the compliance programs, to further investigate the most promising areas and assess whether actions, such as additional guidance or training, are needed to improve the quality of compliance programs’ case decisions; (3) in partnership with the compliance programs, to further develop results-oriented objectives and associated performance measures for the feedback project; and (4) to build upon its current efforts to improve the quality of Appeals information for the feedback project by establishing internal controls to verify, on an ongoing basis, the accuracy of the data entered into Appeals information systems on case results. The Commissioner of Internal Revenue provided written comments on a draft of this report in a March 6, 2006, letter, which is reprinted in appendix V. The Commissioner agreed with our recommendations and said they will help IRS develop a much stronger feedback program. With regard to the first recommendation, the Commissioner said IRS would continue quarterly meetings between the operating divisions and Appeals and report national-level feedback data at least annually to identify specific compliance programs where shared benefits would be realized. We agree that these actions would be a first step toward implementing the recommendation. However, as discussed in the report, more systematic data analysis would help Appeals and the compliance programs identify areas more likely to realize the benefits of using feedback data.
As discussed in the Commissioner’s comments, this may involve reviewing external data, such as the National Taxpayer Advocate’s reports, as well as other data to identify areas that may yield the most savings to IRS if the cases were resolved in the compliance programs without appeals. The analysis of existing data is a necessary step toward tailoring analyses to each of IRS’s compliance programs. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. Copies will be made available to others upon request. This report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9110 or Jonda Van Pelt, Assistant Director, at (415) 904-2186. We can also be reached by e-mail at [email protected] or [email protected], respectively. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Carl Barden, Evan Gilman, Leon Green, Shirley Jones, Laurie King, Ellen Rominger, and Michael Rose. Our objectives were to determine whether (1) information on Appeals results has the potential to provide useful feedback to the Internal Revenue Service (IRS) operating divisions to benefit compliance programs, Appeals, and taxpayers through better case resolution and (2) the feedback project was being effectively managed to maximize its potential to improve IRS’s performance and thereby reduce disputes with taxpayers.
To determine whether information on the results of Appeals cases has the potential to provide useful feedback and whether the feedback project is being effectively managed, we interviewed 24 Appeals executives, managers, and staff who work with compliance program staff on feedback issues, coordination, or information systems issues. We also interviewed 58 compliance program executives, managers, and staff selected by operating division liaisons to represent their compliance programs because of their familiarity with Appeals issues. We discussed with these officials the type of feedback data that are being collected by Appeals and sent to the compliance program officials as well as the type of feedback data compliance program officials would like to receive from Appeals. We reviewed documents provided by Appeals on the feedback project. We reviewed the Appeals Centralized Database System (ACDS) to determine whether it contained sufficient case results information. We found that it did not contain sufficient information for our analyses, such as whether Appeals agreed with the compliance decision. Therefore, to develop this information, we selected a random probability sample of case files to review. The sample was drawn from an initial population of 103,946 Appeals cases closed in ACDS for fiscal year 2004. However, since Industry Case (IC) and Coordinated Industry Case (CIC) cases, which originate from IRS’s Large and Mid-Size Business Division (LMSB), are complex and the supporting documentation is voluminous, we excluded these 1,323 cases from the population. Therefore, the final population size was 102,623 cases. Of the 165 cases selected in our sample, we reviewed 153 cases to determine the results of the cases. The remaining 12 cases could not be analyzed because the files could not be located or essential Appeals documents were not available.
We assessed the known characteristics of the 12 cases not received against those of the 153 received for potential systematic differences. Based on this nonresponse bias analysis, we concluded that it was acceptable to treat the 12 cases as missing at random. We reviewed documents, such as the Appeals Case Memorandum and the Case Activity Record, and determined whether the cases were fully sustained, partially sustained, or not sustained. In determining the extent to which a case was sustained, we based our decision on the determination made by Appeals in the Appeals Case Memorandum using the following scale: “fully sustained” indicated that in our judgment Appeals agreed with compliance on all issues appealed by the taxpayer; “partially sustained” indicated that Appeals agreed with at least one but not all of the issues; and “not sustained” indicated that Appeals did not agree with any of the issues. We also reviewed the cases to determine the reasons the cases were not sustained by Appeals. Since cases could include several compliance issues, there may have been multiple reasons why a case was not sustained. We recorded each decision, and the reason cited in the Appeals case file for a case not being sustained, on a data collection instrument (DCI) that we developed. The analysts who participated in reviewing the case files and recording the information on the data collection instrument were knowledgeable about the appeals process and how to interpret the information in the case files. To ensure that the data entered on the DCIs conformed to GAO’s data quality standards, each completed DCI was reviewed by at least one other GAO analyst. The reviewer compared the data recorded on the DCI to the data in the case files to determine whether he or she concurred with the interpretation of the case files and the way the data were recorded on the DCI. When there were differing perspectives, the analysts met and reconciled them.
Tabulations of the DCI items were automatically generated using a statistical software package to develop case outcome information. For these analyses, the computer programs were checked by a second, independent analyst. We developed case outcome information for each of the Appeals workstreams except IC and CIC. For the CDP, Exam/TEGE, and OIC workstreams, our sample sizes were large enough to generalize the results separately for each workstream, or to have a margin of error small enough to produce meaningful workstream estimates. Because we followed a probability procedure based on random selection, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval, plus or minus 8 percentage points. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. For example, Appeals did not sustain 41 percent of the cases in the sample, which has a 95 percent confidence interval of 33 percent as a lower bound and 49 percent as an upper bound. Workstream estimates come from subsets of the sample. Thus workstream-specific estimates have larger confidence intervals due to the smaller sample size. Tables 4, 5, and 6 present the confidence intervals for sample data presented in the report. To determine how effectively the feedback project was being managed, we reviewed documents supplied by Appeals and compliance program officials, such as meeting minutes for the advisory boards and strategic planning documents. We also interviewed these officials and reviewed our prior work on best practices for developing information that can be used to improve agency performance. To compute appeal rates, we compared compliance cases closed in fiscal year 2003 by workstream to the Appeals cases closed in fiscal year 2004.
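The margin of error quoted above can be reproduced with the standard normal approximation for a proportion. The sketch below uses the point estimate of 41 percent and the 153 reviewed cases from the text; it omits the finite-population and design adjustments a full survey analysis would apply, so it is an approximation, not GAO's exact method.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Normal-approximation 95 percent confidence interval for a proportion."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Point estimate: 41 percent of the 153 reviewed cases were not sustained.
low, high = proportion_ci(0.41, 153)  # roughly 0.33 to 0.49
```

The resulting interval of roughly 33 to 49 percent, a margin of about 8 percentage points, matches the bounds reported for the nonsustention estimate.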
Since Appeals typically required about a year to complete a case, the 2004 Appeals closings were cases that were most likely closed by compliance programs during 2003. Further, IRS uses a similar approach to compute audit rates. To identify the number of compliance cases by workstream, we used data published in IRS’s fiscal year 2003 Databook. Data on cases closed for the Innocent Spouse and Offer-in-Compromise workstreams were not available in the Databook and were provided by IRS staff. We assessed whether the case results data contained in ACDS were sufficiently reliable for our use. We selected the first 100 cases from our random sample of 165 cases to make this determination. We interviewed knowledgeable Appeals officials about the data, performed electronic testing of relevant data fields for obvious errors in accuracy and completeness, and collected and reviewed documentation about the data and the system. We also reviewed prior Treasury Inspector General for Tax Administration reports. Of the 100 cases selected for our sample, we reviewed 92 cases from all of the Appeals workstreams except, as mentioned earlier, the IC and CIC workstreams. The remaining 8 cases could not be analyzed because essential Appeals documents were not available. We compared documents in closed Appeals cases, such as the Appeals Case Memorandum, to data in ACDS. However, Appeals did not always provide documentation for the basis of the compliance determination; therefore, in some cases, we were unable to determine if data, such as the amount of tax proposed by compliance, were accurate. We had Appeals verify data errors in fields that were specific to case results information, such as the amount of revised tax and penalty, as well as the closing code. Due to the high error rate of some data fields in our sample, we reviewed internal controls used in the processing of case results data at one Appeals area office.
This review consisted of observation and inquiry of Appeals officials on Appeals’ case processing procedures and review of Appeals documentation. We also spoke to officials in Appeals headquarters concerning weaknesses identified in Appeals’ internal controls. On the basis of our data reliability review of ACDS, we determined that data in ACDS were not sufficiently reliable for our use. Instead of relying on those data, we used data developed from our sample of Appeals cases and continued our analyses of Appeals’ internal controls. We conducted our review at Appeals headquarters in Washington, D.C., and one Appeals area office from October 2004 through October 2005 in accordance with generally accepted government auditing standards. Appeals’ workload is organized into eight workstreams. These workstreams include cases that have similar characteristics rather than reflecting the IRS operating division where they originated. For example, cases in the Collection Due Process workstream include only appeals by taxpayers under provisions of the IRS Restructuring and Reform Act of 1998, which authorizes an independent review by Appeals of proposed levies and filed liens. These cases could originate in either the Wage and Investment Division or the Small Business and Self-Employed Division, since either division could propose a levy or file a lien. Other workstreams include a wide range of cases from across IRS operating divisions. The Exam/TEGE workstream includes appeals for compliance actions, including recommended assessments and proposed penalties originating from much of IRS’s reporting and filing compliance program, with the exclusion of LMSB cases. These appeals can include diverse issues, such as recommended assessments related to the Earned Income Tax Credit or large charitable organizations, such as universities or hospitals. During fiscal year 2004, Appeals completed nearly 104,000 cases.
Table 7 describes these workstreams and the IRS operating divisions where these cases were proposed. Taxpayers in each workstream requested appeals of recommended assessments or other compliance actions, such as proposed levies and filed liens, at widely differing rates. To compute the appeal rate for each workstream, we compared the number of compliance cases closed for each workstream to the number of cases Appeals closed. We compared fiscal year 2003 compliance case closings to fiscal year 2004 Appeals case closings because Appeals averaged 260 calendar days during fiscal year 2004 to complete its work on a case. For example, as reported in table 16 of the IRS Databook for 2003, during fiscal year 2003, IRS filed 548,683 notices of federal tax liens, served 1,680,844 notices of levy, and made 399 seizures for a total of 2,229,926 compliance actions. Each of these actions could be the basis for a CDP appeal. During fiscal year 2004, Appeals completed work on 32,226 CDP cases, for an appeal rate of 1.445 percent, or about 1 percent. Other approaches could be used to compute appeal rates. Our analysis used compliance cases closed as the basis for measuring appeal rates because (1) a uniform, published source of data was available and provided data on six of the eight Appeals workstreams and (2) it broadly compares IRS compliance programs to the Appeals program. Another approach for measuring appeal rates, for example, could use only cases closed where IRS recommended an additional tax assessment and not include cases where no tax was proposed, because taxpayers would not have a basis for requesting an appeal. In some programs this difference may be substantial. For example, in the Offer-in-Compromise program, according to unpublished data provided by IRS, 58 percent of fiscal year 2003 Offer-in-Compromise cases were closed by Compliance because the offer was not processable or was returned to the taxpayer. Accordingly, the taxpayer did not have a basis for an appeal.
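The appeal-rate arithmetic described above can be sketched as follows. The figures are the ones cited in the text; the Offer-in-Compromise adjustment assumes the 58 percent of unappealable closures is simply removed from the denominator, which is one reading of the alternative approach discussed here.

```python
# Fiscal year 2003 compliance actions that could each be the basis
# for a Collection Due Process (CDP) appeal.
cdp_compliance_actions = 548_683 + 1_680_844 + 399  # liens + levies + seizures
cdp_appeals_closed = 32_226                          # Appeals CDP closings, FY2004

cdp_appeal_rate = cdp_appeals_closed / cdp_compliance_actions  # about 1.4 percent

# Offer-in-Compromise: removing the 58 percent of closures that could not
# be appealed from the denominator more than doubles the computed rate.
oic_appeal_rate = 0.13
adjusted_oic_rate = oic_appeal_rate / (1 - 0.58)  # about 31 percent
```

Dividing the 13 percent rate by the remaining 42 percent of appealable closures yields roughly 31 percent, consistent with the comparison in the text.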
Eliminating these cases from the Offer-in-Compromise cases closed in 2003 would more than double the appeal rate, from 13 percent to 31 percent. However, limited data are available to use other approaches for computing appeal rates. For example, about 30 percent of the cases closed in Appeals’ second largest workstream, Exam/TEGE, originated from the Earned Income Tax Credit and the Automated Underreporter programs. Data were not published on the proportion of these cases that were closed with recommended assessments.

Tax Administration: Planning for IRS’s Enforcement Process Changes Included Many Key Steps but Can Be Improved. GAO-04-287. Washington, D.C.: January 20, 2004. Tax Administration: IRS Needs to Further Refine Its Tax Filing Season Performance Measures. GAO-03-143. Washington, D.C.: November 22, 2002. IRS Modernization: IRS Should Enhance Its Performance Management System. GAO-01-234. Washington, D.C.: February 23, 2001. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999. Executive Guide: Measuring Performance and Demonstrating Results of Information Technology Investments. GAO/AIMD-98-89. Washington, D.C.: March 1998. Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.

Taxpayers disagreeing with Internal Revenue Service (IRS) compliance decisions can request an independent review by IRS’s Appeals Office (Appeals). In 2004 the Commissioner requested that Appeals establish a feedback program to share the results of Appeals’ reviews with the compliance programs.
GAO was asked to assess whether (1) information on Appeals results would provide useful feedback to IRS operating divisions to benefit compliance programs, Appeals, and taxpayers through better case resolution and (2) the feedback project is being effectively managed to maximize its potential to improve IRS's performance and thereby reduce disputes with taxpayers. Appeals' case result information has the potential to help compliance programs improve taxpayer service, but realizing improvements requires investments in data collection and analysis that must be considered in light of the likely benefits. Based on a review of 153 Appeals cases, GAO estimates that 41 percent of the 102,623 cases closed in fiscal year 2004 were not fully sustained. Of these, about half were not sustained because Appeals applied a law or regulation differently than the programs. Lacking such information, officials could not assess whether actions like additional guidance were needed. However, identifying specific provisions that were interpreted differently would require data gathering and analysis. Because the differences span a host of laws and regulations, corrective action may only affect a small number of cases. Improved decision making, however, can benefit compliance programs, Appeals, and taxpayers. An initial data analysis, such as identifying programs with high nonsustention rates due to differences in applying laws or regulations, would help to target areas most likely to benefit from feedback. Appeals has taken several initial steps to launch the feedback project. During 2005, for example, Appeals and the compliance programs began to identify additional information needs. In addition, Appeals and the compliance programs could refine the feedback project's objectives to target the results-oriented improvements that are logical benefits of information sharing. 
Obtaining agreement between Appeals and the programs on objectives may not be easy because their perspectives differ on the steps needed to improve operations, but it is necessary. Also, Appeals’ plans to update its information system to provide additional data on case results will be hindered by inaccurate data. We found that several important data fields had error rates up to 14 percent. Appeals staff cited several reasons for this, including weak data verification procedures.
The Federal Transit Administration (FTA) generally funds New Starts projects through full funding grant agreements (FFGA), which are required by statute to establish the terms and conditions for federal participation in a New Starts project. FFGAs may also define a project’s scope, including the length of the system and the number of stations; its schedule, including the date when the system is expected to open for service; and its cost. To obtain FFGAs, New Starts projects must emerge from a regional, multimodal transportation planning process. The first two phases of the New Starts process—systems planning and alternatives analysis—address this requirement. The systems planning phase identifies the transportation needs of a region, while the alternatives analysis phase provides information on the benefits, costs, and impacts of different options, such as rail lines or bus routes, in a specific corridor rather than a region. The alternatives analysis phase results in the selection of a locally preferred alternative, which is the New Starts project that FTA evaluates for funding. After a locally preferred alternative is selected, the project sponsor submits an application to FTA for the project to enter the preliminary engineering phase. When this phase is completed and federal environmental requirements are satisfied, FTA may approve the project’s advancement into final design, after which FTA may approve the project for an FFGA and proceed to construction. FTA oversees grantees’ management of projects from the preliminary engineering phase through the construction phase. To help inform administration and congressional decisions about which projects should receive federal funds, FTA currently distinguishes between proposed projects by evaluating and assigning ratings to various statutory evaluation criteria—including both project justification and local financial commitment criteria—and then assigning an overall project rating. (See fig. 1.)
These evaluation criteria reflect a broad range of benefits and effects of the proposed project, such as cost-effectiveness, as well as the ability of the project sponsor to fund the project and finance the continued operation of its transit system. FTA has developed specific measures for each of the criteria outlined in the statute. On the basis of these measures, FTA assigns the proposed project a rating for each criterion and then assigns a summary rating for local financial commitment and project justification. These two ratings are averaged together, and then FTA assigns projects a “high,” “medium-high,” “medium,” “medium-low,” or “low” overall rating, which is used to rank projects and determine what projects are recommended for funding. Projects are rated at several points during the New Starts process—as part of the evaluation for entry into the preliminary engineering and the final design phases, and yearly for inclusion in the New Starts Annual Report. As required by SAFETEA-LU, the administration uses the FTA evaluation and rating process, along with the phase of development of New Starts projects, to decide which projects to recommend to Congress for funding. Although many projects receive a summary rating that would make them eligible for an FFGA, only a few are proposed for an FFGA in a given fiscal year. FTA proposes FFGAs for those projects that are projected to meet the following conditions during the fiscal year for which funding is proposed: All nonfederal project funding must be committed and available for the project. The project must be in the final design phase and have progressed far enough for uncertainties about costs, benefits, and impacts (e.g., financial or environmental) to be minimized. The project must meet FTA’s tests for readiness and technical capacity, which confirm that there are no remaining cost, project scope, or local financial commitment issues. 
SAFETEA-LU introduced a number of changes to the New Starts program, including some that affect the evaluation and rating process. For example, given past concerns that the evaluation process did not account for a project’s impact on economic development and FTA’s lack of communication to sponsors about upcoming changes, the statute added economic development to the list of project justification criteria that FTA must use to evaluate and rate New Starts projects, and requires FTA to issue notice and guidance each time significant changes are made to the program. SAFETEA-LU also established the Small Starts program, a new capital investment grant program, simplifying the requirements imposed for those seeking funding for lower-cost projects such as bus rapid transit, streetcar, and commuter rail projects. This program is intended to advance smaller-scale projects through an expedited and streamlined evaluation and rating process. FTA also subsequently introduced a separate eligibility category within the Small Starts program for “Very Small Starts” projects. Small Starts projects that qualify as Very Small Starts are simple, low-cost projects that FTA has determined qualify for a simplified evaluation and rating process. In addition to implementing the Small Starts program, FTA has taken other steps to implement SAFETEA-LU changes to the New Starts evaluation process. For example, FTA incorporated economic development into the existing evaluation framework by considering the information provided by project sponsors as an “other factor.” FTA also sought public comments on different proposals for revising the evaluation process to better reflect the statute through the Advanced Notice of Proposed Rulemaking (ANPRM) and the final NPRM for the New Starts and Small Starts programs. 
However, following concerns voiced by Members of Congress and the transit industry about the weights placed on different project benefits, FTA was prohibited from using funds to proceed with the rulemaking process, with the exception of reviewing comments, under the Consolidated Appropriations Act, 2008. Figure 2 shows a timeline of FTA’s efforts to date to implement SAFETEA-LU changes to the New Starts evaluation and ratings process. FTA primarily uses the cost-effectiveness and land use criteria to evaluate New Starts projects, but concerns have been raised about the extent to which the measures for these criteria capture total project benefits. Specifically, FTA’s transportation system user benefits (TSUB) measure considers how the mobility improvements from a proposed project will reduce users’ travel times. According to FTA officials, experts, and the literature we consulted, the TSUB measure accounts for most secondary project benefits, including economic development, because these benefits are typically derived from mobility improvements that reduce users’ travel times. However, project sponsors and experts raised concerns about how FTA currently measures and weights different project justification criteria, noting that these practices may underestimate some project benefits. For example, some experts and project sponsors we spoke to said that the TSUB measure does not account for benefits for nontransit users or capture any economic development benefits that are not directly correlated to mobility improvements. As a result, FTA may be underestimating projects’ total benefits, particularly in areas looking to use these projects as a way to relieve congestion or promote more high-density development. In these cases, the extent to which FTA’s current approach to estimating benefits affects how projects are ranked in FTA’s evaluation and ratings process is unclear.
FTA officials acknowledged these limitations, but noted that improvements in local travel models are needed to resolve some of these issues. FTA currently relies on the cost-effectiveness and land use criteria to evaluate and rate New Starts projects. Specifically, FTA assigns a weight of 50 percent to both the cost-effectiveness and land use criteria when developing project justification ratings. Table 1 provides a summary of all project justification criteria that FTA is required to review, the measures it uses to evaluate these criteria, and how this information is used to rate projects. To evaluate the land use criterion, FTA has developed and uses three qualitative land use measures: land use in the project area, the extent to which the area has transit supportive plans and policies, and the performance and impacts of these policies. For example, to determine whether a project’s surrounding area has transit supportive plans and policies, FTA examines whether there are growth management strategies and transit supportive corridor policies in place, the extent to which zoning regulations near stations are transit supportive, and the tools available to implement land use policies. To evaluate cost-effectiveness, FTA relies on the TSUB measure and costs. The TSUB measure captures predicted improvements in mobility caused by the implementation of a project. In particular, TSUB captures transit users’ cost and travel time savings, as well as improvements in comfort, convenience, and reliability of travel. Project sponsors use local travel models to forecast ridership and simulate trips taken in 2030, the forecast year used to estimate savings over time for two alternatives. To evaluate the benefits for these two alternatives, FTA uses the outputs from these models to consider and weigh a range of attributes, such as time spent waiting at and walking to the transit station, and calculates the perceived level of time savings associated with a given project. 
The first alternative, known as the baseline alternative, assumes low-cost improvements to the project area’s current transportation network, while the second alternative, the “build alternative,” assumes the proposed New Starts transit project is constructed. As outlined in figure 3, FTA uses the forecasts for these two alternatives to calculate the predicted TSUB value for the proposed project. To determine a project’s final cost-effectiveness rating, FTA divides the project’s annual capital and operating costs by its predicted TSUB value and compares the computed figure to established cost-effectiveness breakpoints. FTA officials that we interviewed noted that the TSUB measure used to assess the cost-effectiveness criterion in the New Starts evaluation framework emphasizes predicted mobility improvements because most project benefits are realized only when transit users perceive that their time and cost of travel has been reduced. For example, the introduction of new transit service may reduce users’ overall travel time to a given destination. These reductions in travel time usually occur because a project offers faster travel times as a result of travel on the project’s fixed guideway, which does not incur the degree of congestion faced by buses operating in mixed traffic. According to FTA, such transit user benefits are the distinct and primary benefit of transit investments. Most other benefits of transit projects, such as economic development, are considered secondary benefits because they are still directly related to mobility improvements. For example, transportation investments that improve the accessibility and attractiveness of certain locations can result in higher property values in those areas, which can affect the type and density of development that occurs in the area of the investment. The transportation literature and different experts we consulted agreed that such increases in property values are generally the result of mobility improvements.
As such, they noted that conducting a separate evaluation of secondary benefits, such as economic development, may be inappropriate because it can result in double counting certain project impacts. For example, in a 2002 report, the Transportation Research Board (TRB) reported that secondary benefits like economic development “are double counts” of mobility improvements and must be carefully measured and presented “in such a way that decision makers are aware of the potential for double counting.” FTA also considers information on environmental benefits, mobility improvements, and other factors (including economic development), but these criteria are not weighted in the current evaluation framework. As a result, they are not used to calculate the project justification rating, except under certain circumstances. For example, FTA currently evaluates information on mobility improvements, but this criterion is not used in determining the project justification rating, except in certain cases as a tiebreaker when the average of the cost-effectiveness and land use ratings falls equally between two categories. Project sponsors and experts we interviewed raised concerns about how FTA uses and measures different New Starts project justification criteria in the evaluation framework, which could potentially result in certain project benefits being underestimated. Some project sponsors we spoke with expressed frustration that FTA does not include certain criteria in the initial calculation of project ratings, such as economic development and environmental benefits. They noted that this practice limits the information captured on projects, particularly since these are important benefits of transit projects at the local level and were required to be evaluated under SAFETEA-LU. 
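The cost-effectiveness rating described earlier, annualized capital and operating cost divided by the predicted TSUB value and compared against breakpoints, can be sketched as follows. The breakpoint values and the function name are hypothetical, since the report does not give FTA's actual breakpoints.

```python
# Hypothetical breakpoints (dollars of annualized cost per hour of
# predicted transportation system user benefits); the report does not
# provide FTA's actual values.
BREAKPOINTS = [
    (12.00, "high"),
    (16.00, "medium-high"),
    (24.00, "medium"),
    (30.00, "medium-low"),
]

def cost_effectiveness_rating(annual_cost, tsub_hours):
    """Divide annualized cost by predicted TSUB hours and look up the rating."""
    cost_per_hour = annual_cost / tsub_hours
    for limit, rating in BREAKPOINTS:
        if cost_per_hour <= limit:
            return rating
    return "low"

rating = cost_effectiveness_rating(annual_cost=50_000_000, tsub_hours=2_500_000)
```

This structure also illustrates the adjustment discussed below: shifting every breakpoint upward credits all projects equally for highway travel time savings, regardless of a project's actual impact on congestion.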
In addition to these concerns, we have previously reported that FTA’s reliance on two evaluation criteria to calculate a project’s overall rating is not aligned with the multiple-measure evaluation and rating process outlined in statute and current New Starts regulations. As a result, we recommended that FTA improve the measures used to evaluate New Starts projects or provide a crosswalk in the regulations showing clear linkages between the criteria in the statute and the criteria used in the evaluation process. FTA’s current guidance on the New Starts evaluation process states that environmental benefits are not weighted presently because the current measure does not meaningfully distinguish among projects. Furthermore, FTA officials we interviewed told us that they had not yet developed a reliable way to incorporate economic development into the framework, had not received any reasonable suggestions for measuring this criterion, nor had project sponsors submitted information demonstrating the impacts of their projects on economic development. Despite these issues, however, they acknowledged that the current approach for evaluating projects does not align with SAFETEA-LU and noted that the revised evaluation process described in the NPRM and proposed policy guidance was developed to meet these requirements. Different experts and project sponsors we interviewed also disagreed with FTA’s emphasis on mobility in the cost-effectiveness measure, noting that it does not account for other important project benefits. Specifically, experts and project sponsors, as well as members of the transit industry and DOT officials, stated that FTA’s TSUB measure does not count as user benefits the benefits that accrue to highway users when more people switch to the improved transit service and highway congestion decreases. The omission of these nontransit user benefits means that the benefits accruing to motorists are not accounted for in the evaluation process.
In cases where a project’s predicted impact on congestion is significant, this omission may lead FTA to underestimate a project’s total user benefits. Given FTA’s focus on cost-effectiveness in the evaluation process, underestimating user benefits for certain projects could impact the overall project ratings and change the relative ranking of proposed transit projects. In response to this issue, FTA officials told us that although the TSUB measure and existing software have the capacity to capture highway user benefits, they do not currently accept estimates of nontransit user benefits because local travel models do not reliably predict changes in travel speeds resulting from transit investments. Instead, FTA currently adjusts the cost-effectiveness breakpoints upward, which has the effect of giving all projects the same credit for highway travel time savings. As a result, some projects are being credited with achieving these benefits, even when the project has no impact at all on highway travel time savings, while other projects may not be receiving enough credit for their impact on highway travel time savings. FTA officials noted that they would prefer to estimate the predicted impact of projects on highway congestion rather than using a rough proxy for these benefits, particularly since their current approach does not distinguish among projects in a meaningful way. Officials at FTA and the Office of the Secretary of Transportation also told us that they are conducting research on ways to improve the estimation of highway speeds (and thus, the calculation of nontransit user benefits) by local travel models, but a significant investment of resources by different levels of government will likely be required to do so. A few experts we spoke with also commented that FTA’s cost-effectiveness measure does not capture any project benefits, such as economic development effects, that are unrelated to mobility improvements.
As noted earlier, FTA contends that its emphasis on mobility improvements is appropriate, since most secondary project benefits—including economic development—are derived from this measure. Although our work, the transportation literature we reviewed, and experts we consulted generally support this contention, these sources also indicated that some secondary project benefits, namely certain economic development effects, may not always accrue in direct proportion to mobility improvements. Some studies we reviewed and experts we spoke with noted that property value increases near a project may occur due to option value or agglomeration effects, both of which are indirect results of transit investments and not explicitly related to mobility improvements. In such cases, because of its emphasis on travel time savings, FTA’s existing TSUB measure would understate the total benefits of projects that primarily provide enhanced access to a dense urban core rather than transport commuters over longer distances (e.g., light or heavy rail). Furthermore, our previous work on measuring costs and benefits of transportation investments has stated that there could be some residual benefit from these indirect effects that is not accounted for in travel time benefits or other direct impacts. This lack of accounting for certain secondary benefits in the TSUB measure may prevent FTA from capturing all project benefits and developing accurate project rankings. In interviews with FTA officials about this issue, they acknowledged that some benefits may accrue in varying proportions to mobility improvements—that is, certain benefits may not be directly related to changes in mobility improvements. In such cases, the current evaluation process may not favor certain types of projects—such as streetcars—that are not designed to create travel time savings but rather to create other benefits. Such benefits could include changes in land use that are not captured by the TSUB measure.
FTA officials told us that, in the future, they would prefer to improve local models so that they can consistently and reliably assess projects’ impact on nontransit users and economic development. Finally, some project sponsors also expressed concern about FTA’s requirement to use fixed land use assumptions when estimating the predicted user benefits resulting from the implementation of a proposed project. According to sponsors, this practice prevents FTA from explicitly counting some future benefits that may arise due to an area’s increased accessibility. For example, some transit projects’ primary goal is to change land use around transit stations in order to capitalize on the area’s enhanced accessibility. Such changes could also lead to increases in future transit ridership, resulting in higher user benefits for the project. Furthermore, a recent panel of experts convened by FTA noted that it was unrealistic to evaluate only the incremental impacts of the proposed transit project, since local governments often find it difficult to justify high-density, mixed-use zoning in the absence of transit. Thus, by assuming that no such land use changes will occur, FTA may be underestimating projects’ predicted user benefits. FTA officials told us they have two reasons for fixing land use assumptions when calculating user benefits. First, it is difficult to determine the magnitude of the additional land use changes, including economic development, that will result from a project. Most localities do not have analytical methods for these projections, and the methods that do exist are often more unreliable than the local models used to forecast travel demand. Second, even with a reasonable estimate of additional development, it is difficult to value the benefits of the additional development.
Officials from FTA told us that significant changes to local travel models would be required before they could allow project sponsors to vary their assumptions about future land use when estimating user benefits. FTA faces several systemic challenges to improving the New Starts program, including addressing multiple program goals, limitations of local travel models, the need to maintain the rigor while minimizing the complexity of the evaluation process, and developing clear and consistent guidance for incorporating qualitative information into the evaluation process. FTA and project sponsors we spoke with have interpreted the emphasis of the New Starts program differently because the evaluation criteria, which have been delineated in previous and existing transportation legislation, establish multiple goals for the program. Additionally, models used to generate local travel demand forecasts have limited capabilities and may not provide all of the information needed to properly evaluate transit projects. FTA has taken some steps to mitigate the modeling limitations but faces challenges in doing so, including a lack of resources to invest in local travel model improvements. Finally, experts, transportation consultants, and some project sponsors we spoke with support FTA’s rigorous process for evaluating proposed transit projects but are concerned that the process has become too burdensome and complex. FTA has taken some steps to streamline its evaluation process and incorporate qualitative information into the assessment, but project sponsors we spoke to emphasized the continued need for clear, consistent guidance on how such qualitative information will be used. FTA and project sponsors we spoke with have interpreted the emphasis of the New Starts program differently. Although the goals have not been explicitly articulated in legislation, the evaluation criteria outlined within the law express various goals of the New Starts program. 
These include mobility improvements, environmental benefits, operating efficiencies, cost-effectiveness, economic development, and land use. The presence of multiple program goals within the statute, as articulated by the evaluation criteria, has led to different interpretations by FTA and project sponsors about what project benefits should be emphasized in the New Starts evaluation process. As noted earlier, FTA focuses on mobility improvements in its evaluation process because it contends that those benefits are a critical goal of all transit projects and that most secondary project benefits, including economic development, are derived from improvements that reduce users’ travel times. Many of the experts and some of the project sponsors we spoke to agreed that transit projects can work toward a number of different goals, including mobility improvements, though some project sponsors told us that creating nontransportation benefits, such as generating local economic development, can be the primary goal of a project. In the latter case, the primary goal of a project is not to create significant mobility improvements, but rather to stimulate high-density development and change land use patterns around a transit station. Accordingly, such projects may not generate the mobility improvements needed to qualify for New Starts funding under the current New Starts evaluation process. Some project sponsors, therefore, could devote substantial resources to apply for New Starts funding for projects that are incompatible with FTA’s emphasis on mobility improvements. The models used to generate local travel forecasts are limited and may not provide sufficient or reliable information to properly evaluate transit projects. 
According to a recent report by TRB, the demands on local models have grown significantly in recent years as a result of new policy concerns, such as the need to estimate motor vehicle emissions and evaluate alternative land use policies, and existing models are inadequate to address many of these new concerns. The current models used by most MPOs are generally able to represent aggregate and corridor-level travel demand, but they are not dynamic. That is, they are based on average travel speeds over discrete areas and cannot represent the conditions that would be expected by an individual traveler choosing how, when, and where to travel. This limitation affects a model’s ability to accurately represent travel behavior, nonauto (e.g., walking or biking) or transit travel, and transit’s impacts on highway congestion, thereby limiting a model’s ability to provide all of the information needed to properly evaluate transit projects. Some of the experts, as well as FTA and Office of the Secretary officials we interviewed, agreed that local modeling capacity is limited and should be updated to better reflect travel behavior. For example, one expert maintained that transit projects’ estimated impacts on all travel in the region can be tested with estimates that are “sensitive” enough to pick up projects’ impacts, but noted that most MPOs do not have the capacity to generate such estimates. In addition, the TRB report and some experts we spoke with have expressed concerns that many MPOs have inadequate traffic and household data to validate their models and provide information on the travel behavior of different populations. Our past work has also cited the difficulties of accurately predicting changes in traveler behavior, land use, or usage of highways resulting from a transit project with current travel models, as well as concerns about the quality of data inputs into local travel models. 
FTA has taken some steps to mitigate the modeling limitations—which TRB recognized in its report on the state of the practice—but faces challenges in doing so. As previously discussed, FTA has developed proxy measures to account for certain project benefits that cannot be accurately modeled at the present time, such as projects’ impacts on highway congestion. FTA officials told us that they would prefer to improve local models so that they can consistently and reliably assess projects’ impacts on nontransit users and economic development. To that end, FTA has recently developed a request for proposals to seek approaches for predicting changes in highway user benefits that can be used in the short term (within 5 years). However, the request for proposals has not yet been issued or awarded, and there is no timeline for doing so. Additionally, according to officials from FTA and the Office of the Secretary, FTA approached FHWA to help with this effort, but FHWA declined to be involved because it deemed the issue to be only relevant to transit. As a result, the Office of the Secretary provided the other half of the funding for the request for proposals. Officials from FTA and the Office of the Secretary stated that the improvements to travel models would affect the way all planning is done and, thus, have impacts on numerous local, state, and federal programs, including highway programs. Officials from FTA and the Office of the Secretary emphasized that the request for proposals is just a small step forward to improve modeling. In the long term, larger, more fundamental changes are needed to create dynamic travel models. For example, current models would need to be adjusted to capture the movement of individuals rather than parts of the transportation system, such as a highway segment. Additionally, models need to be altered so that they produce second-by-second results rather than results by groups of hours.
These long-term improvements would allow for reliable and accurate estimates of highway user benefits resulting from transit-related mobility improvements and would also improve travel speed estimates at both the regional and micro levels. Like the efforts to improve approaches for predicting changes in highway user benefits, FTA and Office of the Secretary officials said that these long-term changes in modeling will benefit many transportation programs beyond the New Starts program. However, FTA and Office of the Secretary officials told us that a significant investment of resources by all levels of government will likely be required to overcome current modeling limitations. In its 2007 report, TRB called for $20 million annually to update local travel models across the country. Currently, DOT invests about $2.4 million annually to improve modeling capabilities. Approximately $500,000 per year is allocated to DOT’s Travel Model Improvement Program, which is designed to assist MPO model development efforts, and another $1.9 million is set aside annually through SAFETEA-LU for the development of TRANSIMS. TRB also reported that MPOs face similar challenges. Specifically, MPO budgets for model development have not grown commensurately with travel modeling and forecasting requirements at the federal level, and staffing levels often limit the extent to which MPOs can focus on improvements to travel models in addition to their typical obligations. Experts and some project sponsors we spoke with generally support FTA’s quantitatively rigorous process for evaluating proposed transit projects but are concerned that the process has become too burdensome and complex, and as noted earlier, may underestimate certain project benefits. 
For example, several experts and transportation consultants told us that although it is appropriate to measure the extent to which transit projects create primary and secondary benefits, such as mobility improvements and economic development, it is difficult to quantify all of these projected benefits. Additionally, several project sponsors noted that the complexity of the evaluation process can necessitate hiring consultants to handle the data requests and navigate the application process—which could increase the project’s costs. Our previous reviews of the New Starts program have noted similar concerns from project sponsors. For example, in 2007, we reported that a majority of project sponsors told us that the complexity of the requirements—such as the analysis and modeling required for travel forecasts—creates disincentives for entering the New Starts pipeline. Sponsors also said that the expense involved in fulfilling the application requirements, including the costs of hiring additional staff and consultants, discourages agencies with fewer resources from applying for this funding. In response to such concerns, FTA has tried to simplify the evaluation process in several ways. For example, following SAFETEA-LU’s passage, FTA established the Very Small Starts eligibility category within the Small Starts program for projects less than $50 million in total cost. This program further simplifies the application requirements in place for the Small Starts program, which funds lower-cost projects, such as bus rapid transit, streetcar, and commuter rail projects. Additionally, in its New Starts program, FTA no longer rates projects on the operating efficiencies criterion because, according to FTA, operating efficiencies are already sufficiently captured in FTA’s cost-effectiveness measures, and the measure did not adequately distinguish among projects. Thus, projects no longer have to submit information on operating efficiencies.
Likewise, FTA no longer requires project sponsors to submit information on environmental benefits because it found that the information gathered did not adequately distinguish among projects and that EPA’s ambient air quality rating was sufficient. FTA also commissioned a study by Deloitte in June 2006 to review the project development process and identify opportunities for streamlining or simplifying the process. This study identified a number of ways that FTA’s project development process could be streamlined, including revising the policy review and issuance cycle to limit major policy and guidance changes to once every 2 years and conducting a human capital assessment to identify skill gaps and opportunities for reallocating resources in order to enhance FTA’s ability to review and assist New Starts projects in a timely and efficient manner. FTA is working to implement these recommendations. Incorporating qualitative information into the New Starts evaluation process can provide a more balanced approach to evaluating transit projects, but developing clear and consistent guidance for incorporating qualitative information can be challenging. Though a quantitative evaluation process can be both rigorous and transparent, it does have limitations. Our past work and some experts and project sponsors we interviewed expressed concern about using a strictly quantitative process when evaluating proposed transportation investments because, as discussed above, certain benefits cannot be easily quantified. For example, some project sponsors and experts said that because certain impacts, such as economic development, cannot be easily quantified, a qualitative approach is needed to ensure that those project impacts are included in the New Starts evaluation process.
Additionally, experts and project sponsors we spoke with raised concerns about FTA’s heavy reliance on quantitative measures in the New Starts evaluation process, noting that it can be very costly to run multiple iterations of travel models (which a quantitative-focused evaluation process requires) and that some transit agencies do not have the expertise to refine their models to FTA’s specifications. In recognition of the limitations of a quantitative analysis, FTA has integrated some qualitative information into its current evaluation process. For example, FTA currently uses three qualitative land use measures to evaluate a transit project’s potential land use impacts. The NPRM also proposes to incorporate some qualitative information into the evaluation process, including measures of a transit project’s impact on economic development. Additionally, FTA incorporated the make-the-case document into its evaluation process in 2003, which allows project sponsors to submit an essay that justifies why the New Starts project is the best possible alternative and why it is needed. Although the fiscal year 2009 rating cycle was the first time that FTA planned to rate the make-the-case documents for the evaluation process, it ultimately decided not to because agency officials were generally dissatisfied with the quality of the make-the-case documents submitted. FTA officials attributed the overall unsatisfactory quality of the make-the-case documents to insufficient guidance about what information to include in the document and how this information would be evaluated. FTA told us that they are working to improve the guidance for the next rating cycle. According to a few project sponsors we spoke to, FTA’s recent experience with the make-the-case document illustrates the need for consistent, transparent guidance for using qualitative information in its evaluation process.
To help FTA incorporate qualitative information into the evaluation and rating process in a transparent and consistent manner, a few experts we spoke with suggested that FTA convene an external panel of transportation experts to rate qualitative information, such as the make-the-case document and the economic development criterion. Different options for evaluating proposed transit projects exist. However, all have limitations and are impacted to varying degrees by the systemic challenges previously identified, including local modeling limitations and the need to balance the rigor of the evaluation process with an interest in minimizing complexity. One option is to revise the current evaluation process as proposed by FTA in the August 2007 NPRM and proposed policy guidance. A second option is to use benefit-cost analysis as the evaluation framework for projects. A third option is to use evaluation frameworks that vary by project goal in order to better support local transit priorities. A fourth option is to eliminate the federal evaluation process and devolve these responsibilities to the state level by making New Starts a formula grant program. One option to evaluate proposed transit projects is to revise the existing New Starts evaluation process, as proposed by FTA. In response to provisions in SAFETEA-LU and to improve the New Starts program, FTA proposed to revise the current process by introducing new evaluation measures and weights, as described in its August 2007 NPRM and proposed policy guidance. The proposed process revises the current evaluation process to reflect the multiple measure approach to evaluating transit projects described in SAFETEA-LU. As in the current process, FTA’s proposed evaluation process assigns ratings to projects on the basis of various evaluation criteria to determine summary ratings for both local financial commitment and project justification (see fig. 4). 
In contrast to the current process, however, the proposed process places weights on measures that were previously not used to calculate initial project justification ratings, including environmental benefits, economic development, and mobility improvements. Under the proposed evaluation process, project justification criteria are grouped into categories of “cost-effectiveness” and “effectiveness.” The cost-effectiveness category accounts for 50 percent of the overall project justification rating and is based on the current measure of cost-effectiveness with no proposed changes. The effectiveness category accounts for the other 50 percent of the project justification rating and is based on measures of (1) mobility improvements, (2) economic development and land use, and (3) environmental benefits. See table 2 for descriptions of all the proposed project justification measures. Although experts and project sponsors had differing opinions, many experts we spoke to generally thought that the weights proposed for the project justification criteria were appropriate. In particular, many said it was appropriate that FTA retained its emphasis on mobility improvements in the proposed evaluation framework by weighting the cost-effectiveness criterion heavily. They generally agreed with FTA’s assumption that societal benefits from transit projects generally result from user benefits—that is, reductions in the real and perceived cost of travel. As such, FTA’s measure of predicted user benefits accounts for many project benefits. Under the proposed process, FTA would measure different dimensions of user benefits as part of its cost-effectiveness, mobility improvements, and economic development criteria. In addition, as called for by many project sponsors and experts we spoke to, the proposed framework places weights on measures of economic development, environmental benefits, and other factors, such as congestion impacts.
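The proposed weighting structure described above can be illustrated with a short sketch. The numeric 1–5 rating scale, the example scores, and the equal split among the three effectiveness sub-measures are assumptions for illustration only; the source specifies only the 50/50 split between the cost-effectiveness and effectiveness categories.

```python
# Illustrative sketch of the proposed project justification rating:
# cost-effectiveness carries 50 percent, and the effectiveness category
# (mobility improvements, economic development/land use, environmental
# benefits) carries the other 50 percent. The 1-5 scale and the equal
# split among effectiveness sub-measures are assumptions, not FTA's.

WEIGHTS = {
    "cost_effectiveness": 0.50,
    "mobility_improvements": 0.50 / 3,        # assumed equal sub-weight
    "economic_development_land_use": 0.50 / 3,  # assumed equal sub-weight
    "environmental_benefits": 0.50 / 3,       # assumed equal sub-weight
}

def project_justification_rating(scores):
    """Weighted average of criterion scores (each on an assumed 1-5 scale)."""
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

# Hypothetical project scores.
scores = {
    "cost_effectiveness": 4,
    "mobility_improvements": 3,
    "economic_development_land_use": 5,
    "environmental_benefits": 2,
}
print(round(project_justification_rating(scores), 2))  # 3.67
```

Because half the weight sits on a single criterion, a project’s cost-effectiveness score dominates the summary rating, which is consistent with the sponsors’ concern, discussed below, that mobility-related measures remain weighted heavily under the proposal.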
Many of those experts said that the weights placed on economic development and environmental benefits are appropriate. In particular, the experts said that the relatively low weight placed on the measures of economic development is appropriate because transit-related development benefits are generally transfers of economic activity from one area to another and not net benefits to a region. They also said that many economic development benefits result from user benefits, and as such, they are captured in the cost-effectiveness criterion. As we have reported in the past, these benefits represent real benefits for the jurisdiction making the transportation improvement but are considered transfers and not real economic benefits from a regional or national perspective. Further, although SAFETEA-LU lists economic development effects and transit supportive land use as separate project justification criteria, most of the experts we spoke to agreed with FTA that combining measures of economic development and land use into a single evaluation criterion is appropriate because the two criteria are strongly related. Although many experts generally agreed with the weights proposed, some project sponsors we spoke to disagreed with the weights placed on the evaluation criteria. In particular, they told us that transit user benefits, as measured under the cost-effectiveness and mobility improvements criteria, continue to be weighted too heavily under the proposed evaluation process. They stated that mobility improvements are emphasized at the expense of other project benefits, such as economic development. A provision in the SAFETEA-LU Technical Corrections Act of 2008 amended the language of 49 U.S.C. § 5309 to require that FTA give comparable, but not necessarily equal, numerical weight to each of the project justification criteria in calculating the overall project rating.
This provision could potentially address the foregoing concerns, as FTA is now required to capture project benefits in a comparable manner. However, an FTA official told us that the evaluation process proposed in their August 2007 NPRM and proposed policy guidance would have made the change now expressed in law by proposing to weight each of the different criteria included in the statute. Furthermore, according to experts and project sponsors we spoke with, the proposed revisions to the current evaluation process preserve the rigor of FTA’s existing evaluation framework. Unlike the Federal Aid Highway Program, in which funds are automatically distributed to states via formulas, the New Starts program’s evaluation process requires local transit agencies to compete for project funds based on specific financial and project justification criteria. As noted by some experts we spoke with and in our past work, the use of such a rigorous and systematic evaluation process helps to properly distinguish among different projects and could serve as a model for other transportation programs. Further, some project sponsors also noted that use of the make-the-case document, as proposed under the “other factors” criterion, could be an effective way to incorporate additional qualitative information into the evaluation process. Although experts and project sponsors had differing opinions, many experts and project sponsors noted that the revised process may still inaccurately estimate total project benefits because of how certain benefits are measured. As a result, without improvements to the way FTA measures certain project benefits, it risks ranking proposed projects inaccurately. 
In particular, some experts and project sponsors we spoke with expressed continued concern about how FTA measures user benefits for the purposes of rating projects’ cost-effectiveness, noting the lack of accounting for nontransit user benefits, such as highway users, and the use of fixed land use assumptions when calculating transit user benefits. As previously discussed, FTA maintains that its measure of transit user benefits is the best that can be done given local modeling limitations and recognizes that these limitations may impact the relative ranking of proposed projects. Many project sponsors and experts we spoke to also expressed concern about how FTA measures project costs when determining the cost-effectiveness rating. As required by FTA, the cost used for this rating must include “all essential project elements necessary for completion of the project.” According to FTA, there has been much discussion in the past as to what constitutes an essential element of the project versus a project “betterment.” In its August 2007 NPRM, FTA sought industry comment on how the concept of essential project elements should be addressed in the evaluation process. Many of the stakeholders we consulted, as well as comments submitted to FTA’s docket, said that betterments should be excluded from the project cost when calculating cost-effectiveness. This could result in better cost-effectiveness scores for some proposed projects, according to FTA. Some stakeholders we spoke to also noted that defining what an essential project element is can be difficult. Although many experts we spoke to agreed with the weight placed on cost-effectiveness in the evaluation process, some also said that FTA should not rely solely on the TSUB measure as a proxy for all other benefits, which they maintained is the practical effect of both the current and proposed evaluation processes.
Some benefits, such as economic development unrelated to mobility improvements, are not captured by the TSUB measure or the proposed new measures of project benefits, according to many experts we spoke to. FTA’s continued emphasis on its measures of mobility in the revised evaluation process may lead to underestimating projects’ total benefits and, thus, inappropriately ranking proposed projects. FTA acknowledged this concern in its August 2007 Proposed Policy Guidance, noting that not all transit-related economic development is the result of improvements in mobility. FTA is currently studying the magnitude of benefits unrelated to mobility improvements that result from projects and told us that local modeling limitations have made it difficult to estimate projects’ land use impacts. In particular, FTA convened an expert panel on October 17, 2007, to discuss methods for evaluating the economic development benefits of transit projects. FTA’s intended objective is to develop, to the extent possible, a standardized, empirically based, and rational method for evaluating the potential economic development benefits of New Starts projects. (See table 3 for more information on the proposed evaluation measures.) Some experts and project sponsors also expressed concern that the proposed evaluation process introduces evaluation measures that will not appropriately distinguish among projects. In particular, they said that FTA’s proposed measures of economic development, congestion, and environmental benefits are crude proxy measures of the real benefits and will not meaningfully distinguish among projects. FTA officials acknowledged that the proposed measures of environmental benefits are imperfect proxies but said that they are the most appropriate measures available to distinguish among projects, given the difficulties in forecasting the impact of projects on the environment. 
Further, they said that they decided not to propose measures of the predicted impact of projects on the environment, including air quality and greenhouse gas emissions, in order to avoid placing additional burden on project sponsors. The officials also said that they are conducting research to identify other technically appropriate measures. In particular, FTA’s August 2007 Proposed Policy Guidance states that the agency is initiating a long-term effort, in consultation with the transit community and environmental experts, to develop more robust environmental measures that will be effective at distinguishing among candidate projects. However, FTA has not established a timeline for this effort and, according to transit associations we spoke with, has not contacted them to publicize this long-term project. FTA officials also acknowledged that the proposed measure of congestion impacts, as part of the mobility improvements criterion, is an imperfect proxy, but is appropriate given difficulties in forecasting the impact of projects on nontransit users. Also, as noted earlier, FTA is collaborating with the Office of the Secretary to develop methods of measuring transit’s impact on highway users. Given local travel modeling limitations and SAFETEA-LU provisions, FTA officials told us that their proposed measures of congestion and environmental benefits are appropriate, respond to the intent of SAFETEA-LU, and minimize the burden on project sponsors. However, some experts and project sponsors told us that these proxy measures make the evaluation process more complicated without improving the relative ranking of projects. To appropriately balance the rigorous evaluation of projects with the complexity of the process, many experts and project sponsors said that FTA should include only those evaluation measures that help properly distinguish among projects. 
Furthermore, some experts and project sponsors we spoke with said FTA’s proposed measures of economic development are not appropriate because they will not capture projects’ impacts on local development patterns. They noted that the measures should be of predicted impacts and not of current conditions. Because local models do not reliably predict the complex interaction between transit projects and land use, some experts and project sponsors we spoke to said that FTA should rely on both quantitative and qualitative measures to evaluate projects’ predicted economic development impacts. For example, a project sponsor told us that local economic models along with surveys of local real estate experts can be used to help assess the future impact of a transit project on a corridor’s development. FTA officials told us that the proposed measures of economic development and land use are drawn from research identifying the causal factors for economic development and therefore are the most appropriate and reliable measures available given difficulties in forecasting the impact of transit projects on economic development and land use. FTA officials also noted that they have solicited feedback about measuring these benefits in the past and have not received any practical or appropriate suggestions. A second option to evaluate proposed transit projects is benefit-cost analysis. Benefit-cost analysis, a process that attempts to quantify and monetize benefits and costs accruing to society from an investment, can be used to identify investment alternatives with the greatest net benefit to the locality, region, or nation. This analysis examines the immediate and long-term effects of the investment for both users and nonusers. Because benefit-cost analysis can be used to systematically assess proposed investments, it may be a useful tool for evaluating New Starts projects. 
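To illustrate the mechanics of a net-benefit calculation, each year’s monetized benefits and costs are discounted back to the present and the discounted costs are subtracted from the discounted benefits. The following is a minimal sketch only; all dollar figures and the 7 percent discount rate are hypothetical assumptions for illustration, not values drawn from FTA or this report.

```python
# Minimal sketch of benefit-cost (net present value) analysis for a
# hypothetical transit project. All dollar figures (in millions) and the
# 7 percent discount rate are illustrative assumptions, not FTA values.

def present_value(stream, rate):
    """Discount an annual stream (year 0 first) to present value."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

def net_benefit(benefits, costs, rate=0.07):
    """Net present value: discounted benefits minus discounted costs."""
    return present_value(benefits, rate) - present_value(costs, rate)

# Hypothetical project: $500M capital cost up front, then 30 years of
# $60M/year in monetized benefits (e.g., travel-time savings) and
# $10M/year in operating costs.
benefits = [0.0] + [60.0] * 30
costs = [500.0] + [10.0] * 30
print(round(net_benefit(benefits, costs), 1))  # positive value => net benefits
```

In practice, the difficult step is monetizing inputs such as travel-time savings or emissions reductions, and, as discussed later in this report, distributional concerns can be addressed by weighting benefits to particular groups.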
Although using this approach to evaluate other federal investments is commonly advocated, FTA is currently prohibited from considering the dollar value of mobility improvements in evaluating projects, developing regulations, or carrying out any other duties. This prohibition has the practical effect of precluding FTA from conducting benefit-cost analysis of proposed transit projects. Despite this prohibition, benefit-cost analysis could help FTA better organize and evaluate information about proposed transit projects. Some experts we spoke to said that benefit-cost analysis, in conjunction with other qualitative evaluation measures, would be an ideal framework for evaluating New Starts projects. Most experts we spoke to agreed that, conceptually, benefit-cost analysis offers a full comparison of transit projects’ benefits and costs. One expert said that it is appropriate to have an evaluation process that produces detailed estimates of all benefits and costs so that projects with the highest net benefits can be identified and funded because the New Starts’ program budget is limited. In the past, we have encouraged the use of benefit-cost analysis in other areas, such as freight transportation, and noted the usefulness of the analysis for federal transportation decision makers. Some experts also maintained that most of the information necessary for benefit-cost analysis is already produced or available to project sponsors. Most experts we spoke to who advocated using benefit-cost analysis, however, maintained that the quantitative results of the analysis should be used in concert with qualitative measures to account for those factors that cannot be monetized. We have noted in the past that guidance on benefit-cost analysis advises decision makers to augment the results of the analysis with consideration of other factors, such as the equitable distribution of benefits. 
Executive Order 12893 directs agencies to assess benefits and costs of proposed infrastructure investments. In addition, we and others, including the Office of Management and Budget and DOT, have also identified benefit-cost analysis as a useful tool for integrating the social, environmental, economic, and other effects of investment alternatives and for helping transportation decision makers identify projects with the greatest net benefits. In this way, benefit-cost analysis could provide FTA with a systematic and comprehensive assessment of proposed projects’ impacts. In addition to the legal prohibition on FTA monetizing certain project benefits, there are many short-term challenges to implementing benefit-cost analysis. First, according to some experts we spoke to and our previous work, because local travel models produce outputs that become inputs for benefit-cost analysis, this approach to evaluating projects is constrained by the previously mentioned limitations of local travel models. Accordingly, some experts we spoke to maintained that the results of benefit-cost analysis would not be reliable. FTA officials also told us that many project sponsors do not have the technical capacity to conduct benefit-cost analysis. A second challenge identified by many experts and project sponsors is the difficulty of monetizing certain project benefits and considering the distribution of predicted benefits. For example, determining how to quantify and monetize reductions in emissions and travel time can be challenging. Although agency guidance exists, researchers do not always agree on the appropriate methods for valuing these impacts. Additionally, while benefit-cost analysis attempts to determine the net benefits of projects, it does not usually consider the distribution of those benefits across locations or populations or other equity concerns that may exist. 
As two experts told us, and as we have noted in the past, these distributional issues could be addressed within benefit-cost analysis by, for example, weighting the benefits and costs to a disadvantaged group differently than those to other segments of the population. However, it can be difficult in practice to determine the appropriate weights to assign to particular groups. Some experts and project sponsors said that FTA should not adopt this approach to evaluating projects because of these particular weaknesses. An FTA official told us that the agency does not support using benefit-cost analysis because of the challenges associated with monetizing benefits. FTA officials also maintained that their current evaluation process captures information similar to a formal benefit-cost analysis. They also said that their current process is appropriate because the goal of the New Starts evaluation process, given funding constraints, is to produce a relative ranking of proposed projects, not to identify all projects with positive net benefits. As we have previously stated, FTA’s emphasis on mobility improvements and reliance on certain proxy measures in the current and proposed evaluation processes may underestimate total project benefits, thereby impacting the relative ranking of projects. In contrast, benefit-cost analysis would attempt to monetize all benefits and costs, which experts told us would be a more comprehensive approach to evaluating projects. Finally, an FTA official we spoke with noted that the statutory prohibition on monetizing mobility improvements when evaluating projects prevents FTA from using benefit-cost analysis for the New Starts program. A third option to evaluate proposed transit projects is to evaluate them differently based on their primary goal. Experts and project sponsors told us that transit projects have different and multiple goals, from improving mobility to reducing greenhouse gas emissions. 
(See figure 5 for examples of transit project goals.) Some experts and project sponsors said that the New Starts program could focus more on facilitating local transit goals, such as economic development, by using different evaluation processes for projects with different goals. They advocated for options that would emphasize local goals because they said the practical effect of FTA’s current evaluation process is the exclusion of certain transit projects from funding consideration. More specifically, projects with the goal of fostering high-density development through the construction of transit stations often cannot achieve a successful ranking under the New Starts process because they generally are not predicted to create significant transit user benefits. According to one expert we spoke to, this goal-focused option could either involve different evaluation criteria for different types of projects or consistent criteria but different weights for the criteria based on the goal of the project. For example, projects with the primary goal of catalyzing and managing local economic development could be evaluated mainly on the basis of predicted economic development effects and the extent of transit-supportive policies and characteristics in the project corridor. Experts and project sponsors we spoke to said the main weakness of using different evaluation frameworks is that federal transit spending should reflect national priorities. More specifically, they said that because the New Starts program is funded by the federal government, projects should go through a national evaluation process designed to support those projects that serve particular national goals. One expert in particular said that FTA should retain its primary focus on funding projects that improve mobility and not on those designed to change the structure of cities. 
FTA officials also maintained that projects should not be evaluated differently because the New Starts program is a national program and, as such, should have an evaluation process that reflects national priorities and is consistently applied to all projects. Additionally, some experts we spoke to said that establishing defensible and appropriate measures for different evaluation processes could be difficult. Some experts also said that it may be hard to separate projects into different categories, given the fact that most projects have overlapping goals. Finally, some experts expressed concern that project sponsors would self-select into the evaluation process under which they score best. Such self-selection could increase the total number of projects qualifying for New Starts funding, while potentially decreasing the rigor of the selection process. FTA officials also expressed this concern because potential measures associated with certain goals, such as economic development, are relatively subjective. The officials maintained that it would be difficult to develop appropriate and defensible metrics to assess projects with goals other than mobility improvements. According to some experts we spoke to, a fourth option is to eliminate the evaluation process at the federal level and devolve this responsibility to the states. In particular, these experts suggested using a formula grant program to distribute New Starts funds, noting that this option would result in projects that better reflect local transit priorities. One expert we spoke to maintained that most transit projects only have local or regional benefits and no national impacts, and thus, should be controlled by states. A formula grant program in particular, according to some of those experts, could ensure that local areas build projects that meet their needs, as opposed to those that meet FTA’s expectations. 
According to experts we spoke to, shifting the federal investment in fixed guideway transit from a discretionary grant program to a formula grant program would devolve the evaluation of projects to the state or local levels. Formula grant programs allocate funds to states or their subdivisions in accordance with a distribution formula prescribed in law or regulation. Grant recipients may then allocate these funds to specific projects based on program eligibility guidelines. One expert we spoke to also suggested developing a large-scale transportation formula grant program that would include money for New Starts projects. Such a program could use performance-based indicators to make state allocations. Other experts we spoke to, however, said that establishing accountability mechanisms for project performance under a formula program could be difficult. Formula grant programs lodge decision power, and thus accountability, at the state and local levels to varying degrees and with varying constraints. The practical result of this, as we have noted in our past work, is often that program-specific performance information is not collected through program operations, which limits the ability of the federal government to hold grantees accountable. Some formula grant programs’ designs inherently limit the prospect of collecting program-wide performance data through program operations. As we have also previously reported, many current surface transportation projects funded through formula grant programs are not effective at addressing key transportation challenges. They generally do not address these challenges because the federal role is unclear and programs lack links to needs or performance. Furthermore, devolving the evaluation process for proposed transit projects would also eliminate the rigorous, national evaluation process FTA has developed—through the New Starts program—which we have previously recognized as a model for other programs. 
More specifically, we have noted that while the New Starts program requires project sponsors to justify their proposed transit projects on the basis of cost-effectiveness and other criteria, there are no similar federal requirements for analyses of highway project benefits because those projects are funded under a formula program. FTA’s New Starts program is often cited as a model for other federal transportation programs. FTA’s recommendations for funding are based on a rigorous examination of the benefits and costs of proposed projects, and Congress has generally followed FTA’s funding recommendations. However, there is a growing lack of confidence among Members of Congress and the transit industry about the process and the results it produces. For instance, FTA may be underestimating projects’ benefits because existing and proposed evaluation measures do not fully capture all potential benefits, such as benefits to highway users and environmental benefits. Capturing these other benefits potentially could change the relative rankings of proposed projects and FTA’s funding recommendations. According to FTA officials and some experts we interviewed, local models must be improved in order to develop and employ better measures of project impacts. These models produce the data necessary to measure potential benefits of transit projects, such as the projects’ impacts on highway congestion. However, due to technical limitations, current models cannot be counted on to accurately and reliably produce this information. Without improvements to these models, FTA will have to continue using proxies for certain benefits—which could lead to inaccurate assessments of projects’ benefits. Improving these models is a complex and costly endeavor—and will likely require support from all levels of the government. 
However, given that New Starts projects cost hundreds of millions of dollars, it seems prudent that FTA and other federal, state, and local agencies take steps to improve the models used to provide critical information to policymakers about the merits of the projects and ultimately, whether the projects should be implemented. Furthermore, the benefits of improving local travel models would extend beyond transit projects, as data from these models are used to inform regional transportation planning for other modes, as well. The upcoming reauthorization of all transportation programs, including the New Starts program, provides an opportunity to seek additional resources to improve local travel models. FTA is working to improve the New Starts evaluation process and, in particular, address the limitations associated with its current measures. For example, FTA has issued a request for proposals to develop approaches for predicting changes in highway user benefits, which could help eliminate the need to use crude proxies in the evaluation process and, therefore, more accurately measure project benefits. However, FTA has not established a timeline for completing this effort. Furthermore, FHWA has declined to participate in this effort, even though the results could benefit all kinds of transportation planning. In addition, although FTA has committed to work with environmental experts to improve the environmental benefits measures, FTA has not begun this effort, or established time frames for initiating or completing this effort. Given that there is general consensus that FTA’s existing and proposed environmental benefits measures do not meaningfully distinguish among projects, FTA should work expeditiously to improve these measures before having project sponsors develop and submit information that is not useful for evaluation and rating purposes. 
In addition, FTA has worked to incorporate qualitative information about certain project benefits in the evaluation process, which can help ensure that all project benefits are fully considered. However, the inclusion of qualitative information in the evaluation process does not negate the need for FTA to work to improve existing or develop new quantitative measures for the different evaluation criteria. There are a number of alternatives FTA can consider as it explores options for revamping the New Starts program. The NPRM presents one way to modify the existing evaluation framework, but there are also several different options that could serve as a means to determine which transit projects should receive New Starts funding. In particular, our past work and some of the experts we spoke to identified benefit-cost analysis as a viable tool that could provide a comprehensive analysis of projects’ costs and benefits over time. However, FTA’s ability to consider this approach is constrained by the current prohibition on placing dollar values on mobility improvements. Going forward, it is important that FTA have the flexibility to consider a wide range of approaches for evaluating transit projects, including benefit-cost analysis, as it seeks to improve the New Starts program. 
To improve the New Starts evaluation process and the measures of project benefits, which could change the relative ranking of projects, we recommend that the Secretary of Transportation take the following five actions: (1) Seek additional resources to improve local travel models in the next authorizing legislation; (2) Seek a legislative change to allow FTA to consider the dollar value of mobility improvements in evaluating projects, developing regulations, or carrying out any other duties; (3) Direct the Administrator of FTA to establish a timeline for issuing, awarding, and implementing the result of its request for proposals on short- and long-term approaches to measuring highway user benefits from transit improvements; (4) Direct the Administrator of FTA to establish a timeline for initiating and completing its longer-term effort to develop more robust measures of transit projects’ environmental benefits that are practically useful in distinguishing among proposed projects, including consultation with the transit community; and (5) Direct the Administrators of FTA and FHWA to collaborate in efforts to improve the consistency and reliability of local travel models, including the aforementioned request for proposals on approaches to measuring highway user benefits. We provided a draft of this report to DOT for review and comment. DOT generally agreed with the findings and recommendations in this report, and provided clarifying comments and technical corrections, which we incorporated, as appropriate. We are sending copies of this report to DOT and appropriate congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected] or (202) 512-2834. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Key contributors to this report are listed in appendix IV. The Federal Transit Administration (FTA) evaluated and rated 29 New Starts, Small Starts, and Very Small Starts projects for funding during the fiscal year 2009 evaluation cycle. FTA evaluated and rated 13 New Starts projects, 2 of which had pending full funding grant agreements (FFGA) and were recommended for funding. FTA did not recommend any new New Starts projects for funding this year. FTA also evaluated and rated 16 Small Starts and Very Small Starts projects and recommended 13 of these projects for funding. The fiscal year 2009 President’s budget requests $1.62 billion in New Starts funding, the majority of which is for 15 projects with existing FFGAs. FTA identified 16 New Starts projects during the fiscal year 2009 cycle, including 2 projects with pending FFGAs and 14 projects in preliminary engineering and final design. (See table 4 for a full list of these projects.) Of the 16 total projects, 13 projects were evaluated and rated using the newly instituted five-level scale, and 3 projects were statutorily exempt from being rated. Although FTA evaluated and rated fewer New Starts projects during the fiscal year 2009 cycle than in previous years, agency officials told us that this decrease does not indicate that there are fewer projects in the pipeline. They stated that the Annual Report only provides a snapshot of the total portfolio of projects in development or under construction. As a result, projects that have existing FFGAs or those that are currently in alternatives analysis are not included in this list. Since last year’s New Starts evaluation and rating cycle, four projects in the pipeline “graduated” from final design and received FFGAs, and one sponsor withdrew two projects from the process after changing the project type in both corridors from bus rapid transit to light rail rapid transit. 
FTA expects that the revised projects will return to the pipeline and progress toward an FFGA in the future. FTA officials also anticipate that several other projects that are currently in alternatives analysis will move into preliminary engineering at some point in the near future, at which point they will be evaluated and rated. FTA did not recommend any new projects for funding in the current evaluation cycle but did recommend funding for two projects with pending FFGAs: the West Corridor Light Rail Transit (LRT) in Denver and the University Link LRT Extension in Seattle. In its Annual Report, FTA states that both of these projects meet the New Starts criteria, are at an advanced stage of development with few remaining uncertainties, and are expected to be ready for an FFGA prior to or during fiscal year 2009. The total capital cost of these two projects is estimated to be $2.46 billion, with the total federal New Starts share for the West Corridor LRT at 44 percent and the University Link LRT extension at 42 percent of the total cost, respectively. FTA also recommended reserving $78 million in New Starts funding for final design activities for projects that will reach final design prior to the development of the fiscal year 2009 appropriations bill. Unlike in previous years, FTA has not specified which projects will be eligible for this funding or allocated a particular amount for any given project. According to the Annual Report and officials we spoke to at FTA, this approach will allow the agency to make “real time” funding recommendations as project uncertainties are mitigated and Congress makes final appropriations decisions. FTA does not expect that all of the projects in preliminary engineering will advance to final design in fiscal year 2009 (see table 4). 
FTA evaluated and rated 16 eligible Small Starts and Very Small Starts projects, including 12 projects that were advanced into project development during this cycle and 4 existing Small Starts projects that were not fully funded in fiscal year 2008. Ten projects received a “medium” rating and 6 projects received a “medium-high” rating. FTA recommended 13 of these 16 projects for funding. (See table 5 for a list of FTA’s funding recommendations for fiscal year 2009.) The total capital cost of the 13 projects that FTA recommended for funding is estimated to be $771.6 million, and the total Small Starts, including Very Small Starts, share is expected to be about $451.6 million. Most of these projects are proposed to be funded under a multiyear Project Construction Grant Agreement. However, three projects, which have requested less than $25 million in total Small Starts funding, are proposed in this budget to be funded under one-year capital grants. The administration’s fiscal year 2009 budget proposal recommends that $1.62 billion be made available for the New Starts program. This amount is $51.7 million more than the program’s fiscal year 2008 appropriation. Figure 6 illustrates the planned uses of the administration’s proposed request for the New Starts fiscal year 2009 budget, including the following:

- $1,146.62 million would be allocated among the 15 projects with existing FFGAs;
- $160 million would be allocated among 2 projects with pending FFGAs;
- $78 million would be allocated to projects that will reach final design before the end of this fiscal year;
- $200 million would be allocated for Small Starts projects;
- $20 million for ferry capital projects (Alaska and Hawaii) and Denali; and
- $16.2 million for oversight activities. 
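The proposed allocations can be reconciled with the total request by simple addition; a sketch using the figures stated above (in millions of dollars):

```python
# Sum the proposed fiscal year 2009 New Starts allocations (millions of
# dollars, as stated in the Annual Report figures above) and compare the
# total with the $1.62 billion budget request.
allocations = {
    "15 projects with existing FFGAs": 1146.62,
    "2 projects with pending FFGAs": 160.0,
    "projects reaching final design": 78.0,
    "Small Starts projects": 200.0,
    "ferry capital projects and Denali": 20.0,
    "oversight activities": 16.2,
}
total = sum(allocations.values())
print(f"${total:,.2f} million")  # prints $1,620.82 million, i.e., roughly $1.62 billion
```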
To address our objectives, we reviewed previous GAO reports, FTA’s existing and proposed New Starts policy guidance, FTA’s August 2007 Notice of Proposed Rulemaking (NPRM) for New Starts, and the provisions of SAFETEA-LU that address the New Starts program to identify the information captured by the current and proposed New Starts project justification criteria. We also reviewed various pieces of legislation, including SAFETEA-LU and New Starts authorizing legislation, along with legislative history, to determine the extent to which New Starts program goals have been expressed or defined in law. Furthermore, we reviewed FTA’s Annual Report on New Starts for fiscal year 2009 to determine the number of projects evaluated, rated, and recommended for funding, the amount of funding requested for these projects, and the total costs of proposed projects. We also examined a sample of public comments submitted in response to the proposed revisions to FTA’s current evaluation process, as described in the NPRM. First, we reviewed all 104 comments submitted to the docket to understand the range of perspectives on the proposed revisions described in the NPRM. Second, following this review, we conducted a more in-depth review of 13 comments submitted by (1) project sponsors we interviewed; (2) professional and advocacy groups we interviewed; and (3) organizations submitting extensive and relevant comments, as determined by team members. Third, upon completion of this analysis, we also reviewed 27 of the remaining 91 comments. After sorting the remaining comments, we randomly selected comments in proportion to the total number of comments received by (1) geographic diversity; (2) relevance of comment to FTA’s proposals; and (3) diversity of opinion. We categorized and analyzed comments to determine the frequency of particular perspectives and opinions about FTA’s proposed revisions, as well as other options for evaluating projects. 
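The proportional selection described above resembles stratified sampling with proportional allocation: each stratum contributes a share of the sample equal to its share of the population. The sketch below is illustrative only; the strata and counts are hypothetical, not GAO's actual categories, and simple per-stratum rounding can in general leave the total slightly off the target.

```python
import random

# Sketch of stratified random sampling with proportional allocation,
# similar in spirit to drawing 27 of the 91 remaining docket comments in
# proportion to strata such as geography. Strata here are hypothetical.
def proportional_sample(strata, sample_size, seed=0):
    """Draw from each stratum in proportion to its share of the total."""
    rng = random.Random(seed)
    total = sum(len(items) for items in strata.values())
    sample = []
    for name, items in strata.items():
        k = round(sample_size * len(items) / total)  # proportional quota
        sample.extend(rng.sample(items, min(k, len(items))))
    return sample

# Hypothetical comment pool of 91, stratified by region.
strata = {
    "Northeast": [f"NE-{i}" for i in range(30)],
    "Midwest": [f"MW-{i}" for i in range(20)],
    "South": [f"S-{i}" for i in range(25)],
    "West": [f"W-{i}" for i in range(16)],
}
picked = proportional_sample(strata, sample_size=27)
print(len(picked))  # 27 comments, drawn in proportion to stratum size
```

As the report notes, a sample selected this way is still a nonprobability (judgmental) design overall, so results cannot be generalized to all comments.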
Because the comments were selected as a nonprobability sample, the results cannot be generalized to all comments. We interviewed FTA and transit industry officials to get an in-depth assessment of the information captured by the current and proposed New Starts project justification measures as well as how FTA’s current evaluation process influences projects’ cost, schedule, and design. We also interviewed FTA officials to discuss how the design and use of these measures impacts the calculation of project benefits, how the proposed revisions respond to SAFETEA-LU and past concerns voiced by the transit industry, and what other options they have considered to measure different project justification criteria. To learn more about the ongoing rulemaking process, we also attended New Starts Listening Sessions in Washington, D.C., and Charlotte, North Carolina, in October 2007. We also attended FTA’s expert panel discussion to identify approaches for incorporating land use and economic development into the New Starts evaluation framework. In addition, we interviewed three industry associations (that represent project sponsors) that participate closely in these programs: the American Public Transportation Association, New Starts Working Group, and Reconnecting America. We also interviewed 11 project sponsors, including both Small Starts projects in the project development phase and New Starts projects in the preliminary engineering or final design stages for the fiscal year 2009 evaluation cycle. We conducted semistructured interviews with the project sponsors to gather additional information on FTA’s current evaluation process; how FTA’s evaluation measures influence projects’ cost, schedule, and design; and other options for evaluating proposed transit projects. 
We selected these projects based on the following criteria: (1) projects seeking different types of funding (e.g., New Starts or Small Starts); (2) projects involving different modes of transit (e.g., rail, light rail, or bus); (3) projects in different stages of project development (e.g., preliminary engineering or final design); (4) projects of different sizes (based on the total capital cost and ridership projections); and (5) projects from different geographic areas. Because the 11 projects were selected as a nonprobability sample, the results cannot be generalized to all projects. Table 6 lists the New Starts and Small Starts project sponsors we interviewed for our review. To further address our objectives, we interviewed a variety of transportation experts and consultants to obtain their perspectives on FTA’s current evaluation process and other options for evaluating proposed transit projects. We used a semistructured interview guide and followed up by e-mail to collect comparable information from all experts. We selected an initial group of transportation experts to interview based on their past participation in GAO and FTA expert panels on similar topics and their research on transit issues, including the New Starts program. During these initial interviews, we solicited recommendations of other experts we should interview. Using this snowballing technique, we selected the most frequently recommended experts for interviews, as well as those with the most relevant expertise. Table 7 lists the experts we interviewed. Following the interviews, team members categorized and analyzed the experts’ comments to determine the frequency of particular perspectives about FTA’s current evaluation process and other options for evaluating projects. 
To supplement the perspectives of these experts, we also interviewed other scholars and consultants with specific knowledge of the New Starts project evaluation process, including Don Emerson, Principal Consultant, Parsons Brinckerhoff Consulting; Laurie Hussey, Consultant, Cambridge Systematics, Inc.; Terry Moore, Planning Director, Land-Use and Transportation Planning, ECONorthwest; Kenneth Orski, Editor and Publisher, Innovation Briefs; Randy Pozdena, Senior Economist, Monetary Policy and Industrial Organization, ECONorthwest; Michael Replogle, Transportation Director, Environmental Defense; and Ronald Utt, Herbert and Joyce Morgan Senior Research Fellow, Heritage Foundation. We also reviewed academic and professional literature about the impact of public transit on mobility, economic development, and the environment. The purpose of our literature review was to assess the accuracy of particular assertions made by experts, project sponsors, and government officials we interviewed. Our literature review included articles identified through searches of research databases and the Internet, as well as suggestions of experts we interviewed. Team members analyzed and summarized the evidence from these articles in consultation with a GAO methodologist and economist. We conducted this performance audit from October 2007 to June 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The transportation system user benefits (TSUB) measure is intended to capture all the significant user benefits of a proposed transit project. 
The measure includes predicted travel time savings and accounts for other benefits by quantifying the effect of nontravel time factors that influence travel behavior. The unit of the TSUB measure is equivalent to minutes of in-vehicle travel time. Project sponsors use local travel demand models to forecast ridership and simulate trips taken in 2030, which is the forecast year used for estimating benefits over time, for two alternatives. The baseline alternative assumes low-cost improvements to the transportation network, while the second alternative (the “build alternative”) assumes the proposed New Starts transit project (e.g., fixed guideway transit infrastructure investment) is constructed. Travel time savings from a proposed transit project can result from a shorter wait, a shorter walk, or shorter in-vehicle times. To adequately account for the time saved for each of these, the predicted travel time savings for wait and walk times are weighted by a factor of two or three, compared to in-vehicle time savings, because behavioral surveys have shown that travelers perceive these out-of-vehicle times as more onerous. The exact weighting factor is usually derived from local travel models calibrated based on local travel surveys. Other factors beyond travel time—namely, travel time reliability and the convenience and comfort of the travel mode—are also incorporated into the measure of user benefits through what is commonly referred to as a modal constant. The modal constant varies by locality based on the results of the model’s calibration. Local models are generally calibrated by adjusting the modal constant until the model accurately predicts current travel patterns. Once a model is calibrated with a particular constant, it is used to forecast future travel times, and thus travel time savings, for the baseline and build alternatives. These travel time savings, reflecting both actual time savings and nontravel time factors, are referred to as user benefits. 
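The weighting and modal-constant logic described above can be expressed as a simple calculation. The specific weights, trip times, and modal constant below are illustrative assumptions; actual values are derived from each locality's calibrated travel demand model:

```python
# Illustrative sketch of the TSUB perceived-time calculation, expressed
# in equivalent in-vehicle minutes. The weights, trip times, and modal
# constant are hypothetical; actual values come from each locality's
# calibrated travel demand model.

WAIT_WEIGHT = 2.0  # behavioral surveys find waiting roughly 2-3x as onerous
WALK_WEIGHT = 2.0  # out-of-vehicle walk time is weighted similarly

def perceived_time(in_vehicle, wait, walk, modal_constant=0.0):
    """Perceived travel time in equivalent in-vehicle minutes.

    modal_constant captures nontravel-time factors (reliability,
    comfort, convenience); a negative value favors the mode.
    """
    return in_vehicle + WAIT_WEIGHT * wait + WALK_WEIGHT * walk + modal_constant

# One trip under the baseline (bus) and build (light rail) alternatives
baseline = perceived_time(in_vehicle=30, wait=10, walk=5)                   # 60.0
build = perceived_time(in_vehicle=25, wait=5, walk=5, modal_constant=-3.0)  # 42.0

user_benefit = baseline - build  # 18.0 equivalent in-vehicle minutes saved
```

In this sketch, the difference in perceived travel time between the two alternatives, not just the in-vehicle time saved, is what counts as the user benefit.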
The TSUB measure values user benefits differently for different individuals. More specifically, it values the benefits of predicted users of the project differently based on the travel mode they are switching from (e.g., automobile or transit). Behavioral surveys have shown that automobile users react differently to the user benefits created by a transit project. Some require very small reductions in transit travel time to change their travel mode from automobile to transit (i.e., the build alternative) because they are relatively indifferent between the existing transit option and automobile travel. These travelers receive benefits, which economists call gains in consumer surplus, because the reduction in transit travel times is greater than what is required to induce their change in travel mode. Others require the transit project’s full measure of time savings before they perceive any advantage to transit and change their mode. These travelers, even though they choose to switch modes, receive little gain in consumer surplus. In between these two kinds of travelers are those with a range of preferences. Accordingly, the “average” traveler that changes to the proposed transit project from automobile travel requires half of the time savings created by the project to change, and thus receives half of the project’s benefits as a gain in consumer surplus. For example, if a transit project is introduced that makes travel in a particular corridor 10 minutes faster than driving an automobile, the average benefit to an automobile user switching to transit will be 5 minutes because some will require time savings of less than 5 minutes to change modes and some will require more. 
To account for this variation, FTA divides the total predicted time savings for new transit riders by two when calculating user benefits because, on average, only half of the benefits are received by those travelers as gains in consumer surplus; the other half is needed to induce the change in mode and does not represent a net benefit gain. Alternatively, individuals who switch transit modes—from bus in the baseline alternative to a new light rail, for example—would get the full 10-minute benefit of the switch because no benefit is needed to induce a mode shift since they are already transit users. These transit users take advantage of the full travel time savings. Transit projects can also create benefits for those who do not choose to use them. For example, a transit project that reduces the number of automobile travelers may reduce overall highway congestion. FTA does not currently credit proposed projects with predicted benefits to highway users because (1) FTA has found that most travel models around the country do not predict plausible changes in highway speeds resulting from transit improvements and (2) the absence of a consistent method for highway speed prediction leads directly to potentially large differences in the predicted benefits of transit projects with similar impacts. To account for benefits to highway users, such as reduced congestion as the result of more transit users, FTA raises the breakpoints for the cost-effectiveness criterion by 20 percent, since only transit user benefits are used as the denominator of the cost-effectiveness measure. After accounting for factors that influence travel behavior as noted above, travel times are compared between the baseline and build alternatives to produce the estimate of user benefits. That measure of user benefits, TSUB, becomes the denominator in the calculation of FTA's cost-effectiveness criterion.
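The division by two for new riders and the 20 percent breakpoint adjustment can be illustrated with hypothetical figures:

```python
# Illustrative sketch of the division by two ("rule of half") for riders
# new to transit, and of the 20 percent breakpoint adjustment. All
# figures are hypothetical.

def aggregate_user_benefits(minutes_saved, new_riders, existing_riders):
    """Aggregate user benefits in equivalent in-vehicle minutes.

    Riders switching from automobile are credited with half the time
    savings on average; existing transit riders who shift modes
    (e.g., bus to new light rail) receive the full savings.
    """
    return 0.5 * minutes_saved * new_riders + minutes_saved * existing_riders

tsub = aggregate_user_benefits(minutes_saved=10, new_riders=2_000,
                               existing_riders=8_000)
# 0.5 * 10 * 2,000 + 10 * 8,000 = 90,000 minutes of user benefit

# Rather than adding hard-to-model highway-user benefits to TSUB, FTA
# raises the cost-effectiveness rating breakpoints by 20 percent.
hypothetical_breakpoint = 24.0  # dollars per hour of user benefit (assumed)
adjusted_breakpoint = hypothetical_breakpoint * 1.2
```

The breakpoint value itself is an assumption for illustration; the point is that the adjustment is applied to the rating thresholds rather than to the TSUB denominator.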
In addition to the individual named above, Nikki Clowers, Assistant Director; Vidhya Ananthakrishnan; Kyle Browning; Lauren Calhoun; Jay Cherlow; David Hooper; Delwen Jones; Sara Ann Moessbauer; Josh Ormond; and Susan Zimmerman made key contributions to this report.

Through the New Starts program, the Federal Transit Administration (FTA) evaluates and recommends new fixed guideway transit projects for funding using the evaluation criteria identified in law. In August 2007, FTA issued a Notice of Proposed Rulemaking (NPRM), in part, to incorporate certain provisions within the Safe, Accountable, Flexible, Efficient, Transportation Equity Act: A Legacy for Users (SAFETEA-LU) into the evaluation process. SAFETEA-LU requires GAO to annually review FTA's New Starts process. This report discusses (1) the information captured by New Starts project justification criteria, (2) challenges FTA faces as it works to improve the New Starts program, and (3) options for evaluating New Starts projects. To address these objectives, GAO reviewed statutes, FTA guidance and regulations governing the New Starts program, and interviewed experts, project sponsors, and Department of Transportation (DOT) officials. FTA primarily uses cost-effectiveness and land use criteria to evaluate New Starts projects, but concerns have been raised about the extent to which the measures for these criteria capture total project benefits. FTA's current transportation system user benefits measure, which assesses a project's cost effectiveness, focuses on how proposed projects will improve mobility by reducing the real and perceived cost of travel. FTA told GAO that such mobility improvements are a critical goal of all transit projects. While the literature and most experts that GAO consulted with generally agree with this assertion, they also raised concerns that certain benefits are not captured.
As a result, FTA may be underestimating transit projects' total benefits, but it is unclear to what extent this affects FTA's evaluation and rating process. FTA officials acknowledged many of these limitations but noted that resolving these issues would be difficult without a substantial investment of resources by all levels of government to improve and update local travel models. FTA faces several systemic challenges to improving the New Starts program, including addressing multiple program goals, limitations in local travel models, the need to maintain the rigor while minimizing the complexity of the evaluation process, and developing clear and consistent guidance for incorporating qualitative information. The evaluation criteria identified in the law reflect multiple goals for the program, which has led to varying expectations between FTA and project sponsors about what types of projects should be funded. Also, models that generate local travel demand forecasts are limited and may not provide all of the information needed to properly evaluate transit projects. FTA has taken steps to mitigate the modeling limitations, such as incorporating proxy measures to account for certain project impacts and developing a request for proposals to improve local travel models so that they can better predict changes in highway user benefits. However, according to FTA officials, the request for proposals is only a first step in improving local travel models, and additional resources are needed. Experts and project sponsors GAO interviewed discussed different options for evaluating proposed transit projects but identified significant limitations of each option. One option is to revise the current New Starts evaluation process as proposed by FTA in the August 2007 NPRM. While some experts GAO spoke to appreciated the rigor of the current evaluation process, others noted that the NPRM may still underestimate total project benefits.
For example, FTA's measure of mobility improvements does not account for benefits accruing to highway users, and its measures of environmental benefits may not properly distinguish among projects. Experts also discussed other options for evaluating proposed transit projects, including benefit-cost analysis. Unlike FTA's current evaluation process, benefit-cost analysis would attempt to monetize all benefits and costs, which experts told GAO would be a more comprehensive approach to evaluating projects. FTA is currently prohibited by statute from considering the dollar value of mobility improvements in evaluating projects. |
The availability to decision makers of timely, reliable, and complete data about the nation’s waters has significant environmental and financial implications. Water quality data, for example, are critical for determining which waters do not meet states’ standards and must, therefore, be targeted for potentially expensive cleanup. Similarly, decision makers need reliable and comprehensive data on the quantity of the nation’s water resources to support increasingly important—and contentious—decisions about how to allocate limited water resources among states and among a variety of competing uses. GAO and others, however, have documented shortages in the data available to make such decisions. At the same time, a large number of public and private organizations collect this kind of information—raising questions as to whether more efficient coordination of these data collection efforts can result in more data available for informed decision making. Under the Clean Water Act, states have primary responsibility for implementing programs to manage water quality. Their key responsibilities include establishing water quality standards to achieve designated uses (the purposes for which a given body of water is intended to serve), assessing whether the quality of their waters meets states’ water quality standards, and developing and implementing cleanup plans for waters that do not meet standards. Monitoring information on water quality is the linchpin that allows states to perform these responsibilities. States generally monitor water quality directly, but frequently supplement these data with data collected by federal agencies, volunteer groups, and other entities. Monitoring data can include information about the presence of chemicals such as chlorine, physical characteristics such as temperature, and biological characteristics such as the health and abundance of fish and other aquatic species. 
Figure 1 shows how monitoring water quality is essential to identifying water quality problems and determining whether actions to restore water quality are successful. As shown in figure 1, states compare monitoring data with their water quality standards. If a state’s assessment of a body of water indicates that it does not meet the standards—for example, if it has levels of chlorine that are too high to support aquatic life—then the body of water is considered as not supporting its intended use of aquatic life. In such cases, states are required, under section 303(d) of the act, to identify and list waters for which technology-based effluent limitations are not sufficient to meet water quality standards and for which pollutants need to be reduced. EPA must approve or disapprove the states’ lists. In developing their lists of impaired waters, states must use all existing and readily available water quality-related data to determine if a water body is impaired and identify the specific pollutant(s) causing impairment. Subsequently, states must develop a total maximum daily load (TMDL), as necessary, for each of the pollutants affecting each impaired body of water. TMDLs are used to restore water quality by identifying how much pollution a body of water can receive and still meet standards and then reducing the amount of pollution entering the water to that level. While states’ use of water quality data is critical to meeting the objectives of the Clean Water Act, other organizations also rely heavily on water quality data for a variety of purposes. The Army Corps of Engineers, for example, uses these data for a variety of reasons, including regulating water projects and issuing permits under section 404 of the act for the discharge of dredge and fill materials into navigable waters. 
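The TMDL arithmetic described above can be sketched with hypothetical loads. The apportionment among wasteload allocations, load allocations, and a margin of safety follows EPA's general TMDL framework rather than figures from this report:

```python
# Illustrative TMDL arithmetic, with hypothetical loads in lbs/day.
# Under EPA's general framework, a TMDL apportions the load a water
# body can receive and still meet standards among point-source
# wasteload allocations (WLA), nonpoint-source load allocations (LA),
# and a margin of safety (MOS): TMDL = sum(WLA) + sum(LA) + MOS.

tmdl = 1_000.0                  # assimilative capacity, lbs/day (assumed)
margin_of_safety = 0.10 * tmdl  # reserve for uncertainty in the estimates

allocatable = tmdl - margin_of_safety  # load left to divide among sources
wasteload_allocations = {"treatment_plant_a": 400.0,
                         "treatment_plant_b": 200.0}  # point sources
load_allocation = allocatable - sum(wasteload_allocations.values())

current_load = 1_500.0                    # hypothetical existing load
required_reduction = current_load - tmdl  # pollution that must be cut
```

The source names and load values are hypothetical; the sketch shows only how a fixed assimilative capacity translates into a required reduction from current pollution levels.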
Federal land management agencies such as the Department of the Interior’s Fish and Wildlife Service, National Park Service, and Bureau of Land Management and the Department of Agriculture’s Forest Service rely upon these data to fulfill their responsibilities to protect and restore aquatic resources on federal lands. These agencies also use these data to fulfill their responsibilities under various laws, such as the protection of critical habitat for plants and animals under the Endangered Species Act. In addition to these federal agencies, numerous public and private organizations at the local level rely on water quality data to ensure that public health and environmental goals are protected. Federal, state, local, tribal, and private organizations also rely heavily on water quantity data to fulfill critical responsibilities in ensuring an adequate water supply to meet competing needs. States are primarily responsible for governing the allocation and use of water in accordance with the laws developed by their state and interstate compacts—agreements that address water allocation, quality, and other issues on bodies of water that cross state borders. Key state responsibilities in complying with these compacts and laws include administering water rights to various users, allocating water in accordance with these water rights, maintaining instream flow requirements for habitat purposes, and enforcing the decrees and water laws of the state. To fulfill these responsibilities, states need water availability data, such as streamflow and snowpack data, to quantify how much water is and will be available for allocation, and water use data, including withdrawal and return flow data, to determine how much water is being consumed. They obtain these data mostly through the efforts of others, such as federal agencies and municipalities, although a few states also conduct their own monitoring. 
Federal agencies support states in their efforts to govern the allocation and use of water through many activities. Agencies, such as the Department of the Interior, assist states in developing, implementing, and enforcing interstate compacts; the U.S. Geological Survey, the National Oceanic and Atmospheric Administration’s (NOAA) National Weather Service, and Natural Resources Conservation Service, among others, collect and share information such as surface water, rainfall, and snowpack data, which help forecast water supply; and the Army Corps of Engineers and Bureau of Reclamation construct, operate, and maintain dams, reservoirs, and water distribution facilities to help meet the needs of water users, among other activities. Federal agencies also need data to support their own varying objectives on federal lands. Agencies responsible for managing natural resources—such as the Forest Service, Bureau of Land Management, Fish and Wildlife Service, and National Park Service—construct and/or maintain water storage and distribution facilities on their lands to provide water for uses such as visitor services, recreation, habitat, and flood control. These agencies also often collect water data or conduct water resources investigations in support of their own responsibilities, such as collection of supplemental streamgage information to assess habitat and recreational conditions. Additionally, numerous federal natural resources management agencies may become involved (e.g., by geography or other factors) in some aspect(s) of tribal water interests. Federal natural resources management agency policies generally include provisions to protect and support tribal water interests, in cooperation with the Bureau of Indian Affairs and the tribes. Other agencies needing water quantity data include local, regional, and interstate water authorities, as well as private firms that own and operate water resources systems. 
Scientists and recreational water users are also heavy users of water quantity data. These groups use data to, among other things, evaluate current water supplies and plan for future supplies; forecast floods and droughts; operate reservoirs for hydropower, flood control, or water supplies; navigate rivers and streams; and safely fish, canoe, kayak, or raft. Concerns over both water quality and water quantity often come together at the "watershed" level. As illustrated in figure 2, a watershed is an area that drains to a common waterway, such as a stream, lake, estuary, wetland, or ocean. Watersheds come in all shapes and sizes, and often cross county, state, and national boundaries. Depending on its scale, a watershed may refer to large or small river basins, sub-basins, tributary basins, or smaller hydrological units or drainage areas. Many federal agencies have long supported a watershed approach as the best way to manage the nation's water resources. Army Corps of Engineers officials, for example, noted that the agency has been working in the watershed context and engaged in watershed-level planning and management for many years. They noted further that watershed analysis has been the "cornerstone" of planning and environmental review efforts for major Corps projects. Also, in a December 2002 memorandum, the EPA Assistant Administrator for Water reaffirmed the agency's commitment to the watershed approach, noting that by focusing multistakeholder efforts within hydrologically defined boundaries to protect and restore our aquatic resources and ecosystems, the watershed approach "offers the most cost-effective opportunity to tackle today's challenges" in meeting the nation's water needs. As the memorandum notes, the value in this approach is in taking a holistic approach to the water resource in a way that brings in the full range of federal, state, local, and private parties with a stake in the resource.
Importantly, the watershed approach also allows for the identification and prioritization of problems affecting the resource and steps to address them. This is important because different watersheds may be affected by significantly different natural conditions and pollution problems. Moreover, even where watersheds are affected by similar pollutants, the causes of their pollution problems—and the steps needed to deal with them—can be quite different. For example, in the case of two watersheds affected by excessive levels of nitrogen, one may need to reduce discharges from wastewater treatment plants and other “point” sources, while the other may need to address nitrogen sources emanating from agricultural use. Moreover, water officials must also consider water availability issues, since the amount of water flowing through the watershed affects the ability of the watershed to assimilate the pollutant. These critical determinations, however, can only be made and defended if reliable and comprehensive data are available on the quality and quantity of the water resource and on the ecological and other factors that affect them. Unfortunately, the key data needed to support critical water management decisions are often incomplete and unreliable. According to the best available data from EPA, only about one-fifth of the nation’s total rivers and stream miles have been assessed to determine their compliance with states’ water quality standards. More generally, we reported in March 2000 that few of the 50 states had a majority of the data they need to make key water quality determinations, such as which of their waters do not meet state standards and what are their most significant sources of pollution. This apparent shortage of such data, however, belies the fact that numerous organizations do in fact collect this kind of information. 
Many federal agencies as well as a wide variety of other organizations at the regional, state, and local levels collect water quality and/or water quantity data. Consequently, questions have been raised as to whether better coordination among these numerous organizations in their data collection activities can provide decision makers with more of the vital information they need to make informed and defensible decisions on critical water-related issues. The Chairman of the Subcommittee on Water Resources and Environment, House Committee on Transportation and Infrastructure, asked GAO to address a number of issues concerning the water data that various organizations collect, and the degree to which their data collection efforts are coordinated with each other. Specifically, we were asked to determine (1) the key entities that collect water quality and water quantity data, including the types of data they collect, how they store their data, and how entities can access the data; and (2) the extent to which these entities coordinate their water quality and water quantity data collection efforts. To address the first objective, we identified and surveyed key federal agencies that collect water quality and/or water quantity data: the Department of Agriculture's Agricultural Research Service, Cooperative State Research, Education and Extension Service, Natural Resources Conservation Service, and Forest Service; the Department of Commerce's National Oceanic and Atmospheric Administration's National Marine Fisheries Service, National Weather Service, and National Ocean Service; the Department of Defense's Army Corps of Engineers; the Environmental Protection Agency; the Department of Energy's Bonneville Power Administration; the Department of the Interior's Bureau of Land Management, Bureau of Reclamation, Fish and Wildlife Service, U.S. Geological Survey, and National Park Service; and the Tennessee Valley Authority.
Though not an exhaustive list of all federal agencies collecting water data, these key agencies were identified through discussions with federal water officials, identification of member agencies on the National Water Quality Monitoring Council, and EPA's Guide to Federal Water Quality Programs and Information. As appropriate, we obtained separate information from different units within an agency. In each case, we obtained information on the types of data being collected, the methods by which the agencies store the data they collect, and the manner in which the data could be accessed by other parties. To obtain insights on data collection by states, local governments, and other organizations, we conducted site visits to three states—Colorado, Mississippi, and Virginia. The states were chosen on the basis of the diversity of entities involved in the collection of data in these states, geographic diversity, and their experiences in coordinating watershed data. During these site visits, we interviewed representatives of federal, state, and local agencies; watershed management groups; and members of academia, industry, environmental organizations, and volunteer monitoring groups. We also used the survey of federal agencies and the site visits to address the second objective, determining the extent to which data collectors coordinate their data collection efforts. Specifically, a number of questions in our federal agency survey addressed the extent to which data collection activities were coordinated with other federal agencies, as well as other entities. We also sought opinions on the most useful steps that could be taken to improve coordination. We supplemented these contacts by interviewing members of federal and state coordinating organizations, most notably the National Water Quality Monitoring Council and its state counterparts in Colorado, Maryland, and Virginia.
In these instances, we sought information about past and ongoing efforts to coordinate data collection, seeking in particular to better understand the barriers these groups face in their coordination efforts. We also sought information about data coordination from other key organizations with particular knowledge about this issue, such as the Association of State and Interstate Water Pollution Control Administrators and the Advisory Committee on Water Information. As agreed with the Chairman’s office, in addressing the second objective, we also sought information on efforts to allow for the integration of data from separate collection efforts, so that direct comparisons can be made in a way that maximizes the usefulness of these data. This inquiry addressed, for example, the steps that agencies have taken or attempted to take to allow data users to integrate data from their agency with data from other sources. We examined this issue in our interviews with the full range of data users and data collectors contacted during our study. We also interviewed database managers from the key agencies that manage and store water data (most notably EPA and the U.S. Geological Survey) to identify current barriers to data integration and the steps needed to achieve better integration. We conducted our work from March 2003 through May 2004 in accordance with generally accepted government auditing standards. GAO contacts and staff acknowledgments are listed in appendix VI. Hundreds of entities collect water quality data, while fewer entities collect most of the available water quantity data. For water quality data, at least 15 federal agencies collect a wide variety of these data on a nationwide, regional, or project-specific basis. At the state level, multiple state agencies collect water quality data, including environmental, agricultural, conservation, health, and forestry agencies, and use these data to comply with federal regulations and to restore and protect water bodies. 
In addition, many local governments, volunteer monitoring groups, industries, members of academia, and others collect water quality data. Some water quality data are stored in two large national databases operated by the Environmental Protection Agency (EPA) and the U.S. Geological Survey; these databases are available through the Internet. However, many data collectors store their water quality data on a project-specific basis, such as in a database for a single research project, and these data generally are available, by request, only to those who know about the agency’s projects. While many entities collect water quality data, a small number of key federal agencies are responsible for collecting the largest share of the water quantity data collected nationwide. The U.S. Geological Survey collects streamgage data nationwide, NOAA’s National Weather Service collects precipitation data at over 10,000 locations nationwide, and the Department of Agriculture’s Natural Resources Conservation Service maintains an extensive automated system to collect snowpack data. These three agencies store their water quantity data in national databases that are accessible through the Internet. In addition, the Army Corps of Engineers funds the collection of considerable amounts of water quantity data. Other federal agencies, such as the Fish and Wildlife Service, also collect water quantity data, but generally on a project-specific basis with data available by request only. Some state agencies also collect water quantity data to better understand water availability and water use. At least 15 federal agencies, as well as state agencies, local governments, volunteer monitoring groups, industry groups, members of academia and others, collect water quality data. These data generally provide information on chemical, physical, or biological conditions of waters. The scope of the data collected varies widely—from national programs, such as the U.S. 
Geological Survey’s National Water Quality Assessment Program, to site-specific research projects, such as the Department of Agriculture’s testing of the effects of agricultural practices on water quality. Different entities also vary in how they store data and allow others to access them. In some cases, water quality data are stored in databases that are accessible via the Internet. In many cases, however, water quality data are stored on a project-specific basis and can be accessed only by request. The Clean Water Act establishes goals for attaining water quality, as measured by the biological, chemical, and physical conditions of waters. EPA guidelines discuss the different types of monitoring tests in each of these areas—each of which yields data about particular aspects of bodies of water. Biological monitoring measures the health of aquatic communities and includes a variety of techniques, such as assessing species’ health and abundance. Physical monitoring tests the physical characteristics of bodies of water, such as temperature and the amount of suspended solids in the water. Chemical monitoring tests for chemicals that may be present, such as chlorine or ammonia, and metals, such as mercury. These monitoring types and the parameters they measure are described in figure 3. A number of federal agencies and subagencies collect, or fund the collection of, considerable amounts of water quality data. GAO surveyed 15 key federal agencies that collect water quality data on a wide variety of parameters, including the Cooperative State Research, Education, and Extension Service; the National Oceanic and Atmospheric Administration; and the U.S. Geological Survey. We asked officials from these agencies to report on the specific chemical, physical, and biological parameters—as listed in figure 3—that their agencies collect. Each of the agencies reported that it collects data on all, or almost all, of the parameters shown in figure 3.
Although these parameters are collected widely across the agencies, we found that the geographical scope of agency data collection for each of these parameters varies considerably. The U.S. Geological Survey operates several large national programs, including the National Stream Quality Accounting Network and the National Water Quality Assessment Program. These programs describe and provide an understanding of water quality in major river basins and aquifer systems, as well as in small watersheds, and cover about two-thirds of the land area of the conterminous United States. Many federal and state agencies and local groups rely upon data collected by the U.S. Geological Survey for watershed management activities. The Army Corps of Engineers also collects water quality data on a broad geographical scale at many of its approximately 700 water projects. These projects are operated primarily to facilitate navigation, reduce flood or storm damages, provide water supply storage, or generate hydropower. The Corps also collects a considerable amount of water quality data for planning and design purposes, generally to understand impacts of projects in advance of their implementation. For example, before entering into a dredging cycle, the Corps collects short-term data to understand what pollutants will be released into a water body. Similarly, the Corps collects specific water quality data in response to Section 404 permit requests. In general, the Corps collects water quality data to address environmental issues, such as sediment and water quality for fish and wildlife.
In addition, the Department of Energy’s Bonneville Power Administration (BPA) collects water quality data in conjunction with some of the hundreds of fish and wildlife projects it funds each year throughout the Pacific Northwest, including Oregon, Washington, Idaho, and western Montana, as well as small portions of Wyoming, Nevada, Utah, California, and eastern Montana. Agencies also collect data at varying frequencies. For example, a Bureau of Land Management official surveyed the agency’s field offices and found that most collect chemical, physical, and biological data annually. In contrast, other agencies, such as the U.S. Geological Survey and National Park Service, reported that they collect water quality data on a continuous or otherwise more frequent basis. There are two national databases for water quality data: EPA’s Storage and Retrieval System (STORET) and U.S. Geological Survey’s National Water Information System (NWIS). According to EPA officials, STORET contains biological, physical, and chemical data collected by over 120 organizations, including federal, state, and local agencies, American Indian tribes, volunteer groups, and academics. EPA officials reported that, as of January 2004, STORET contains approximately 18 million monitoring results collected from over 146,000 sites. Figure 4 depicts STORET’s monitoring coverage. Officials from five of the agencies we surveyed said they store at least some data in STORET. For example, the National Park Service uses STORET to store all of its data, while several other agencies, such as the Bureau of Land Management and the Bureau of Reclamation, store small amounts of data in STORET. The U.S. Geological Survey collects and analyzes chemical, physical, and biological properties of water and disseminates the data through NWIS to the public, state and local governments, public and private utilities, and other federal agencies involved with managing their water resources. The U.S. 
Geological Survey established NWIS in 1975 and made it available to the public through the Internet in July 2001. According to NWIS database managers, as of September 2003, NWIS was accessed about 16 million times a month. Unlike STORET, which contains data from multiple entities collected using a variety of data collection methods, NWIS contains only data collected by U.S. Geological Survey scientists or under U.S. Geological Survey approved data collection methods that pass a quality control check. According to officials from the Army Corps of Engineers and Bureau of Land Management, some of their agencies’ data are available through NWIS. In addition, some water quality data collected by the Army Corps of Engineers are stored in district offices in individual project files for which the data were collected. Many of these data are accessible upon request. While several federal agencies store at least some of their data in STORET and NWIS, officials in ten of the agencies we surveyed said that all or most of their water quality data are stored in databases that are specific to the project or program for which the data are collected. For example, officials from the Agricultural Research Service said that their data, collected through experiments conducted on farms and ranches to determine how agricultural practices affect water quality and verify the efficacy of best management practices, are stored in numerous, internal project-specific databases. In addition, according to an official from the National Oceanic and Atmospheric Administration’s (NOAA) National Ocean Service, the agency stores the data used to assess the health of marine and coastal ecosystems in internal program-specific databases. Data stored in STORET and NWIS are publicly available through the Internet. Users can search STORET and NWIS by geographic area, such as state or county, and by water quality parameters, such as chlorine or dissolved oxygen. 
Data within STORET become available on the Internet when users upload their data into the central version of the database. The availability of NWIS data varies depending on the type of data that users are trying to access. For example, some water quality data, such as real-time data that are gathered from gages in streams, may become available in NWIS every 4 hours. In other cases, it can take an average of 4 months for data to be processed, checked for quality, and made available through the NWIS Web site. Many federal agency officials we interviewed said that their data are available by request and/or through agency publications. However, several officials said that, most of the time, it would be difficult for the public to know that data are available because agencies do not always publicize information about individual projects. For example, the Cooperative State Research, Education, and Extension Service (CSREES) provides funding to collect water quality data in support of research and education objectives identified by individual investigators, but CSREES has no centralized database to store the data collected by researchers. Therefore, according to a CSREES official, potential data users would have to know about CSREES-funded projects in order to access the data. Similarly, officials from NOAA’s National Marine Fisheries Service said that the public would have difficulty accessing the data that are stored in project-specific databases, because there is no automated access through the NOAA’s National Marine Fisheries Service Web site. To address their considerable water quality management responsibilities, various state agencies (such as departments of the environment, health, fish and game, and conservation) collect and use water quality data to comply with federal requirements and to restore and protect water bodies. 
According to a study conducted by the Association of State and Interstate Water Pollution Control Administrators (ASIWPCA), 40 state and 2 interstate agencies with specific responsibilities for monitoring and/or assessing water quality spent a total of roughly $112 million on water quality monitoring in 2002. States vary in the types of data they collect, with some states collecting primarily chemical and physical data, while others focus on biological monitoring. For example, state agency officials we interviewed in Virginia, Mississippi, and Colorado said that their states focus primarily on collecting chemical parameters while, as we reported in January 2002, Illinois, Maine, and Ohio rely primarily on biological monitoring. States also vary in the extent to which their monitoring strategies target specific waters of interest or employ statistical sampling methods that allow inferences to be drawn about a larger number of waters. According to ASIWPCA, states tend to use traditional monitoring approaches, such as fixed stations—long-term, sometimes permanent, sampling sites—and special studies, which usually focus on a specific water quality problem. Recently, states have also adopted the following types of monitoring strategies to supplement these approaches: The rotating basin strategy identifies basins, sub-basins, or watersheds within an area that are sampled sequentially. Usually, a state monitors about one-fifth of its basins each year. After 4 or 5 years, the state has sampled all of its basins and repeats the sampling sequence. The targeted monitoring strategy targets certain sites for concentrated monitoring based on a list of considerations and information needs, such as determining the effects of runoff from septic tanks or storm water or assessing current conditions in streams flowing to sensitive areas.
The results of targeted monitoring can provide a good picture of water quality at the monitored sites, identify sources of water impairment, and determine if management actions are improving water quality. However, the information gathered is location-specific and cannot be extended to other areas except through mathematical modeling. Probabilistic monitoring uses a sampling approach to provide comprehensive assessments of water quality conditions throughout an area. Sites are randomly selected from all of the waters in a watershed, and the results of monitoring are used to estimate water quality conditions in the larger area with known confidence. Probabilistic monitoring cannot provide information on specific sites unless the sites were included in the random selection. In addition, probabilistic sampling typically does not incorporate seasonal or other variation. A tiered monitoring strategy structures states’ monitoring programs so that the least expensive and most expedient monitoring techniques can be used first, followed by more expensive and time-consuming studies, if the initial studies demonstrate that more monitoring is warranted. The tiered approach may combine the techniques described above. For example, one tier may be a rotating basin probabilistic approach for gathering information on waters statewide, while a second tier may focus on monitoring trends on large rivers and urban streams. In March 2003, EPA issued guidance, “Elements of a State Water Monitoring and Assessment Program,” that recommends 10 basic elements of a state water-monitoring program and serves as a tool to help EPA and the states determine whether a monitoring program meets the requirements of the Clean Water Act. The elements include (1) developing a monitoring program strategy, (2) using an integrated monitoring design, and (3) using accessible electronic data systems.
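The probabilistic approach described above lends itself to a simple statistical sketch. The following Python example is a hypothetical illustration, not an agency method: it estimates the share of impaired waters in an area from a random sample of monitored sites, with a margin of error at roughly 95 percent confidence.

```python
import math

def estimate_impaired_share(sample_results, z=1.96):
    """Estimate the area-wide share of impaired waters from a
    probabilistic (randomly selected) sample of monitoring sites.

    sample_results: list of booleans (True = site assessed as impaired).
    Returns (proportion, margin_of_error); z = 1.96 gives roughly
    95 percent confidence under the normal approximation.
    """
    n = len(sample_results)
    p = sum(sample_results) / n
    # Margin of error for a sample proportion. This is valid only
    # because the sites were drawn at random from all waters in the
    # area -- the defining feature of probabilistic monitoring.
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, margin

# Hypothetical data: 30 impaired sites out of 200 randomly sampled.
p, moe = estimate_impaired_share([True] * 30 + [False] * 170)
print(f"{p:.0%} impaired, +/- {moe:.1%}")
```

As the text notes, the resulting estimate applies to the larger area as a whole; it says nothing about any specific unsampled site.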
According to the guidance document, EPA believes that state monitoring programs can be upgraded to include all 10 elements within 10 years. According to EPA officials, states should develop a monitoring strategy by the end of fiscal year 2004 and should begin implementing the strategy in fiscal year 2005. EPA officials stated that they are working with states to implement the guidance in order to reduce inconsistencies and variations in state monitoring programs. After collecting data using the various monitoring strategies, states must store the data so that they can be readily retrieved for analysis and evaluation. According to an EPA official, as of March 2004, 31 states use STORET to store at least some of their data, and EPA is trying to have the remaining states and other federal agencies store their water quality data in STORET as well. ASIWPCA reports that state agencies are increasingly storing water quality data in national and statewide electronic databases, but a small number of agencies still use paper files as their predominant means for storing data. Our site visits confirmed that states differ in how their data are stored. Of the states we visited, only Colorado uses STORET to store water quality data. Officials in Virginia and Mississippi reported that they used STORET through 1998, when EPA introduced a modernized version of STORET. Officials in both states said that since they could not easily put data in or retrieve data from the modernized STORET, both states’ Departments of Environmental Quality developed state databases to better meet their needs. In addition, Virginia Department of Environmental Quality officials said that some of their data exist only in paper files. As states’ data storage practices vary, so does the accessibility of their data. According to an ASIWPCA survey, water quality information is primarily available to the public in published reports and other printed materials as well as in electronic formats such as CD-ROMs.
The survey also showed that, as their resources permit, states are moving toward making their data available via the Internet. Our site visits similarly revealed that the accessibility of data largely depends on the storage method the state uses. For example, Colorado’s water quality data are accessible through STORET. Since Virginia’s database is internal and is not Internet-accessible, data users must request data or access the data through publications. In Mississippi, the public can access water quality data through publications or by request from the Mississippi Department of Environmental Quality, though officials report that the state agency is moving toward developing a system that will be publicly accessible via the Internet. Local governments, volunteer monitoring groups, and others also collect water quality data for a variety of purposes, including monitoring the health of streams, lakes, and rivers, developing pollution reduction strategies, and conducting research. Local government agencies, such as water management districts, also participate in monitoring projects, often to understand and address recognized water quality problems. Local agencies may limit their data collection to particular geographic locations (e.g., a sewage treatment district or particular town lake) or may collect data for specific parameters, such as pH or dissolved oxygen. For example, according to a Thornton, Colorado, city official we interviewed during one of our site visits, the cities of Northglenn, Thornton, and Westminster, Colorado, were prompted to start the Clear Creek Watershed Group in 1981 after city officials found that excessive nutrients were causing odor and taste problems in the cities’ water supply. Similarly, a Fort Collins, Colorado, official explained that he helped to initiate a coordinated, regional watershed monitoring effort among some major municipal water providers because the quality of water entering water treatment plants was deteriorating. 
Local governments may also work with federal agencies to collect water quality data. For example, according to a National Park Service official, the agency worked with the city of Las Vegas to collect data on the treatment and disposal of wastewater at nearby Lake Mead. The Army Corps of Engineers partnered with the District of Columbia to conduct wetlands restoration of the Anacostia River, providing monitoring data and technical and project management expertise. In addition, U.S. Geological Survey officials noted that local governments participate in its Cooperative Water Program. According to the volunteer monitoring representative of the National Water Quality Monitoring Council (the National Council), an estimated 800 to 1,200 volunteer monitoring groups across the nation collect monitoring data with varying levels of technical expertise and financial resources. Volunteer monitoring groups collect data for a variety of parameters. For example, volunteers for the Virginia Save Our Streams organization primarily collect biological data through in-stream monitoring. Volunteers for another group, the Alliance for the Chesapeake Bay, collect streamside physical and chemical data, such as temperature, pH, and dissolved oxygen. States use volunteer monitoring groups’ data in a variety of ways. According to the volunteer monitoring representative of the National Council, states’ use of volunteer monitoring data varies along a continuum; some states use volunteer monitoring data for educational purposes, others use the data as a “red flag” to indicate areas where additional state monitoring is needed, and still others use the data to decide whether waters should be identified as impaired. For example, according to the volunteer monitoring representative, Rhode Island uses volunteer monitoring data to make decisions regarding which lakes are impaired. 
In Virginia, officials from the Department of Environmental Quality explained that the state uses volunteer monitoring data to assess the general conditions of waters, but not to decide on impairments. According to Mississippi Department of Environmental Quality officials, volunteer-collected turbidity data led to a state investigation that found a farmer had caused the pollution by clearing land too close to the edge of the river. Finally, we identified the following entities that also collect water quality data: Universities. Fifty-four Water Resources Research Institutes are located at land grant universities throughout the United States. According to an official from one of the institutes, the Virginia Water Resources Research Center, the Center has collected water quality data to develop several total maximum daily load (TMDL) reports. Industries. Industries collect water quality data to ensure that they are in compliance with permitted discharge levels, water quality standards, and TMDLs, as well as to support research on improvements. For example, according to Weyerhaeuser officials, the company collects sediment data at some sites to determine its compliance with water quality standards. Interstate commissions. Several interstate commissions, such as the Susquehanna River Basin Commission and the Ohio River Valley Water Sanitation Commission, conduct water quality monitoring programs for a number of purposes, such as identifying problems that threaten the quality of water resources of multiple states and monitoring trends in water quality over time. As with water quality data, at least 15 federal agencies, as well as some state agencies, collect water quantity data. However, a small number of key federal agencies collect a large share of these data, which are often stored in nationwide databases and accessed widely by a variety of users.
The other federal agencies generally collect project-specific water quantity data that are available in a variety of ways, depending on the agency. Water quantity data are used to measure both the availability of water in lakes, rivers, streams, and other water bodies, as well as the amount of water that is removed from streams for a variety of purposes, such as drinking water or agriculture. Water availability is measured by a number of data parameters, including streamflow, precipitation, and snowpack. In many cases, entities combine their data with others’ to measure or estimate the amount of water available for use. Water use refers to all in-stream and out-of-stream uses of water for human purposes from any water source. Water use is measured by parameters such as: (1) withdrawal, which is water removed from the ground or diverted from a surface-water source; (2) consumptive use, or the quantity of water that is not available for immediate reuse because it has been evaporated, transpired, or incorporated into products, plant tissue, or animal tissue; and (3) return flow, which is irrigation water that is not consumed by evapotranspiration and that returns to its source or another body of water. Fifteen federal agencies collect, or fund the collection of, water quantity data, including water availability data and water use data. Most of the agencies reported that they collect at least some water availability and water use data. However, we found that the frequency and geographical scope of water quantity data collection varies widely. Three entities, the U.S. Geological Survey, NOAA’s National Weather Service, and the Natural Resources Conservation Service, collect large amounts of data and store the data in national databases that are accessible through the Internet. In addition, the Army Corps of Engineers collects water quantity data and funds the collection of considerable amounts of additional data. 
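The water use parameters defined above are related by a simple mass balance: water withdrawn either returns to a water body or is consumed. The sketch below is a simplified illustration of that identity, not an agency formula; it ignores conveyance losses and other complications, and the numbers are hypothetical.

```python
def consumptive_use(withdrawal, return_flow):
    """Simplified water-use mass balance: withdrawal that does not
    return to its source (or another water body) is treated as
    consumed -- evaporated, transpired, or incorporated into
    products, plant tissue, or animal tissue.

    Ignores conveyance losses; units must match (e.g., acre-feet).
    """
    return withdrawal - return_flow

# Hypothetical irrigation example, in acre-feet per year:
# 100,000 acre-feet withdrawn, 35,000 returned as return flow.
consumed = consumptive_use(withdrawal=100_000, return_flow=35_000)
print(consumed)  # 65000 acre-feet consumed
```

In practice, agencies measure withdrawal and return flow directly where they can and estimate consumptive use as the residual, which is one reason consistent units and methods across collectors matter.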
Most of the other agencies collect limited water quantity data on a project-specific basis and store the data in internal, project-specific databases. These data are available in a variety of ways, depending on the agency. The U.S. Geological Survey is the federal agency primarily responsible for collecting, analyzing, and sharing data on water availability and use. In particular, the U.S. Geological Survey is the main collector of streamflow data, which measure the volume of water flowing through a stream and are collected using streamgages. Under the National Streamflow Information Program, the U.S. Geological Survey collects data through its national streamgage network, which continuously measures the level and flow of rivers and streams at 7,000 stations nationwide (see fig. 5). It makes these data available to the public via the Internet. The U.S. Geological Survey is also a major collector of water use data under its National Water Use Information Program. Under this program, the U.S. Geological Survey compiles extensive national water use data collected from states every 5 years to establish long-term water use trends. Snowpack data are another key element in determining water availability because they help western states forecast and manage future water supply. The Natural Resources Conservation Service is the key collector and provider of snowpack data through its Snow Survey and Water Supply Forecasting Program. As figure 6 shows, the Natural Resources Conservation Service collects snowpack data from over 700 automated SNOTEL (SNOwpack TELemetry) stations in 12 western states and Alaska. In addition, the Natural Resources Conservation Service collects snowpack data at over 900 manually sampled sites in the western states. Snowpack data are also collected in Vermont, New Hampshire, Pennsylvania, and Minnesota through the agency’s Soil Climate Analysis Network.
The snowpack water equivalent and depth are used to estimate annual water availability, spring runoff, and summer streamflows. Individuals, organizations, and state and federal agencies use these forecasts for decisions relating to agricultural production, fish and wildlife management, municipal and industrial water supply, urban development, flood control, recreation, power generation, and water quality management. Precipitation data are also important in determining how much water will be available for use, as well as in predicting floods. The National Weather Service collects most of these data through the Automated Surface Observing System, a joint effort of the National Weather Service, the Federal Aviation Administration, and the Department of Defense. Data in the Automated Surface Observing System are collected across the nation at major airports and other areas, as shown in figure 7. The National Weather Service also collects precipitation data through the Volunteer Cooperative Weather Observation Network. Under this program, volunteers collect data at 11,400 weather stations in rural and urban areas to provide data for weather forecasts and drought and flood warnings. According to an official from the National Weather Service, precipitation data are used by weather centers to make more accurate weather forecasts, which can result in significant savings from flood damage. In addition, the National Weather Service and the Natural Resources Conservation Service combine their data, together with the U.S. Geological Survey’s streamgage data, to forecast water supplies and floods. In partnership with the U.S. Geological Survey, the Army Corps of Engineers funds approximately 15 percent of the U.S. Geological Survey National Streamflow Information Program. This provides funding, at least in part, for about 2,160 of the approximately 7,200 stations.
The Army Corps of Engineers also collects some water quantity data for various parameters in association with its water management projects. For example, the Army Corps of Engineers keeps track of rainfall amounts, reservoir storage, and inflow and outflow as part of operating specific projects. In addition, the Army Corps of Engineers collects stage data to monitor flood control efforts. Moreover, according to officials from the Army Corps of Engineers, the agency contributes to the analysis of water data by developing water resources software models that are used worldwide. Eleven other federal agencies we surveyed also collect water quantity data, though mostly on a site-specific basis. For example, the National Park Service collects site-specific data to, among other things, characterize hydrologic conditions within park units. In addition, TVA collects water quantity data, such as flow and storage volumes, in order to help decide how much water should be released from its dams. Streamflow, snowpack, and precipitation data are easily accessible through three large federal databases operated by the U.S. Geological Survey, Natural Resources Conservation Service, and the National Weather Service. The U.S. Geological Survey updates streamflow data continuously and makes these data available through NWIS. Through its SNOTEL system, the Natural Resources Conservation Service operates and maintains an extensive, automated system to collect snowpack data in the western United States. The National Weather Service stores precipitation data in the National Climatic Data Center and makes the data available through NOAA’s National Environmental Satellite, Data, and Information Service. According to the Army Corps of Engineers, its data are stored in a number of databases, including internal databases as well as the U.S. Geological Survey’s NWIS. 
According to the Corps, most of these data are available through their district or division Web pages, though some data are not available for security reasons. Most of the other 11 agencies we contacted that collect water quantity data store their data in internal databases, and the data are made available to the public in a variety of ways. For example, BPA stores its water quantity data in internal, project-specific databases and makes them available via the Internet and/or through publications. The Agricultural Research Service stores its water data in numerous databases, largely on a project-specific basis and makes them available via the Internet, by specific request, and/or through publications. The Fish and Wildlife Service stores its water quantity data in project-specific databases at the agency’s field offices and makes the data available on request. Many states also collect at least some water quantity data to manage their water resources, although the extent of their data collection varies. States need water availability data to forecast how much water can be used for a variety of purposes, such as agricultural or residential use, and often obtain these data from federal agencies. According to a U.S. Geological Survey official, the agency operates the core streamgaging network in most states through its Cooperative Water Program. Under this program, the U.S. Geological Survey enters into agreements with participating states to operate in-stream gages and to share the data collected from them. Officials in Mississippi, for example, said that the state contracts with the U.S. Geological Survey to collect its streamgage data. However, there are a few states that collect significant amounts of streamgage data. A U.S. Geological Survey official in Virginia explained that the U.S. Geological Survey and the Commonwealth of Virginia have historically worked together to operate a unified network of streamgages with uniform quality assurance protocols. 
In addition, Colorado officials said that the state operates a satellite monitoring system for collecting streamgage data, which is also coordinated with U.S. Geological Survey streamgage data collection efforts in the state. According to the U.S. Geological Survey, only one other state—Nebraska—collects a large share of its state’s streamgage data. In addition to streamgage data, states also require some precipitation data. An official from the National Weather Service said that while some states rely exclusively on the National Weather Service’s precipitation data, other states collect some of their own precipitation data to fill in data gaps. For example, New Jersey relies on university researchers, funded by the state Department of Transportation, to collect precipitation data that supplements National Weather Service data. States need water use data to support the operation of water supply utilities and water districts. In 2002, the National Research Council reported that more than 20 states maintain comprehensive site-specific water use databases, which were most commonly developed to support regulatory programs that register or permit water withdrawals. In many cases, these data are developed through cooperative projects between state water agencies and the U.S. Geological Survey while, in the remaining states, data are collected only for a subset of water use categories or areas within the states. Furthermore, some states have no state-level programs for water use data collection. As we noted in July 2003, state water managers place a high value on water quantity data collected under federal programs to support the states’ ability to complete specific water management activities. For example, 37 states reported that federal agencies’ data are important to their ability to determine the amount of available surface water. 
In addition, state water managers reported that data collected under federal programs may be more credible and consistent than the state data. The Army Corps of Engineers and EPA offered comments on a draft of this report that were germane to the material in this chapter. The Corps commented that the draft report should more fully discuss the range of water quality and water quantity data that the Corps collects and maintains. While the draft report had discussed a wide range of Corps data collection activities pertaining to both water quality and water quantity, we supplemented those discussions with additional detail in response to the Corps’ comment. EPA commented that the report should further emphasize the high cost of monitoring. To reflect this perspective, we included information from ASIWPCA that 40 states and 2 interstate agencies spent a total of roughly $112 million on water quality monitoring in 2002 and estimated their total resource need at $211 million. Despite the vast array of organizations collecting water quality data, we and others have documented a considerable shortage of these data. This shortage has impaired our understanding of the state of the nation’s waters and complicated decision making on such critical issues as which waters should be targeted for cleanup and how such cleanups can best be achieved. Better coordination among the numerous groups collecting data can help to close the gap between the availability of data and the much larger need for information. However, we found a number of barriers to achieving this goal. Specifically, organizations (1) collect data for disparate missions, (2) often use inconsistent data collection protocols, (3) are often unaware of data collected by others, and (4) often assign data coordination a low priority. 
These difficulties have not only perpetuated gaps and duplication of effort among data collectors but have also contributed to an “apples and oranges” problem in which the data that are collected cannot be easily synthesized to tell a more complete story. Taken together, the difficulties in coordinating data collection and in synthesizing available data have impeded our understanding of water quality issues and, in particular, the ability of watershed managers to make well-informed decisions. The shortage of reliable and complete water quality data, and its consequences for informed decision making, have been consistently documented by GAO and others. For example, our March 2000 report, Water Quality: Key EPA and State Decisions Limited by Inconsistent and Incomplete Data, concluded that data gaps limit states’ abilities to carry out key management and regulatory responsibilities and activities on water quality. The data gaps were cited as particularly serious for nonpoint sources, which are widely accepted as contributing to the majority of the nation’s water quality problems. Only six states reported that they had a majority of the data needed to assess whether their waters meet water quality standards. A vast majority of the states reported that they had less than half the data they needed to (1) identify nonpoint sources that result in waters not meeting standards and (2) develop total maximum daily loads (TMDLs) for those waters. Similar findings and conclusions have been documented by the National Research Council of the National Academies of Sciences, and the shortage of data available to states for making assessments has been acknowledged by the Association of State and Interstate Water Pollution Control Administrators (ASIWPCA) and other organizations. As we reported in March 2000, states overwhelmingly cited funding shortages as a primary constraint on efforts to monitor their waters. 
Forty-five states indicated that a lack of resources was a key limitation to making more progress on water quality issues, with a number of states noting specifically that state-imposed staffing constraints and shortages in lab funding have exacerbated the problem by limiting the number of samples that could be taken and analyzed. In the 4 years since that report was issued, there has been widespread acknowledgment of the need to (1) improve monitoring programs to allow better informed decisions about which waters to target for cleanup, (2) pursue watershed management strategies, and (3) make other key decisions. Nonetheless, the funding constraints impeding monitoring programs at that time are still present and, in many respects, have worsened. In this context, both analysts and practitioners in the water quality community strongly support the concept of coordinating efforts to collect water quality data to make the most of limited resources. Among the benefits cited, effective coordination improves the coverage of monitoring stations by more efficiently and strategically locating the monitoring stations of different groups. Similarly, as we found during our site visits, mutual understanding of different groups’ monitoring needs and resources has sometimes resulted in modifying monitoring procedures so that individual monitoring stations could meet the data needs of a greater number of users. Yet while we found some notable exceptions, officials in 14 of the 15 federal agencies we contacted told us that coordination was either not taking place or falling short of its potential. In addition, the officials noted that enhanced coordination could provide data users with better data about water quality conditions and a more complete picture of the health of watersheds. 
Among the array of examples cited are the following: An official from the Army Corps of Engineers pointed out that without mutual interest among agencies, water quality data collection efforts are very poorly coordinated. The official also noted that some agencies give a low priority to coordinating data collection within their own agencies. The official explained that other potential users of the data may have difficulty finding the correct points of contact to receive data and believes that enhanced coordination would bring more data into the hands of data users. Forest Service officials explained that enhanced coordination would help to minimize information gaps. They noted that there are over 2,500 listed segments of impaired waters on national forest system lands. According to the officials, the states almost always lack the data needed to develop TMDLs, and coordination between the Forest Service and the states could help minimize those data gaps and speed recovery of impaired waters. The officials we interviewed from the state environmental agencies agreed, acknowledging in particular that coordination among state monitoring efforts, and between states and other data-gathering entities, could be significantly improved. For example: According to officials from the Mississippi Department of Environmental Quality, if federal agencies notified states when they began monitoring projects and shared their results, the state could assess more waters and possibly reduce duplication of effort. For example, the officials noted an instance in which the Fish and Wildlife Service paid the U.S. Geological Survey to operate streamgages in Mississippi, but the Fish and Wildlife Service did not alert the state that data were being collected. According to officials from the Virginia Department of Environmental Quality, the state generally has to solicit data from federal agencies because the agencies do not readily share data with the state. 
Furthermore, better coordination with volunteer groups could significantly increase the percent of assessed waters in the state. According to an official from the Illinois Environmental Protection Agency, many groups in the state collect water quality data, but coordination is needed to develop mutually agreed upon quality assurance project plans and to modify data collection procedures to allow data sharing. ASIWPCA’s Executive Director also cited the need for greater coordination. She noted opportunities to enhance monitoring programs through, among other things, (1) better coordinating monitoring efforts among all levels of government; (2) integrating multiple objectives with single monitoring efforts; (3) incorporating state-of-the-art approaches to link data systems and improve reporting; (4) creating statewide monitoring councils; (5) creating public/private monitoring partnerships; (6) establishing volunteer monitoring corps to increase the total number of waters monitored; and (7) eliminating duplicative monitoring between and among the various state and federal agencies. Given the strong consensus on the need for coordination—but the difficulty often encountered in achieving it—we asked federal and state officials, representatives of local governments and watershed groups, and others who have tried to coordinate data collection to explain the barriers that have impeded their efforts. As figure 8 shows, the most frequently cited problems were the following: Organizations often collect data to achieve specific missions, which sometimes affects their willingness and ability to modify their approaches toward data collection to make the results more widely usable, and which may even make organizations reluctant to share data they have already collected. 
Groups’ data collection protocols often vary, resulting in different definitions for measuring the same or similar pollutants, different detection limits, inconsistent levels of quality assurance, and inconsistent collection of metadata. Without a centralized clearinghouse on water quality data, many collectors are simply unaware of the data being collected by, or available from, other organizations. Data coordination is often assigned a low priority, as shown in a lack of support for national and state monitoring councils, which were established specifically to improve data coordination. The very nature of the organizations collecting water quality data varies widely—some are public, others are private; some are national, others are statewide or local; some are specifically charged with the responsibility, others do so voluntarily. As we were frequently told, these variations often lead to different data needs and priorities, which may affect the organizations’ ability—and willingness—to coordinate data collection strategies and to share available data. The disparate missions among the organizations that collect data were cited by 13 of the 15 federal agencies as a significant barrier to improved coordination. Even within the community of federal agencies, significant diversity in agency missions can lead to vastly different priorities regarding which data to collect and how to collect and analyze them. For example, the Environmental Protection Agency’s (EPA) primary interest in water quality data arises from its responsibility to ensure that waters are in compliance with states’ water quality standards. Accordingly, its monitoring approach (and those of the states that conduct monitoring programs to meet EPA requirements) generally focuses on determining whether certain thresholds are achieved or exceeded. The degree to which measurements are on one side or the other of these thresholds is generally of less consequence. On the other hand, the U.S. 
Geological Survey’s monitoring program is oriented toward obtaining precise measurements of water quality and then tracking changes in these values over time. Accordingly, its monitoring techniques allow for collecting specific measurements—and those techniques tend to be more expensive. For example, the U.S. Geological Survey may use relatively expensive meters to measure water quality parameters such as temperature, dissolved oxygen, pH, and conductivity. These meters require more calibration and maintenance to ensure accuracy than the test kits used by others seeking to determine compliance with state water quality standards. State officials have also emphasized how differing missions can affect the ability to coordinate monitoring strategies and share data. An ASIWPCA survey found that state officials identified conflicting state and federal data needs as among the top barriers to the effectiveness of their ambient monitoring program. Finally, some organizations have little incentive to share data, while others may have strong disincentives to do so. According to some federal agency officials we interviewed, academicians who collect research data and plan to publish their results may see little benefit in disclosing their findings early. Similarly, industry officials told us that they were often unwilling to share their water quality data with states in situations in which they believed the data could be unfairly used against them in a regulatory setting. When organizations differ in their overall approaches toward monitoring, the varying procedures they use to monitor may result in data that cannot be easily compared. A number of such varying procedures were cited in our interviews with federal officials and during our site visits. According to several federal officials, different organizations sometimes use different names or definitions to measure the same or similar parameters. 
For example, turbidity, transparency, and total suspended solids are all used to determine the extent to which water bodies are affected by sediment. However, each is measured differently, and, consequently, the data arising from these measures cannot be synthesized. Data collection methods for measuring even the same parameter can vary widely. Turbidity, which is a measure of the cloudiness of water, for example, can be measured using a meter, called a nephelometer, which provides a turbidity reading in nephelometric turbidity units, or it can be measured with a turbidity tube, which provides results in Jackson turbidity units. These two measures, however, cannot be used interchangeably. To address incomparable methods, the National Water Quality Monitoring Council has produced a National Environmental Methods Index Web site (www.nemi.gov). This index, which provides a compendium of methods to support monitoring programs, allows for the rapid comparison of methods and aims to ensure that data collectors more actively consider analytical methods when planning and implementing monitoring programs. A detection limit is the smallest concentration of a given parameter that can be measured. Data collectors may measure pollutants using different detection limits, which can limit the usefulness of their data to other groups. A Virginia monitoring manual noted, for example, that a test kit may have a high detection limit for total phosphorus and, therefore, might not be useful for the state if typical total phosphorus concentrations are lower. Different entities also report detection limits differently. For example, according to officials from the Army Corps of Engineers, some entities report pollutant concentrations that are below detection levels as zero; others report them as less than a certain detection limit; and still others report the measurements as the detection limit itself. 
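A small sketch can show how much these reporting conventions matter when data from different collectors are pooled; the measurements and detection limit below are hypothetical illustrations, not values from any agency's records:

```python
# Sketch: how different conventions for reporting results below the
# detection limit skew a pooled summary statistic.
# All values here are hypothetical.

DL = 0.05  # hypothetical detection limit, mg/L

detected = [0.12, 0.30]  # two measurements above the detection limit
num_non_detects = 2      # two samples fell below the detection limit

def mean(values):
    return sum(values) / len(values)

# Convention 1: report non-detects as zero.
as_zero = detected + [0.0] * num_non_detects

# Convention 2: report non-detects as the detection limit itself.
as_dl = detected + [DL] * num_non_detects

# Convention 3 ("less than DL") records no numeric value at all, so those
# samples simply drop out of a naive average.

print(round(mean(as_zero), 3))  # 0.105
print(round(mean(as_dl), 3))    # 0.13
```

Three reasonable-sounding conventions yield three different averages for the same water, which is why a data user pooling results needs to know which convention each collector followed.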
These different methods for reporting similar findings make it difficult for data users to understand and use the data. Data collectors vary widely in the quality assurance and quality control methods they use to ensure that their data meet minimum standards, and this variation may preclude wider use of data, according to federal and state officials we spoke with. For example, officials from the Virginia Department of Environmental Quality said they could not use data on pH levels collected by the Forest Service because the Service’s methodology did not meet EPA requirements for quality assurance. However, if the monitoring had originally been conducted using EPA’s approved method, the state could have used the data and probably would have added more waters to Virginia’s impaired waters list. In another instance, an official from the Army Corps of Engineers in Mississippi noted that the U.S. Geological Survey has rigorous quality assurance and quality control procedures, which result in a lag time between when a measurement is taken and when the data are accessible to the Army Corps of Engineers and the public. The official explained that, because of delays in receiving data, the Army Corps of Engineers is not always able to make optimum use of the data. Variations in quality assurance and quality control are of even greater concern when it comes to volunteer monitoring data. For example, according to officials from the Mississippi Department of Environmental Quality, data collected by Adopt-A-Stream volunteers, one of the volunteer organizations in Mississippi, are not used by the state because they are not of sufficient quality to use in identifying waters that do not meet standards, and because the state believes it has little control over volunteers. However, the data could potentially be used to target future monitoring. 
To address this concern, EPA’s Volunteer Monitor’s Guide to Quality Assurance Project Plans outlines steps that a volunteer program needs to take to document the field, lab, analytical, and data management procedures of its monitoring program. According to EPA officials, many volunteer programs develop such documentation in the form of Quality Assurance Project Plans, which are then submitted to the state water quality agency or the EPA regional office for review and approval. The officials noted that programs with approved plans are much more likely to have their data used. Metadata allow data users to understand characteristics about data collected by others, such as the methodology used to collect the data, and thus, determine whether these data are useful for their purposes. Officials from 9 of the 11 federal agencies we surveyed that use data to make watershed management decisions noted that a lack of metadata and/or inconsistency in metadata is a barrier to coordinating data collection efforts and data sharing. For example, according to an official from the Cooperative State Research, Education, and Extension Service (CSREES), without metadata, the reliability of data is suspect and, therefore, should not be used to make watershed management decisions. Similarly, according to officials from the National Oceanic and Atmospheric Administration’s (NOAA) National Marine Fisheries Service, data users need to know as much information as possible about the data that were collected so that data are not misinterpreted. To address this concern, the Methods and Data Comparability Board of the National Water Quality Monitoring Council is developing water quality data elements that specify the metadata needed so data users can understand and use data from other sources. According to some watershed officials, however, the list of metadata that was originally suggested contained too many metadata fields and will need to be made more manageable to be useful. 
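As a rough illustration of what such data elements capture, the sketch below shows a minimal metadata record attached to a single monitoring result. The field names are hypothetical, chosen for illustration; they are not the Council's actual data elements, and the EPA method number is cited only as an example of the kind of information a record might carry:

```python
# Sketch: a minimal metadata record accompanying a water quality result,
# so a downstream user can judge whether the data fit their purpose.
# Field names are illustrative, not an official standard.

from dataclasses import dataclass

@dataclass
class SampleMetadata:
    parameter: str          # what was measured
    method: str             # analytical method used
    detection_limit: float  # smallest reportable concentration
    units: str
    lab: str                # laboratory that analyzed the sample
    qa_plan_approved: bool  # collected under an approved QA project plan?

record = SampleMetadata(
    parameter="total phosphorus",
    method="EPA 365.1",     # an EPA phosphorus method, used illustratively
    detection_limit=0.01,
    units="mg/L",
    lab="State lab",
    qa_plan_approved=True,
)

# A data user screening for usable records might filter on the metadata,
# for example requiring an approved QA plan and a low enough detection limit:
usable = record.qa_plan_approved and record.detection_limit <= 0.02
print(usable)  # True
```

Without fields like these, a secondary user cannot tell whether a result was collected under an approved plan or at a detection limit suitable for their purpose, which is the gap the metadata concern describes.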
Determining appropriate metadata standards is not an easy task. First, officials from several federal agencies explained that collecting and recording metadata can be expensive. An official from NOAA’s National Marine Fisheries Service, for example, explained that the collection and storage of metadata requires additional staff and resources that may not be available. Second, as some federal agency officials noted, data collectors that are monitoring water quality for a project-specific need may not be aware that the data they are gathering may be useful to others, so they may not be willing to collect metadata. As representatives of many groups indicated, coordinating data collection is difficult because they lack information about the data that other groups may be collecting. Of the 15 federal agencies we surveyed, 10 cited the lack of awareness of other groups’ data collection activities as a barrier to coordination. For example, an official from the Agricultural Research Service in Mississippi noted that even though he tries to identify other data collectors within the state, he is consistently surprised to find out that there are additional entities collecting water quality data. An official from the Bureau of Land Management explained that, because watershed boundaries do not coincide with political boundaries, identifying which entities are collecting data within a watershed is even more difficult. In addition to a lack of knowledge among data collectors about other entities that collect data, we also found a significant gap in knowledge about what data are collected within agencies. Many respondents to our survey could not provide complete information on the type of data their agency collects, frequency of data collection, and geographic areas of data collection. 
For example, over one-third of the agencies we surveyed were not able to provide complete information about their water quality data because there are no central water quality databases within the agencies. Most of the federal officials citing unawareness of others’ data collection efforts said that a clearinghouse to disseminate that information would go a long way toward addressing the problem. According to federal officials, clearinghouses can take various forms. For example, a clearinghouse might be similar to a phone directory, providing an index of data collectors and the type of data being collected. Or, a clearinghouse might provide an Internet “portal”—an access point from which data users can obtain information and access to data from multiple sources. Efforts to coordinate data collection activities are a low priority, as demonstrated by a lack of support accorded to federal and state monitoring councils that were formed to help coordinate the data collection efforts of their members and enhance data sharing and use. For example, the National Water Quality Monitoring Council (National Council) was established to implement a nationwide strategy to improve water quality monitoring, assessment, and reporting. This council is co-chaired by EPA and the U.S. Geological Survey and includes representatives from federal, interstate, state, tribal, local, and municipal governments, volunteer monitoring groups, and the private sector. According to its charter, the National Council aims, among other things, to improve institutional coordination and collaboration, comparability of collected data, quality assurance and control, and storage systems that preserve data for future use. The National Council reports to the Advisory Committee on Water Information, which advises the federal government through the U.S. Geological Survey and the Water Information Coordination Program. 
Most of the respondents to our federal survey who were aware of the National Council cited it as a positive influence in promoting better coordination among data collectors. However, almost all of these officials also noted that a lack of both funding and dedicated time among National Council participants has limited the council’s effectiveness. Several members of the National Council noted that participation on the council is voluntary and thus, as one member noted, “not part of a member’s job description.” Council members we interviewed also agreed that the National Council lacks authority. The Office of Management and Budget memorandum that established the National Council does not stipulate that federal agencies must cooperate. For example, even though the Army Corps of Engineers participated in the National Council when it was first established, the agency has opted out of participating for the past several years. The lack of priority for coordination at the national level is also prevalent at the state level. First, although the National Council and EPA have encouraged states to form councils to coordinate monitoring among the entities active in each state, as of September 2003, only seven state monitoring councils and three regional councils were active. Second, even where such councils have been active, they have generally experienced difficulty in making progress. During interviews with monitoring council members in Colorado and Virginia—the two states we visited that have active coordinating councils—officials reported that their councils were making less progress than anticipated. According to members of the Colorado Water Quality Monitoring Council, the council has struggled, in part because participants must volunteer their own time and the council’s efforts are constrained by limited time and resources. 
Similarly, a Virginia Water Monitoring Council member told us that while Virginia’s council has made some progress (such as sponsoring workshops, conferences, and annual meetings), the council could do more to address water issues if the energy expended on fundraising were significantly reduced. An EPA study of eight of the state and regional monitoring councils substantiated these comments. EPA found that, although the councils have had some indirect effects, none has made a documented, “on-the-ground” impact on water quality monitoring. The EPA study also identified many of the same problems we found during our site visits—a lack of funding, members pressed to balance their council participation with competing job demands, and the challenge of getting agency members to take off their “agency hats.” At the same time, according to EPA, state and regional monitoring councils can be effective in improving the availability of monitoring data if properly supported. For example, EPA officials and others have cited the Maryland Water Monitoring Council as a successful state council. The Maryland council has conducted monitoring design workshops and a stream monitoring roundtable to bring together organizations and individuals planning to monitor streams in Maryland, exchange information about the kinds of monitoring being planned, and prepare a geographically referenced compilation of monitoring sites to ensure that everyone knows where monitoring is taking place. In addition, while the Colorado council has struggled, it has organized “data swaps” to allow monitoring organizations to share metadata and compare data collected by various groups. As we previously noted, EPA issued guidance to the states in March 2003 that recommends 10 basic elements of a state water monitoring and assessment program. 
While EPA’s guidance does not recommend coordinating data collection activities as one of the basic elements of state monitoring programs, it notes the importance of state monitoring program managers working with other state environmental managers and interested stakeholders as they develop their strategy. In addition, the guidance recommends that states identify required or likely sources of existing and available data and information and procedures for collecting or assembling it. Because currently established coordinating entities lack the resources, priority, and authority to make significant progress, some agency officials have suggested the need for a clearly designated coordinating body with both sufficient resources and authority. These agency officials differ in their suggestions about the structure of this coordinating body. For example, an official from the Advisory Committee on Water Information believes that, with enhanced authority, the Advisory Committee and its National Council could make significant progress toward improving the coordination of data collection efforts and increasing the amount of data watershed managers have available to make decisions. The official recognized that, while the coordinating entity would not be able to alter agency missions, it would be able to address such things as establishing a clearinghouse to identify who is collecting what type of data and developing clearly defined and generally accepted government metadata standards for water data collection. Officials from the Army Corps of Engineers suggested an alternative structure for a coordinating body. The officials believe that designating one lead agency to define, locate, and integrate available data sources within a specified time frame would make data more easily accessible and available in a useful format, and would enable local decision makers to make better informed decisions. 
The Corps officials explained that a lead agency could, for example, establish standards in cooperation with other agencies and establish a clearinghouse for data. The officials suggested that an appropriate lead agency would be one that already carries out and/or supports broad water data collection responsibilities. Water quality officials often noted that difficulties in data management are a factor inhibiting their ability to use water quality data to make watershed management decisions. These data management concerns commonly focused on two areas: (1) complexity of using EPA’s storage and retrieval system (STORET) and (2) inability to integrate data from various sources to provide a more complete picture of water quality within watersheds. From 1965 until 1998, water quality data were stored in the original STORET Water Quality File, which is now called “legacy STORET.” In 1999, EPA released “modernized STORET” to replace legacy STORET. This newer version contains data collected beginning in 1999, along with some older data that were transferred from legacy STORET. Some of the major changes between legacy STORET and modernized STORET include the following: Storing data in legacy STORET could only be accomplished by someone with a mainframe user ID and specialized training. In contrast, modernized STORET is installed on personal computers, and data can be entered on those personal computers without requiring access to an EPA computer. Local STORET users then choose if and when to upload their data into national STORET. Unlike legacy STORET, modernized STORET contains metadata on why the data were gathered; sampling and analytical methods used; the laboratory used to analyze the samples; the quality control checks used when sampling, handling the samples, and analyzing the data; and the personnel responsible for the data. 
EPA considers STORET to be its main repository for water monitoring data and a cornerstone of its data management activities and water program integration efforts. According to EPA officials, the agency has worked hard to resolve a number of issues affecting the database’s wider use. Nonetheless, officials from many of the entities we interviewed suggested that further progress is needed before they can effectively use STORET. They cited the following difficulties: (1) uploading data to STORET, (2) retrieving data from STORET, and (3) dealing with the system’s large number of data parameters. The last point in particular was cited by Forest Service officials, who noted that the large number of data parameters in the system made it cumbersome to use. Consequently, less than 5 percent of Forest Service data currently go into STORET, and the agency has yet to decide whether to consolidate its water quality data into STORET or expend resources to develop an in-house water quality module. Officials in two of the three states we visited held similar views. Officials from the Virginia Department of Environmental Quality reported that they have not used STORET since it was updated because of difficulties in uploading and retrieving data, and the state has instead opted to develop its own data storage system. Mississippi Department of Environmental Quality officials similarly reported that they store their data in two state-run databases. Officials from both states noted that they would prefer to have their data in STORET but would need additional assistance from EPA to do so. On the other hand, one of the states we visited, Colorado, noted success in using STORET to store its water quality data. In addition, officials from EPA’s Denver office noted that other states, such as Utah, have also had success in using STORET. Some local government and volunteer monitoring groups also have encountered challenges using STORET. 
For example, a watershed group in Colorado noted that, while it recognizes that STORET is a valuable data management system and decided to use the system in 2000, it had only a limited amount of data in STORET as of fall 2003 because of difficulties uploading its data. The group explained that unified federal support for the system is lacking and, therefore, limited funding has been made available to address the difficulties STORET users encounter. In addition, a volunteer monitoring group from Virginia reported that while it had tried to put its data into STORET, it had too much difficulty uploading data into the system, and that EPA’s resources were, at the time, stretched too thin to provide sufficient assistance. Moreover, officials from Big Dry Creek Watershed Association in Colorado reported that while they recognize the benefits to others of having their data in STORET, they do not perceive a benefit to their association that warrants spending the funding or time to do so. Many of these issues were echoed by state and interstate agencies in a 2002 ASIWPCA survey. Most survey respondents, for example, indicated that EPA does not have sufficient resources to support the system. Some also noted that STORET is incompatible with their internal state systems and reporting needs, data retrieval is difficult, and a good deal of staff effort must be spent to manage incompatibilities. EPA officials have acknowledged these problems, as well as concerns over insufficient training and technical support. Nonetheless, the agency has cited recent successes in dealing with STORET challenges, pointing to growth in the number of states and other organizations using the system. As of March 2004, more than 120 organizations were using STORET, including 31 states, four EPA offices, interstate organizations such as the Delaware River Basin Commission, federal agencies, American Indian tribes, watershed groups, and volunteer monitoring groups.
According to EPA, over 7 million of the approximately 18 million monitoring results contained in STORET were added in 2003 alone. EPA officials noted that the agency has made efforts to encourage yet more states, federal agencies, and other groups to make greater use of the system by (1) working to make the system easier to use by, for example, releasing revised versions of STORET and a STORET Import Module that make data upload easier, and (2) providing greater technical assistance. In addition, according to EPA, the agency developed a new STORET data warehouse in 2003 that has increased data retrieval speed 200-fold. With the completion of the data warehouse, the agency plans to significantly increase customer outreach and support to better meet states’ needs for the STORET system. Another key data management concern is that many different databases with different formats and purposes are used to store water quality data, often making it extremely challenging for data users to integrate data from various sources. According to several federal agency officials, entities that collect water quality data need to coordinate their efforts during the planning phases of data collection to agree on how to manage data. Without such agreement, data collected often either cannot be used by other entities or entities must commit resources to integrate the data. An EPA review of statewide watershed management approaches found that data incompatibility affects states’ ability to compile data at the basin and watershed level. As a result, it can be difficult to obtain a complete picture of water quality problems and their sources. Furthermore, several states reported that federal and state data systems are often not compatible, and that more work is needed to build and manage databases across agencies that have standardized protocols, metadata reports, and georeferencing capabilities for mapping and modeling. The most significant example of incompatible databases involves the U.S.
Geological Survey’s National Water Information System (NWIS) and EPA’s STORET. Officials from the U.S. Geological Survey explained that different philosophies and different approaches to the database designs have led to databases with data models that are not compatible. NWIS contains only data generated by the U.S. Geological Survey or data the agency has reviewed to ensure that their quality is known and acceptable. In contrast, STORET accepts data of varying quality from any source, contains significant metadata, and allows the data owner to change or delete data. According to an EPA official, NWIS was compatible with legacy STORET and, through an agreement with the U.S. Geological Survey at the time, NWIS data were regularly copied into legacy STORET. Furthermore, when EPA modernized STORET, the U.S. Geological Survey and EPA worked closely to ensure that modernized STORET and an expected modernized version of NWIS would remain compatible. However, NWIS was not modernized according to plan, and modernized STORET and NWIS are now incompatible. Additionally, according to a U.S. Geological Survey official, for technical reasons the archived version of legacy STORET no longer contains NWIS data. As a result, according to federal and state agency officials, integrating data from these two primary water quality databases takes time and a significant commitment of resources. For example, an official from New Jersey’s Department of Environmental Protection explained that transferring data from NWIS into STORET—in order to form a more complete picture of water quality within the state—takes considerable time and effort from both state and U.S. Geological Survey staff. Similarly, an official from the National Park Service explained that the incompatibility of NWIS and STORET makes it very difficult to retrieve data from NWIS and combine them with National Park Service data stored in STORET to create one useable database of park water quality.
The official explained that, to effectively use U.S. Geological Survey data from specific contracted studies, the National Park Service often requests that raw data be put into STORET. EPA and the U.S. Geological Survey have taken steps to address the issue of data incompatibility. In February 2003, EPA and the U.S. Geological Survey agreed to the following:

- Deliver data from NWIS and STORET in a common format to federal, state, and tribal organizations, as well as to the general public and scientific community.

- Ensure that the data from NWIS and STORET are documented to describe their quality so that users can determine the utility and comparability of the data. Their data systems will include metadata associated with each water-quality result as soon as possible.

- Recognize that much data exists for which available documentation is limited and yet these data are useful for certain purposes and, therefore, the agencies will not exclude such data from their systems because of these limitations.

- Facilitate and encourage the maximum use of metadata to enhance the usefulness of the information for multiple purposes.

- Work with the National Water Quality Monitoring Council to develop a geospatial Internet-based query tool (portal) for sharing data, especially relying on data from STORET and NWIS.

Since data cannot be efficiently transported between the databases, the agreement between the agencies focuses on a data portal as an alternative to copying data into multiple databases. The agencies agreed to “strive to achieve these objectives as soon as is practicable within the constraints of available resources.” In addition to difficulties in integrating data from STORET and NWIS, some agency officials noted difficulty in integrating data within agencies. For example, according to EPA, the agency has historically stored water data collected under the Superfund program in various databases.
Noting the inconvenience of this practice, four EPA regions are working to consolidate Superfund data in STORET. In addition, according to the Army Corps of Engineers, much of its data, as well as data from other agencies, are stored in different formats in different databases, making it extremely difficult and time consuming to integrate the data and analyze the information for decision making. The Army Corps of Engineers believes that using a Geographic Information System (GIS) as the foundation for managing water resources is the only viable way to integrate the vast amounts of disparate data needed to manage the nation’s water resources effectively. Thus, according to Corps officials, the agency is taking steps to standardize and integrate disparate data sets by developing an “Enterprise GIS” to support watershed analyses. The Corps envisions that the Enterprise GIS data, output from watershed modeling efforts, and many of the analytical tools would be Web-enabled to make them accessible to federal, state, and local governments. The Corps acknowledges, however, that the agency’s implementation of Enterprise GIS at the national level has been slow, citing funding constraints. The acute shortage of accurate and reliable water data has been documented by GAO, the National Academies of Science, and other organizations. The consequences of this shortage have been amplified in recent years as states and local communities have come under increased pressure to identify and address—in a scientifically sound and legally defensible manner—which of their waters do not meet standards and should, therefore, be targeted for cleanup. The consequences of inadequate water data have also been amplified by the nation’s increased reliance on the watershed approach, a strategy whose success relies heavily on the availability of comprehensive and reliable information.
With this critical need in mind, some may find it perplexing that hundreds of organizations collect water quality data that are not being sufficiently brought to bear on critical decisions. Our findings suggest that improved coordination could go a long way toward alleviating this problem. However, the national, regional, and state monitoring councils that exist to promote such coordination have frequently been impeded by a lack of authority to make key decisions, a shortage of funding to undertake key coordinating activities, and low-priority attention from data collecting organizations. Among the most notable of these is the National Water Quality Monitoring Council, which is co-chaired by EPA and the U.S. Geological Survey, and which includes representatives from federal, interstate, state, tribal, local, and municipal governments, watershed groups, volunteer monitoring groups, and the private sector. Some have cited these difficulties in calling for a clearly designated lead water data coordinating body at the national level, one with both sufficient resources and authority. They differ, however, on the precise form this body would take. One model would enhance the role of the National Water Quality Monitoring Council as the nation’s premier water data coordinating body. Another approach suggested by some would be to designate a lead federal agency to assume this role—one that already carries out and/or supports broad water data collection responsibilities. We believe that it is most appropriate for the Congress to make the judgment call as to whether and how such an effective coordinating body should be established. To enhance and clearly define authority for coordinating the collection of water data nationwide, we recommend that the Congress consider formally designating a lead organization (either an existing water data coordinating entity or one of the federal agencies with broad water data collection responsibilities) for this purpose.
Among its responsibilities, the organization would:

- Support the development and continued operation of regional and state monitoring councils.

- Coordinate the development of an Internet-based clearinghouse to convey what entities are collecting what types of data. As part of this effort, the organization could advance the development of a geospatial Internet-based query tool (portal) that would allow users access to information about water data available within a given watershed.

- Coordinate the development of clear guidance on metadata standards so that data users can integrate data from various sources.

The U.S. Army Corps of Engineers, the Department of the Interior, and the Environmental Protection Agency offered comments on a draft of this report that were particularly germane to the material in this chapter. The Corps offered additional information about planned activities to use a comprehensive integrated watershed management approach, which we included in finalizing the chapter. The Department of the Interior cautioned that the designation of a lead water data organization would not necessarily remove all of the barriers that are currently limiting the coordination of data collection activities. Interior noted that while designating a lead organization or agency has value, resources are needed and some barriers, such as differing purposes for data collection and variation in data collection protocols, would remain. We agree and, accordingly, view Congress’ designation of a lead organization as an important step toward addressing the challenges of coordinating data collection. We believe that such a step would enhance and more clearly define the authority needed to address many of these barriers. Interior also stated that a crucial distinction between NWIS and other databases mentioned in the report, particularly STORET, is that NWIS serves not only as a data archive but also as a data processing system that applies quality control tests.
In addition, Interior explained that establishing one large federal database is neither feasible nor desirable. We agree with both points. Regarding the first point, we recognize that NWIS holds data that are consistently subjected to quality assurance and quality control, while STORET and other databases contain some data of varying or unknown quality. Regarding the second point, many federal agency officials and others noted that it would be neither realistic nor necessary to establish one database that contains all water data. Rather, they generally explained that an Internet-based tool that allows them to link to data sources in a particular geographic area would be both practical and sufficient. EPA agreed on the need for reliable, comprehensive, and accessible data on water quality to effectively implement the watershed approach. EPA noted, however, that the report should further discuss recent significant improvements to the STORET system and the emphasis placed on coordination and data sharing in EPA’s “Elements of a State Monitoring and Assessment Program” guidance. The draft report contained some information on these issues, but we incorporated additional detail in response to EPA’s comments. Many stakeholders use water quantity data to make decisions with important economic, environmental, and social implications. Among other things, water quantity data are needed to help make water quality determinations. The quantity of water flowing through a river, for example, affects the concentration of a regulated pollutant in that river. The importance of water quantity data, however, extends beyond their impacts on pollutant concentrations. Federal, state, local, tribal, and private organizations also rely heavily on water quantity data to fulfill critical responsibilities such as ensuring an adequate water supply to meet a variety of competing needs.
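The effect of flow on pollutant concentration noted above can be made concrete with a simple dilution calculation. The sketch below is illustrative; the load and flow figures are hypothetical, not drawn from this report.

```python
def concentration_mg_per_l(load_kg_per_day: float, flow_m3_per_s: float) -> float:
    """Concentration (mg/L) of a pollutant load fully mixed into a river.

    mg/L = (kg/day * 1e6 mg/kg) / (m^3/s * 86,400 s/day * 1,000 L/m^3)
    """
    liters_per_day = flow_m3_per_s * 86_400 * 1_000
    return (load_kg_per_day * 1e6) / liters_per_day

# The same hypothetical 100 kg/day discharge is ten times more
# concentrated at low flow than at high flow.
low = concentration_mg_per_l(100, 1.0)    # about 1.16 mg/L at 1 m^3/s
high = concentration_mg_per_l(100, 10.0)  # about 0.12 mg/L at 10 m^3/s
print(round(low / high))  # 10
```

This is why a water quality determination, such as whether a river meets a concentration-based standard, cannot be made reliably without knowing the flow at the time of sampling.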
Officials at both the federal and state level most often reported that their biggest concern about water quantity data is the lack of data available to make these economically and socially important watershed management decisions. However, where data are available, there is broad consensus among the federal and state data collectors we interviewed that, while not always flawless, the coordination of water quantity data collection efforts is less complicated and more effective than the coordination of water quality data collection. As pressure on existing supplies continues to grow, water supply and management issues, and therefore water quantity data, are increasingly important. Much as debits, credits, and savings in a financial budget need to be quantified to maintain fiscal responsibility, the nation’s water supply and use need to be comprehensively quantified within the water budget context to ensure adequate availability of water as water demands fluctuate regionally because of changes in climate, urban growth patterns, agricultural practices, and energy needs. Scientific water quantity data make it possible to understand and protect water for many economically, environmentally, and socially important uses such as safe drinking water, habitat for fish and wildlife, rivers and streams for recreational activities, and water allocations among competing uses by industry, agriculture, and municipalities. A broad group of stakeholders use water quantity data to support decisions concerning these uses. These stakeholders—water managers, engineers, scientists, emergency managers, recreational water users, and utilities—use water quantity data to evaluate current water supplies and plan for future supplies; forecast floods and droughts; operate reservoirs for hydropower, flood control, or water supplies; make informed evaluations of the nation’s water quality; navigate rivers and streams; and ensure safe fishing and boating.
Many of these activities require decisions to be made on a daily basis, which means timely, yet reliable, data are necessary. Among federal and state officials we interviewed, the most frequently cited concern about water quantity data was the general lack of data available to aid decision making. As shown in figure 9, the majority of federal agencies using water quantity data for watershed management reported having “less” or “far less” than the amount of data that they need to make well-supported decisions, for almost all the listed water quantity parameters, according to our survey of 15 federal agencies. Additionally, in a 2003 GAO survey of state water quantity managers, managers in 39 states ranked expanding the number of federal data collection points, such as streamgage sites, as the most useful federal action to help their state meet its water quantity information needs. In particular, several officials at the federal and state level reported that the decline in U.S. Geological Survey streamgaging stations is a concern, and respondents from the National Oceanic and Atmospheric Administration’s (NOAA) National Weather Service and the Agricultural Research Service reported that there are gaps in precipitation monitoring stations. According to several federal and state agencies, they are particularly concerned about the continuing decline in U.S. Geological Survey streamgaging stations, which provide many entities with water quantity information needed for key watershed management decisions. Officials at the Colorado Department of Natural Resources explained that in their state, the U.S. Geological Survey has cut streamgage stations that collect data that the state needs. Where possible, the Colorado Department of Natural Resources has taken on the abandoned sites, but it has had to leave some abandoned because of resource constraints. U.S.
Geological Survey officials in Mississippi reported that the state Department of Environmental Quality decided to drop Cooperative Program funding to support 19 streamgages, which accounted for half the state’s streamflow monitoring. According to officials at Mississippi’s Department of Environmental Quality, some of these gages collected data the state needs to enforce diversion permits, and others have 50 to 60 years of continuous data collection on record, which they do not want to discontinue. However, the state does not have the funds to support expensive U.S. Geological Survey gages, according to the state officials. Similarly, an Environmental Protection Agency (EPA) regional official reported that one state within its region—Wyoming—recently applied for EPA funding to reactivate needed U.S. Geological Survey streamgage stations. As figure 10 shows, a large number of U.S. Geological Survey long-record streamgages have been discontinued over the past 70 years. According to a U.S. Geological Survey headquarters official, the loss of long-record streamgages is a serious matter because trend data from these gages are requisites for understanding climate change issues and for designing bridges to withstand floods, among other concerns. While the number of long-record streamgages has declined over the past 70 years, the number of total gages remains largely the same from year to year. In many cases, as long-record gages were eliminated, new shorter-term gages were established through the Cooperative Program. The U.S. Geological Survey expects funding from cooperators to decline this year and the next due to current state fiscal constraints, which will likely cause the overall number of gages to go down in the next couple of years. Officials at two federal agencies also identified NOAA’s National Weather Service rain gauge data as an area with information gaps. 
According to the National Weather Service, while currently its observation systems primarily exist at airports, it is trying to improve coverage, especially in the West where the biggest gaps exist. According to a National Weather Service official, studies conducted by the Agricultural Research Service and the National Weather Service show that improving the coverage of monitoring sites to a 20 mile by 20 mile grid would improve stage forecasting by 50 percent. If this coverage is realized, the federal government could save $700 million annually through more accurate flood forecasts, according to the official. To achieve this better coverage, the National Weather Service is beginning to add 4,000 new sites and to upgrade 4,000 existing sites. As we previously reported, the U.S. Geological Survey and the National Weather Service stated that a lack of sufficient funding is their primary barrier to expanding or automating data collection. While the lack of funds for monitoring water quantity parallels the lack of funds for monitoring water quality, efforts to coordinate water quantity data collection have generally been successful and are comparatively unimpeded by barriers. Federal and state officials cited several key reasons for better coordination of water quantity data, as follows:

- Water quantity data collection is more centralized among fewer entities, which allows users and collectors to more easily identify data sources that may be helpful in making watershed management decisions and encourages coordination to meet a common purpose.

- Critical, urgent, and controversial decisions concerning issues such as water rights and flood management require accurate and complete real-time water quantity data and provide an impetus for groups to collaboratively generate such data.
- Advanced technology, such as satellites that relay data monitored in stream to computers and radio technology that reports data from collection sites to the Internet, greatly improves the ability of data collectors to share data.

- The general consistency of water quantity data parameters, a result of the well-developed methods available to measure and report them, allows data users to more easily integrate data from separate collection efforts.

Compared with water quality data, collection of water quantity data is more centralized among a smaller number of primary data collectors, according to several federal and state officials. As discussed in chapter 2, in most states, the U.S. Geological Survey collects the majority of streamgaging data, while other agencies have clearly delineated responsibilities for collecting other water quantity data. While these efforts are cleanly divided, they also share the common purpose of predicting and measuring the nation’s water availability and use, which facilitates better coordination, according to some officials. For example, once NOAA’s National Weather Service, the Natural Resources Conservation Service, and the U.S. Geological Survey collect their data, they combine them to forecast water supplies and floods. Some officials also cited this common purpose as a reason that coordinating data collection efforts has been more successful for water quantity than for water quality. According to the U.S. Geological Survey, all states participate in its Cooperative Program, in which nonfederal entities and the U.S. Geological Survey jointly fund water resources projects that involve water quantity data collection. Accurate and complete data are critical in supporting urgent and controversial water quantity management decisions made by state and federal agencies.
According to many federal and state officials, there is generally a more critical need for accurate and complete real-time water quantity data than there is for water quality data because important decisions must be made daily with regard to water allocation, reservoir projects, flood and drought management, navigation, and evaluation of compliance with water withdrawal permits. According to water quantity officials in Virginia, the critical need for water quantity data increases as the quantity of available water becomes more equivalent to the amount of water being used, or where floods occur. In some of these instances, water quantity decisions must be made quickly with accurate data. For example, according to an Army Corps of Engineers official, when floods occur, managers must make critical on-the-spot decisions, such as which residents need to be evacuated or how much water should be released from a reservoir to reduce risk and optimize flood reduction. Similarly, according to a U.S. Geological Survey official in Virginia, during the state’s drought in 2002, discharge permit holders with limits on how much they could discharge at various streamflows relied on hourly streamflow data to be sure that their discharges were not exceeding permitted levels. Several federal and state officials explained that this critical need for data has prompted water quantity officials to coordinate better. Numerous officials also noted the need for accurate and complete data for controversial decisions, especially when those decisions may be challenged in court. In particular, states need data to, among other things, administer water rights to various users, establish and maintain in-stream flow requirements for endangered species and, generally, to comply with interstate compacts.
The need for adequate data for these sensitive decisions is especially critical in western states, like Colorado, where rising populations, combined with increasing demand for water for recreation, scenic value, and fish and wildlife habitat, have resulted in conflicts and litigation. An official in Colorado explained that in his state, there is great emphasis on keeping track of water because “every drop of water is owned by someone.” When water is improperly allocated, states can face costly consequences, which encourages states to coordinate data collection and share results. For example, according to Colorado water officials, the state may be required to pay almost $30 million to Kansas as a result of litigation Kansas initiated when Colorado allegedly withdrew more than its share of water from the Arkansas River through ground water pumping. The officials acknowledged that at the time, the state did not have adequate ground water use data. The state has since decided to focus its resources on bringing high-quality data together to make well-supported decisions, rather than on litigation and payments resulting from inadequately supported decisions. Toward this end, the state has established the Colorado Decision Support System, a central query-based data system that incorporates data from various entities in the state. Advanced technology within the water quantity field allows data to be delivered directly and almost instantaneously to data users, which makes it easier to share data and facilitates coordination of water quantity data collection, according to many federal and state officials. Part of the reason that water quantity data are easier to collect and share is that many of the water quantity parameters for which groups collect data can be measured in situ through electronic equipment.
This is not true of most water quality parameters, which require manually intensive sampling and subsequent lab processing and analysis to obtain the final data values. Where data are measured electronically, telemetry systems such as satellite technology—depicted in figure 11—can relay data from the instrument to data users almost immediately. For example, much of the U.S. Geological Survey’s streamflow data, which are collected continuously by electronic in-stream equipment, are available within 4 hours of collection through use of satellite systems or other telemetry systems such as phones and radios. Since the mid-1980s, the proportion of the U.S. Geological Survey’s streamgages with telemetry has increased dramatically, as shown in figure 12. The U.S. Geological Survey’s computers also have built-in checking routines, which provide some quality assurance, according to a Colorado U.S. Geological Survey official. Satellites, in particular, transmit much of the hydrologic data collected by the U.S. Geological Survey to data users. Once data are picked up by satellite, they can be transmitted to users in a couple of ways. For example, some data collected by the Bureau of Reclamation can be captured directly by users with their own domestic satellite receivers, or can be accessed on the Web through NOAA’s National Geophysical Data Center, a repository for satellite data within the National Environmental Satellite, Data, and Information Service. Another telemetry system—“meteor burst” communication technology— used by the Natural Resources Conservation Service also facilitates timely sharing of water quantity data. Meteor burst technology (see figure 13) is the ability to reflect radio signals, sent from remote locations, off of ionized meteorite trails 50 to 75 miles above the earth's surface. 
With this technology, collection sites as far apart as 1,200 miles can communicate with one another for short time intervals, which are sufficient to "burst" relatively short data messages between sending and receiving stations. This method of communications is preferable for transmitting snowpack data because, among other reasons, interference that mountains often cause in conventional communications is not a problem for a meteor burst system, long-term costs are lower than they are for satellite technology, and data transfer reliability is higher for meteor burst. The Natural Resources Conservation Service operates over 700 automated, high-elevation snow and climate measurement sites in 12 western states and Alaska; these sites use advanced radio technology to report data on the Internet about once each day. Water quantity parameters, such as streamflow and precipitation, are generally more uniform nationwide than water quality parameters, according to several federal and state officials, making it easier for groups to integrate data from separate collection efforts. For example, water withdrawal is measured as a volume of water in gallons, and stage is measured as the height of water in feet, which can be easily compared. Water quality parameters, on the other hand, are less uniform. Sediment concentration in water is one example of a measure that may be described by multiple parameters—total suspended solids, turbidity, and transparency—that are not easily integrated. According to several federal and state officials, water quantity parameters are more uniform partly because traditional parameters and the same methods of measurements have been around for decades. For example, the U.S. Geological Survey has operated its streamgaging network to measure streamflow since 1889, and the Army Corps of Engineers has collected stage data as far back as 1785 on the Mississippi River with more regular measurements beginning about 1838. 
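Long stage records like these are converted to streamflow using a station rating curve, one of the standardized techniques that agencies such as the U.S. Geological Survey and the Army Corps of Engineers rely on. The sketch below uses the standard power-law form of a rating curve; the coefficients are hypothetical, since real stations are individually calibrated against discharge measurements and periodically re-rated as the channel changes.

```python
def rating_curve_discharge(stage_ft: float, gage_zero_ft: float = 1.2,
                           coeff: float = 35.0, exponent: float = 1.8) -> float:
    """Power-law rating curve: Q = C * (h - h0)**n.

    h is observed stage (ft) and h0 the stage of zero flow; C and n are
    fit to discharge measurements at the station. The values used here
    are hypothetical, for illustration only.
    """
    effective_head = max(stage_ft - gage_zero_ft, 0.0)
    return coeff * effective_head ** exponent  # discharge in ft^3/s

# Discharge rises nonlinearly with stage: doubling the effective head
# more than doubles the computed flow.
q_low = rating_curve_discharge(3.2)   # effective head = 2.0 ft
q_high = rating_curve_discharge(5.2)  # effective head = 4.0 ft
print(q_high / q_low > 2)  # True
```

Because the curve is station-specific, consistent stage measurement and periodic recalibration are what make flow records computed by different entities, such as the U.S. Geological Survey and the Corps, comparable.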
Their monitoring methods and standardized techniques for converting stage data to flow data are established and relatively uniform among entities, according to an Army Corps of Engineers official. Many water quality parameters and assessment methods, on the other hand, are relatively new. For example, an EPA bioassessment guidance document noted that many natural resource agencies throughout the country have begun the process of developing and implementing biological assessment and criteria programs. In part because these processes are relatively new, sampling methods differ across agencies, impeding data sharing. In addition to water quantity parameters being more uniform, there are also far fewer of them than there are water quality parameters, which lessens the burden of coordination, according to some of the federal and state officials we spoke with. While water quantity can be characterized by a relatively small number of parameters (on the order of tens) concerning the volume of water available and the volume that is used, a much larger number of chemical, physical, and biological parameters (on the order of thousands) are required to provide an accurate picture of water quality. Chemical measures alone account for a large number of parameters because there are so many agricultural, industrial, pharmaceutical, and household chemicals in use today that are found in surface waters. According to a U.S. Geological Survey official, the agency’s water quantity monitoring largely concentrates on discharge and water height (stage) measurements. In contrast, the U.S. Geological Survey alone collects water quality data on about 500 different chemicals and identifies thousands of biological species in streams, lakes, and reservoirs. We found a broad consensus that, for a variety of reasons, water quantity data collection efforts have been relatively well coordinated. At the same time, we found that more water quantity data are needed to make well-supported watershed management decisions.
The efficient collection and use of water quantity data will only grow in importance as the nation’s population grows and water supplies continue to face increasing demands among competing uses. And given the inherent interrelationship between water quality and water quantity, it will also be increasingly important for data collectors to extend their collaborative efforts to include organizations that collect both water quantity and water quality data. The U.S. Army Corps of Engineers and the Department of the Interior offered comments on a draft of this report that were particularly germane to the material in this chapter. The Corps commented that the lead agency concept described in the previous chapter applies here as well, stating its belief that “designation of a lead federal agency by Congress to operate as a clearinghouse for water quantity data is an important step to improving data collection and management.” The Corps noted that setting up a clearinghouse of water quantity data could result in significant savings for the federal government, while also assisting state and local governments with their land use decisions. As noted in the conclusions to this chapter, there is an inherent interrelationship between water quality and water quantity. We recognize that it is increasingly important for data collectors to extend their collaborative efforts to include both water quantity and water quality data collection. The Department of the Interior expressed agreement with our concern that while water quantity data collection is comparatively well coordinated and consistent, the data currently being collected are not adequate to address the needs of decision makers trying to answer water quantity-related questions. Interior explained that it is particularly troubled by the loss of many of the long-term data collection stations, which are needed for trend analysis to answer many important questions about flood and drought conditions and their recurrence.
| Reliable and complete data are needed to assess watersheds--areas that drain into a common body of water--and allocate limited cleanup resources. Historically, water officials have expressed concern about a lack of water data. At the same time, numerous organizations collect a variety of water data. To address a number of issues concerning the water data that various organizations collect, the Chairman of the Subcommittee on Water Resources and Environment, House Committee on Transportation and Infrastructure, asked GAO to determine (1) the key entities that collect water data, the types of data they collect, how they store the data, and how entities can access the data; and (2) the extent to which water quality and water quantity data collection efforts are coordinated. At least 15 federal agencies collect a wide variety of water quality data. Most notably, the U.S. Geological Survey operates several large water quality monitoring programs across the nation. States also play a key role in water quality data collection to fulfill their responsibilities under the Clean Water Act. In addition, numerous local watershed groups, volunteer monitoring groups, industries, and academic groups collect water quality data. In contrast, collection of water quantity data is more centralized, with three federal agencies collecting the majority of data available nationwide. While GAO found notable exceptions, officials in almost all of the federal and state agencies contacted said that coordination of water quality data was falling short of its potential.
Key barriers frequently identified as impeding better coordination of water quality data collection include (1) the significantly different purposes for which groups collect data, (2) inconsistencies in groups' data collection protocols, (3) an unawareness by data collectors as to which entities collect what types of data, and (4) low priority for data coordination, as shown in a lack of support for councils that promote improved coordination. GAO concluded that designating a lead organization with sufficient authority and resources to coordinate data collection could help alleviate these problems and ensure that watershed managers have better information upon which to base critical decisions. Data collectors strongly agree that coordinating water quantity data collection is considerably less problematic. Reasons include the fact that controversial water allocation decisions require accurate and complete water quantity data; that some of the technologies for measuring water quantity allow for immediate distribution of data; that water quantity data parameters are generally more consistent; and that coordination is simplified in that relatively fewer entities collect these data. Collectors of water quantity data generally agreed that an overall shortage of data was a more serious problem than a lack of coordination of the data that are collected. |
Student financial aid programs are administered by Education's Office of Student Financial Aid Programs under title IV of the Higher Education Act of 1965, as amended. The four major student aid programs currently in use are the Federal Family Education Loan Program (FFELP), the Federal Direct Loan Program (FDLP), the Federal Pell Grant Program, and campus-based programs. Together, these programs will make available over $50 billion to about 9 million students during the 1999-2000 academic year. FFELP and FDLP are the two largest postsecondary student loan programs, and Pell is the largest postsecondary grant program. FFELP provides student loans through private lending institutions; these loans are guaranteed against default by some 36 guaranty agencies and insured by the federal government. FDLP provides student loans directly from the federal government, while Pell provides grants to economically disadvantaged students. In many ways, Education's student financial aid delivery system performs functions similar to those in the banking industry, such as making loans, reporting account status, and collecting payments. The department currently maintains 11 major systems for administering student financial aid programs. These systems were developed independently over time by multiple contractors in response to new programs or mandates, resulting in a complex, highly heterogeneous systems environment. The systems range from legacy mainframes, several originally developed over 15 years ago, to recently developed client-server environments. Information systems are at the heart of the department's ability to carry out its mission. According to its own assessments, the student financial aid delivery process could experience major problems if the systems upon which it relies are not fully Year 2000 (Y2K) compliant in time.
Such risks include delays in disbursements; reduction in Education's ability to transfer payments, process applications, or monitor program operations; and the potential inability of postsecondary education students to verify the status of their loans or grants. Last September, the department reported to the Office of Management and Budget (OMB) that of its 14 mission-critical systems (11 involving student financial aid), 4 had been implemented and were operating as Y2K compliant. Education, along with other executive branch agencies, faced a March 31, 1999, OMB deadline for implementation of Y2K-compliant mission-critical systems. Given the situation at the time, we saw three key issues that threatened the department's ability to carry out its mission: systems testing, data exchanges, and business continuity and contingency planning. Thorough Y2K testing is essential to providing reasonable assurance that systems process dates correctly and will not jeopardize an organization's ability to perform core business operations. Agencies must test not only the Y2K compliance of individual applications, but also the complex interactions among numerous converted or replaced computer platforms, operating systems, utilities, applications, databases, and interfaces. Because of Education's late start and the compression of its schedule to meet the March 31 deadline, the time available for key testing activities of mission-critical systems was limited. We pointed out that Education needed to mitigate critical risks that affected its ability to award and track billions of dollars in student financial aid by ensuring adequate testing of its systems. We said that maintaining testing and implementation schedules while ensuring testing adequacy would be essential. Effectively addressing testing reduces the risk that the department's ability to deliver financial aid to students could be compromised.
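The kind of future-date testing described above can be sketched as follows; the overdue-payment routine is a hypothetical illustration of a common two-digit-year defect, not code from any Education system.

```python
# Hypothetical example of a future-date test: a legacy routine storing
# years as two digits misorders dates across the century boundary, a
# defect that baseline tests confined to 19xx dates never expose.

def legacy_is_overdue(due_yy, as_of_yy):
    """Legacy two-digit-year comparison (buggy across the year 2000)."""
    return due_yy < as_of_yy

def windowed_is_overdue(due_yy, as_of_yy, pivot=50):
    """Remediated comparison using a pivot window: 00-49 -> 20xx, 50-99 -> 19xx."""
    def expand(yy):
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(due_yy) < expand(as_of_yy)

# Baseline test with 1900s dates: the legacy routine appears correct.
assert legacy_is_overdue(98, 99) is True

# Future-date test: as of year "00" (2000), a payment due in "99" (1999)
# is overdue, but the legacy comparison misses it.
assert legacy_is_overdue(99, 0) is False    # the Y2K defect
assert windowed_is_overdue(99, 0) is True   # remediated behavior
```

This is why testing only individual applications against historical data was insufficient: the failure appears only when converted systems are exercised with post-1999 dates.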
Data exchange—the transfer of information across systems—is the second area of risk we identified at Education last September. Conflicting formats or data processed on noncompliant systems could spread errors from system to system, compromising not only data but also the systems themselves. To mitigate this risk, organizations need to inventory and assess their data exchanges, reach agreements with exchange partners on formats and protocols, and develop contingency plans in the event of failure. Education's student financial aid environment is very large and complex; it includes over 7,000 schools, 6,500 lenders, and 36 loan guaranty agencies—not to mention other federal agencies. Figure 1 is a simplified graphic representation of that environment. As we reported in September, to address its data exchanges with schools, lenders, and guaranty agencies, Education dictated how the data that these institutions provide to the department should be formatted. The department handles this in one of two ways: it either provides software to institutions, such as EDExpress, or it provides the technical specifications for the institution to use in developing the necessary interface. As of last fall, the department had been active in coordinating with its data exchange partners. Beyond this, however, we pointed out that Education needed to engage in end-to-end testing of its mission-critical business processes, including data exchanges. Further complicating data exchange compliance is the need to ensure that data are not only formatted consistently but are accurate. As we have previously reported, Education has experienced serious data integrity problems in the past. As Education reported last September, its own surveys showed that many of its data exchange partners had a long way to go. For example, in the summer of 1998, the department and the American Association of Community Colleges conducted surveys of the Y2K readiness of postsecondary schools.
They found that up to one-third of the schools did not have a compliance plan in place. The third area we discussed last September as critical for Education was business continuity and contingency planning. Some problems are inevitable as any organization enters the next century. It is vital, then, that realistic contingency plans be developed to ensure the continuity of core business operations in the event of Y2K-induced failures. And as our testimony pointed out, continuity and contingency plans must focus on more than agency systems alone; they must likewise address data provided by their business partners and the public infrastructure. One weak link anywhere in the chain of critical dependencies can cause major disruption. The department has been committed to developing business continuity and contingency plans for each mission-critical business process and its supporting systems. It initiated contingency planning in February 1998 and appointed a senior executive to manage the development and testing of continuity and contingency plans for all student financial aid operations. Completion of such plans was targeted for March 31, 1999. As of March 31, 1999, the Department of Education reported that all of its 14 mission-critical systems—including the 11 student financial aid delivery systems—were Y2K compliant and in operation. Our review of three of these systems found adequate test documentation. However, the department has not yet closed out four of its systems as having completed the Y2K compliance process in accordance with Education-specific guidance; other systems issues also remain outstanding, although they are generally considered low-risk. Testing of data exchanges and end-to-end testing of key business processes are continuing according to the department's schedule, as is the refinement of business continuity and contingency plans.
As part of our work, we selected three student financial aid systems and reviewed the contractor's change control/quality control process, test plans, and test results. We found adequate documentation supporting baseline, regression, and future date testing at the unit and system levels, as summarized in table 1. Education reported that its 14 mission-critical systems were compliant as of March 31, but it still has remaining tasks to complete for several of these systems before certifying them as completing the Y2K compliance process. The final step of the department's closeout process includes a Year 2000 System Closeout form that is signed by the system manager, principal office coordinator, Year 2000 project management team liaison, and either the independent verification and validation (IV&V) contractor or a representative of the Year 2000 Program Office support contractor. The signatures certify that the system has completed the Y2K compliance process, consisting of successfully passing appropriate Y2K validation tests (including IV&V), and identifying and testing data exchanges. As of May 9, the department had not closed out 4 of its 14 mission-critical systems, including 2 student financial aid systems. Education expects to close out three of the remaining four systems by the end of this month but has not yet received final concurrence from IV&V contractors, who are waiting to review documentation pending from the department. According to Education officials, the fourth system, Education’s Local Area Network (EDNET), requires additional funding for Y2K interoperability. Education has requested these funds as part of a supplemental budget request to OMB for Y2K emergency funding. In addition to closing out the remaining 4 systems, 7 of the other 10 mission-critical systems have remaining tasks (excluding data exchange testing) that still need to be completed.
For example, the Campus Based System's computing environment was converted in February 1999 to another operating system; however, the contractor for the data center has not provided the IV&V contractor with an updated inventory of its hardware and system-related software (e.g., operating system, system utilities, compilers, etc.). The inventory is due to the IV&V contractor in mid-May for review. Another example of an open item is a noncompliant software product used by the Direct Loan Origination System. The software product was upgraded in April, but the IV&V contractor is still waiting for documentation. According to Education officials and the IV&V contractors, these open issues are considered low-risk items and are in the process of being resolved. With the exception of data exchange testing (discussed below), Education expects to resolve these issues over the next few months. The department needs to be diligent in making sure that these issues are indeed resolved expeditiously. The department is also currently in the process of developing a new mission-critical system—the Recipient Financial Management System—to replace the current Pell Grant Recipient Financial Management System. The first two phases of the Recipient Financial Management System are expected to be implemented on May 26, 1999, with the third phase following on June 25, 1999, and the final phase scheduled for implementation on August 13, 1999. According to department officials, the new system was developed to be Y2K compliant and is scheduled to begin compliance testing this month. Beyond the testing of individual mission-critical systems, we reported last fall that Education needed to devote a significant amount of time to testing its data exchanges as part of its end-to-end testing approach. 
In recognition of the importance of data exchanges, the Higher Education Amendments of 1998 specifically required that Education "fully test all data exchange routes for Year 2000 compliance via end-to-end testing, and submit a report describing the parameters and results of such tests to the Comptroller General no later than by March 31, 1999." In response to this mandate, the department submitted a report describing various aspects of its end-to-end testing approach and results to date. OMB has also identified student aid as one of 42 high-impact federal programs and has assigned the Department of Education as the lead agency. Education's approach includes the following: testing and validating the data exchange software that the department develops and provides to postsecondary institutions to support the administration and application of federal student financial aid, which includes EDExpress, Free Application for Federal Student Aid (FAFSA) Express, and FAFSA on the Web; testing all of its data exchanges during the renovation and validation process by simulating the trading partner's role (i.e., sending and receiving data to and from the systems); and testing the data exchange with the actual trading partner, for which a series of test dates has been scheduled—as listed in table 2—to confirm that the transmission performs correctly for a particular entity. In addition to data exchange testing, as part of its continuing outreach activities to data exchange partners, Education is in the process of sending out another survey this month to over 7,000 postsecondary institutions to be used in assessing how educational institutions are progressing with Y2K compliance efforts. The department expects to have results by June 1999.
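The simulated trading-partner testing described above might look like the following sketch; the fixed-width record layout and field names are invented for illustration and are not Education's actual exchange specification.

```python
# Sketch of a simulated trading-partner exchange test: the department
# plays both sender and receiver against its own format specification.
# The layout here (9-character student ID plus CCYYMMDD date) is a
# hypothetical stand-in, not a real Education record format.

import datetime

def encode(student_id: str, disb_date: datetime.date) -> str:
    """Format an outbound record per the (hypothetical) exchange layout."""
    return f"{student_id:>9}{disb_date:%Y%m%d}"

def decode(record: str):
    """Parse an inbound record back into its fields."""
    student_id = record[:9].strip()
    date = datetime.datetime.strptime(record[9:17], "%Y%m%d").date()
    return student_id, date

def simulated_partner_round_trip(student_id, disb_date):
    """Send a record through the spec and receive it back, as one party."""
    return decode(encode(student_id, disb_date))

# A year-2000 leap-day disbursement must survive the round trip intact.
sid, d = simulated_partner_round_trip("123456789", datetime.date(2000, 2, 29))
assert (sid, d) == ("123456789", datetime.date(2000, 2, 29))
```

A round trip like this verifies only one party's handling of the format; as the report notes, a scheduled test with the actual trading partner is still needed to confirm the transmission end to end.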
Education also maintains an Internet web site that contains Y2K information such as "Dear Colleague" letters about Y2K efforts, Education-developed software certification letters, and its publication entitled Year 2000 Readiness Kit: A Compilation of Y2K Resources for Schools, Colleges, and Universities. Also in development, according to Education, are plans to demonstrate the readiness of its student aid application system by having students at a local university apply for aid on systems (at the university and at an Education data center) with the clock set forward to February 29, 2000. Education also reports that as of this month, 18 of the 36 student loan guaranty agencies have Y2K-compliant systems. Of the remaining 18 guaranty agencies still working on Y2K activities, 11 are expected to be compliant by June 1999, with another 3 expected to be compliant by September 1999. Of the remaining 4, 1 guaranty agency reports it will not be compliant until December 1999. As part of its oversight function, the department, with staff from the Office of the Inspector General, is planning site visits to several guaranty agencies over the next few months to review their Y2K efforts. In keeping with the department's commitment to engage in business continuity and contingency planning, in November 1998 it posted an invitation for comment on its Y2K contingency planning process for student financial aid. Since then, a draft plan dated February 5, 1999, has been posted to its web site for review and comment by external trading partners. The draft document contains detailed plans for eight key business processes and associated subprocesses, outlining the process goal, description, and impact analysis. For each subprocess, the business impact analysis addresses failure scenarios, time horizon to failure, normal performance levels, emergency performance levels, risk mitigation options, and contingency options.
The mission-critical business processes are student aid application and eligibility determination; student aid origination and disbursement; student enrollment tracking and reporting; guarantor and lender payments; repayment and collection; institutional eligibility and monitoring; customer service and communication; and Federal Family Education Loan Program origination, disbursement, repayment, and collection. Education also intends to test its business continuity and contingency plans, and has requested additional funding in its supplemental budget request to OMB to do so. The department has conducted some preliminary tests and anticipates doing more. It currently expects to complete all of these tests by June 15, 1999. While much of the work on renovating and validating mission-critical systems has been completed, and the risk of student financial aid delivery system failures has been significantly reduced, the department needs to continue making Y2K a top priority. Accordingly, it needs to focus particular attention on the following activities. Expeditiously resolving open issues delaying certification of the remaining four mission-critical systems still pending formal closeout. Continue resolving and tracking open issues, including environmental or functional changes made to existing systems; in doing so, ensure the involvement of IV&V contractors. Ensuring that the new Recipient Financial Management System has been adequately tested for Y2K compliance as each phase is implemented between now and August. Continue end-to-end testing of critical business processes involving Education's internal systems and its external data exchange partners. Ensure that results are monitored for completeness and any problems that may arise are addressed promptly—including concerns raised by the IV&V contractors. 
Continue outreach activities with schools, guaranty agencies, and other participants in the student financial aid community to share successes and lessons learned to help further reduce the likelihood of Y2K failures. Continue refining and testing the student financial aid business continuity and contingency plans, encouraging the involvement of postsecondary institutions, guaranty agencies, and other external trading partners. In summary, Mr. Chairman, the Department of Education has made progress toward making its programs and supporting systems Year 2000 compliant. However, work remains to complete Education’s planned Y2K program so as to ensure that the risk of disruption to student financial aid delivery is minimized, and that the department is prepared to handle emergencies that may arise. This concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time.
| Pursuant to a congressional request, GAO discussed the Department of Education's efforts to ensure that its computer systems supporting critical student financial aid activities will be able to process information reliably through the turn of the century, focusing on: (1) the progress Education has made to date in making its information systems year 2000 compliant; and (2) the future tasks facing the department. GAO noted that: (1) as of March 31, 1999, Education reported that all of its 14 mission-critical systems--including the 11 student financial aid delivery systems--were year 2000 compliant and in operation; (2) GAO's review of three of these systems found adequate test documentation; (3) however, the department has not yet closed out four of its systems as completing the year 2000 compliance process in accordance with Education-specific guidance; other systems issues also remain outstanding, although they are generally considered low-risk; (4) testing of data exchanges and end-to-end testing of key business processes are continuing according to the department's schedule, as is the refinement of business continuity and contingency plans; (5) while much of the work on renovating and validating mission-critical systems has been completed, and the risk of student financial aid delivery system failures has been significantly reduced, the department needs to continue making year 2000 a top priority; and (6) accordingly, it needs to focus particular attention on the following activities: (a) expeditiously resolving open issues delaying certification of the remaining four mission-critical systems still pending formal closeout; (b) continue resolving and tracking open issues, including environmental or functional changes made to existing systems; in doing so, ensure the involvement of independent verification and validation (IV&V) contractors; (c) ensuring that the new Recipient Financial Management System has been adequately tested for year 2000 compliance as each 
phase is implemented between now and August; (d) continue end-to-end testing of critical business processes involving Education's internal systems and its external data exchange partners; (e) ensure that results are monitored for completeness and any problems that may arise are addressed promptly--including concerns raised by the IV&V contractors; (f) continue outreach activities with schools, guaranty agencies, and other participants in the student financial aid community to share successes and lessons learned to help further reduce the likelihood of year 2000 failures; and (g) continue refining and testing the student financial aid business continuity and contingency plans, encouraging the involvement of postsecondary institutions, guaranty agencies, and other external trading partners. |
DOD’s primary representative for supplier-base issues is the Office of the Deputy Under Secretary of Defense for Industrial Policy (Industrial Policy). Its mission is to sustain an environment that ensures the industrial base on which DOD depends is reliable, cost-effective, and sufficient to meet its requirements. Industrial Policy defines reliability as suppliers providing contracted products and services in a timely manner; cost-effectiveness as the delivery of products and services at or below target costs; and sufficiency as suppliers delivering contracted products and services that meet prescribed performance requirements. DOD’s Program Executive Officers manage a portfolio of programs related to weapon systems. DOD also relies on a cadre of military and civilian officials—known as program managers—to lead the development and delivery of individual weapon systems. Program managers or their designees interact with prime contractors, who manage subcontractors to provide the final good or service to DOD. Currently, DOD relies primarily on about six prime contractors, who manage thousands of subcontractors for DOD systems. DOD has a variety of authorities, including laws, regulations, and an executive order, that govern its interaction with the defense supplier base. There are several key authorities available to DOD for maintaining information on its suppliers as well as ensuring a domestic capability for certain items, such as radiation-hardened microprocessors. In addition, the Department of Commerce has authority to assess the supplier base to support the national defense, and has conducted 15 supplier-base assessments in the past 5 years, including studies on imaging and sensor technology. See appendix II for a description of selected key defense supplier-base authorities. Although DOD has undertaken a variety of efforts to monitor the defense supplier base, it lacks a framework and consistent approach for identifying and monitoring concerns in the supplier base.
The military services, Industrial Policy, and other DOD components collect information about the health and viability of certain defense supplier-base sectors. However, DOD has not applied departmentwide criteria to determine supplier-base characteristics that could result in reduced availability or nonavailability of needed items. As part of its supplier-base monitoring efforts, DOD has previously created lists of specific items that are considered critical at a point in time, but lists such as these run the risk of becoming obsolete and do not focus on supplier-base characteristics that could guide identification of problems. To better target its monitoring resources, Industrial Policy recently established criteria for supplier-base characteristics that could be indicators of supply concerns. These criteria have primarily been applied to the missile and space defense sectors and have not been used to guide the identification and monitoring of supplier-base concerns for all sectors departmentwide. The military services and other DOD components conduct studies on their respective suppliers, often in response to supplier concerns for individual programs. For example, the Army’s Aviation and Missile Research, Development, and Engineering Center studies availability issues for Army missile and space programs, such as the availability of raw materials for these programs. The Air Force Research Laboratory conducts assessments that range from annual studies of key supply sectors to evaluations of the supplier base for individual components or materials, such as beryllium. Within the Navy, the Fire Scout vertical takeoff and landing unmanned aerial vehicle program had an industrial capability assessment conducted of its supplier base before it proceeded to the production phase of the program.
Officials from the Missile Defense Agency told us they have dedicated staff to monitor the supplier base for each of the agency’s 12 programs and have contracted for support to help improve supply-chain management between the agency’s program offices and their prime contractors. The Secretary of Defense is required by legislation to report annually to Congress on the supplier base. Industrial Policy prepares these reports, which provide a broad analysis of supplier trends and summarize supplier-base studies performed by various DOD components. For example, Industrial Policy reports on the percentage of prime contracts with a value of $25,000 or greater awarded to foreign suppliers. In addition, Industrial Policy intermittently reports on foreign reliance for selected weapon programs. For example, in both 2001 and 2004, Industrial Policy reported to Congress on overall foreign reliance for 8 and 12 selected weapon programs, respectively. Industrial Policy also reports annually on industrial capabilities, including a macro-level summary of DOD’s seven supplier sectors and a summary of capabilities assessments conducted within DOD—which totaled 47 in 2007. Industrial Policy also provides quarterly updates on the financial and economic metrics of various defense suppliers; convened a roundtable of companies to identify barriers to conducting business with DOD; chartered a cross-department work group to collaborate on tasks related to defense supplier-base challenges, such as sole sources of supply and barriers to competition; and conducted other activities to foster knowledge of the defense supplier base. To support supplier-base analyses by Industrial Policy and the military services, the Defense Contract Management Agency’s Industrial Analysis Center conducts program- and sector-specific defense supplier-base studies, as well as analyses to support DOD’s studies of foreign reliance.
While these multiple efforts have provided the various DOD components with information about specific suppliers, they have not provided a DOD-wide view of supplier-base characteristics that could be indicators of problems—in large part because the efforts are not guided by departmentwide criteria for identifying and monitoring supplier-base concerns. In addition, DOD has developed lists of items deemed critical at a point in time as part of its supplier-base monitoring efforts. For example, in 2003, after insufficient visibility, planning, and programming led to shortages of several mission-essential items during Operations Iraqi Freedom and Enduring Freedom, the Joint Staff directed the military services, the Defense Logistics Agency, the Defense Contract Management Agency, and the Combatant Commanders to create a list of their respective top 20 “Critical Few” material readiness-shortfall items. Criteria for selecting items included high variances in wartime versus peacetime demand, military-unique characteristics without a commercial substitute, and limited industrial-base capacity. DOD developed a classified list of 25 items in 2003 that, according to officials, has not been updated. Similarly, an Army regulation and Air Force directive cite the development and use of “critical items lists.” However, officials from both services stated that the language in these authorities is outdated and the lists, if ever developed, are no longer used. According to Industrial Policy, lists such as these only capture items that are deemed critical at a point in time and, therefore, do not reflect changes in industry, technology, and DOD requirements. The Air Force has initiated efforts to establish criteria to track supplier-base concerns.
Specifically, the Air Force’s Space and Missile Systems Center, under direction from the National Security Space Office, established a Space Industrial Base Program in order to address issues affecting the Air Force’s ability to develop and deploy space systems. According to Air Force officials, this action was a result of DOD Directive 5101.2. The center developed a method for identifying and tracking defense items with supplier-base concerns, defining such items as those whose loss or impending loss of manufacturers or suppliers has the potential to severely affect the program in terms of schedule, performance, or cost if left unresolved. Specifically, criteria for identifying and monitoring these items are based on supplier-base characteristics such as uneconomical production requirements, foreign-source competition, limited availability, or increasing cost of items and raw materials used in the manufacturing process. According to the Space and Missile Systems Center, based on the criteria it developed, it identified approximately 80 critical items in its space systems and coordinated with the Aerospace Corporation, a federally funded research and development center, to track the supplier base for these items. According to Industrial Policy, the breadth of DOD’s programs requires that it selectively monitor DOD’s supplier base. In turn, to better target supplier-monitoring resources, Industrial Policy recently established criteria for identifying conditions that could be indicators of supplier-base concerns for certain defense items, deeming these items “important.” Its criteria for such important items include those produced by a sole source; used by three or more programs; representing obsolete, enabling, or emerging technologies; requiring long lead times to manufacture; or having limited surge-production capability.
According to Industrial Policy, this internal effort grew out of DOD’s development of its critical asset list, and the organization uses the “important” designation to help it identify components and their suppliers that have the most potential to negatively affect production across program and service lines. However, while Industrial Policy uses these criteria, it is not aware of similar use by other DOD organizations. Industrial Policy has used these criteria to identify important components in the missile and space sectors but has yet to use them to guide the identification and monitoring of supplier-base concerns for all sectors departmentwide. According to Industrial Policy, the missile and space sectors have the preponderance of important items because they contain few commercial off-the-shelf components and a greater number of defense-unique components and, therefore, these sectors contain the most sole-source suppliers. According to Industrial Policy, these sectors are most likely to experience rapid production increases during times of conflict—another contributing factor. Examples of items identified in these sectors include thermal batteries, tactical missile rocket motors, lithium-ion batteries, and traveling-wave tubes. While still early in the process, Industrial Policy reported that it has used these criteria to help identify and work towards mitigating supplier-base concerns within the space and missile sectors. Specifically, the Defense Production Act Title III was used to improve domestic manufacturing performance for two items deemed important—traveling-wave tubes and long-life lithium-ion batteries.
In a separate effort, Industrial Policy stated it is collaborating with the Defense Logistics Agency’s National Defense Stockpile Center to create departmentwide criteria for the terms “critical,” “strategic,” and “important” and expects the Defense Logistics Agency to report to Congress by the end of calendar year 2008 on the results of this effort. As required by statute, in 2007 DOD established a Strategic Materials Protection Board to determine the need to provide a long-term domestic supply of materials critical to national security to ensure that national defense needs are met, analyze risks associated with potential nonavailability of these materials from domestic sources, and recommend a strategy to the President to ensure domestic availability of these materials. The Board has initially defined critical materials as those that perform a unique function for defense systems and have no viable alternative; for which DOD dominates the market; and for which there is a significant and unacceptable risk of supply disruption if there are insufficient U.S. or reliable non-U.S. suppliers. However, the Board’s focus is to assess only the criticality of materials, such as specialty metals, not to identify and track critical defense items or components. DOD often relies on the military services, program offices, or prime contractors to identify supplier-base concerns, including gaps and potential gaps, with no departmentwide requirement for when to report these gaps to higher-level offices. Over the past 5 years, most program officials we surveyed faced gaps in their supplier base or had sole sources of supply for certain items. To address these supplier concerns, programs often relied on the prime contractors, which had more detailed knowledge of the supplier base, and left it to the contractor’s judgment to report gaps and take actions to address supplier challenges.
Further, program officials reported that they generally use their discretion in determining when to report identified gaps and planned actions to higher DOD levels. As a result, DOD’s ability to know when a departmentwide approach is needed to mitigate these concerns may be limited. DOD often relies on its individual program offices to ensure that their respective supplier bases are sufficient. According to officials from Industrial Policy, individual program offices are to ensure that their supplier base is sufficient, and Industrial Policy would become involved only when supplier-base concerns might affect multiple programs or more than one military service, therefore requiring a corporate DOD approach. Most of the program officials we surveyed had supplier-base concerns in the last 5 years (see table 1). Specifically, 16 of the 20 program officials we surveyed reported facing supplier gaps or potential gaps, including obsolescence of component parts or technologies, diminishing manufacturing sources for components, and production challenges. In addition, 15 of the 20 program officials identified sole sources of supply for components of their weapon systems. Seventeen of the program officials we surveyed said these supplier-base concerns were identified by their prime contractors, which maintain detailed knowledge of the supplier base. Many of the program officials we interviewed maintain frequent contact with their prime contractors and noted that this level of communication facilitates supplier-base knowledge. Specifically, 19 out of 20 program officials we surveyed said their prime contractor often identified and provided supplier-base information to them and that communication was frequent when a supplier-base concern arose. 
Program officials had varying degrees of knowledge of their supplier tiers—18 reported that they maintain knowledge of their program’s supplier base at the prime-contractor level, while 9 maintained knowledge of the lowest-tier subcontractor of the supply chain. One program official noted that knowledge of the lower-tier suppliers is gained as issues arise, and another stated that knowledge of these lower tiers is based on assessed “criticality” to the program—which is defined on a program-by-program basis. The four prime contractors that we interviewed about their own corporate insight into the supplier base noted that they had extensive internal corporate metrics to evaluate the health and performance of their subcontractors, which offered the companies a degree of visibility into their supply chains, from second-tier subcontractors to lower-tier suppliers of raw materials. For example, one of the prime contractors had software that allowed it to analyze and measure data on each supplier within its network. It captured data on each supplier’s performance based on the quality of its work and the delivery of its product, which resulted in a combined performance rating. Examples of other metrics tracked include supplier biography, report card results, trend analysis of performance ratings over a period of time such as a calendar year, and the combined performance rating of a part that a supplier manufactures for a particular system. To address reported supplier gaps, program offices took a variety of actions. For example, actions to address supplier gaps in the area of obsolescence ranged from large-scale purchases, known as lifetime buys, to initiating component redesign. In other instances, the gap has not yet been resolved. The Space Tracking Surveillance System program relies on one company to supply the base materials used to produce nickel-hydrogen batteries, which are critical to this program.
However, this company plans to cease production of these batteries in 2009 or shortly thereafter, yet an alternate source of supply has not been identified. In another instance, the Hellfire Missile program is working with the Army Program Executive Officer for Missiles and Space along with Industrial Policy to request a waiver to procure a chemical that is no longer produced in the United States from a company in China. The program is also exploring whether a Navy facility could produce the chemical in the quantities needed by this and other military programs that use this chemical. Program officials and prime contractors we spoke with stated that they use their discretion for when to report supplier-base concerns. Programs are not required to report supplier issues to their program executive officer or to higher levels within DOD, such as Industrial Policy, and most programs do not have contractual requirements with their prime contractor to direct when a supplier issue must be reported. While program officials reported working closely with their prime contractors to address concerns once they were identified, program officials and prime contractors we spoke with told us that it is a judgment call as to when to report supplier-base concerns to higher levels within DOD. For example, of the 20 program officials we surveyed, 17 reported that they had shared information on supplier concerns with their cognizant program executive officer. However, only four programs, all of which faced supplier gaps in the last 5 years, reported sharing such information with Industrial Policy. Thirteen program officials we surveyed stated that no requirement exists for when their program office should report supplier-base concerns to higher levels within DOD. Similarly, nine of 20 program officials told us that no requirement exists for what should trigger a prime contractor to report a supplier-base concern to them.
One of these programs, the B-2 Spirit stealth bomber, is in the process of creating a requirement for when its prime contractor should notify it of supplier concerns. According to program officials, the Hellfire missile and Navy Fire Scout programs have imposed contractual requirements on their prime contractors to report any supplier concerns. Other program officials stated that while no formal requirement existed, there was an understanding between their prime contractor and the program office that any activity that will affect schedule, which could include supplier-base concerns, must be reported to the program office. While addressing supplier gaps at the program- or program executive officer–level may be appropriate in many cases, program offices across the military services rely on the same supplier base in some instances. In such cases, concerns with these suppliers can become even more crucial if a supplier is the sole source. For example, multiple DOD programs in the space sector rely on one provider for traveling-wave tube amplifiers needed for satellite navigation purposes. According to officials from the Air Force’s Space and Missile Systems Center, it closely tracks this supplier because any disruption in its production capability could adversely affect the cost, schedule, and performance of multiple space programs. In addition, officials from the Patriot Advanced Capability-3 missile program told us that production delays with its inertial measurement unit also affected the Army’s Tactical Missile System program, as it uses this same unit from this company. However, DOD may not be aware of these types of cross-department concerns in other supplier-base sectors because it does not have a framework for programs to report information on supplier gaps and vulnerabilities for critical items.
In addition, Industrial Policy may benefit from receiving information on supplier gaps and vulnerabilities to help it achieve its mission to sustain an environment that ensures the industrial base on which DOD depends is reliable, cost-effective, and sufficient to meet its requirements. A framework for programs to report supplier-gap information could assist Industrial Policy’s decisions on when to activate available tools to mitigate supplier-base concerns, such as the authorities under the Defense Production Act. As we recently reported in a review of Defense Production Act use since its 2003 reauthorization, 25 DOD projects have received Title III funding over the past several years, totaling almost $420 million in assistance. Almost half of the projects received funds in order to establish a domestic source of supply or to help alleviate dependence on sole sources of supply. Recent major projects include Radiation Hardened Microelectronics Capital Expansion and a Beryllium Industrial Base Production Initiative. While DOD has a number of efforts to monitor its supplier base, these efforts lack a framework and set of characteristics to identify and track supplier-base concerns and allow for consistent reporting to higher levels within DOD, such as Industrial Policy. A failure to systematically identify and address supplier-base concerns could result in untimely discoveries of supply vulnerabilities, which could potentially affect DOD’s ability to meet national security objectives. While DOD components, such as the Air Force’s Space and Missile Systems Center, have taken action to identify and monitor supplier-base concerns, these efforts have been limited in scope or lacked departmentwide involvement. DOD has an opportunity to leverage the various efforts taken by its components into a departmentwide framework for identifying and monitoring supplier-base concerns. 
Considering the dynamic nature of the defense supplier base, this framework could take into account recent efforts by Industrial Policy to establish characteristics that could be indicators of supply concerns. Further, by relying on individual program offices and their contractors to determine when it is appropriate to raise concerns, DOD cannot be assured that it is identifying all gaps that may need to be addressed at a departmentwide level. Until DOD establishes departmentwide characteristics for consistent identification and monitoring of supplier-base concerns and develops requirements for elevating supplier-base concerns—at both the contractor and program levels—it will continue to lack the visibility needed to oversee a robust supplier base. We are recommending that the Secretary of Defense direct Industrial Policy, in coordination with the military services and other relevant DOD components, to consider the following two actions to identify and monitor the supplier base:

1. Leverage existing DOD efforts to identify criteria of supplier-base problems and fully apply these criteria to guide the identification and monitoring of supplier-base concerns throughout DOD.

2. Create and disseminate DOD-wide written requirements for reporting potential concerns about supplier-base gaps. These requirements should delineate when, and to what level, supplier-base concerns should be elevated and should take into account the two levels of reporting—prime contractors to program offices and program offices to higher levels in DOD.

DOD provided comments on a draft of this report. DOD also provided technical comments, which we incorporated as appropriate. In commenting on our first recommendation, DOD concurred with the need to leverage existing DOD efforts to identify criteria of supplier-base problems and fully apply these criteria to guide the identification and monitoring of supplier-base concerns throughout DOD.
DOD indicated that its ongoing Defense Acquisition Guidebook update presents a fitting and timely opportunity to institutionalize these criteria into departmental acquisition policy. DOD partially concurred with our second recommendation, stating that while there is merit in having formal, published criteria for making judgments regarding when program offices should report supplier issues to Industrial Policy, similar formal reporting criteria or contractual mechanisms are not needed for prime contractors to report supplier-base concerns to the program office. DOD expects prime contractors to maintain internal corporate metrics to evaluate the health and performance of their subcontractors and likewise expects program offices to maintain frequent and open communication with their prime contractors on supplier-base issues. Our recommendation is for DOD to consider how best to facilitate the flow of this information between program offices and their prime contractors, regardless of whether it is through a contractual requirement or other means. This is particularly important given the large role that contractors play in monitoring the supplier base. While we found that almost all of the 20 program officials we surveyed relied on their prime contractors to provide supplier-base information, including identification of supplier-base concerns, there is no guidance to ensure that information is consistently elevated to the appropriate levels. As such, we maintain that a mechanism is needed to facilitate the flow of information from the prime contractor to the program office, and from the program office to higher levels within DOD— especially for those concerns whose characteristics meet the criteria for making judgments regarding suppliers and components for DOD. We also provided a draft of this report to the Department of Commerce. The department reviewed the draft and provided no comments. DOD’s written comments are reprinted in appendix III. 
We are sending copies of this report to interested congressional committees; the Secretaries of Defense and Commerce; and the Director, Office of Management and Budget. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 or [email protected] if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Others making key contributions to this report are listed in appendix IV. To assess Department of Defense (DOD) efforts to monitor its defense supplier base and identify and address gaps that might exist in its supplier base, we reviewed relevant laws and regulations, such as sections of Title 10, U.S. Code, the DOD 5000 series, National Security Space Acquisition Policy 03-01, and the Defense Production Act of 1950, as amended. We also met with officials and reviewed documents from multiple DOD components as well as defense companies, to discuss efforts, policies, and guidance. We met with officials from DOD’s Office of the Deputy Under Secretary of Defense for Industrial Policy (Industrial Policy) to review its processes and actions for monitoring the defense supplier base. We also discussed with Industrial Policy its role in preparing and submitting the Annual Industrial Capabilities Report to Congress. We met with the Defense Contract Management Agency’s Industrial Analysis Center to discuss its role in studying DOD’s supplier-base sectors. We met with officials from the U.S. Air Force, Army, Navy, and the Missile Defense Agency to review and discuss their policies and practices for monitoring the defense supplier base. 
We also met with officials from the Department of Commerce, Bureau of Industry and Security, to discuss the bureau’s role in monitoring the defense supplier base through its authorities to conduct surveys and analyses, and prepare reports on specific sectors of the U.S. defense supplier base. We also met with a Senior Fellow of the International Security Program, Defense Industrial Initiatives Group, who at that time was with the Center for Strategic and International Studies, to discuss his studies and perspectives on the defense supplier base. In addition, we selected a nongeneralizable sample of 20 DOD weapon programs (see table 2) based on criteria including representation of the aerospace or electronics industry; representation of various stages of the acquisition life cycle, to include those with mature and emerging technologies; cross-representation of DOD components—Air Force, Army, Navy, and the Missile Defense Agency; and selection of at least one DX-rated program, based on our review of the most current list of approved DX programs, dated November 7, 2007, posted by Industrial Policy as of the time we selected the programs to survey. GAO also has ongoing work through its annual “Assessments of Selected Weapon Programs” for many of these programs, which allowed the team to build upon our prior work efforts and existing DOD contacts. To better understand the general supplier-base knowledge, identification of supply gaps, and the use of domestic and international sourcing and tracking of these sources, we designed and administered a Web-based survey to program officials most knowledgeable about the supplier base for each of the 20 programs. We pretested a draft of our survey during January and February 2008 with officials at five DOD program offices. In the pretests, we were generally interested in the clarity of the questions as well as the flow and layout of the survey. After these pretests, we made appropriate revisions to the survey instrument.
We conducted the survey between April and June 2008 through a series of e-mails: we sent prenotification e-mails beginning on April 1, activated the survey on April 7, and sent follow-up e-mails to nonrespondents on April 14 and 22, 2008. We closed the survey on June 6, 2008, with a 100 percent response rate. To further determine how programs maintain knowledge of and monitor their supplier base, we then tailored follow-up questions to all 20 program officials to solicit information and documentation in areas such as communication between and among DOD and its prime contractors, and expansion on areas where programs experienced supplier gaps. We also met with and obtained information and documentation from the prime contractors for several of these programs, including officials from Boeing, Lockheed Martin, Northrop Grumman, and Raytheon. We conducted this performance audit from September 2007 to August 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 3 below describes several key authorities available to the Department of Defense (DOD) for both maintaining information on its suppliers as well as ensuring a domestic capability for certain items. In addition to the contact name above, John Neumann, Assistant Director; Tara Copp; Lisa Gardner; Michael Hanson; Ian Jefferies; Marie Ahearn; Jean McSween; and Karen Sloan made key contributions to this report. Defense Production Act: Agencies Lack Policies and Guidance for Use of Key Authorities. GAO-08-854. Washington, D.C.: June 26, 2008. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008.
Defense Infrastructure: Management Actions Needed to Ensure Effectiveness of DOD’s Risk Management Approach for the Defense Industrial Base. GAO-07-1077. Washington, D.C.: August 31, 2007.

High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.

Highlights of a GAO Forum: Managing the Supplier Base in the 21st Century. GAO-06-533SP. Washington, D.C.: March 31, 2006.

Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 30, 2005.

Federal Procurement: International Agreements Result in Waivers of Some U.S. Domestic Source Restrictions. GAO-05-188. Washington, D.C.: January 26, 2005.

Defense Acquisitions: Knowledge of Software Suppliers Needed to Manage Risk. GAO-04-678. Washington, D.C.: May 25, 2004.

Joint Strike Fighter Acquisition: Observations on the Supplier Base. GAO-04-554. Washington, D.C.: May 3, 2004.

The Department of Defense (DOD) relies on thousands of suppliers to provide weapons, equipment, and raw materials to meet U.S. national security objectives. Yet, increased globalization in the defense industry and consolidation of the defense supplier base into a few prime contractors has reduced competition and single-source suppliers have become more common for components and subsystems. For this report, GAO (1) assessed DOD's efforts to monitor the health of its defense supplier base, and (2) determined how DOD identifies and addresses gaps that might exist in its supplier base. To conduct its work, GAO reviewed supplier-base-related laws, regulations, and guidelines; met with officials from DOD's Office of Industrial Policy, defense contractors, and other DOD officials; and surveyed 20 major DOD weapon acquisition program officials on potential supplier-base gaps. DOD's efforts to monitor its supplier base lack a departmentwide framework and consistent approach.
Its monitoring efforts generally respond to individual program supplier-base concerns or are broader assessments of selected sectors. As part of its supplier-base monitoring efforts, DOD has also previously identified lists of critical items--which according to DOD's Office of Industrial Policy (Industrial Policy) do not reflect the dynamic changes that occur in industry, technology, and DOD requirements. While DOD recently established criteria for identifying supplier-base characteristics that could be problem indicators--such as sole-source suppliers and obsolete or emerging technologies--these criteria have primarily been applied to the missile and space sectors and have not been used to guide the identification and monitoring of supplier-base concerns for all sectors departmentwide. DOD uses an informal approach to identify supplier-base concerns, often relying on the military services, program offices, or prime contractors to identify and report these concerns, including gaps or potential gaps. As no requirement exists for when to report such gaps to higher-level offices, knowledge of defense supplier-base gaps across DOD may be limited. While 16 of the 20 program officials GAO surveyed reported that they identified supplier gaps or potential gaps over the past 5 years, only 4 reported sharing this information with Industrial Policy. These gaps included obsolescence of components and items with only one available supplier. Program offices often relied on the prime contractor to identify and help address supplier-base gaps, and prime contractors and programs generally used their discretion as to when to report gaps to higher levels. As a result, Industrial Policy may not be receiving information to help it activate available tools, such as the authorities under the Defense Production Act, to mitigate supplier-base gaps.
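The internal supplier metrics that prime contractors described to GAO (scores for quality of work and delivery of product combined into a single performance rating, with trend analysis over a period such as a calendar year) can be sketched as follows. This is a hypothetical illustration only: the class name, field names, and 60/40 weighting are assumptions, not any contractor's actual methodology.

```python
from dataclasses import dataclass

@dataclass
class SupplierRating:
    """One period's scorecard for a supplier (hypothetical structure)."""
    supplier: str
    quality: float   # 0-100 score for quality of work (assumed scale)
    delivery: float  # 0-100 score for on-time delivery (assumed scale)

    def combined(self, quality_weight=0.6):
        """Weighted combination of the two scores; the 60/40 split is an assumption."""
        return quality_weight * self.quality + (1 - quality_weight) * self.delivery

def trend(ratings):
    """Change in combined rating from the first period to the last."""
    return ratings[-1] - ratings[0]

# Example: four quarterly scorecards for a fictional supplier.
quarters = [SupplierRating("Supplier A", q, d)
            for q, d in [(90, 80), (88, 82), (85, 75), (80, 70)]]
combined = [r.combined() for r in quarters]
# A negative trend over the year would flag this supplier for closer monitoring.
```

In this sketch, a declining combined rating is the kind of signal that could trigger the reporting to program offices and higher DOD levels that the report recommends.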
Although the DHS criteria for primary screening require an improved ability to detect certain nuclear materials at operational thresholds, ASPs could meet the criteria for improvement while still failing to detect anything more than lightly shielded material. DNDO officials acknowledge that passive radiation detection equipment, which includes both the new and current-generation portal monitors, is capable of detecting certain nuclear materials only when this material is unshielded or lightly shielded. For this reason, the DOE threat guidance used to set PVTs’ detection threshold is based on the equipment’s limited sensitivity to anything more than lightly shielded nuclear material rather than on the assumption that smugglers would take effective shielding measures. DOE developed the guidance in 2002 and 2003 when CBP began deploying PVTs for primary screening. DOE and national laboratory officials responsible for the guidance told us the assumption of light shielding was based not on an analysis of the capabilities of potential smugglers to take effective shielding measures but rather on the limited sensitivity of PVTs to detect anything more than certain lightly shielded nuclear materials. In contrast, PVTs are more sensitive to the relatively strong radiation signature of other nuclear materials, and the threat guidance assumes a higher level of shielding for setting the operational threshold for detection of such materials. However, even for such materials, the DOE threat guidance assumes that shielding would not exceed a level provided by the contents of an average cargo container. Moreover, DNDO has not completed efforts to fine-tune PVTs’ software and thereby improve sensitivity to nuclear materials. As a result, the criteria compare ASPs to the current performance of PVTs and do not take potential improvements into account, which affects any assessment of “significant” improvement over current technology. 
DNDO officials expect they can achieve small improvements to PVTs’ performance through additional development of “energy windowing,” a technique currently being used in PVTs to provide greater sensitivity than otherwise possible. Pacific Northwest National Laboratory officials responsible for developing the technique also told us small improvements may be possible, and CBP officials have repeatedly urged DNDO to investigate the potential of the technique. DNDO collected the data needed to further develop energy windowing during the 2008 performance testing at the Nevada Test Site but has not yet funded Pacific Northwest National Laboratory efforts to analyze the data and further develop the technique. Other aspects of the criteria for a significant increase in operational effectiveness require that ASPs either provide more than a marginal improvement in addressing other limitations of current-generation equipment or at least maintain the same level of performance in areas in which the current-generation equipment is considered adequate: The primary screening requirement for an 80 percent reduction in the rate of innocent alarms could result in hundreds of fewer secondary screenings per day, thereby reducing CBP’s workload and delays to commerce. The actual reduction in the volume of innocent alarms would vary and would be greatest at the nation’s busiest ports of entry, such as Los Angeles/Long Beach, where CBP officials report that PVTs generate up to about 600 innocent alarms per day. A DNDO official said the requirement for an 80 percent reduction in innocent alarms was developed in conjunction with CBP and was based on a level that would provide meaningful workload relief. 
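Energy windowing exploits the fact that even a low-resolution PVT records counts in coarse energy bands, and the ratios between those bands differ between naturally occurring radioactive material (NORM) and threat-like sources. The sketch below is a simplified illustration under assumed window boundaries and thresholds; it is not the Pacific Northwest National Laboratory algorithm:

```python
# Hypothetical coarse energy-window counts from a PVT-style detector.
# Real energy-windowing methods are more sophisticated; this only
# illustrates that ratios between windows carry information that a
# gross-count alarm discards, which is how the technique can reduce
# innocent alarms without raising the overall threshold.

def window_ratio(low_counts, high_counts):
    """Ratio of low-energy to high-energy window counts."""
    return low_counts / max(high_counts, 1)

def classify(low_counts, high_counts, gross_threshold=1000, ratio_threshold=3.0):
    gross = low_counts + high_counts
    if gross < gross_threshold:
        return "no alarm"
    # NORM such as potassium-40 emits comparatively high-energy gammas,
    # so an elevated low/high ratio is more suspicious of shielded or
    # inherently low-energy sources.
    if window_ratio(low_counts, high_counts) > ratio_threshold:
        return "alarm: threat-like spectrum"
    return "alarm: NORM-like spectrum (candidate for reduced referrals)"

print(classify(low_counts=400, high_counts=900))   # NORM-heavy cargo
print(classify(low_counts=1200, high_counts=150))  # low-energy-heavy source
```

A cargo load that trips the gross-count threshold but shows a NORM-like ratio could, in principle, be cleared without a secondary screening, which is the kind of workload relief the 80 percent innocent-alarm reduction criterion targets.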
The primary screening criteria requiring that ASPs provide at least the same level of sensitivity to plutonium and medical and industrial isotopes, but not specifying an improvement, were based on DNDO’s assessment that PVTs adequately detect such materials, which have a stronger radiation signature than HEU. In addition, CBP officials said that including medical and industrial isotopes in the criteria addressed a CBP requirement for verifying that those transporting certain quantities of these materials into the United States are properly licensed. The secondary screening requirement that ASPs reduce the probability of misidentifying special nuclear material by one-half addresses the inability of relatively small handheld devices to consistently locate and identify potential threats in large cargo containers. For example, a handheld device may fail to correctly identify special nuclear material if the material is well-shielded or the device is not placed close enough to a radiation source to obtain a recognizable measurement. According to CBP and DNDO, the requirement for a reduction in the average time to conduct secondary screenings is not more specific because the time varies significantly among ports of entry and types of cargo being screened. Improvements to the 2008 round of testing addressed concerns we raised about earlier rounds of ASP testing. However, the testing still had limitations, and the preliminary results are mixed. As we testified in September 2008, DHS’s improvements to the 2008 round of ASP testing addressed concerns we raised about previous tests. A particular area of improvement was in the performance testing at the Nevada Test Site, where DNDO compared the capability of ASP and current-generation equipment to detect and identify nuclear and radiological materials, including those that could be used in a nuclear weapon. 
The improvements addressed concerns we previously raised about the potential for bias and provided credibility to the results within the limited range of scenarios tested by DNDO. For example, we reported in 2007 that DNDO had allowed ASP contractors to adjust their systems after preliminary runs using the same radiological materials that would be used in the formal tests. In contrast, the plan for the 2008 performance test stipulated that there would be no system contractor involvement in test execution, and no ASP contractors were at the test location on the day we observed performance testing. Furthermore, DNDO officials told us, and we observed, that they did not conduct preliminary runs with threat objects used in the formal tests. In 2007, we reported that DNDO did not objectively test the handheld identification devices because it did not adhere to CBP’s standard operating procedure for using the devices to conduct a secondary inspection, which is fundamental to the equipment’s performance in the field. DNDO addressed this limitation in the 2008 round of performance testing: CBP officers operated the devices and adhered as closely to the standard operating procedure as test conditions allowed. While the test conditions did not allow CBP officers to obtain real-time technical support in interpreting the device’s measurements, as they would in the field to increase the probability of correctly identifying a radiation source, DNDO officials said they addressed this limitation. For example, they treated a decision by a CBP officer to indicate the need for technical support as a correct outcome if the test scenario involved the use of a potential threat, such as HEU. Other aspects of testing, while not specifically addressing concerns we previously raised, also added credibility to the test results. 
Based on our analysis of the performance test plan, we concluded that the test design was sufficient to identify statistically significant differences between the new technology and current-generation systems when there were relatively large differences in performance. Specifically, DNDO conducted a sufficient number of runs of each scenario used in the 2008 performance testing to identify such differences. With regard to the general conduct of the 2008 round of testing, two aspects, in particular, enhanced the overall rigor of the tests: (1) criteria for ensuring that ASPs met the requirements for each phase before advancing to the next, and (2) the participation of CBP and the DHS Science and Technology Directorate. The test and evaluation master plan established criteria requiring that the ASPs have no critical or severe issues rendering them completely unusable or impairing their function before starting or completing any test phase. In addition, the criteria established a cumulative limit of 10 issues requiring a work-around (e.g., a straightforward corrective step, such as a minor change in standard operating procedures) and 15 cosmetic issues not affecting proper functioning. DNDO and CBP adhered to the criteria even though doing so resulted in integration testing conducted at the Pacific Northwest National Laboratory taking longer than anticipated and delaying the start of field validation. For example, DNDO and CBP did not allow a vendor’s ASP system to complete integration testing until all critical or severe issues had been resolved. The involvement of CBP and the DHS Science and Technology Directorate provided an independent check, within DHS, of DNDO’s efforts to develop and test the new portal monitors. 
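The relationship between the number of runs per scenario and the size of a detectable performance difference can be illustrated with a standard two-proportion sample-size calculation. The sketch below uses a normal approximation with conventional significance and power values; it is an assumption-laden illustration, not DNDO's actual test design:

```python
import math

def runs_per_scenario(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate runs per system needed to detect a difference between
    detection probabilities p1 and p2 (two-sided two-proportion z-test,
    normal approximation, alpha = 0.05, power = 0.80). Illustrative only."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# A large performance gap needs few runs; a small gap needs many.
print(runs_per_scenario(0.95, 0.50))  # large difference in detection probability
print(runs_per_scenario(0.60, 0.50))  # small difference in detection probability
```

Under these assumptions, a 45-point gap in detection probability can be resolved with roughly 15 runs per system, while a 10-point gap requires several hundred, which is consistent with a design that can confirm only relatively large differences.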
For example, the lead CBP official involved in ASP testing told us that DNDO provided an initial assessment of the severity of issues uncovered during testing, but CBP made the final decision on categorizing them as critical, severe, work-around, or cosmetic issues. CBP also added a final requirement to integration testing before proceeding to field validation to demonstrate ASPs’ ability to operate for 40 hours without additional problems. According to CBP officials, their efforts to resolve issues prior to field validation reflect the importance CBP places on ensuring that ASPs are sufficiently stable and technically mature to operate effectively in a working port of entry and thereby provide for a productive field validation. The DHS Science and Technology Directorate, which is responsible for developing and implementing the department’s test and evaluation policies and standards, will have the lead role in the final phase of ASP testing; the final phase, consisting of 21 days of continuous operation, is scheduled to begin at one seaport after the completion of field validation. The Science and Technology Directorate identified two critical questions to be addressed through operational testing: (1) Will the ASP system improve operational effectiveness (i.e., detection and identification of threats) relative to the current-generation system, and (2) is the ASP system suitable for use in the operational environment at land and sea ports of entry? The suitability of ASPs includes factors such as reliability, maintainability, and supportability. Because the operational testing conducted at one seaport is not sufficient to fully answer these questions—for example, because the testing will not allow threat objects to be inserted into cargo containers—the directorate plans to also conduct an independent analysis of the results from previous test phases, including performance testing. 
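The limits of a short operational test can be quantified under a standard constant-failure-rate (exponential) model. The sketch below takes the specification's 1,000-hour mean-time-between-failures figure, cited later in this report, as an assumed target and shows the statistical confidence that a 21-day (roughly 504-hour) failure-free run can provide:

```python
import math

def zero_failure_confidence(test_hours, mtbf_target):
    """Confidence that true MTBF >= mtbf_target, given zero failures over
    test_hours, under an exponential (constant failure rate) model.
    A simplified reliability-demonstration formula; illustrative only."""
    return 1.0 - math.exp(-test_hours / mtbf_target)

T = 21 * 24  # 21 days of continuous operation, about 504 hours
print(f"Confidence in MTBF >= 1,000 h after {T} failure-free hours: "
      f"{zero_failure_confidence(T, 1000):.0%}")
# Failure-free hours needed for 90 percent confidence at the same target:
print(f"Hours needed for 90% confidence: {1000 * math.log(1 / 0.10):.0f}")
```

Roughly 500 failure-free hours yields only about 40 percent confidence in a 1,000-hour MTBF under this model, which is why the directorate's testing can demonstrate compliance with the specification but not long-term reliability.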
The 2008 testing still had limitations, which do not detract from the test results’ credibility but do require that results be appropriately qualified. Limitations included the following:

- The number of handheld identification device measurements collected during performance testing was sufficient to distinguish only particularly large differences from ASPs’ identification ability. In particular, the standard operating procedure for conducting secondary inspections using ASPs, which requires less time than when using handheld devices, allowed DNDO to collect more than twice as many ASP measurements and to test ASPs’ identification ability against more radiation sources than used to test handheld identification devices.

- The performance test results cannot be generalized beyond the limited set of scenarios tested. For example, DNDO used a variety of masking and shielding scenarios designed to include cases where both systems had 100 percent detection, cases where both had zero percent detection, and several configurations in between so as to estimate the point where detection capability ceased. However, the scenarios did not represent the full range of possibilities for concealing smuggled nuclear or radiological material. For example, DNDO only tested shielding and masking scenarios separately, to differentiate between the impacts of shielding and masking on the probabilities of detection and identification. As a result, the performance test results cannot show how well each system would detect and identify nuclear or radiological material that is both shielded and masked, which might be expected in an actual smuggling incident. Similarly, DNDO used a limited number of threat objects to test ASPs’ detection and identification performance, such as weapons-grade plutonium but not reactor-grade plutonium, which has a different isotopic composition.
A report on special testing of ASPs conducted by Sandia National Laboratories in 2007 recommended that future tests use plutonium sources having alternative isotopic compositions. Sandia based its recommendations on results showing that the performance of ASP systems varied depending on the isotopic composition of plutonium. The Science and Technology Directorate’s operational testing is designed to demonstrate that the average time between equipment failures (the measure of ASPs’ reliability) is not less than 1,000 hours. Thus, the testing will not show how reliable the equipment will be over a longer term. DHS Science and Technology Directorate officials recognize this limitation and said they designed operational testing only to demonstrate compliance with the ASP performance specification. Furthermore, to the extent that the Science and Technology Directorate relies on performance test results to evaluate ASPs’ ability to detect and identify threats, its analysis of ASPs’ effectiveness will be subject to the same limitations as the original testing and analysis conducted by DNDO. The preliminary results presented to us by DNDO are mixed, particularly in the capability of ASPs used for primary screening to detect certain shielded nuclear materials. However, we did not obtain DNDO’s final report on performance testing conducted at the Nevada Test Site until early April 2009, and thus we had limited opportunity to evaluate the report. In addition, we are not commenting on the degree to which the final report provides a fair representation of ASPs’ performance. Preliminary results from performance testing show that the new portal monitors detected certain nuclear materials better than PVTs when shielding approximated DOE threat guidance, which is based on light shielding. 
In contrast, differences in system performance were less notable when shielding was slightly increased or decreased: Both the PVTs and ASPs were frequently able to detect certain nuclear materials when shielding was below threat guidance, and both systems had difficulty detecting such materials when shielding was somewhat greater than threat guidance. DNDO did not test ASPs or PVTs against moderate or greater shielding because such scenarios are beyond both systems’ ability. (See fig. 3 for a summary of performance test results for detection of certain nuclear materials.) With regard to secondary screening, ASPs performed better than handheld devices in identification of threats when masked by naturally occurring radioactive material. However, differences in the ability to identify certain shielded nuclear materials depended on the level of shielding, with increasing levels appearing to reduce any ASP advantages over the handheld identification devices—another indication of the fundamental limitation of passive radiation detection. Other phases of testing, particularly integration testing, uncovered multiple problems meeting requirements for successfully integrating the new technology into operations at ports of entry. Of the two ASP vendors participating in the 2008 round of testing, one has fallen several months behind in testing due to the severity of the problems it encountered during integration testing; the problems were so severe that it may have to redo previous test phases to be considered for certification. The other vendor’s system completed integration testing, but CBP suspended field validation of the system after 2 weeks because of serious performance problems that may require software revisions. In particular, CBP found that the performance problems resulted in an overall increase in the number of referrals for secondary screening compared to the existing equipment. 
According to CBP, this problem will require significant corrective actions before testing can resume; such corrective actions could in turn change the ability of the ASP system to detect threats. The problem identified during field validation was in addition to ones identified during integration testing, which required multiple work-arounds and cosmetic changes before proceeding to the next test phase. For example, one problem requiring a work-around related to the amount of time it takes for the ASP to sound an alarm when a potential threat material has been detected. Specifications require that ASPs alarm within two seconds of a vehicle exiting the ASP. However, during testing, the vendor’s ASP took longer to alarm when a particular isotope was detected. The work-around to be implemented during field validation requires that all vehicles be detained until cleared by the ASP; the effect on commerce must ultimately be ascertained during field validation. CBP officials anticipate that they will continue to uncover problems during the first few years of use if the new technology is deployed in the field. The officials do not necessarily regard such problems to be a sign that testing was not rigorous but rather a result of the complexity and newness of the technology and equipment. Delays to the schedule for the 2008 round of testing have allowed more time for analysis and review of results, particularly from performance testing conducted at the Nevada Test Site. The original schedule, which underestimated the time needed for testing, anticipated completion of testing in mid-September 2008 and the DHS Secretary’s decision on ASP certification between September and November 2008. DHS officials acknowledged that scheduling a certification decision shortly after completion of testing would leave limited time to complete final test reports and said the DHS Secretary could rely instead on preliminary reports if the results were favorable to ASPs. 
DHS’s most recent schedule anticipated a decision on ASP certification as early as May 2009, but DHS has not updated its schedule for testing and certification since suspending field validation in February 2009 due to ASP performance problems. Problems uncovered during testing of ASPs’ readiness to be integrated into operations at U.S. ports of entry have caused the greatest delays to date and have allowed more time for DNDO to analyze and review the results of performance testing. Integration testing was originally scheduled to conclude in late July 2008 for both ASP vendors. The one ASP system that successfully passed integration testing did not complete the test until late November 2008—approximately 4 months behind schedule. (The delays to integration testing were due in large part to the adherence of DNDO and CBP to the criteria discussed earlier for ensuring that ASPs met the requirements for each test phase.) In contrast, delays to performance testing, which was scheduled to run concurrently with integration testing, were relatively minor. Both ASP systems completed performance testing in August 2008, about a month later than DNDO originally planned. The schedule delays have allowed more time to conduct injection studies—computer simulations for testing the response of ASPs and PVTs to the radiation signatures of threat objects randomly “injected” (combined) into portal monitor records of actual cargo containers transported into the United States, including some containers with innocent sources of radiation. However, DNDO does not plan to complete the studies prior to the Secretary of Homeland Security’s decision on certification even though DNDO and other officials have indicated that the studies could provide additional insight into the capabilities and limitations of advanced portal monitors. 
According to DNDO officials, injection studies address the inability of performance testing conducted at the Nevada Test Site to replicate the wide variety of cargo coming into the United States and the inability to bring special nuclear material and other threat objects to ports of entry and place them in cargo during field validation. Similarly, while they acknowledged that injection studies have limitations, DOE national laboratory officials said the studies can increase the statistical confidence in comparisons of ASPs’ and PVTs’ probability of detecting threats concealed in cargo because of the possibility of supporting larger sample sizes than feasible with actual testing. A February 2008 DHS independent review team report on ASP testing also highlighted the benefits of injection studies, including the ability to explore ASP performance against a large number of threat scenarios at a practical cost and schedule and to permit an estimate of the minimum detectable amount for various threats. DNDO has the data needed to conduct the studies. It has supported efforts to collect data on the radiation signatures for a variety of threat objects, including special nuclear materials, as recorded by both ASP and PVT systems. It has also collected about 7,000 usable “stream-of-commerce” records from ASP and PVT systems installed at a seaport. Furthermore, DNDO had earlier indicated that injection studies could provide information comparing the performance of the two systems as part of the certification process for both primary and secondary screening. However, addressing deficiencies in the stream-of-commerce data delayed the studies, and DNDO subsequently decided that performance testing would provide sufficient information to support a decision on ASP certification. DNDO officials said they would instead use injection studies to support effective deployment of the new portal monitors. 
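The mechanics of an injection study can be sketched as a simple Monte Carlo exercise. The snippet below makes strong simplifying assumptions: each stream-of-commerce record is reduced to a single gross count value, and "injection" adds a noisy threat contribution to it. All numbers are hypothetical; real studies combine full spectral records from ASP and PVT systems.

```python
import math
import random

random.seed(42)

def noisy_counts(mean):
    """Counting noise via a normal approximation to Poisson statistics
    (adequate for the large mean counts assumed here)."""
    return max(0, round(random.gauss(mean, math.sqrt(mean))))

def detection_probability(background_records, threat_mean, threshold, trials=2000):
    hits = 0
    for _ in range(trials):
        bg = random.choice(background_records)      # sampled real-cargo background
        combined = bg + noisy_counts(threat_mean)   # "inject" the threat signature
        hits += combined > threshold
    return hits / trials

# Hypothetical gross counts standing in for stream-of-commerce records.
backgrounds = [noisy_counts(900) for _ in range(7000)]
threshold = 1050  # hypothetical alarm threshold set against this background

strong = detection_probability(backgrounds, threat_mean=400, threshold=threshold)
weak = detection_probability(backgrounds, threat_mean=80, threshold=threshold)
print(f"P(detect), lightly shielded (strong signature): {strong:.2f}")
print(f"P(detect), heavily shielded (weak signature):   {weak:.2f}")
```

Because each recorded background can be reused against thousands of simulated injections, such studies support far larger sample sizes than physical testing, which is the advantage the DOE national laboratory officials cited.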
Given that radiation detection equipment is already being used at ports of entry to screen for smuggled nuclear or radiological materials, the decision whether to replace existing equipment requires that the benefits of the new portal monitors be weighed against the costs. DNDO acknowledges that ASPs are significantly more expensive than PVTs to deploy and maintain, and based on preliminary results from the 2008 testing, it is not yet clear that the $2 billion cost of DNDO’s deployment plan is justified. Even if ASPs are able to reduce the volume of innocent cargo referred for secondary screening, they are not expected to detect certain nuclear materials that are surrounded by a realistic level of shielding better than PVTs could. Preliminary results of DNDO’s performance testing show that ASPs outperformed the PVTs in detection of such materials during runs with light shielding, but ASPs’ performance rapidly deteriorated once shielding was slightly increased. Furthermore, DNDO and DOE officials acknowledged that the performance of both portal monitors in detecting such materials with a moderate amount of shielding would be similarly poor. This was one of the reasons that performance testing did not include runs with a moderate level of shielding. Two additional aspects of the 2008 round of testing call into question whether ASPs’ ability to provide a marginal improvement in detection of nuclear materials and reduce innocent alarms warrants the cost of the new technology. First, the DHS criteria for a significant increase in operational effectiveness do not take into account recent efforts to improve the current-generation portal monitors’ sensitivity to nuclear materials through the “energy windowing” technique, most likely at a much lower cost. Data on developing this technique were collected during the 2008 round of performance testing but have not been analyzed. 
Second, while DNDO made improvements to the 2008 round of ASP testing that provided credibility to the test results, its test schedule does not allow for completion of injection studies prior to certification even though the studies could provide additional insight into the performance of the new technology. Without results from injection studies, the Secretary of Homeland Security would have to make a decision on certification based on a limited number of test scenarios conducted at the Nevada Test Site. Assuming that the Secretary of Homeland Security certifies ASPs, CBP officials anticipate that they will discover problems with the equipment when they start using it in the field. Integration testing uncovered a number of such problems, which delayed testing and resulted in ASP vendors making multiple changes to their systems. Correcting such problems in the field could prove to be more costly and time consuming than correcting problems uncovered through testing, particularly if DNDO proceeds directly from certification to full-scale deployment, as allowed under the congressional certification requirement that ASPs provide a significant increase in operational effectiveness. We recommend that the Secretary of Homeland Security direct the Director of DNDO to take the following two actions to ensure a sound basis for a decision on ASP certification:

- Assess whether ASPs meet the criteria for a significant increase in operational effectiveness based on a valid comparison with PVTs’ full performance potential, including the potential to further develop PVTs’ use of energy windowing to provide greater sensitivity to threats. Such a comparison could also be factored into an updated cost-benefit analysis to determine whether it would be more cost-effective to continue to use PVTs or deploy ASPs for primary screening at particular ports of entry.

- Revise the schedule for ASP testing and certification to allow sufficient time for review and analysis of results from the final phases of testing and completion of all tests, including injection studies.

If ASPs are certified, we further recommend that the Secretary of Homeland Security direct the Director of DNDO to develop an initial deployment plan that allows CBP to uncover and resolve any additional problems not identified through testing before proceeding to full-scale deployment—for example, by initially deploying ASPs at a limited number of ports of entry. We provided a draft of this report to DOE and DHS for their review and comment. DOE provided technical comments, which we have incorporated into our report as appropriate. DHS’s written comments are reproduced in appendix II. DHS agreed in part with our recommendations. Specifically, DHS stated that it believes its plan to deploy ASPs in phases, starting at a small number of low-impact locations, is in accordance with our recommendation to develop an initial deployment plan that allows problems to be uncovered and resolved prior to full-scale deployment. We agree that this deployment plan would address our recommendation and note that DHS’s comments are the first indication provided to us of the department’s intention to pursue such a plan. In contrast, DHS did not concur with our recommendations to (1) assess whether ASPs meet the criteria for a significant increase in operational effectiveness based on a comparison with PVTs’ full potential, including further developing PVTs’ use of energy windowing; and (2) revise the ASP testing and certification schedule to allow sufficient time for completion of all tests, including injection studies. With regard to energy windowing, DHS stated that using current PVT performance as a baseline for comparison is a valid approach because the majority of increased PVT performance through energy windowing has already been achieved.
While DHS may be correct, its assessment is based on expert judgment rather than the results of testing and analysis being considered by the department to optimize the use of energy windowing. Given the marginal increase in sensitivity required of ASPs, we stand by our recommendation to assess ASPs against PVTs’ full potential. DHS can then factor PVTs’ full potential into a cost-benefit analysis prior to acquiring ASPs. On this point, DHS commented that its current cost-benefit analysis is a reasonable basis to guide programmatic decisions. However, upon receiving DHS’s comments, we contacted DNDO to obtain a copy of its cost-benefit analysis and were told the analysis is not yet complete. With regard to injection studies, DHS agreed that the schedule for ASP certification must allow sufficient time for review and analysis of test results but stated that DHS and DOE experts concluded injection studies were not required for certification. DHS instead stated that the series of ASP test campaigns would provide a technically defensible basis for assessing the new technology against the certification criteria. However, DHS did not rebut the reasons we cited for conducting injection studies prior to certification, including test delays that have allowed more time to conduct the studies and the ability to explore ASP performance against a large number of threat scenarios at a practical cost and schedule. On the contrary, DHS acknowledged the delays to testing and the usefulness of injection studies. Given that each phase of testing has revealed new information about the capabilities and limitations of ASPs, we believe conducting injection studies prior to certification would likely offer similar insights and would therefore be prudent prior to a certification decision. 
DHS provided additional comments regarding our assessment of the relative sensitivity of ASPs and PVTs and our characterization of the severity of the ASPs’ software problems uncovered during field validation. With regard to sensitivity, DHS implied that our characterization of the relative ability of ASPs and PVTs is inaccurate and misleading because we did not provide a complete analysis of test results. We disagree. First, in meetings to discuss the preliminary results of performance testing conducted at the Nevada Test Site, DNDO officials agreed with our understanding of the ability of ASPs and PVTs deployed for primary screening to detect shielded nuclear materials. Furthermore, contrary to the assertion that a complete analysis requires a comparison of ASPs to handheld identification devices, our presentation is consistent with DHS’s primary screening criterion for detection of shielded nuclear materials, which only requires that ASPs be compared with PVTs. Finally, while we agree that the performance test results require a more complete analysis, DNDO did not provide us with its final performance test report until early April 2009, after DHS provided its comments on our draft report. In the absence of the final report, which DNDO officials told us took longer than anticipated to complete, we summarized the preliminary results that DNDO presented to us during the course of our review as well as to congressional stakeholders. With regard to ASP software problems uncovered during field validation, we clarified our report in response to DHS’s comment that the severity of the problems has not yet been determined. 
DHS stated that its preliminary analysis indicates the problems should be resolved by routine adjustments to threshold settings rather than presumably more significant software “revisions.” However, given the history of lengthy delays during ASP testing, we believe that DHS’s assessment of the severity of problems encountered during field validation may be overly optimistic. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Homeland Security and Energy; the Administrator of NNSA; and interested congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To evaluate the degree to which Department of Homeland Security’s (DHS) criteria for a significant increase in operational effectiveness address the limitations of the current generation of radiation detection equipment, we clarified the intent of the criteria through the Domestic Nuclear Detection Office’s (DNDO) written answers to our questions and through interviews with U.S. Customs and Border Protection (CBP) officials. We also took steps to gain a fuller understanding of the strengths and limitations of the current-generation equipment, which the criteria use as a baseline for evaluating the effectiveness of advanced spectroscopic portals (ASP). In particular, we obtained copies of the Department of Energy (DOE) threat guidance and related documents used to set polyvinyl toluene (PVT) thresholds for detection of nuclear materials. 
We interviewed DOE and national laboratory officials responsible for the threat guidance about the process for developing it and the basis for its underlying assumptions, including shielding levels. We also interviewed DNDO and Pacific Northwest National Laboratory officials regarding the extent to which PVTs currently deployed at ports of entry meet the guidance and the development and use of energy windowing to enhance PVTs’ sensitivity to nuclear materials. To evaluate the rigor of the 2008 round of testing as a basis for determining ASPs’ operational effectiveness, we reviewed the test and evaluation master plan and plans for individual phases of testing, including system qualification testing conducted at vendors’ facilities, performance testing conducted at the Nevada Test Site for evaluating ASP detection and identification capabilities, and integration testing conducted at Pacific Northwest National Laboratory for evaluating the readiness of ASPs to be used in an operational environment at ports of entry. We also reviewed draft plans for field validation conducted at CBP ports of entry and the DHS Science and Technology Directorate’s independent operational test and evaluation. In reviewing these documents, we specifically evaluated the extent to which the performance test design was sufficient to identify statistically significant differences between the ASP and current-generation systems and whether DHS had addressed our concerns about previous rounds of ASP testing. We interviewed DNDO, CBP, and other DHS officials responsible for conducting and monitoring tests, and we observed, for one day each, performance testing at the Nevada Test Site and integration testing at DOE’s Pacific Northwest National Laboratory.
We also interviewed representatives of entities that supported testing, including DOE’s National Nuclear Security Administration and Pacific Northwest National Laboratory, the National Institute of Standards and Technology, and the Johns Hopkins University Applied Physics Laboratory. We reviewed the DHS independent review team report of previous ASP testing conducted in 2007, and we interviewed the chair of the review team to clarify the report’s findings. Finally, we examined preliminary or final results for the phases of testing completed during our review, and we interviewed DNDO and CBP officials regarding the results.

To evaluate the test schedule, we analyzed the initial working schedule DNDO provided to us in May 2008 and the schedule presented in the August 2008 test and evaluation master plan, and we tracked changes to the schedule and the reasons for any delays. We interviewed DNDO and other officials with a role in testing to determine the amount of time allowed for analysis and review of results. We interviewed DNDO and Pacific Northwest National Laboratory officials regarding the injection studies, including reasons for delays in the studies and plans for including the results as part of the ASP certification process.

We conducted this performance audit from May 2008 to May 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Ned Woodward, Assistant Director; Dr. Timothy Persons, Chief Scientist; James Ashley; Steve Caldwell; Joseph Cook; Omari Norman; Alison O’Neill; Rebecca Shea; Kevin Tarmann; and Eugene Wisnoski made key contributions to this report.
| The Department of Homeland Security's (DHS) Domestic Nuclear Detection Office (DNDO) is testing new advanced spectroscopic portal (ASP) radiation detection monitors. DNDO expects ASPs to reduce both the risk of missed threats and the rate of innocent alarms, which DNDO considers to be key limitations of radiation detection equipment currently used by Customs and Border Protection (CBP) at U.S. ports of entry. Congress has required that the Secretary of DHS certify that ASPs provide a significant increase in operational effectiveness before obligating funds for full-scale procurement. GAO was asked to review (1) the degree to which DHS's criteria for a significant increase in operational effectiveness address the limitations of existing radiation detection equipment, (2) the rigor of ASP testing and preliminary test results, and (3) the ASP test schedule. GAO reviewed the DHS criteria, analyzed test plans, and interviewed DHS officials. The DHS criteria for a significant increase in operational effectiveness require a minimal improvement in the detection of threats and a large reduction in innocent alarms. Specifically, the criteria require a marginal improvement in the detection of certain weapons-usable nuclear materials, considered to be a key limitation of current-generation portal monitors. The criteria require improved performance over the current detection threshold, which for certain nuclear materials is based on the equipment's limited sensitivity to anything more than lightly shielded materials, but do not specify a level of shielding that smugglers could realistically use. In addition, DNDO has not completed efforts to improve current-generation portal monitors' performance. As a result, the criteria do not take the current equipment's full potential into account. 
With regard to innocent alarms, the other key limitation of current equipment, meeting the criteria could result in hundreds fewer innocent alarms per day, thereby reducing CBP's workload and delays to commerce. DHS increased the rigor of ASP testing in comparison with previous tests. For example, DNDO mitigated the potential for bias in performance testing (a concern GAO raised about prior testing) by stipulating that there would be no ASP contractor involvement in test execution. Such improvements added credibility to the test results. However, the testing still had limitations, such as a limited set of scenarios used in performance testing to conceal test objects from detection. Moreover, the preliminary results are mixed. The results show that the new portal monitors have a limited ability to detect certain nuclear materials at anything more than light shielding levels: ASPs performed better than current-generation portal monitors in detection of such materials concealed by light shielding approximating the threat guidance for setting detection thresholds, but differences in sensitivity were less notable when shielding was slightly below or above that level. Testing also uncovered multiple problems in ASPs meeting the requirements for successful integration into operations at ports of entry. CBP officials anticipate that, if ASPs are certified, new problems will appear during the first few years of deployment in the field. While DNDO's schedule underestimated the time needed for ASP testing, test delays have allowed more time for review and analysis of results. DNDO's original schedule anticipated completion in September 2008. Problems uncovered during testing of ASPs' readiness to be integrated into operations at U.S. ports of entry caused the greatest delays to this schedule. 
DHS's most recent schedule anticipated a decision on ASP certification as early as May 2009, but DHS recently suspended field validation due to ASP performance problems and has not updated its schedule for testing and certification. In any case, DNDO does not plan to complete computer simulations that could provide additional insight into ASP capabilities and limitations prior to certification even though delays have allowed more time to conduct the simulations. DNDO officials believe the other tests are sufficient for ASPs to demonstrate a significant increase in operational effectiveness. |
The largest federal investments in health IT and patient electronic access to health information are the Medicare and Medicaid EHR Incentive Programs. These programs provide incentives to hospitals and health care professionals that are able to demonstrate meaningful use of a certified EHR system. Providers must attest that they have met certain measures in order to receive payment, with the required functions increasing in complexity as providers move through the stages of the program. Among the measures in the current programs are two that are specifically designed to capture the extent to which patients are able to electronically access their health information. Unless providers claimed an exclusion from reporting these measures, providers were required to successfully complete them in order to receive incentive payments for program year 2015. The measures are as follows: (1) Ability to electronically access health information. More than 50 percent of a provider’s patients must be provided timely access to view online, download, and transmit to a third party their health information, and (2) Actual electronic access. At least one of a provider’s patients must electronically view, download, or transmit to a third party their information during the 90-day reporting period. Providers participating in these EHR Programs must use certified EHR technology, which is technology that has been determined to conform to standards and certification criteria developed by ONC. These criteria do not specify a particular technical method for providing patients with access to their health information, but do specify parameters for accessing certain types of health information. According to ONC, many providers use some type of patient portal to provide access to these types of health information. A patient portal is a secure online website that gives patients 24-hour access to their personal health information and medical records from anywhere with an Internet connection.
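The two measures described earlier reduce to simple numeric checks on a provider's counts for the reporting period. As a minimal illustrative sketch (the function and its inputs are our own assumptions, not CMS's actual attestation logic, and it ignores the exclusions a provider may claim):

```python
# Illustrative only: the function name and input fields are hypothetical,
# not part of CMS's attestation system.

def meets_access_measures(patients_total: int,
                          patients_offered_access: int,
                          patients_who_accessed: int) -> bool:
    """Check one provider's counts against the two patient-access measures.

    Measure 1: more than 50 percent of patients must be provided timely
    access to view online, download, and transmit their health information.
    Measure 2: at least one patient must actually view, download, or
    transmit their information during the 90-day reporting period.
    """
    measure_1 = patients_offered_access > 0.5 * patients_total
    measure_2 = patients_who_accessed >= 1
    return measure_1 and measure_2

# A provider offering access to 60 of 100 patients, with 2 patients
# actually logging in, satisfies both measures.
print(meets_access_measures(100, 60, 2))   # True
print(meets_access_measures(100, 50, 0))   # False: exactly 50 percent is
                                           # not "more than 50 percent"
```

Note the strict inequality in measure 1: a provider offering access to exactly half of its patients would not meet the threshold.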
Patient portals are purchased by a provider and generally only include health information generated and made available by that individual provider.

ONC released the most recent strategic plan for health IT, developed with input from federal and nonfederal stakeholders, in September 2015. This plan guides the actions of multiple federal agencies with regard to health IT. The plan outlines four primary goals, each with its own objectives for using health IT, to improve the health and well-being of individuals and communities. Two of these goals are to (1) transform health care delivery and community health and (2) advance person-centered and self-managed care. In addition to this strategic plan, in 2015 ONC developed, with input from federal and nonfederal stakeholders, a Shared Nationwide Interoperability Roadmap (which we refer to in this report as the Roadmap). The Roadmap proposes specific actions to advance the nation towards an interoperable health IT system that collectively improves health. The Roadmap includes the goals that patients can access their longitudinal electronic health information, contribute to this information, send and receive that information through a variety of technologies, and use that information to manage their health and participate in shared decision making with their health care providers. These Roadmap goals support the HHS strategic plan goals of advancing care and transforming health care delivery and community health.

Performance measurement involves ongoing monitoring and reporting of program accomplishments, including progress toward pre-established goals. Our previous work has found that performance measures can serve as an early warning system to management and as a vehicle for improving accountability to the public.
We have also published guidance on assessing performance which states that it is important for performance measures to be tied to program goals and for agencies to ensure that their activities support their organizational missions and move them closer to accomplishing their strategic goals. In addition, our guidance to federal agencies on designing evaluations suggests that performance measures should include both process and outcome measures. (See table 1.) Outcome measures are particularly useful in assessing the status of program operations, identifying areas that need improvement, and ensuring accountability for end results. Furthermore, our guidance on assessing performance notes that leading organizations should not only establish performance measures but also use information from these performance measures to continuously improve processes, identify program priorities, and set improvement goals.

Data from CMS show that most patients who received their health care from providers participating in the Medicare EHR Program had the ability to electronically access their health information. Information from our survey of providers and interviews with patients and providers show that this access is typically offered through patient portals and the type of information offered varies. In interviews, patients described the benefits and limitations of accessing their health information electronically. CMS data show that providers who participated in the 2015 Medicare EHR Program reported offering most of their patients the ability to electronically access their health information. In 2015, all participating hospitals reported offering electronic access to health information to 88 percent of their patients on average, and nearly all participating health care professionals reported offering it to 87 percent of their patients on average.
This means that the providers gave the patients they saw or discharged all of the information necessary to electronically view, download, and transmit the patients’ health information, such as a website address, a username and password, and instructions for logging onto the website. Our survey of providers who participated in the 2014 Medicare EHR Program and interviews with providers further illustrate the extent to which providers offered their patients electronic access to their health information. Our survey found that nearly all providers routinely provided new patients with access to this information (92 percent of health care professionals and 91 percent of hospitals). Providers we interviewed also described circumstances in which a patient may not have been offered access. These circumstances included instances such as in emergency care, when offering electronic access may not be appropriate at the point of care, or for behavioral health data, when it might not be in the best interest of the patient to access the information. ONC has published information on the extent to which non-federal acute-care hospitals and office-based physicians provide their patients with access to their health information. ONC reported that in 2015, almost all hospitals (95 percent) offered patients the ability to electronically view their health information, and about 7 out of 10 (69 percent) hospitals provided their patients with the ability to view, download, and transmit their health information electronically. In addition, ONC recently reported that among office-based physicians, 63 percent provided patients with the ability to electronically view their health information. ONC reported in another data brief that in 2014, nearly 4 in 10 Americans were offered electronic access to their medical records.
Our survey of providers who participated in the 2014 Medicare EHR Program found that most providers offered electronic access to patients through a patient portal, and our interviews suggest that patients often received access to a different portal for each provider. A patient portal is a secure website that allows patients to access information contained in their provider’s EHR system and is managed by the provider. EHR vendors and providers we interviewed noted that patients generally have to manage separate login information for each provider-specific portal. Many patients we interviewed confirmed this; for example, a number of patients we interviewed said that they had access to more than one portal, each of which contained their health information from a different care setting (e.g., hospital stays, general practitioner, and different specialists). The types of health information that providers made available to patients varied, but our survey of providers indicated that most routinely offer access to most types of patient information. The Medicare EHR Program requires that participants make certain types of information available, such as laboratory test results and current medications. According to our survey, an estimated 94 percent of hospitals and 77 percent of health care professionals routinely offered access to laboratory test results, which are required by the program. (See table 2.) Our survey also showed that fewer providers routinely offered access to certain information that is not required by the program and that they find less helpful for their patients to view. For example, 46 percent of hospitals and 54 percent of health care professionals reported routinely offering access to clinician notes, which are not required by the program. 
Additionally, our survey showed that fewer providers find it helpful for patients to view clinician notes and radiological images than for patients to view information such as laboratory results and current medications. Representatives from two hospitals we interviewed explained that their hospitals relied on a committee to decide what information to make available to patients through the portal and how soon after it becomes available to the provider it should be made available to the patient. One EHR vendor we spoke with noted that the vendor automatically makes almost all information, including clinician notes, available to patients through the vendor’s patient portal by default, though many vendors allow providers to limit the types of information they routinely make available to patients. Patients we interviewed said that the type of information made available in their portals was incomplete and inconsistent across providers. Though many patients talked about accessing their lab results through their portal, multiple patients said that their results were not always available for them to view. For example, one patient said that sometimes her lab results are posted, and other times they are never made available to her, and she does not have a sense of when the results will be made available. Three patients expressed frustration that their vital signs information such as weight and blood pressure was not available through their portals, particularly since they knew that their providers collected this information during visits. Another patient said that she has observed a lot of variability in what information providers make available through their portals, with some doctors providing access to detailed information such as clinical notes and lab results, and others only making basic information available, such as appointment reminders and vital signs information collected during visits.
During our interviews with 33 patients who have accessed their health information electronically, patients described numerous benefits from the ability to electronically access their health information. These benefits included the ability to communicate better with their health care providers, track health information over time, and share information with other providers. Multiple patients described circumstances in which they used information in their portal to improve their interactions with their provider and adhere to provider recommendations. For example, one patient described how he logged into the portal after a visit to review instructions from his provider that he had forgotten. Patients also noted that electronically accessing their health information made them feel empowered or more proactive to manage their health, particularly over time. For example, patients described using their electronic access to view specific test results over time to see whether their condition was changing, or to access diagnostic information that gave them the ability to do more research on their medical condition. One patient described using the information in her portal to notice a trend in her lab results and also learn of a condition she had of which her provider failed to inform her. Patients also described using their patient portals to share information with other providers. Multiple patients described printing out medical information from their portals, such as lab results, and bringing that information to appointments with other providers. Patients noted that portals make sharing health information very convenient. However, patients also described some limitations with their access, many of which were related to the functionality of the portal. Patients we interviewed stated that they were able to view their health information electronically, but many patients said that it was not clear that this information could be electronically downloaded or transmitted. 
Patients also expressed frustration with the time and effort it took to set up electronic access through their providers, manage multiple passwords for their many portals, and understand each portal’s user interface. Many patients said that the information itself was often incorrect or not presented in helpful ways, and some patients noted that there was no simple way to correct or denote incorrect information within the portal. For example, one patient said that another person’s information was included in her record, and it took multiple requests to her provider to remove this information from her record. Another patient was frustrated that information about his weight that was captured in his yearly physical was not available in the portal in a way that would allow him to track his weight over time. Multiple patients said that an overall limitation is that they could not aggregate all of their health information into a single longitudinal health record. While there are health IT products available to help patients and providers aggregate information, they are not in high demand. For example, patient-purchased personal health records (PHR) can enable patients to aggregate electronic information from disparate sources into a single record. Health IT developers said that there are PHR products available for patients who attempt to generate such a record, but our survey and patient interviews indicate that these products are not widely used. Health IT developers noted that these products have limited functionality because they or the users (e.g., patients) cannot access information stored in EHR systems, and one developer noted that a lack of standardization limits the ability to present information in a meaningful way. Patients we interviewed generally stated that they were not using these products, and health IT developers agreed that consumer demand is low.
Additionally, relatively few hospitals and health care professionals we surveyed reported having the capability to submit information to PHR products. Provider-purchased products can also help patients and providers aggregate longitudinal health information, according to health IT product developers and EHR vendors we interviewed. For example, one health IT developer explained that providers can currently purchase their product to display information from multiple EHR systems in a single portal; this product would need to be purchased separately from the EHR system and would require additional configuration. However, according to our survey, we estimate that most providers offer patient portals that are packaged with their EHR system. One EHR vendor representative said that the company was currently in the process of developing a product that will enable patients to access information from multiple providers using their EHR system. However, that product has not been released for provider and patient use. Providers participating in the Medicare EHR Program in 2015 reported that relatively few patients electronically accessed their health information when it was made available to them. In other words, few of these patients logged into a patient portal and viewed, downloaded, or transmitted their health information. Our analysis of 2015 Medicare EHR Program data collected by CMS showed that among participating hospitals, 15 percent of their patients electronically accessed their available health information; among physicians and other health care professionals, this percentage was twice as high, with about a third (30 percent) of their patients accessing their available health information. (See fig. 1.) Examining access rates by provider characteristics, our analysis shows that some types of non-hospital based providers reported relatively low percentages of patients accessing their health information electronically in 2015.
Analyzing 2015 Medicare EHR Program data supplemented with other HHS data, we found that among non-hospital based providers participating in the 2015 program: health care professionals located in areas with a higher (i.e., above the national median) percentage of residents in poverty and located in rural areas reported lower levels of electronic access to health information, compared with professionals in lower-poverty areas and professionals located in urban areas; health care professionals with 50 or fewer group practice members reported notably lower levels of electronic access to health information compared with professionals with larger numbers of group practice members; and health care professionals other than general or specialty practitioners—including chiropractors, dentists, and podiatrists—reported notably lower levels of electronic access to health information compared with professionals in general practice or specialty practice. (See fig. 2.) Examining access rates by age, our analysis of data from the 2015 Medicare EHR Program and data from HRSA’s Area Health Resources File indicates that the level of electronic access to health information reported by both hospitals and health care professionals was lower among those located in areas with a higher percentage of the population over age 65. (See fig. 3.) The findings from our analysis of access rates by patient age are consistent with other evidence suggesting that older patients may be less likely to access their health information electronically compared with younger patients. Providers we interviewed and who responded to our survey, as well as health IT developers we interviewed, said that a patient’s age affects the extent to which he or she electronically accesses health information. Multiple providers who responded to our survey and that we interviewed conveyed that, in their experience, older patients are less likely to electronically access their information.
Providers and health IT developers noted that younger patients and those with chronic conditions are most likely to want electronic access to their health information. Some providers we surveyed and interviewed attributed the lack of interest in accessing health information electronically among older patients to a decreased likelihood of having access to a computer or web-enabled device. One provider stated that his hospital serves a large elderly population and that this was the biggest challenge to meeting the requirement under the 2014 Medicare EHR Program that over 5 percent of patients access their health information electronically. A recent data analysis by ONC found no differences in rates of access to or the viewing of online medical records by age, but the analysis did find that individuals between the ages of 50 and 59 had significantly higher rates of electronically communicating with health care providers, looking up test results online, and using smartphone health applications compared with individuals 70 years or older. More generally, a 2013 survey conducted by the Pew Research Center found that adults age 65 or older were most likely to say that they never go online. Age is not the only determinant as to whether patients electronically access their health information. According to studies we reviewed, patients may not access their health information frequently because they do not have a reason to do so. In 2015, ONC reported that for 2013 and 2014, about three-quarters of surveyed individuals who reported that they did not access their medical records online indicated that they did not do so because they did not have a need to use the information. Similarly, another study found that most patients who report rarely or never accessing electronic health information say that they do not have a need to do so.
According to patients we interviewed, patients who electronically access their health information typically do so before or after a health care encounter. For example, patients we interviewed said that they accessed information in their portal to review information before or after an encounter with a provider—for example, to review lab test results, communicate with their providers about a recent appointment, or share information between providers during visits. About half of the patients we interviewed also described using portals offered by their providers to access “convenience features” related to receiving health care, such as features used to see appointment reminders, request medication refills, message their provider, or schedule an appointment. Similarly, one of the studies we reviewed found that consumers expressed preferences to use online access to their health information primarily for needs that occur before or after a health care encounter (e.g., to view recently completed lab work or notes from a recent physician visit) or because they are accessing convenience features offered in their provider’s portal, such as online appointment scheduling or to request medication refills. In our survey of 2014 Medicare EHR Program participants, providers reported using a variety of outreach strategies and other efforts to encourage their patients to access the health information made available to them. These methods include promoting the use of patient portals and providing prizes and other incentives to access the portals. (See table 3.) Providers we interviewed similarly reported undertaking a variety of efforts to encourage patients to electronically access their health information. For example, a hospital representative stated that to increase patient access, staff members tell patients about the portal and take steps to register patients for the portal at every interaction. 
Another hospital representative explained that hospital staff individually assist patients and even help patients obtain a private e-mail address to register for the portal, if necessary. Yet another hospital representative said that the hospital staff wore buttons instructing patients to ask staff about the portal, and the hospital also installed billboards to remind patients to ask staff about the portal. Despite these efforts, this hospital representative said that they struggled to meet the patient electronic access requirements under the Medicare EHR program. Our interviews with the 33 patients and analysis of Medicare EHR program data and our survey data indicate that the type of portal that providers use may influence the extent to which patients access their available health information. In particular, patients we interviewed noted that they sometimes experienced technical difficulties when attempting to access information through the portal or were confused by the portal’s user interfaces. For example, patients noted that they were sometimes unable to access information in their portals due to the sites being down for maintenance or that their portals were not optimized for viewing on a mobile device, which limited their ability to use the portal. Several patients also expressed frustration with the user interface of the portal offered by their providers, noting that it was difficult to navigate and find the information they wanted. About two-thirds of the providers we surveyed reported taking steps to improve their patient portal’s usability or design. Our provider survey data indicate that most providers offer patient portals that are packaged with their EHR system and therefore provided by the same vendor. One vendor we interviewed noted that it allows for some customization for each customer. We viewed demonstrations of three EHR systems’ patient portals, and observed that the portal design does vary by vendor. 
For example, the portals we viewed had differences in their interfaces, including where to access health information and how tabs were labeled. Our analysis of Medicare EHR Program data from ONC and CMS confirms that the type of portal itself may affect the extent to which patients access their available health information; the average percentage of patients that accessed their available health information varied depending on the provider’s reported EHR vendor. (See figs. 4 and 5.) HHS officials said that two agencies, CMS and ONC, have programs or other efforts aimed at increasing the ability of patients to electronically access their health information, including the ability to access longitudinal health information and aggregate it in a single location. In the case of CMS, agency officials told us that the Medicare and Medicaid EHR Programs have made a significant contribution towards achieving these goals. The two programs require participating hospitals and health care professionals to provide electronic access to health information to a specified portion of patients. According to CMS officials, the programs support HHS’s strategic goals to improve health care through the meaningful use of health information technology. In the case of ONC, agency officials identified multiple efforts they are undertaking to increase patients’ ability to electronically access their health information, including longitudinal health information. Some examples of these efforts include the following: Patient Engagement Playbook. The playbook is a tool developed by ONC to assist providers in engaging patients with health IT by, for example, using patient portals to engage patients in their health and care. Blue Button Initiative. This initiative includes three distinct efforts – a connector, a voluntary pledge program, and a research project. 
The connector is a website that helps patients locate their health information online and assists in the development of apps and tools to help consumers understand their health information. The voluntary pledge encourages public and private organizations—such as providers, hospitals, technology companies, and non-profit organizations—to commit to making health information available to patients electronically and to encourage patient access. The research effort is designed to understand the unmet needs and challenges facing stakeholders.

Health IT Certification Standards and Certification Criteria. These standards and criteria identify certain vocabularies and structured formats that must be included in certified EHR systems and other EHR technology that providers are required to use in order to participate in the EHR programs.

Consumer Health Data Aggregator Challenge. By rewarding private-sector innovation, this challenge aims to spur the development of third-party consumer-facing applications that use open, standardized application programming interfaces to help consumers aggregate their data in one place.

(See Appendix II for a list of ONC’s programs and efforts most directly related to increasing patients’ ability to electronically access their health information.) Both CMS and ONC officials told us that their efforts aim to increase the extent to which patients can electronically access their health information. Officials said that their efforts are guided by goals such as the Roadmap’s long-term milestone of enabling patients to access longitudinal health information, contribute to their electronic health information (e.g., send data from wearable devices to their electronic health record), and direct their health information into any location of their choice (e.g., to a PHR application purchased by the patient that aggregates all their health information in a single location).
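As a rough illustration of the kind of aggregation such consumer-facing applications perform, the sketch below merges resource bundles retrieved from several providers into a single chronological record. The bundle layout loosely mimics a FHIR-style structure but is simplified, and the provider names and data are hypothetical; this is not an implementation of any specific challenge entry.

```python
# Minimal sketch of consumer-mediated aggregation: flatten per-provider
# resource bundles (simplified, FHIR-like structure) into one list sorted
# by date, giving the patient a single longitudinal view. All data below
# are hypothetical.

def aggregate_records(bundles):
    """Flatten per-provider bundles into one chronologically sorted list."""
    combined = []
    for provider, bundle in bundles.items():
        for entry in bundle["entry"]:
            resource = entry["resource"]
            combined.append({
                "provider": provider,
                "type": resource["resourceType"],
                "date": resource["effectiveDateTime"],
                "summary": resource["code"]["text"],
            })
    # ISO-8601 date strings sort correctly as plain strings.
    return sorted(combined, key=lambda r: r["date"])

if __name__ == "__main__":
    bundles = {
        "Hospital A": {"entry": [{"resource": {
            "resourceType": "Observation",
            "effectiveDateTime": "2016-03-01",
            "code": {"text": "Hemoglobin A1c"}}}]},
        "Clinic B": {"entry": [{"resource": {
            "resourceType": "Observation",
            "effectiveDateTime": "2016-01-15",
            "code": {"text": "Blood pressure"}}}]},
    }
    for rec in aggregate_records(bundles):
        print(rec["date"], rec["provider"], rec["summary"])
```

A real aggregator would retrieve these bundles over the providers’ open APIs and handle authorization, but the core merge-and-sort step is as simple as shown here.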
According to ONC officials, the agency’s efforts all support HHS’s Federal Health IT Strategic Plan as well as ONC’s Roadmap, which establishes several milestones for the agency’s ongoing efforts to increase patients’ ability to access their health information electronically. These milestones are the following:

1) a majority of individuals are able to securely access their electronic health information and direct it to the destination of their choice (to be achieved between 2015 and 2017);

2) individuals regularly access and contribute to their longitudinal electronic health information via health IT, send and receive that information through a variety of emerging technologies, and use that information to manage their health and participate in shared decision-making with their care, support, and service teams (to be achieved between 2018 and 2020); and

3) individuals are able to seamlessly integrate and compile longitudinal electronic health information across online tools, mobile platforms, and devices to participate in shared decision-making with their care, support, and service teams (to be achieved between 2021 and 2024).

HHS does not have information on the effectiveness of CMS’s and ONC’s efforts to increase the ability of patients to access their health information electronically. Although ONC measures some progress related to these efforts and the Medicare EHR Program, ONC does not directly measure the impact of these efforts on increasing patients’ electronic access to health information. In the case of CMS, officials told us that while they track the number of providers that participate in the Medicare EHR Program, the agency does not directly measure the extent to which the program specifically affects patients’ ability to access their health information electronically. However, HHS officials stated that they do monitor the program by seeking public comments during the rulemaking process and by publicly reporting statistics.
Officials told us that ONC collaborates with CMS to monitor and review the EHR Programs and has used the results of these analyses to modify the programs over time. ONC officials told us there is a data use agreement in place that allows ONC to analyze Medicare EHR Program data. Additionally, ONC commissions evaluations of programs initiated under the HITECH Act, including the Medicare EHR Program. While ONC’s data analyses and commissioned evaluation provide information concerning patient access to electronic health information and patient engagement, these efforts do not measure the impact of the Medicare EHR Program on patients’ ability to access their health information electronically. In the case of ONC, the office measures a range of outcomes associated with its multiple efforts but does not measure the extent to which its individual efforts are having an effect on patients’ ability to access their health information electronically—by determining, for example, if providers that participate in these initiatives have higher rates of patient access. ONC officials stated that they use metrics as a means of assessing whether the technologies and resources made available through ONC’s efforts are being utilized. For example, ONC officials told us they count the number of website visits to the Patient Engagement Playbook page, the number of providers and other stakeholders who have pledged to make electronic health information available to their patients through ONC’s Blue Button Initiative, and the number of times patients access educational videos about their right to access their health information online.
According to officials, ONC also uses nationally representative surveys of hospitals, other providers, and patients that are fielded by various organizations to measure the extent to which patients access their health information electronically; however, the surveys cannot be used to measure whether, or to what extent, ONC’s efforts most directly related to patient access are achieving their intended effects. ONC’s survey data identify, for example, how many patients reported being able to electronically view, download, or transmit their health information, as well as whether patients sent their health information to an app, mobile device, or PHR. The survey of patients provides information on how patients are accessing their health information and what they do with that information once accessed. For example, the survey asks patients whether they have attempted to electronically send their health information to another electronic location, such as a PHR application. Finally, the survey also asks patients about the extent to which they experience any challenges when electronically accessing their health information. ONC officials told us that they plan to conduct a consumer survey with different questions in 2017; however, ONC has not finalized the questions for this survey. According to ONC officials, these surveys help the agency understand other factors, such as how broadband access and language influence patient access, and whether progress is being made generally towards the Roadmap goal of increasing patients’ ability to access their health information electronically. HHS lacks information on the effectiveness of CMS’s and ONC’s efforts because it has not developed outcome measures. For example, ONC cannot determine if patient electronic access is higher for participants in the Blue Button Initiative compared with non-participants or if providers who use the Patient Engagement Playbook achieve more patient electronic access than non-users.
In our prior work we have identified the use of outcome measures as a leading principle for measuring performance. Guidance based on these principles calls for federal agencies to include outcome measures that address the status of program operations, identify areas that need improvement, ensure accountability for end results, and measure progress towards agency strategic goals—in this case, HHS’s goals related to increasing patients’ ability to access their health information electronically. Without outcome-focused performance measures, HHS cannot determine whether, or to what extent, each of its efforts is contributing to the department’s overall goals, or if these efforts need to be modified in any way. Through CMS’s Medicare and Medicaid EHR Programs and ONC’s multiple individual initiatives, HHS supports a wide range of efforts intended to increase patients’ electronic access to their health information. HHS’s investment in these efforts has been significant—since 2009, HHS has spent over $35 billion on the development and adoption of health information technology. CMS’s and ONC’s efforts aim to encourage the use of technologies that allow patients to electronically access their longitudinal health information, contribute to that information, and direct it to any location of their choice. While HHS’s investment in health information technology is significant, HHS lacks the ability to determine whether, or to what extent, CMS’s and ONC’s efforts are helping HHS achieve its goals. ONC is largely responsible for measuring the nation’s progress towards increasing patients’ electronic access to health information. However, ONC has not developed outcome measures to directly measure the effectiveness of its individual efforts, identify areas that need improvement, and ensure accountability for achieving results.
Without such outcome-focused performance measures linked to relevant agency goals, ONC—and by extension, HHS—cannot determine whether, or to what extent, each of the programs and efforts is contributing to overall goals, or if these efforts need to be modified in any way. To help ensure that its efforts to increase patients’ electronic access to health information are successful, the Secretary of HHS should direct ONC to take two actions. First, develop performance measures to assess outcomes of key efforts related to patients’ electronic access to longitudinal health information. Such actions may include, for example, determining whether providers that participate in these initiatives have higher rates of patient access to electronic health information. Second, use the information these performance measures provide to make program adjustments, as appropriate. Such actions may include, for example, assessing the status of program operations or identifying areas that need improvement in order to help achieve program goals related to increasing patients’ ability to access their health information electronically. We provided a draft of this report to HHS for its review and comment. HHS provided written comments, which are reprinted in appendix III. HHS also provided technical comments, which we incorporated as appropriate. In its written comments, HHS concurred with both of our recommendations. With regard to our first recommendation, which calls for HHS to develop performance measures to assess the outcomes of key efforts related to patients’ electronic access to longitudinal health information, HHS noted that ONC is committed to assessing the effects of health IT adoption and use. HHS detailed efforts on the part of ONC and CMS to assess progress in patients’ access to their electronic health information and said that the department has used these assessments to modify its programs for encouraging such use over time.
HHS stated that there has been an increase in patients’ ability to electronically access and use their health information and noted that we said this in our report. With regard to our statement that ONC is primarily responsible for assessing the effects of the Medicare EHR Program, HHS raised concerns that this statement was misleading because assessing the impact of the program is a coordinated effort between ONC and CMS. In response, we changed our description of the roles of ONC and CMS to reflect HHS’s comment. While HHS has worked to assess the impact of its efforts, it agreed that ONC has not developed a specific means for measuring outcomes associated with ONC’s efforts aimed at furthering patients’ ability to electronically access their health information. HHS also noted that ONC is required by the HITECH Act and the Medicare Access and Children’s Health Insurance Program Reauthorization Act to develop performance measures for the adoption of EHRs and related efforts to facilitate the electronic use and exchange of health information. HHS stated that these required performance measures involved nationwide surveys that go beyond the scope of the Medicare EHR Program data discussed in this report. Therefore, HHS stated that ONC would make every effort to develop performance measures for patient education and outreach initiatives but would have to balance these efforts with its efforts to develop measures for the adoption of EHRs, interoperability, and patient engagement nationwide. In concurring with our second recommendation, that ONC use the information the performance measures provide to make program adjustments, HHS stated that it is committed to using performance measures to guide program improvement. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix IV. This appendix provides additional details regarding our analysis of Medicare EHR Program data and our nationally representative provider survey. The data analysis and the provider survey were used, in part, to describe the extent and type of electronic access to health information currently available to patients, the extent to which patients electronically access their health information, and the actions providers are taking to encourage such access. We analyzed data from the Centers for Medicare & Medicaid Services (CMS) as supplemented with other government data to (1) determine the number of providers—that is, hospitals and health care professionals (e.g., physicians)—that participated in the 2015 Medicare EHR Program; (2) determine the number of program participants who reported each of two measures related to patient electronic access to health information; (3) determine the extent to which program participants are offering patients, and patients are using, the ability to electronically access their health information; and (4) examine the characteristics of providers that were associated with higher or lower percentages of patients who actually accessed their available health information. We assessed the reliability of these data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) consulting agency officials who are knowledgeable about these data. We determined that these data were sufficiently reliable for the purposes of our reporting objectives. Number of providers that participated in the Medicare EHR Program. 
To determine the number of providers that participated in the Medicare EHR Program in 2015, we analyzed data extracted from CMS’s National Level Repository that represented all successful attestations. CMS collected these data from January 2016 to March 2016. We counted the number of unique providers that were included in the 2015 program data (whom we refer to as “participants”). Number of program participants who reported two measures related to patient electronic access to health information. To determine the number of participants who reported two measures related to patient electronic access to health information, we counted the number of unique providers who reported a number for (1) the percentage of patients who were offered the ability to electronically view, download, and transmit their health information, and (2) the percentage of patients who actually electronically viewed, downloaded, or transmitted their health information. The extent to which program participants are offering patients—and patients are using—the ability to electronically access their health information. To determine the extent to which program participants offered patients the ability to electronically access their health information, we computed the average of the reported percentages of patients who were offered the ability to view, download, and transmit their health information by their provider. To determine the extent to which program participants’ patients actually used the ability to electronically access their health information, we computed the average of the reported percentage of patients who actually viewed, downloaded, or transmitted their health information. 
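The two averages just described can be sketched in a few lines of code. The participant figures below are hypothetical; GAO’s actual computation was performed on the full 2015 program data.

```python
# Illustrative sketch of the averaging described above. Each participant
# reports (a) the percentage of patients offered electronic access and
# (b) the percentage who actually viewed, downloaded, or transmitted their
# health information; each measure is then averaged across participants.
# All figures below are hypothetical.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

participants = [
    {"offered_pct": 92.0, "accessed_pct": 14.0},
    {"offered_pct": 88.0, "accessed_pct": 30.0},
    {"offered_pct": 97.0, "accessed_pct": 22.0},
]

avg_offered = mean([p["offered_pct"] for p in participants])
avg_accessed = mean([p["accessed_pct"] for p in participants])
print(f"average offered:  {avg_offered:.1f}%")   # 92.3%
print(f"average accessed: {avg_accessed:.1f}%")  # 22.0%
```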
To determine the extent to which program participants’ patients actually used the ability to electronically access their health information when it was available, we divided the number of patients who actually accessed their health information by the number of patients who were offered access for each participant, and computed the average. Characteristics of providers associated with higher or lower percentages of patients who actually accessed their available health information. To examine the characteristics of providers that were associated with higher or lower percentages of patients who actually accessed their available health information, we analyzed data on provider characteristics from CMS, the Office of the National Coordinator for Health Information Technology (ONC), the Health Resources and Services Administration (HRSA), and the U.S. Census Bureau. Each characteristic is divided into two or more categories. For example, the characteristic “geographic region” is divided into four categories—Midwest, Northeast, South, and West regions. As part of this analysis, we computed the average percentage of patients who actually accessed their available health information for providers within each characteristic category without controlling for other characteristics. We examined the following provider characteristics:

Regional characteristics. We analyzed data on the following regional characteristics using providers’ business zip code:

Metropolitan status. We used the 2015-2016 HRSA Area Health Resources File to determine whether providers were located in a metropolitan area—an area that has at least one urbanized area of 50,000 people, among other criteria. We then categorized providers in metropolitan areas as being located in urban areas and providers that were not as being in rural areas.

Geographic region. We used information from the U.S. Census Bureau to identify the U.S. census region—Midwest, Northeast, South, or West—where providers were located or practiced.

County residents living in poverty. We used information from the HRSA Area Health Resources File to calculate the 2014 national median percentage of counties’ residents living under the poverty line. We then categorized providers into “higher poverty” areas if they were located in a county above the national median percentage of residents living in poverty and “lower poverty” areas if they were located in a county below or equal to the median.

County residents over age 65. We used information from the HRSA Area Health Resources File to estimate the 2014 national median percentage of counties’ residents over age 65. We then categorized providers into “higher population over 65” areas if they were located in a county above the national median percentage of residents over age 65 and “lower population over age 65” areas if they were located in a county below or equal to the median.

Hospital type. We analyzed data on the following categorizations of hospital type:

Hospital classification. We determined whether hospitals were classified as acute care hospitals or critical access hospitals by using data from CMS’s Hospital Compare file.

Ownership type. We used data on ownership type from CMS’s Hospital Compare file to create three categories of ownership: (a) for-profit, (b) nonprofit, and (c) government-owned.

Health care professional characteristics. We analyzed data on the following categorizations of professional characteristics:

Health care professional specialty. We obtained data on professionals’ primary specialty from CMS’s Physician Compare file. We then consolidated these specialties into the following three categories: (a) general practice physician, (b) specialty practice physician, and (c) other, which includes chiropractors, podiatrists, and dentists.

Number of health care professionals in the practice.
We estimated the number of professionals in each practice by using data from CMS’s Physician Compare file. We subsequently created three practice size categories: (a) practices of 1 to 10 professionals, (b) practices of 11 to 50 professionals, and (c) practices of 51 or more professionals.

We surveyed a nationally representative sample of providers who participated in the 2014 Medicare EHR Program about how they are providing patients with the ability to electronically access health information. The survey was designed to collect information from providers related to patient electronic access to health information, including the methods and specific technology providers use to give patients electronic access to their health information, the types of health information providers make available through these technologies, and any methods providers use to encourage patient electronic access. The survey was also designed to capture providers’ perspectives on the benefits of patients having such electronic access to their health information—specifically, whether providers saw it as beneficial for patients to electronically view, download, or transmit certain types of health information. The target population for this survey was all hospitals and health care professionals who reported to the Medicare EHR Program that 5 percent or more of their patients viewed, downloaded, or transmitted their health information for the 2014 program year. Using 2014 program data provided to us by CMS, we identified 60,321 hospitals and health care professionals to be included in the population for this survey. We selected a stratified random sample of 1,867 hospitals and health care professionals as described in table 4 below. We stratified the population by type (hospitals and health care professionals) and the reported percentage of patients who electronically accessed their health information in 2014.
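As context for sample sizes of this kind, the sketch below shows a textbook calculation for estimating a proportion to within a target margin of error, with a finite-population correction and an inflation factor for the expected response rate. GAO’s exact computation is not published in this appendix, so the function and the stratum inputs here are illustrative assumptions, not GAO’s actual method.

```python
import math

# Textbook sample size for estimating a proportion within a target margin
# of error at a given confidence level, with a finite-population correction
# and an inflation for expected nonresponse. Illustrative sketch only;
# GAO's exact computation is not shown in this report.

def sample_size(population, margin=0.05, z=1.96, p=0.5, response_rate=1.0):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n / response_rate)         # inflate for expected nonresponse

# Hypothetical stratum of health care professionals, assuming a
# 30 percent expected response rate.
print(sample_size(population=50_000, response_rate=0.30))
```

With p = 0.5 (the most conservative choice) and a 95 percent confidence level (z ≈ 1.96), the uninflated base size is roughly 384 before the finite-population correction and nonresponse inflation are applied.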
We computed, separately for hospitals and health care professionals, the sample sizes needed to achieve a precision of plus or minus 5 percentage points or fewer at the 95 percent confidence level. We then increased the sample sizes in each group to account for expected response rates of about 50 and 30 percent for hospitals and health care professionals, respectively. (See table 4.) A link to this web-based survey was emailed to these 1,867 providers via the email addresses included in the program data provided by CMS. We received valid responses from 428 (23 percent) of the 1,867 hospitals and health care professionals selected in our stratified random sample. The weighted response rate, which accounts for the differential sampling fractions within strata, is 21 percent for the full sample, 28 percent for hospitals, and 20 percent for health care professionals. (See table 5.) We conducted an analysis of our survey results to identify potential sources of nonresponse bias using two methods. First, we examined the response propensity of the sampled hospitals and health care professionals by several demographic characteristics. These characteristics included region, metropolitan status, specialty type, size of practice, hospital type, and hospital ownership type. Second, we compared weighted estimates from respondents and nonrespondents to known population values for measures that are related to the survey outcomes in which we were most interested. We conducted statistical tests of differences, at the 95 percent confidence level, between estimates and known population values, and between respondents and nonrespondents. These analyses were conducted separately for hospitals and health care professionals. Based on this analysis, we did not observe significant differences in response propensities or between known population values and estimates for nearly all of the characteristics we examined.
However, we did observe significant differences by ownership type for hospitals and by region for health care professionals. Specifically, we found that proprietary and physician-owned hospitals were significantly underrepresented among our respondents. Additionally, we found that professionals in the Northeast and South were significantly underrepresented, while professionals in the Midwest and West were overrepresented among our respondents. To ensure that the survey results appropriately represented the population of 60,321 hospitals and health care professionals, we weighted the results from the 428 respondents by the inverse of the probability of selection (base weight) and a nonresponse adjustment factor to account for nonresponse and the differences in response propensities we identified. The nonresponse adjustment factor was calculated using weighting class adjustments, where adjustment cells were based on strata, hospital ownership type, and region. We repeated the nonresponse bias analysis using the adjusted weights and found no significant differences between known population values and the weighted estimates for all of the characteristics we examined. This provided us with evidence that the nonresponse weighting class adjustments help mitigate any potential nonresponse bias introduced by the differences in response propensities we identified. Based on the results of this nonresponse bias analysis and the weighting adjustments, we determined that weighted estimates generated from these survey results are generalizable to the population of hospitals and health care professionals and are sufficiently reliable for the purposes of this report. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn.
Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population.

Voluntary pledge where organizations, such as providers and hospitals, commit to advance efforts to increase patient access to and use of their health data.

Website to help consumers locate their health information online and assist the development of apps and tools to help consumers understand and use their health information.

Research designed to understand the experience of stakeholders surrounding patient access to their own health information and use of electronic health information. The research focuses on how empathy can help to understand the unmet needs and challenges facing stakeholders.

Educational video for consumers about their right to access their health information.

Intervention aimed at increasing the percentage of patients who enroll in online portals to view, download, and transmit their health records and communicate online with their clinicians.

Four videos that show how health information technology is being used for patient engagement, access, and care coordination.

Interactive document that walks providers through strategies they can use to engage patients with the use of health IT.

Tool that identifies key considerations for adopting health information exchange based on personal health records.

Infographic regarding an individual’s right to access their medical records (developed and published in conjunction with the Department of Health and Human Services (HHS) Office for Civil Rights).
Easy-to-understand videos for consumers, captioned in English and Spanish, about individuals’ right to access their health information under HIPAA, addressing issues including fees and requesting that health information be sent to a third party (developed and published in conjunction with the HHS Office for Civil Rights).

Published report on gaps in legal oversight between the collection of electronic health information regulated by HIPAA and not regulated by HIPAA, so that consumers can be better aware of the privacy and security conditions of how they manage their digital health.

Task force to identify priority recommendations for ONC that will help enable consumers to leverage API technology to access patient data, while ensuring the appropriate level of privacy and security protection.

Provides internal and external stakeholders with common connection points to ONC’s standards and technology efforts. Tech Lab is organized around four areas: 1) standards coordination; 2) testing and utilities; 3) pilots; and 4) innovation.

Challenge to stimulate consumer-mediated exchange; will help create API solutions to help individuals securely and electronically authorize the movement of health data to destinations of their choice.

Challenge to spur the development of third-party consumer-facing applications that use open, standardized APIs to help consumers aggregate their data in one place.

A working group aimed at developing a set of privacy and security specifications that enable a consumer to control the authorization of access to RESTful health-related data sharing APIs, and to facilitate the development of interoperable implementations of these specifications by others.

Tool for health care providers, practice staff, and hospital administrators who want to leverage health IT using patient portals to engage patients in their health and care.

Update of the 2011 Model Privacy Notice to be more broadly applicable (beyond personal health records).
According to ONC, this update provides open-source content that technology developers can use to notify consumers of their privacy and security practices.

The 2015 Edition of ONC’s Health IT Certification Criteria supports the certification of health IT, including APIs, to support patient access to health data and view, download, and transmit functions that continue to support patient access to their health information, including via both encrypted and unencrypted email transmission to any third party the patient chooses.

Commitment from the health care industry to make electronic health records work better for consumers and providers.

In addition to the contact named above, Tom Conahan, Assistant Director; Andrea E. Richardson, Analyst-in-Charge; Marisa Beatley; A. Elizabeth Dobrenz; and Courtney Liesener made key contributions to this report. Also contributing were Jim Ashley, Carolyn Fitzgerald, Krister Friday, Monica Perez-Nelson, and Katie Singh.

HHS’s goal is that all Americans will be able to electronically access their longitudinal health information, that is, their health information over time. HHS’s efforts to achieve this goal include the Medicare EHR Program and other efforts to encourage providers to make patient health information available and for patients to access such information. GAO was asked to review the state of patients’ electronic access to their health information. This report (1) describes the electronic access to health information available to patients, and patients’ views of this access, (2) describes the extent to which patients electronically access their health information, and actions providers reported taking to encourage such access, and (3) evaluates HHS’s efforts to advance patients’ ability to electronically access their health information.
GAO analyzed data from HHS and other sources; reviewed applicable strategic planning documents; surveyed a generalizable sample of providers that participated in the Medicare EHR program; and interviewed HHS officials and a nongeneralizable sample of patients, providers, and health information technology product developers. Since 2009, the Department of Health and Human Services (HHS) has invested over $35 billion in health information technology, including efforts to enhance patient access to and use of electronic health information. One of the largest programs is the Centers for Medicare & Medicaid Services' (CMS) Medicare Electronic Health Record Incentive Program (Medicare EHR Program), which, among other things, encourages providers to make electronic health information available to patients. Program data for 2015 show that health care providers that participated in the program (3,218 hospitals and 194,200 health care professionals such as physicians) offered most of their patients the ability to electronically access health information. Patients generally described this access as beneficial, but noted limitations such as the inability to aggregate their longitudinal health information from multiple sources into a single record. Data from the 2015 Medicare EHR Program show that relatively few patients electronically access their health information when offered the ability to do so. Patients GAO interviewed described primarily accessing health information before or after a health care encounter, such as reviewing the results of a laboratory test or sharing information with another provider. While HHS has multiple efforts to enhance patients' ability to access their electronic health information, it lacks information on the effectiveness of these efforts. 
The Office of the National Coordinator for Health Information Technology (ONC) within HHS collaborates with CMS to assess CMS's Medicare EHR Program as well as its own efforts to enhance patient access to and use of electronic health information. However, ONC has not developed outcome measures for these efforts consistent with leading principles for measuring performance. Without such measures, HHS lacks critical information necessary to determine whether each of its efforts is contributing to the department's overall goals, or if these efforts need to be modified in any way. GAO recommends that HHS 1) develop performance measures to assess outcomes of key efforts related to patients' electronic access to longitudinal health information, and 2) use the information from these measures to help achieve program goals. HHS concurred with the recommendations.
According to the State Department’s 2002 Annual Performance Plan, the department’s counterterrorism goals are to reduce the number of terrorist attacks, bring terrorists to justice, reduce or eliminate state-sponsored terrorist acts, delegitimize the use of terror as a political tool, enhance the U.S. response to terrorism overseas, and strengthen international cooperation and operational capabilities to combat terrorism. The Secretary of State is responsible for coordinating all U.S. civilian departments and agencies that provide counterterrorism assistance overseas. The Secretary also is responsible for managing all U.S. bilateral and multilateral relationships intended to combat terrorism abroad. State requested over $2.3 billion to combat terrorism in fiscal year 2003. This includes more than $1 billion for overseas embassy security and construction, as well as for counterterrorism assistance and training to countries cooperating with the global coalition against terrorism. Table 1 provides a breakdown of State’s funding to combat terrorism. By contrast, State spent about $1.6 billion in fiscal year 2001 and received about $1.8 billion to combat terrorism in fiscal year 2002. State received an additional $203 million through the Emergency Response Fund as part of the $40 billion appropriated by the Congress in response to the September 11, 2001, terrorist attacks against the United States. The Office of Management and Budget reported that determining precise funding levels associated with activities to combat terrorism is difficult because departments may not isolate those activities from other program activities. Some activities serve multiple purposes—for example, upgrades to embassy security help protect against terrorism as well as other crimes. The State Department conducts multifaceted activities in an effort to prevent terrorist attacks on Americans abroad. For example, to protect U.S. 
officials, property, and information abroad, the Bureau of Diplomatic Security provides local guards for embassies and armored vehicles for embassy personnel (see fig. 2). In addition, it provides undercover teams to detect terrorist surveillance activities. Following the 1998 embassy bombings in Africa, State upgraded security for all missions, which included strengthening building exteriors, lobby entrances, and the walls and fences at embassy perimeters (see fig. 3). The upgrades also included closed-circuit television monitors, explosive detection devices, walk-through metal detectors, and reinforced walls and security doors to provide protection inside the embassy. In addition, State plans to replace some existing embassies with buildings that meet current security standards, such as having a 100-foot setback from streets surrounding embassies. State also has programs to protect national security information discussed at meetings or stored on computers. These programs include U.S. Marine security guards controlling access to embassies, efforts to prevent foreign intelligence agencies from detecting emanations from computer equipment, and computer security programs. State has several programs to help warn Americans living and traveling abroad against potential threats, including those posed by terrorists. For example, to warn Americans about travel-related dangers, in fiscal year 2001 the Bureau of Consular Affairs issued 64 travel warnings, 134 public announcements, and 189 consular information sheets. In addition, missions employ a “warden system” to warn Americans registered with an embassy of threats against their security. The system varies by mission but uses telephone, E-mail, fax, and other technologies as appropriate. Finally, the Bureau of Diplomatic Security manages the Overseas Security Advisory Councils program. The councils are a voluntary, joint effort between State and the private sector to exchange threat- and security-related information.
Councils currently operate in 47 countries. In addition, State manages and funds programs to train foreign government and law enforcement officials to combat terrorism abroad. These programs include the following: the Antiterrorism Assistance Program, implemented by the Bureau of Diplomatic Security, to enhance the antiterrorism skills of law enforcement and security personnel in foreign countries; the International Law Enforcement Academies, managed by the Bureau for International Narcotics and Law Enforcement Affairs, to provide law enforcement training in four locations around the world. The Departments of State, the Treasury, and Justice—including the Bureau of Diplomatic Security, Federal Bureau of Investigation, and other U.S. law enforcement agencies—provide the on-site training; the Department of Justice's Overseas Prosecutorial Development and Assistance Training and the International Criminal Investigation Training Assistance Program. The State Department provides policy oversight and funds this training, which is intended to build rule-of-law institutions, and includes general law enforcement and anticrime training for foreign nationals. State conducts numerous programs and activities intended to disrupt and destroy terrorist organizations. These programs and activities rely on military, multilateral, economic, law enforcement, and other capacities, as the following examples illustrate: The Bureau of Political-Military Affairs coordinates with the Department of Defense on military cooperation with other countries. It has been State’s liaison with the coalition supporting Operation Enduring Freedom, processing 72 requests for military assistance from coalition partners since September 11, 2001. The Bureau of International Organization Affairs helped craft and adopt United Nations Security Council Resolution 1373, obligating all member nations to fight terrorism and report on their implementation of the resolution.
It also assisted with resolutions extending U.N. sanctions on al Qaeda and the Taliban and on certain African regimes, including those whose activities benefit terrorists. The Department of State’s Office of the Coordinator for Counterterrorism, the Bureau of International Narcotics and Law Enforcement, and the Economic Bureau work with the Department of the Treasury and other agencies to stem the flow of money and other material support to terrorists. According to the State Department, since September 11, the United States has blocked $34.3 million in terrorist-related assets. The Office of the Legal Advisor pursues extradition and mutual legal assistance treaties with foreign governments. The Office of the Legal Advisor also works with the U.N. and with other nations in drafting multilateral agreements, treaties, and conventions on counterterrorism. The Bureau of Diplomatic Security, working with the Department of Justice, cooperates with foreign intelligence, security, and law enforcement entities to track and capture terrorists in foreign countries, assist in their extradition to the United States, and block attempted terrorist attacks on U.S. citizens and assets abroad. The Office of the Coordinator for Counterterrorism, in conjunction with the Department of Justice and other agencies, coordinates State’s role in facilitating the arrest of suspected terrorists through an overseas arrest, known as a rendition, when the United States lacks an extradition treaty. The Bureau of Diplomatic Security manages the Rewards for Justice Program. This program offers payment for information leading to the prevention of a terrorist attack or the arrest and prosecution of designated individuals involved in international terrorism. These rewards reach up to $25 million for those involved in the September 11 attacks.
The Bureau of Intelligence and Research prepares intelligence and threat reports for the Secretary of State, high-level department officials, and ambassadors at U.S. missions. It also monitors governmentwide intelligence activities to ensure their compatibility with U.S. foreign policy objectives related to terrorism, and it seeks to expand the sharing of interagency data on known terrorist suspects. The State Department is responsible for leading the U.S. response to terrorist incidents abroad. This includes measures to protect Americans, minimize incident damage, terminate terrorist attacks, and bring terrorists to trial. Once an attack has occurred, State’s activities include measures to alleviate damage, protect public health, and provide emergency assistance. The Office of the Coordinator for Counterterrorism facilitates the planning and implementation of the U.S. government response to a terrorist incident overseas. In a given country, the ambassador would act as the on-scene coordinator for the response effort. (See figure 4.) In addition, several other bureaus respond to the aftermath of a terrorist attack and help friendly governments prepare to respond to an attack by conducting joint training exercises. The Bureau of Political-Military Affairs is tasked with helping to prepare U.S. forces, foreign governments, and international organizations to respond to the consequences of a chemical, biological, radiological, or nuclear incident overseas. For example, the bureau is developing a database of international assets that could be used to respond to the consequences of a terrorist attack using weapons of mass destruction. It also participates in major interagency international exercises, which are led by DOD. In addition, the bureau assisted in the first operational deployment of a U.S. consequence management task force, working with the DOD regional command responsible for conducting the war in Afghanistan. 
Several bureaus and offices deploy emergency response teams to respond to terrorist attacks. For example, the Office of the Coordinator for Counterterrorism deploys multi-agency specialists in the Foreign Emergency Support Team (FEST) to assist missions in responding to ongoing terrorist attacks. At the request of the Ambassador, the FEST can be dispatched rapidly to the mission. As one component of this team, the Bureau of Political-Military Affairs can deploy a Consequence Management Support Team to assist missions in managing the aftermath of terrorist attacks. In addition, the Bureau of Overseas Buildings Operations Emergency Response Team helps secure embassy grounds and restore communications following a crisis. See appendix II for a comprehensive list of State’s programs and activities to combat terrorism. The State Department is responsible for coordinating all federal agencies’ efforts to combat terrorism abroad. These include the Departments of Defense, Justice, and the Treasury; the various intelligence agencies; the FBI and other law enforcement agencies; and USAID. In addition, State coordinates U.S. efforts to combat terrorism multilaterally through international organizations and bilaterally with foreign nations. State uses a variety of methods to coordinate its efforts to combat terrorism abroad, including the following: In Washington, D.C., State participates in National Security Council interagency working groups, issue-specific working groups, and ad hoc working groups. For example, the Office of the Coordinator for Counterterrorism maintains policy oversight and provides leadership for the interagency Technical Support Working Group—a forum that identifies, prioritizes, and coordinates interagency and international applied research and development needs and requirements to combat terrorism. At U.S.
embassies, State implements mission performance plans that coordinate embassy activities to combat terrorism, country team subgroups on terrorism, emergency action committees to organize embassy response to terrorist threats and incidents, and ad hoc working groups. For example, selected embassies have country team subgroups dedicated to law enforcement matters, chaired by the Deputy Chief of Mission. Working with related bureaus and agencies such as the Regional Security Office, FBI Legal Attaché, and Treasury Department Financial Attaché, these subgroups coordinate efforts to combat terrorism among the various agencies overseas. In Washington, D.C., and elsewhere, State exchanges personnel with other agencies for liaison purposes. In Washington, D.C., for example, State personnel serve as liaisons at the CIA’s Counter-Terrorism Center. The department also provides each U.S. regional military command with a Political Advisor, who helps the respective commanders coordinate with State Department Headquarters and with U.S. embassies on regional and bilateral matters, including efforts to combat terrorism. We received written comments from the Department of State that are reprinted in appendix III. State wrote that the report is a “useful guide” and “good outline” of State’s activities and roles in the campaign against terrorism. State noted that there are many more often intangible and hard- to-measure actions taking place as part of the department’s contribution to fighting terrorism. State also provided technical comments, which we incorporated where appropriate. We are sending copies of this report to interested congressional committees and to the Secretary of State. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. 
Another GAO contact and staff acknowledgments are listed in appendix IV of this report. The Department of State coordinates U.S. government efforts to combat terrorism abroad. Within the department, multiple bureaus and offices manage programs and activities to combat terrorism. State also works with several U.S. and foreign government agencies in carrying out these programs and activities. Table 2 presents the programs and activities and the bureaus responsible for managing them. The table also presents information about some of the U.S. government agencies with which State cooperates. Table 2 describes: the strategic framework of State’s efforts to combat terrorism abroad; State’s programs and activities to prevent terrorism abroad; State’s programs and activities to disrupt and destroy terrorist organizations abroad; and State’s programs and activities to respond to terrorist incidents abroad. In addition to the contact named above, Edward George, Addison Ricks, Steve Caldwell, Mark Pross, James Lawson, Lori Kmetz, Yolanda Elserwy, Reid Lowe, and Cheryl Weissman made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products.
The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.

Efforts to combat terrorism have become an increasingly important part of government activities. These efforts have also become important in the United States' relations with other countries and with international organizations, such as the United Nations (U.N.). The Department of State is charged with coordinating these international efforts and protecting Americans abroad. State has helped direct the U.S. efforts to combat terrorism abroad by building the global coalition against terrorism, including providing diplomatic support for military operations in Afghanistan and other countries. State has also supported international law enforcement efforts to identify, arrest, and bring terrorists to justice, as well as performing other activities intended to reduce the number of terrorist attacks. The State Department conducts multifaceted activities in its effort to prevent terrorist attacks on Americans abroad. For Americans traveling and living abroad, State issues public travel warnings and operates warning systems to convey terrorism-related information. For American businesses and universities operating overseas, State uses the Overseas Security Advisory Councils--voluntary partnerships between the State Department and the private sector--to exchange threat information.
To disrupt and destroy terrorist organizations abroad, State has numerous programs and activities that rely on military, multilateral, economic, law enforcement, intelligence, and other capabilities. State uses extradition treaties to bring terrorists to trial in the United States and cooperates with foreign intelligence, security, and law enforcement entities to track and capture terrorists in foreign countries. If the United States has no extradition agreements with a country, then State, with the Department of Justice, can work to obtain the arrest of suspected terrorists overseas through renditions. The State Department leads the U.S. response to terrorist incidents abroad. This includes diplomatic measures to protect Americans, minimize damage, terminate terrorist attacks, and bring terrorists to justice. To coordinate the U.S. effort to combat terrorism internationally, State uses a variety of mechanisms to work with the Departments of Defense, Justice, and the Treasury; the intelligence agencies; the Federal Bureau of Investigation; and others. These mechanisms include interagency working groups at the headquarters level in Washington, D.C., emergency action committees at U.S. missions overseas, and liaison exchanges with other government agencies.
The Congress passed PRWORA in 1996, making sweeping changes to national welfare policy and placing new emphasis on the goal of work and personal responsibility. The Congress recognized the unique economic hardship facing the 40 percent of American Indians living on reservations by exempting anyone living on reservations with high unemployment from the law’s 60-month time limit on receipt of TANF cash assistance. Furthermore, the act gave federally recognized American Indian tribes the option to administer their own TANF programs either individually or as part of a consortium, an option they did not have in the past. Under the Aid to Families With Dependent Children (AFDC) program, the precursor to TANF, tribal members enrolled in state welfare programs. Under PRWORA, tribes implementing their own TANF programs have greater flexibility than states in some areas. For example, for state programs, PRWORA sets numerical goals for the percentage of adults to be participating in work activities and specifically defines the approved work activities that count for the purposes of meeting these federal participation rate goals. The law set state work participation rate goals at 25 percent in fiscal year 1997, increasing to 50 percent in fiscal year 2002. In contrast, tribes can set their own participation rate goals and may define work activities more broadly, subject to approval from HHS. Finally, while states must adhere to a federal time limit on cash benefits of 60 months or less, tribal programs can set their own time limits. Tribes have the same flexibility as states to set their own eligibility requirements and to determine what policies will govern mandatory sanctions for noncompliance with program rules. Tribes and states also have the same flexibility to determine what types of work supports, such as childcare, transportation, and job training, they will provide to recipients.
Some of the requirements to which tribal TANF programs are subject differ from those to which states are subject. For example, eligible tribes must submit a 3-year tribal TANF plan directly to HHS for review and approval; HHS does not approve states’ plans, though it certifies that they are complete. Unlike states, whose TANF grants are based on the highest of three possible funding formulas, tribal grants must be based on the amount the state spent in fiscal year 1994 for all American Indians residing in the tribe’s designated service area. In addition, tribes are not eligible for several sources of additional TANF funding that were originally provided for the states. These include performance bonuses, a population/poverty adjuster (for high-population/low-spending states), and a contingency fund for states experiencing economic downturns. Finally, whereas a state can receive a caseload reduction credit, which reduces its work participation rate goal when its caseload falls, tribes are not eligible to receive caseload reduction credits. Tribes have used various strategies to stimulate economic development; however, unemployment and poverty rates remain high on reservations. To improve the economy on reservations, tribes own many types of enterprises. Despite these efforts, most Indians living on reservations are poor, and many tribes lack some of the key factors research has shown to be associated with economic growth on reservations. While some tribes encourage private companies owned by nonmembers to locate on their reservations, many tribes responding to our survey place more emphasis on developing tribally owned enterprises. Eighty-seven of the 133 tribes responding to our survey question reported that they place more emphasis on promoting tribally owned enterprises than on encouraging private companies owned by nonmembers to locate on reservations.
Tribes have launched their own enterprises in a number of sectors, including gaming, tourism, manufacturing, natural resources, and agriculture or ranching (see fig. 1). Of the 110 tribes with enterprises that responded to our survey question, 22 have enterprises that are concentrated in a single sector and 88 have enterprises in more than one sector. Many tribes own and operate gaming facilities. Contrary to the common perception that tribal gaming has dramatically improved the economic circumstances for many tribes, the most lucrative facilities account for a small percentage of all tribally owned gaming facilities. According to our 1997 report, which provides the most recent comprehensive analysis of tribal gaming revenues, 40 percent of total gaming revenues were generated by only 8 of 178 tribally operated gaming facilities. For example, the Coeur d’Alene gaming facility in Idaho, near Spokane, Washington, and Lake Coeur d’Alene, a major tourist area, generates about $20 million in profit per year. In contrast, officials from the San Carlos Apache Tribe indicated that its gaming facility, located in a remote area, 90 miles from Phoenix, Arizona, barely makes enough money to cover its costs. Furthermore, gaming facilities do not always generate employment for tribal members. Nationally, only a quarter of all jobs in tribally operated gaming facilities are held by American Indians. The practice of distributing gaming royalties to tribal members is not widespread and, contrary to common perception, payments that are made are not making tribal members wealthy. About a quarter of the tribes that responded to our survey question distributed a portion of their revenues from gaming facilities and other enterprises through per capita payments to members. Of the 87 tribes that reported operating a gaming facility, 28 reported providing per capita payments to members. Of those, 16 provided payments of less than $5,000 (see table 1).
Despite tribes’ efforts to stimulate the economy on reservations, American Indian families on reservations still have high unemployment and poverty rates. BIA has reported that in 1999—the most recent year for which data are available—more than 40 percent of American Indians between the ages of 16 and 64 living on or near reservations were unemployed, and of those who were employed, a third had incomes below the federal poverty guideline. Unemployment was even higher on some reservations. For example, on the Blackfeet reservation, 74 percent of adults were not employed and 22 percent of employed adults were poor. Our survey results indicate that poverty and unemployment rates remain high on many reservations. Fifty of the 127 tribes with reservations that responded to our survey question reported that at least half of all families living on their reservations had incomes below the federal poverty level. In addition, 51 tribes reported that 50 percent or more of adults living on the tribes’ reservations were unemployed. Tribal officials we visited indicated that the isolated geographic location of many reservations, their distance from markets, and a lack of education and job skills among workers living on the reservation hinder economic growth. For example, a modular home manufacturing plant on the Blackfeet Reservation in Montana has had trouble finding and keeping enough workers with construction skills to expand its business. To overcome this obstacle, the enterprise has worked with the local community college to offer construction training to tribal members on the reservation. Similarly, the gaming facility owned by the White Mountain Apache Tribe has been forced to hire nonmembers; officials explained that because many tribal members lack the basic work and life skills needed for the better-paid jobs, nonmembers hold most of them. A number of tribes also lack some key factors research has shown to be important for economic growth on reservations.
These include fully exercised sovereignty, effective governing institutions, and a strategic orientation. For example, 45 of the 142 tribes that responded to our survey question stated that they are not participating in a self-governance initiative. In addition, although research indicates the separation of tribal governance and economic development contributes to effective governing institutions, 78 of the 145 tribes that responded to our survey question stated that they do not have an economic development committee or organization that is separate from their tribal government. Finally, 56 of the 140 tribes that responded to our survey question reported they did not have a written plan for improving economic conditions on the reservation, although research indicates that having such a formal approach is an indicator of strategic orientation. The number of American Indian families receiving cash assistance in state TANF programs in the 34 states with federally recognized Indian tribes decreased between 1994 and 2001, from almost 68,000 to about 26,000. Part of this decline occurred because many American Indian TANF recipients were served by tribal TANF programs in 2001 and are not included in the data. While data on tribal TANF program caseloads are not available for 2001, tribes have estimated that they could serve as many as 22,000 families. Even if those participating in tribal TANF programs were taken into account, the decline in American Indian families receiving TANF is significant. In comparison, the number of all families receiving TANF fell from about 3.4 million families in 1994 to about 1.5 million in 2001. In some states, the share of the caseload made up of American Indians has risen. According to HHS data, the share of the TANF caseload made up of American Indians increased in 6 of the 34 states with federally recognized tribes. As shown in figure 2, the increase has been greatest in South Dakota, Montana, and North Dakota.
In South Dakota, the proportion of cash assistance families that were American Indian increased from under 60 percent in 1994 to about 80 percent in 2001. According to the 2000 census, about 8 percent of South Dakota’s population were American Indians. Although data are not available to confirm this, it is possible that the decline in the number of American Indians receiving TANF has predominantly occurred among those not living on reservations, who represent a majority of all American Indians. Based on responses to our survey, the size of the TANF caseload on some reservations has in fact stayed about the same or even increased. Forty-nine of 97 tribes responding to our survey question reported that the number of tribal members receiving TANF was about the same size or larger than it had been in 1997. Several factors may contribute to the lack of welfare caseload decline among American Indians in certain places. These include the scarcity of jobs on reservations; the difficulty reservation residents have accessing work supports, such as job training and child care; and cultural or religious ties to tribal lands and strong ties to families and communities that make it difficult for many American Indians to relocate. In addition, like many other TANF recipients, many American Indian TANF recipients have characteristics such as low education levels and few job skills, which can make it difficult for them to get and keep jobs. PRWORA gives tribal TANF programs flexibility in many areas to tailor their programs to their communities, for example, by defining their own work activities and work participation rate goals, time limits, and eligibility requirements. The 36 tribal TANF programs are given the flexibility to define the activities they count toward meeting the work participation requirement more broadly than state TANF programs, subject to approval by HHS. 
According to data provided by tribal TANF programs to HHS, about a fifth of all adults engaged in work activities participate in activities that would not count toward meeting work participation rate goals under state plans (see fig. 3), but do count toward meeting work participation goals under tribal programs. For example, the Port Gamble S’Klallam tribe, whose reservation is located on Washington’s Puget Sound, allows recipients to count time spent engaged in traditional subsistence gathering and fishing toward meeting the TANF work requirement. In general, rather than adopting the approach of most states, which emphasizes job search and work, tribal TANF programs tend to encourage recipients to engage in education or training activities. While all of the tribal TANF program officials that responded to our survey question reported using TANF funds for job search, screening and assessment, and other employment services, most also used TANF funds for a variety of education services. Fourteen of the 18 tribal TANF programs responding to our survey question reported that a greater share of their recipients were enrolled in educational activities such as high school equivalency programs, community college, or other job training than were engaged in employment. In contrast, a majority of TANF recipients engaged in work activities in state programs are in unsubsidized jobs. Officials from several of the tribes we visited reported that their tribal TANF programs emphasize education and training activities because their recipients have low rates of high school completion and high rates of illiteracy. Tribal TANF programs have flexibility to set their own time limits, subject to HHS approval. To date, HHS has not approved any tribal TANF plans with a time limit of greater than 60 months, although at least one tribe has submitted a plan proposing a longer time limit.
Thirty-four of the 36 tribal TANF programs have time limits of 60 months; 2 programs have 24-month time limits. While a state may exempt no more than 20 percent of its caseload from time limits due to hardship, tribal programs have the flexibility to determine the share of the caseload they are allowed to exempt from time limits due to hardship. A majority of tribes have the same exemption limit as states, but HHS has approved 10 plans with higher exemption rates. If tribes want to extend benefits beyond the level approved in their plans, they must pay for the benefits with their own funds. Many tribal TANF programs are not subject to time limits because the unemployment rate on the reservations is greater than 50 percent. PRWORA exempts any month from counting toward an individual’s time limit if that individual is living on a reservation with a population of at least 1,000 and an unemployment rate of 50 percent or greater, whether they are enrolled in a tribal program or a state program. Of the 29 tribal TANF programs that serve a single tribe, 16 are located on reservations that have unemployment rates of 50 percent or greater, according to the most recent BIA data. Tribes also have the flexibility to determine many of their own eligibility requirements. This includes the flexibility to determine the area that will be covered by their programs (the service area). Some tribes define their service area as their reservation or land base, while others serve families residing in nearby communities or within the counties that overlap with their reservations (see fig. 4). Tribes also have the flexibility to determine whom they will serve (the service population). Some tribes base eligibility on race or tribal membership; others serve all families in their service areas. Figure 5 shows the decisions all 36 tribal TANF programs have made about their service populations. Tribes have faced a number of challenges in implementing tribal TANF programs. 
Many tribes have found that data on the number of American Indians are inaccurate, complicating the determination of tribal TANF grant amounts and making it difficult to design and plan programs. Because tribes do not have the infrastructure they need to start their programs, they have had to solicit contributions from a variety of different sources to cover their significant start-up costs and ongoing operating expenses. In addition, because tribes do not have experience operating welfare programs, they lack the expertise needed to administer key program features, including determining eligibility. Some tribes have requested and received technical assistance from states and the federal government to help them develop this expertise. The challenges tribes have to overcome in order to plan, develop, and implement tribal TANF programs include, among others: Obtaining the population data necessary to conduct reliable feasibility studies and to plan and design tribal TANF programs. HHS and tribal officials told us that state data on American Indians are inaccurate, complicating the determination of TANF grant amounts and making it difficult to design and plan programs. The law specifies that federal tribal TANF grants must be based on the funds expended on American Indians who were residing in the program’s designated service area and receiving AFDC from the state in fiscal year 1994. In practice, however, few states collected reliable data on the race of AFDC recipients in 1994, so some tribes negotiate the number on which their grant will be based, according to tribal officials. Having accurate data on American Indian caseloads is also critical for tribes as they design their programs and make decisions about how to allocate their resources. The degree to which any tribal TANF program’s federal grant corresponds to its current caseload varies substantially.
Some officials attribute this to underestimates of the number of American Indian families who were receiving AFDC in 1994. Others believe that eligible families are more likely to seek benefits from a tribal program, in part because of increased outreach. Changes in the economy and population growth over the past decade have also led to fluctuations in public assistance caseloads on some reservations. The majority of tribes with TANF programs responding to our survey question, 19 of 21, reported that the number of families they were currently serving was the same as, or smaller than, the number of families on which their grant was based. However, 2 of the 21 tribes reported that their TANF caseload was larger than the caseload on which their grant was based. Securing or leveraging the resources to establish the infrastructure needed to administer tribal TANF. Because most tribes starting tribal TANF programs do not have the infrastructure they need in place, they have secured and leveraged funding from a variety of sources to meet the basic “start-up” costs involved in setting up a new program. These start-up costs include those for basic infrastructure such as information technology systems. In addition, tribal TANF programs are not eligible to receive any of the performance incentives currently available to states. One infrastructure need that tribes have found particularly difficult to meet is the development of new information systems. Like states, tribal TANF programs are permitted to spend as much of their federal TANF grant on management information systems as they choose, and some tribes have developed systems for their new TANF programs. Unlike states, tribes did not receive additional federal funds expressly for the purpose of developing and operating automated information systems under AFDC, the precursor to the TANF program. Although most of the tribal TANF programs reported using an automated system to report TANF data, many—8 of 27—do not.
For example, the Fort Belknap tribal TANF program in Montana has a caseload of 175 families, yet it does not have an automated information system for the collection, processing, and reporting of TANF data. Eleven tribes reported having an automated system devoted to their TANF program. Others use the state’s computer system or contract with the state to collect, store, or process data for federal reporting purposes. Because most tribal TANF funds are used to provide benefits and services to TANF recipients, some tribes have leveraged funds from other federal programs or relied on other sources, including state TANF funds and tribal government contributions. States recognize that it is in their best interest if tribal TANF programs succeed, and therefore most provide at least some of their state maintenance of effort (MOE) funds to tribal programs in their state. HHS reports that 29 of 36 tribal TANF programs receive MOE funds from the states. Some states provide tribes with a share of MOE proportionate to the population they are serving; others provide some start-up costs; and others have not provided any funds. There is little incentive for states to contribute MOE to tribes. The law does not require states to contribute MOE to tribal programs, and in fact, if a tribe opts to administer a tribal TANF program, the state’s MOE requirement drops by an amount that is proportional to the population served by the tribal program. However, any contributions made by states to tribal TANF programs do count toward a state’s MOE requirement. Most tribal TANF programs that responded to our survey question, 24 of 27, reported that their tribal government made contributions to their TANF program. Eighteen of these respondents reported their tribes contributed office space or buildings. In addition, 15 programs received contributions from the tribal governments to cover other start-up costs. 
In addition to securing resources from federal, state, and tribal governments, some tribes have leveraged other funds to enable them to administer tribal TANF with limited resources. One way tribes have been able to do this is by combining TANF and other tribally administered federal employment and training programs into a single program with a single budget through a “477” plan. Tribes with 477 plans are able to save on administrative costs and reduce duplication of services by streamlining the administration of related programs. For example, a tribe with a 477 plan could provide job search and job preparation services to all tribal members through a single program, rather than having a separate program for TANF recipients. To date, 13 tribal TANF programs responding to our survey question have included TANF in their 477 plans. Two of the tribes we visited, the Confederated Salish and Kootenai tribe and the Sisseton-Wahpeton tribe, included their tribal TANF programs in 477 plans, and both tribes indicated that the ability to combine funding sources and streamline service delivery was instrumental in allowing them to administer tribal TANF within their budget constraints. Developing the expertise to better implement tribal TANF programs. Because they do not have experience in administering welfare programs, tribal TANF program administrators have had to quickly develop the expertise to plan and operate tribal TANF programs. Tribal TANF administrators have had to train staff on eligibility determination, data reporting requirements, and administration. They have also had to set up information systems, conduct feasibility studies, and leverage resources to help cover their costs. Most of the tribes that responded to our survey reported that states provided them with at least some technical assistance in these areas, but the amount of assistance provided by states varied.
PRWORA does not require states to provide technical assistance to tribes, but 19 tribes reported that the state helped them to a great or very great extent in developing their initial concept paper describing their TANF program. In addition, 26 tribal TANF programs reported that they had received technical assistance and support from the state in developing or operating automated systems to collect and report TANF program data. A number of programs reported that they received assistance from the state on other aspects of administering a TANF program. Tribes also reported that HHS has provided them with technical assistance when asked. Tribal officials indicated that certain types of technical assistance were not readily available to them from states or the federal government. For example, tribes interested in administering tribal TANF often conduct studies to help them determine whether it is feasible to administer their own programs, but neither states nor the federal government had provided tribes with technical assistance on how best to conduct a feasibility study that would provide them with all of the information they needed to make an informed decision. Similarly, some of the tribes we visited indicated that they have little access to information about the “best practices” of other tribal TANF programs, which could help them meet TANF goals. PRWORA gives tribes a new opportunity to exercise their sovereignty by administering their own TANF programs. At this early stage of tribal TANF implementation, we see tribes making progress in exercising their flexibility by tailoring the design of their programs and engaging their members in a broad array of work activities. However, tribes face challenges in developing the data, systems, and expertise they need to operate their programs. While tribes have moved forward in establishing their own programs, it is not yet known whether these programs will help recipients find employment before reaching time limits. 
In addition, it is not yet clear whether the flexibility afforded to tribal TANF programs will allow them to continue to provide benefits and services to those who reach the time limit without obtaining a job. Whether tribal TANF programs will be successful in moving more American Indians from welfare into the workforce will ultimately depend on not only the ability of the programs to meet their recipients’ need for income support, education, and training, but also the success of economic development efforts in providing employment opportunities for American Indians. Mr. Chairman, this concludes my prepared statement. I look forward to sharing the results of our final study with you in August. I will be happy to respond to any questions you or other Members of the Committee may have. For future contacts regarding this testimony, please call Cynthia M. Fagnoni at (202) 512-7215 or Clarita Mrena at (202) 512-3022. Individuals making key contributions to this testimony included Kathryn Larin, Carolyn Blocker, Mark McArdle, Bob Sampson, Catherine Hurley, and Corinna Nicolaou.

Under welfare reform, American Indian tribes have the option to run Temporary Assistance for Needy Families (TANF) programs either alone or as part of a consortium of other tribes rather than receiving benefits and services from state TANF programs. Because of the difficult economic circumstances on many reservations, the law also gives tribal TANF programs more flexibility to design their programs than it gives to states. Tribes have used various strategies to stimulate economic development; however, unemployment and poverty rates remain high on reservations, and prospects for economic growth are limited. Nationally, the number of American Indian families receiving TANF assistance has declined significantly in recent years. On some reservations, however, caseloads have remained the same or increased. American Indians represent an increasing proportion of the total TANF caseload in some states.
To date, 172 tribes, either alone or as part of a consortium, have used the act's flexibility to design and administer their own TANF programs. Tribes face challenges in implementing tribal TANF programs, including a lack of (1) reliable data on the number of American Indian TANF recipients; (2) infrastructure support, such as information systems; and (3) experience and expertise in administering welfare programs.
Vast sums of money funnel into America’s higher education system each year through student financial aid programs authorized by Title IV of HEA, as amended. In 1995, about $35.2 billion in aid was made available to almost 7 million students to attend postsecondary institutions, with aid available projected to reach $40 billion in 1997. As funding for Title IV programs has increased, so have losses to the federal government from honoring its guarantee on student loans. In 1968, the government paid $2 million to cover loan defaults; in 1987, default payments exceeded $1 billion; and by 1991, default claim payments reached a staggering $3.2 billion. In 1992, GAO listed the student loan program as 1 of 17 high-risk federal program areas especially vulnerable to waste, fraud, abuse, and mismanagement. More specifically, we found, among other things, that (1) schools used the program as a source of easy income with little regard for students’ educational prospects or the likelihood of their repaying loans and (2) management weaknesses plagued the Department, preventing it from keeping on top of these problems. The proprietary school sector has been associated with some of the worst examples of program abuse. In the United States, 5,235 proprietary schools represent about 50 percent of all postsecondary institutions. Most are small, enrolling fewer than 100 students, and offer occupational training of 2 years or less in fields ranging from interior design to computer programming. Proprietary schools enrolled more than 1 million students in fall 1993—about 10 percent of all undergraduates. Compared with nonprofit institutions, proprietary schools enroll higher percentages of women, minorities, and low-income students. About 67 percent of proprietary school students receive federal student aid under Title IV.
While average default rates for all postsecondary institutions reached an all-time high of 22 percent in 1990, the default rate for proprietary schools exceeded 41 percent. This disparity has triggered numerous investigations. Congressional investigations, for example, discovered evidence of fraud and abuse by proprietary school owners. The Congress found that some proprietary schools focused their efforts on enrolling educationally disadvantaged students and obtaining federal funds rather than on providing meaningful training or education. The Congress also concluded that the regulatory oversight system of Title IV programs provided little or no assurance that schools were educating students efficiently or effectively. Several recommendations emanating from these findings were included in the 1992 amendments to HEA. The Title IV regulatory structure includes three actors—the Department of Education, states, and accrediting agencies—known as the “triad.” Because of concern about federal interference in school operations, curriculum, and instruction, the Department has relied on accrediting agencies and states to determine and enforce standards of program quality. HEA recognizes the roles of the Department, the states, and the accrediting agencies as providing a framework for a shared responsibility for ensuring that the “gate” to student financial aid programs opens only to those institutions that provide students with quality education or training worth the time, energy, and money they invest. The Department plays two roles in gatekeeping. First, it verifies institutions’ eligibility and certifies their financial and administrative capacity. In verifying institutional eligibility, the Department reviews documents provided by schools to ensure their compliance with state authorization and accreditation requirements; eligibility renewal is conducted every 4 years. 
In certifying that a school meets financial responsibility requirements, the Department determines whether the school can pay its bills, whether it is financially sound, and whether its owners and employees have previously been convicted of defrauding the federal government. In certifying that institutions meet administrative requirements, the Department determines whether institutions have personnel resources adequate to administer Title IV programs and to maintain student records. Second, the Department grants recognition to accrediting agencies, meaning that the Department certifies that such agencies are reliable authorities as to what constitutes quality education or training provided by postsecondary institutions. In deciding whether to recognize accrediting agencies, the Secretary considers the recommendations of the National Advisory Committee on Institutional Quality and Integrity. The advisory committee consists of 15 members who are representatives of, or knowledgeable about, postsecondary education and training. Appointed by the Secretary of Education, committee members serve 3-year terms. The advisory committee generally holds public meetings twice a year to review petitions for recognition from accrediting agencies. The Department’s Accrediting Agency Evaluation Branch is responsible for reviewing information submitted by the accrediting agencies in support of their petitions. Branch officials analyze submitted materials, physically observe an accrediting agency’s operations and decision-making activities, and report their findings to the advisory committee. States use a variety of approaches to regulate postsecondary educational institutions. Some states establish standards concerning things like minimum qualifications of full-time faculty and the amount of library materials and instructional space. Other state agencies define certain consumer protection measures, such as refund policies.
In the normal course of regulating commerce, all states require postsecondary institutions to have a license to operate within their borders. Because of concerns about program integrity, the Congress, in amending HEA in 1992, decided to strengthen the role of states in the regulatory structure by authorizing the creation of State Postsecondary Review Entities (SPRE). Under the amendments, the Department would identify institutions for review by SPREs, using 11 criteria indicative of possible financial or administrative distress. To review institutions, SPREs would use state standards to assess such things as advertising and promotion, financial and administrative practices, student outcomes, and program success. On the basis of their findings, SPREs would recommend to the Department whether institutions should retain Title IV eligibility. The Congress terminated funding for SPREs in 1995. The practice of accreditation arose as a means of having nongovernmental, peer evaluation of educational institutions and programs to ensure a consistent level of quality. Accrediting agencies adopt criteria they consider to reflect the qualities of a sound educational program and develop procedures for evaluating institutions to determine whether they operate at basic levels of quality. As outlined by the Department of Education, the functions of accreditation include certifying that an institution or program has met established standards, assisting students in identifying acceptable institutions, assisting institutions in determining the acceptability of transfer credits, creating goals for self-improvement of weaker programs and stimulating a general raising of standards among educational institutions, establishing criteria for professional certification and licensure, and identifying institutions and programs for the investment of public and private funds. 
Generally, to obtain initial accreditation, institutions must prepare an in-depth self-evaluation that measures their performance against standards established by the accrediting agency. The accrediting agency, in turn, sends a team of its representatives to the institution to assess whether the applicant meets established standards. A report, containing a recommendation based on the institution’s self-evaluation and the accrediting agency’s team findings, is reviewed by the accrediting agency’s executive panel. The panel either grants accreditation for a specified period of time, typically no longer than 5 years, or denies accreditation. Once accredited, institutions undergo periodic re-evaluation. To retain accreditation, institutions pay sustaining fees and submit status reports to their accrediting agencies annually. The reports detail information on an institution’s operations and finances and include information on such things as student enrollment, completion or retention rates, placement rates, and default rates. In addition, institutions are required to notify their accrediting agencies of any significant changes at their institutions involving such things as a change in mission or objectives, management, or ownership. Accrediting agencies judge whether institutions continue to comply with their standards on the basis of the information submitted by institutions and other information such as complaints. Whenever an accrediting agency believes that an institution may not be in compliance, the agency can take a variety of actions. For example, agencies may require institutions simply to provide more information so that they can render a judgment, conduct site visits to gather information, require institutions to take specific actions that address areas of concern, or, in rare instances, ultimately revoke accreditation. Recent information points to some favorable trends regarding the participation of proprietary schools in the Title IV program. 
Fewer proprietary schools participate in Title IV programs now than 5 years ago, a trend reflected in decreased numbers of schools accredited by the six primary accrediting agencies. Proprietary schools receive a much smaller share of Title IV aid dollars now than in the past. And, while the default rates for proprietary school students are still far above those associated with nonprofit institutions, the rates have declined over the past few years. For the six agencies we contacted, we observed a trend toward accrediting fewer institutions since 1992 (see table 1). Agency officials pointed out a number of reasons for the decreases, including recent changes in Title IV regulations, more aggressive oversight by accrediting agencies, school closures, and the fact that schools once accredited by two or more agencies are now accredited only by one. We observed no clear trends in other accreditation decisions such as an increasing or decreasing propensity to grant, deny, or revoke school accreditation over the past few years. Some accrediting agency officials told us that because they effectively prescreen institutions applying for accreditation, they would not expect to see much change in the number of cases in which accreditation is denied or applications are withdrawn. Proprietary schools’ share of Title IV aid has steadily declined since the late 1980s. For example, about 25 percent of all Pell grant dollars went to students attending proprietary schools in 1986-87, but by 1992-93 that figure declined to about 18 percent (see fig. 1). While total Pell grant expenditures rose from $3.4 billion to $6.2 billion over these years, the amount retained by proprietary schools only increased from $.9 billion to $1.1 billion. For the subsidized Stafford loan program, the proprietary school share declined from nearly 35 percent of all dollars in 1986-87 to about 10 percent in 1992-93. 
In the Federal Family Education Loan Program, total dollars increased from $9.1 billion to $14.6 billion between 1986-87 and 1992-93, but dollars going to proprietary schools fell from $3.2 billion to $1.7 billion. The proportion of proprietary school students receiving Title IV aid has been declining as well, although these students remain more likely than others to receive aid. The proportion receiving aid fell from nearly 80 percent in 1986-87 to about 67 percent in 1992-93, while the proportion of students receiving aid at the public and private nonprofit schools remained steady. Furthermore, for proprietary school students who receive aid, the average dollar amount has risen more slowly than for students in other sectors. Average aid received by proprietary school students went up by 20 percent between 1986-87 and 1992-93; in contrast, the increase was 34 percent for public school students and 47 percent for private nonprofit school students. Loan default rates for proprietary school students have been declining in recent years, from 36.2 percent in 1991 to 23.9 percent in 1993 (see fig. 2), while default rates in other sectors have not changed. However, students at proprietary schools are still more likely than others to default on student loans. The most recent rates for 2- and 4-year nonprofit schools were 14 and 7 percent, respectively. One new measure adopted in the 1992 HEA amendments to help tighten eligibility for Title IV student financial aid programs was the so-called 85-15 rule. This provision prohibits proprietary schools from participating in Title IV programs if more than 85 percent of their revenues come from these programs. The presumption under the rule is that if proprietary schools are providing good services, they should be able to attract a reasonable percentage of their revenues from sources other than Title IV programs. 
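The 85-15 test itself reduces to simple arithmetic: a school's Title IV revenues divided by its total revenues must not exceed 85 percent. The sketch below illustrates that calculation only; the function name, inputs, and dollar figures are our own illustrative assumptions, not the Department's reporting format, and the actual regulations define which revenues count in far more detail.

```python
def meets_85_15(title_iv_revenue: float, total_revenue: float) -> bool:
    """Illustrative check of the 85-15 rule (1992 HEA amendments).

    A proprietary school fails the standard if more than 85 percent
    of its revenues come from Title IV programs. This is a sketch:
    the Department's regulations govern which revenues are countable.
    """
    if total_revenue <= 0:
        raise ValueError("total revenue must be positive")
    return title_iv_revenue / total_revenue <= 0.85

# Hypothetical figures: a school drawing $900,000 of its $1,000,000
# in revenue (90 percent) from Title IV fails the standard, while one
# drawing $800,000 (80 percent) meets it.
print(meets_85_15(900_000, 1_000_000))  # False
print(meets_85_15(800_000, 1_000_000))  # True
```

Note that a school at exactly 85 percent still meets the standard, since the statute bars only schools with more than 85 percent of revenues from Title IV.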
In other words, the 85-15 rule is based on the notion that proprietary schools which rely overwhelmingly on Title IV funds may be poorly performing institutions that do not serve their students well and may be misusing student aid programs, and therefore should not be subsidized with federal student aid dollars. Since the 85-15 rule went into effect last July, proprietary schools that fail to meet the standard must report this to the Department within 90 days following the end of their fiscal year. Schools that meet the standard must include a statement attesting to that fact in their audited financial statements due to the Department within 120 days following the end of their fiscal year. The reporting period has now elapsed for the vast majority of schools. Thus far, however, only four proprietary schools have notified the Department of their failure to meet the 85-15 standard. This finding has several possible explanations. For example, it may be that very few schools actually had more than 85 percent of their revenues coming from Title IV when the rule became law or that most such schools adjusted their operations to meet the standard when it took effect. Conversely, the actual number of schools that failed to meet the 85-15 standard could be substantially higher. According to the Department, about 25 percent of the 830 proprietary schools that submitted financial statements during the past 2 months have not properly documented whether they met the 85-15 standard. These schools may have met the 85-15 standard but misunderstood the reporting rules, or they may have failed to meet the 85-15 standard and intentionally not reported this fact in an attempt to avoid or postpone losing their Title IV eligibility. At the Chairman’s request, we recently initiated a study to address the core of this issue: Is there a clear relationship between reliance on Title IV revenues and school performance?
Using data from national accrediting associations, state oversight agencies, and the Department, we will attempt to determine whether greater reliance on Title IV funds is associated with poorer outcomes, such as lower graduation and placement rates. Annually, students receive over $3 billion from Title IV programs to attend postsecondary institutions that offer occupational training without regard to labor market circumstances. While Department regulations stipulate that proprietary schools—the principal vendors of occupational education and training under Title IV—provide instruction to “prepare students for gainful employment in a recognized occupation,” schools are not required to consider students’ likelihood of securing such employment. Students who enroll in occupational education programs, obtain grants, and incur significant debt often risk being unable to find work because they have been trained for fields in which no job demand exists. Proprietary school students are particularly vulnerable in this situation because, according to current research, unlike university graduates, they are less likely to relocate outside of their surrounding geographic region. The Department’s Inspector General (IG) recently estimated that about $725 million in Title IV funds are spent annually to train cosmetology students at proprietary schools, yet the supply of cosmetologists routinely exceeds demand. For example, in 1990, 96,000 cosmetologists were trained nationwide, adding to a labor market already supplied with 1.8 million licensed cosmetologists. For that year, according to the Bureau of Labor Statistics, only 597,000 people found employment as cosmetologists, about one-third of all licensed cosmetologists. In Texas, the IG also found that, not surprisingly, the default rate for cosmetology students exceeded 40 percent in 1990. At the Chairman’s request, we have also initiated a study to address this issue. 
States have information readily available to project future employment opportunity trends by occupation. We are analyzing its usefulness in identifying occupations that, in the short term, have an over- or undersupply of trained workers. Using this data in conjunction with databases from the Department, we hope to determine the pervasiveness of this problem and the Title IV costs associated with it. We expect to report our results on this matter to you early next year. Mr. Chairman, this concludes my prepared remarks, and, as I mentioned, we will be reporting to you in the near future on the results of our ongoing work for the Subcommittee. I am happy to answer any questions you may have at this time. For more information about this testimony, please call Wayne B. Upshaw at (202) 512-7006 or C. Jeff Appel at (617) 565-7513. Other major contributors to this testimony included Ben Jordan, Nancy Kinter-Meyer, Gene Kuehneman, Carol Patey, Jill Schamberger, Tim Silva, and Jim Spaulding. 
Pursuant to a congressional request, GAO examined whether proprietary schools receiving Title IV funding are providing students with quality educational programs. GAO found that: (1) fewer proprietary schools have been accredited since 1992 because of increases in school closures and oversight by accrediting agencies; (2) the proportion of proprietary school students receiving Title IV aid fell from 80 percent in the 1986-87 school year to 67 percent in the 1992-93 school year; (3) loan default rates fell, but remained substantially higher than those for students attending nonprofit institutions; (4) the 1992 Higher Education Act Amendments adopted a rule prohibiting schools from participating in Title IV programs if they receive more than 85 percent of their revenue from Title IV programs; (5) since the so-called 85-15 rule went into effect, only four proprietary schools have notified the Department of Education of their failure to meet the 85-percent standard; (6) schools not meeting the standard had more than 85 percent of their revenue coming from Title IV funding, improperly documented their eligibility, misunderstood the reporting rules, or intentionally misrepresented their findings; and (7) proprietary school students incur significant debt and are often unable to find jobs in their fields.
SSA administers two of the largest disability programs: the Disability Insurance (DI) program, enacted in 1956, and the Supplemental Security Income (SSI) program, enacted in 1972. In order to be eligible for DI or SSI benefits based on a disability, an individual must meet the definition of disability for these programs—that is, he or she must have a medically determinable physical or mental impairment that (1) prevents the individual from engaging in any substantial gainful activity, and (2) has lasted or is expected to last at least one year or result in death. To determine eligibility, SSA uses a five-step sequential process that is intended, in part, to expedite disability decisions when possible and limit administrative costs by conducting less intensive assessments at earlier steps (see fig. 1). At steps 1 and 2 of the process, SSA determines whether an applicant is working above income thresholds and whether the applicant’s impairments are medically severe. An applicant who passes both of these screens moves to step 3 of the process. At this step, SSA examiners assess the applicant’s medical impairment(s) against the Listings of Impairments, also known as the medical listings, which are organized into 14 major body systems for adults and reflect medical conditions that have been determined by the agency to be severe enough to qualify an applicant for benefits. If the individual’s impairment meets or is equal in severity to one or more of those in the listings, the individual is determined to have a disability. If not, SSA performs an assessment of the individual’s physical and mental residual functional capacity. Based on this assessment, SSA determines whether the individual is able to perform past relevant work (step 4) or any work that is performed in the national economy (step 5). To inform determinations at steps 4 and 5, SSA uses a Department of Labor database—known as the Dictionary of Occupational Titles (DOT)—for an inventory of occupations performed in the national economy. 
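The five-step sequence just described can be sketched as a short decision procedure. This is a simplified, hypothetical illustration (the data class and field names are ours, not SSA's actual adjudication logic):

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    working_above_income_threshold: bool  # step 1: substantial gainful activity
    has_severe_impairment: bool           # step 2: medical severity screen
    meets_or_equals_listing: bool         # step 3: Listings of Impairments
    can_do_past_relevant_work: bool       # step 4: residual functional capacity
    can_do_other_national_work: bool      # step 5: any work in national economy

def is_disabled(a: Applicant) -> bool:
    """Simplified sketch of the five-step sequential evaluation."""
    if a.working_above_income_threshold:   # step 1: screened out
        return False
    if not a.has_severe_impairment:        # step 2: screened out
        return False
    if a.meets_or_equals_listing:          # step 3: allowed without further review
        return True
    if a.can_do_past_relevant_work:        # step 4: denied
        return False
    return not a.can_do_other_national_work  # step 5

# An applicant whose impairment meets or equals a listing is allowed at step 3.
print(is_disabled(Applicant(False, True, True, False, False)))  # True
```

The ordering matters: the cheaper screens at steps 1 and 2 run first, which is what lets SSA limit administrative costs by deferring the more intensive residual-functional-capacity assessment to steps 4 and 5.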
Since 2003, SSA’s and other federal disability programs have remained on our high-risk list, in part because these programs emphasize medical conditions in assessing work capacity without adequate consideration of work opportunities afforded by advances in medicine, technology, and job demands. Since the 1990s, we, along with SSA’s Office of Inspector General and the Social Security Advisory Board, have expressed concerns that the medical listings being used no longer provide sufficient criteria to evaluate disability applicants’ inability to work and that SSA was simply extending the listings instead of periodically updating them. In 2008, we reported that SSA had established a new process for revising the listings—referred to by SSA as the “business process”—to better incorporate feedback into its continuous updates. This process, which has been in effect since 2003, includes incorporating feedback from multiple parties, including medical experts and claims examiners, to update the medical criteria. Under the process, SSA also gathers external feedback through comments associated with regulatory actions, such as the publication of advance notices of proposed rulemaking (advance notices) and notices of proposed rulemaking (notices) in the Federal Register. In addition, one year after a revision is made, SSA is to conduct a study reviewing the changes. According to SSA documentation, this internal case study, now referred to as the postimplementation study, involves surveying the field regarding the results of the regulation and areas to improve, as well as reviewing the data to determine whether expectations for the revision have been borne out. With respect to information on jobs in the national economy that supports SSA’s occupational criteria, we and others have reported that the DOT, which SSA still relies on to assess eligibility at steps 4 and 5 of the process, is outdated. 
The DOT has not been updated since 1991, and Labor has since replaced the DOT with a new database called the Occupational Information Network (O*NET). SSA, however, has determined that O*NET is not sufficiently detailed for evaluating DI and SSI disability claims and has therefore begun developing its own occupational information system (OIS) in order to better reflect the physical and mental demands of work in the national economy. Since our last review in 2008, SSA has made several changes that hold promise for improving medical listings updates. First, the agency is using a two-tiered system for ongoing revisions to the listings. Under this system, SSA first completes a comprehensive listings update for a body system that reviews all the diseases and disorders listed within that system and makes revisions it determines are needed. For subsequent updates of listings for a body system that underwent a comprehensive revision, SSA will pursue a more targeted approach—that is, SSA will conduct ongoing reviews with the expectation of making targeted revisions for a small number of medical diseases or disorders that need to be updated. Agency officials told us that targeted updates should be completed more quickly than comprehensive updates, allowing them to focus on the most critical changes needed. However, officials also noted that these ongoing reviews could result in major or even no changes, as appropriate. As of early March 2012, SSA had begun the ongoing review process to consider opportunities for targeted revisions for 8 out of 14 adult body systems that were recently comprehensively revised. Also as of early March 2012, the agency had not yet completed comprehensive revisions for the six remaining systems, which the agency expects to do before it conducts subsequent reviews under the targeted approach. Another change, according to agency officials, is that in 2010 the SSA Commissioner set a 5-year cycle time for updating listings for each body system. 
Previously, SSA set expiration dates for periodically updating listings according to each body system, ranging from 3 to 8 years, but frequently extended them. SSA officials believe that conducting targeted reviews will generally allow the agency to conclude any necessary revisions prior to the 5-year expiration period. Additionally, they expect that using the “business process,” which requires early public notification of changes and obtaining necessary data and feedback from internal and external parties, should help keep continuous reviews on track. See figure 2 for the status and expiration dates of listings for the 14 adult body systems, undergoing review for either comprehensive or possible targeted revisions, as of early March 2012. SSA has made another change by more extensively engaging the medical community to identify ways to improve the medical listings. For example, SSA contracted with the Institute of Medicine to study its medical criteria for determining disability and to make recommendations for improving the timeliness and accuracy of its disability decisions, resulting in a 2007 report with recommendations and a symposium of experts in 2010. SSA has addressed some of the institute’s recommendations, such as making better use of its administrative data to update criteria and creating a standing committee through the institute to provide recommendations for listings revisions. SSA continues to face delays in completing both comprehensive and other ongoing updates. For example, as of early March 2012, SSA officials told us they still needed to complete comprehensive revisions for listings of six body systems that have been ongoing for the last 19 to 33 years, after numerous extensions beyond the original expiration periods (see table 1). 
Two of the remaining six body system listings—mental and neurological disorders, which are among those SSA uses most frequently in its eligibility determination process—have not been comprehensively revised for 27 years. Listings for four of the six remaining body systems are set to expire in 2012. Of these four, SSA is developing a notice of proposed rulemaking for three of them and has issued a notice on the fourth. However, it is unclear whether SSA will complete the revisions before they are set to expire. In 2008, SSA began a multiyear project to develop a new source of occupational information that will replace the outdated information currently being used to determine if claimants are able to do their past work or any other work in the national economy. Since the 1960s, SSA has been using the DOT, which contains a list of job titles found in the national economy and was last updated in 1991. The DOT provides SSA with descriptions of the physical demands of work—such as climbing, balancing, and environmental requirements—for each of the more than 12,000 occupations listed. According to SSA, these descriptions have been essential to its evaluations of how much a claimant can do despite his or her impairment and whether this level of functioning enables the claimant to do his or her past work or any other work. After its last limited update, Labor decided to replace the DOT with O*NET, which has far fewer job titles than the DOT but has served Labor’s purposes more efficiently. According to an SSA report, after investigating potential alternatives, SSA decided that O*NET and other existing databases with occupational information were not sufficiently detailed and able to withstand legal challenges for use in its decision-making process. SSA further decided to develop its own occupational information system, which would contain detailed information as in the DOT, but would also include additional information, such as the mental demands of work. 
In addition, the OIS should (1) meet SSA’s legal, program, and data requirements; (2) be flexible enough to incorporate changes in SSA’s policies and processes; and (3) be able to be updated to reflect the evolving workplace environment. In 2008, SSA began taking several steps to guide the development of its OIS. SSA created an internal office and working group, as well as an Occupational Information Development Advisory Panel, composed of external experts in areas related to the development of occupational information systems. The advisory panel holds quarterly public meetings and has several subcommittees that review material and make recommendations to SSA on developing various components of the OIS. For example, in a 2009 report, the advisory panel supported the need for SSA to develop a new source of occupational information, rather than adapt O*NET, and recommended the type of data SSA should collect, as well as suggested ways to classify occupations. To further inform its efforts, SSA has sought input from agencies or organizations that either collect occupational information or also use the DOT. For example, SSA officials held initial meetings with Labor and U.S. Census Bureau officials to gain information on sampling methods used for the O*NET, the Occupational Employment Statistics program, and the Census Bureau’s household surveys. The agencies are in the process of completing a Memorandum of Understanding that will formalize their collaboration efforts on the new OIS. According to an SSA official, as the OIS project progresses, SSA plans to convene ad hoc roundtables with experts and other agency officials to explore specific subject areas, such as sampling issues. 
Besides working with Labor and Census Bureau officials, SSA officials and panel members have sought input from other experts and current users of the DOT, such as SSA disability adjudicators and external rehabilitation professionals, by conducting a user needs analysis in 2009 and presenting the OIS project at events and conferences. The Occupational Employment Statistics program produces employment and wage estimates for approximately 800 occupations. The Census Bureau’s household surveys include (1) the American Community Survey, which is an ongoing survey that provides annual data on demographics such as age, education, and disabilities, and (2) the Current Population Survey, which is primarily a labor force survey, conducted every month by the Census Bureau for the Bureau of Labor Statistics, and provides data such as the national unemployment rate. In July 2011, SSA issued a research and development plan that outlines key components of the OIS in order to implement the OIS by 2016 at an estimated cost of $108 million. For example, the plan includes several baseline activities to identify and study other occupational information systems and various approaches for analyzing occupations that may inform or could be leveraged in SSA’s OIS data collection. The plan also includes activities to identify the primary occupational, functional, and vocational characteristics of current beneficiaries. Other key components of the plan include developing descriptions of work requirements, such as the physical and mental demands for jobs, and data collection and analysis strategies. SSA also plans to develop a strategy for piloting data collection nationwide within this time frame. As of February 2012, SSA had made progress on many of the baseline activities outlined in its research and development plan for the OIS. 
For example, according to an SSA official, its investigation of existing occupational information systems, now complete, has resulted in useful information about design issues other organizations have confronted and mitigated when creating their own systems. Additionally, SSA’s preliminary analysis of its own administrative data identified the most frequently cited occupations and functional and vocational characteristics of disability applicants. SSA officials told us the agency will target the occupations identified in this analysis for its pilot studies of the OIS. Also in 2011, SSA completed a comprehensive framework for assessing an individual’s capacity to work—key to informing the OIS content, according to SSA officials—which was based on recommendations of outside experts as well as SSA’s policy and program requirements. While SSA has made progress on several key activities, agency officials delayed 2011 completion dates for certain activities and anticipate making additional changes to the project timeline as a result of not meeting staffing goals for fiscal year 2011. For example, the activities that were delayed by several months included finalizing reports for the baseline studies and conducting a literature review that would inform how occupations might be analyzed for the OIS. SSA officials told us that they would have needed the full complement of projected 2012 staff by September 2011 to complete all of the 2012 planned activities within the estimated schedule. However, SSA officials said they did not have the budget to hire new staff in September 2011. To address this challenge, SSA officials hired consultants to meet some of their needs. SSA officials also met with the Office of Personnel Management to explore the possibility of an interagency agreement that would allow SSA to use one or two of the Office of Personnel Management’s industrial organizational psychologists to help on a part-time basis. 
As part of our ongoing work, we are assessing SSA’s current OIS project schedule and cost estimates against best practices, and have preliminarily identified some gaps in SSA’s approach. For example, best practices require cost estimates to be comprehensive and include information about life cycle costs—that is, how much the project is expected to cost over time. However, while SSA has estimated the cost to research and develop the OIS, the estimate does not project the future costs to implement or maintain the system. The cost of sustaining an OIS could be significant, based on other agencies’ experiences maintaining their systems for collecting national occupational information. We preliminarily identified other gaps, such as lack of documentation describing step by step how the cost estimate was developed so that those unfamiliar with the program could understand how it was created. For our final report due later in 2012, we plan to deliver more comprehensive findings on how well SSA is managing the development of its OIS against best practices, such as estimating costs of the OIS and ensuring that the project schedule reliably estimates related activities, the length of time they will take, and how they are interrelated. We will also identify any mitigation strategies the agency may have to address project risks, such as the risk of the agency not receiving full funding. Chairman Johnson, Ranking Member Becerra, and Members of the Subcommittee, this concludes my prepared statement. I will be happy to respond to any questions. For further information regarding this testimony, please contact me at 202-512-7215 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Michele Grgich, Assistant Director, James Bennett, Kate Blumenreich, Julie DeVault, Alex Galuten, Sheila McCoy, Patricia M. 
Owens, Anjali Tekchandani, Kathleen Van Gelder, and Walter Vance. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. SSA administers two of the largest disability programs, with annual benefit payments that have grown fivefold over the last 20 years, from $35 billion in 1990 to over $164 billion in 2010, and the agency receives millions of new applications annually. GAO has designated federal disability programs as a high-risk area, in part because eligibility criteria have not been updated to reflect medical and technological advances and labor market changes. Given the size and cost of its disability programs, SSA needs updated criteria to appropriately determine who qualifies for benefits. In this statement, GAO discusses initial observations from its ongoing review and assessment of SSA’s efforts to (1) update its medical criteria and (2) develop a new occupational information system. To do this, GAO reviewed prior GAO and SSA Inspector General reports; relevant federal laws and regulations; program documentation including policies, procedures, strategic goals, and supporting project plans; and cost estimates. GAO also interviewed SSA officials, project stakeholders, experts, and representatives from other agencies that administer disability programs. This work is ongoing and GAO has no recommendations at this time. GAO plans to issue its final report later in 2012. The Social Security Administration (SSA) has made several changes to improve the process it uses for updating its medical criteria, but continues to face challenges ensuring timely updates. 
SSA’s medical criteria for adults are in the form of listings of medical conditions and impairments organized under 14 body systems, which SSA periodically updates. To help ensure timely, periodic updates of a body system’s listings, SSA is moving away from comprehensively revising a body system’s listings toward a more targeted approach, wherein SSA selects for revision those impairment listings most in need of change. To date, SSA has completed comprehensive revisions of listings for 8 of the 14 body systems and now is in the process of reviewing them to determine whether and which targeted revisions are appropriate. In 2010, the SSA Commissioner set a 5-year cycle time for updating listings for each body system, replacing the agency’s prior practice of setting expiration dates for listings that ranged from 3 to 8 years and then frequently extending them. To further increase the timeliness and accuracy of decisions, SSA has sought recommendations from the Institute of Medicine and has acted on some of them, such as creating a standing committee to provide advice on updating the listings. However, SSA continues to face challenges keeping its listings up to date. For example, SSA is still working on completing comprehensive revisions of listings for six body systems that have been ongoing for 19 to 33 years. SSA staff told us that a lack of staff and expertise, along with the complexity and unpredictability of the regulatory process, have made it challenging to maintain its schedule of periodic updates for all listings. SSA has embarked on an ambitious plan to produce by 2016 an occupational inventory database to support its disability benefit decisions, but it is too soon to determine if SSA will meet key time frames. SSA currently relies on an occupational information source developed by the Department of Labor that was updated for the last time in 1991 and is viewed by many as outdated. 
In 2008, SSA initiated a project to develop its own occupational information system (OIS), which SSA expects will provide up-to-date information on the physical and mental demands of work, and in sufficient detail to support its disability benefit decisions. To guide the creation of its OIS, SSA established an advisory panel, collaborated with outside experts and other agencies, and in July 2011 issued a research and development plan detailing all relevant activities and goals between 2010 and 2016. As of February 2012, SSA had completed many initial research efforts, including investigating other types of occupational information systems and identifying job analysis methods. Despite preliminary progress, it is too early to determine if SSA will meet its target implementation date. SSA officials told us that due to staffing shortages it did not meet all initial goals on time and may need to adjust its time frames for future activities. While GAO is still evaluating SSA’s schedule and cost estimates against best practices, we have preliminarily identified some potential gaps in SSA’s approach, such as not reflecting the costs to both implement and maintain a new OIS.
Approximately 2.6 million federal employees throughout the United States and abroad execute the responsibilities of the federal government. Federal employees work in every state, with about 90 percent outside the Washington, D.C., metropolitan area. They perform functions across a multitude of sectors, from those vital to the long-term well-being of the country—such as environmental protection, intelligence, social work, and financial services—to those directly charged with aspects of public safety—including corrections, airport and aviation safety, medical services, border protection, and agricultural safety. Worker protection strategies are crucial to sustain an adequate workforce during a pandemic. During the peak of an outbreak of a severe influenza pandemic in the United States, an estimated 40 percent of the workforce could be unable to work because of illness, the need to care for ill family members, or fear of infection. Under the Implementation Plan, all federal agencies are expected to develop their own pandemic plans that, along with other requirements, describe how each agency will provide for the safety and health of its employees and support the federal government’s efforts to prepare for, respond to, and recover from a pandemic. Because the dynamic nature of pandemic influenza requires that the scope of federal government continuity of operations (COOP) planning include preparing for a catastrophic event that is not geographically or temporally bounded, the Federal Emergency Management Agency concluded that planning for a pandemic requires a state of preparedness beyond traditional federal government COOP planning. For example, for pandemic planning purposes, essential functions may be more inclusive and extend longer than the 30-day traditional COOP-essential functions. 
Our survey questions for the 24 agencies were drawn from pandemic planning checklists and federal guidance to help agencies plan for protecting their employees during a pandemic. The 24 agencies we surveyed reported being in various stages of formulating their pandemic plans. While most of the agencies had developed plans, several reported that they were still formulating their plans. For example, in February 2009, the Small Business Administration (SBA) reported that it had begun to draft a more complete pandemic influenza annex to its COOP plan with an estimated completion date of spring 2009. The Department of Defense (DOD) had completed its overarching departmentwide plan, and DOD reported that its installations were tailoring their Force Health Protection Plans to include pandemic influenza considerations. Identifying essential functions and enumerating the employees who would perform them is the first step in training those employees, communicating the risks and expectations of working during a pandemic, and planning and budgeting for measures that would mitigate those risks. Nineteen agencies reported that they had identified essential functions at both the department and component levels that cannot be continued through telework in the event of pandemic influenza or, in the case of the Office of Personnel Management (OPM), the U.S. Agency for International Development (USAID), and the National Science Foundation (NSF), determined that all of their essential or important government functions could be performed remotely. Of the remaining 5 agencies, DOJ reported identifying essential functions at the component level but noted that it was revising its department-level plan. 
At the time of our survey, the General Services Administration (GSA) reported that it had not identified its essential functions in the event of a pandemic, while three agencies—DOD, SBA, and the Department of Housing and Urban Development (HUD)—were in the process of either identifying essential functions or determining which functions could be continued through telework. The pandemic coordinators in three agencies did not know whether the employees who performed essential functions in their agencies had been notified that they might be expected to continue operations during a pandemic. We also asked the pandemic coordinators from the 24 agencies whether they had planned or budgeted for any of seven potential measures to protect workers whose duties require their on-site presence during a pandemic. The measures in our survey included procurement of personal protective equipment such as masks and gloves; supplemental cleaning programs for common areas; distribution of hygiene supplies (hand sanitizers, trash receptacles with hands-free lids, etc.); obtaining antiviral medications; arrangements to obtain pandemic vaccines to the extent available; prioritization of employees for vaccinations; and prioritization of employees for antiviral medications. Federal pandemic guidance recommends the measures according to risk assessments for employees, and therefore, based on the agencies’ missions and activities, not all measures are equally appropriate for all agencies. The most frequently reported measure was procurement of personal protective equipment, with 19 agencies reporting that they had planned or budgeted for this measure. For example, DHS reported that it had done fit testing of employees for N95 respirators and training on the proper use of other personal protective equipment and had pre-positioned stockpiles of the equipment for employees in 52 locations. 
Prioritization of employees for vaccinations was the measure least frequently reported, with 11 agencies reporting that they had taken this measure. The survey showed that agencies’ most frequently cited social distancing strategies involved using telework and flexible schedules for their workforces. Restrictions on meetings and gatherings and avoiding unnecessary travel were also part of 18 agencies’ plans. Although many of the agencies’ pandemic influenza plans rely on social distancing strategies, primarily telework, to carry out the functions of the federal government in the event of a pandemic outbreak, only one agency, NSF, stated that it had tested its IT infrastructure to a great extent. The agency reported assessing its telework system formally several times each year and each day through various means. On the other hand, five agencies reported testing their IT systems to little or no extent. Table 1 shows the survey responses. Given the potential severity of a pandemic, it is important that employees understand the policies and requirements of their agencies and the alternatives, such as telework, that may be available to them. Many employees and their supervisors will have questions about their rights, entitlements, alternative work arrangements, benefits, leave and pay flexibilities, and hiring flexibilities available during the turmoil created by a pandemic. Therefore, it is important that each agency implement a process to communicate its human capital guidance for emergencies to managers and make staff aware of that guidance. Twenty-one of the 24 pandemic coordinators surveyed reported making information available to their employees on how human capital policies and flexibilities will change in the event of a pandemic outbreak. Three agencies—DOC, GSA, and SSA—reported that they have not. Of the agencies that reported making information available, two had done so indirectly. 
HUD stated that it shared information with unions, and Treasury reported that it briefed its human capital officers on the human capital policies and flexibilities available to address pandemic issues. BOP, a component of DOJ, has the mission of protecting society by confining offenders in the controlled environments of prisons and community-based facilities that are safe, humane, cost-efficient, and appropriately secure and that provide work and other self-improvement opportunities to assist offenders in becoming law-abiding citizens. Approximately 35,000 federal employees ensure the security of federal prisons and provide inmates with programs and services. BOP’s pandemic influenza plan was developed through its Office of Emergency Preparedness and was disseminated to its central office and six regional offices in May 2008. BOP’s pandemic plan addresses the need for infection control measures to mitigate influenza transmission and calls for education of correctional workers and the inmate population. Accordingly, all facilities are instructed that they should have readily available and ample supplies of bar soap and liquid soap in the restrooms, alcohol-based wipes throughout the facility, and hand sanitizers if approved by the warden. Based on a historical review of the 1918 pandemic influenza and HHS’ pandemic planning assumptions, BOP intends to supply antiviral medication to 15 percent of correctional workers and inmates in each facility if the influenza outbreak is geographically spread throughout the United States. BOP has some challenges in preparing for pandemic influenza. For example, social distancing measures to protect correctional workers are difficult to implement at the facility level. BOP officials said that there are many situations in which close contact is inevitable between correctional workers and inmates and where personal protective equipment, such as gloves and masks, would not be feasible. 
A unique pandemic planning challenge facing federal correctional workers is the maintenance of an effective custodial relationship between them and the inmates in federal prisons. According to BOP officials, this relationship depends on communication and mutual trust, as correctional workers in federal prisons do not carry weapons or batons inside the cellblocks. Rather, they use verbal methods of communication to keep order. BOP officials at United States Penitentiary Leavenworth said that they would not allow a situation in which correctional workers wear N95 respirators or surgical masks but the inmates do not. Despite the challenges BOP faces with pandemic influenza planning, the bureau has some advantages unique to its facilities. Every correctional facility is a closed and self-contained system, and each facility is somewhat self-sufficient, maintaining a 30-day supply of food, water, and other necessities for any type of contingency. Correctional facilities also have well-tested experience in emergency and health hazard planning and management and in infection control, which provides them with a solid foundation to build on for pandemic influenza preparedness. Additionally, correctional facilities generally have strong ties with their local communities, which is important because pandemic influenza will largely be addressed by the resources available to each community it affects. FMS, a component of Treasury, provides central payment services to federal agencies, operates the federal government's collections and deposit systems, provides governmentwide accounting and reporting services, and manages the collection of delinquent debt owed to the government. FMS has four regional financial centers—production facilities that rely heavily on integrated computer and telecommunications systems to perform their mission. However, they also rely on light manufacturing operations to print and enclose checks for release at specific times of the month. 
Nearly 206 million of FMS's payments were disbursed by check in fiscal year 2008. A regional center Deputy Director said that the organization is aware that part of the U.S. economy rests on the regional financial centers and that they will need to issue payments even during a pandemic. For the most part, the regional financial centers are planning that in the event of a pandemic, the nature of their business will be unchanged, but there will be issues with sickness, absenteeism, communication, and hygiene that they must address. Employees whose positions require, on a daily basis, direct handling of materials or on-site activity that cannot be handled remotely or at an alternative worksite are not eligible for telework. According to an FMS official, even with a minimum crew on-site to produce paper checks, there will be instances when employees will need to be within 3 feet of other employees. As part of the regional center pandemic plans, officials researched the types of supplies they would need based on the risks faced in their facilities. For example, in the Kansas City regional financial center, the janitorial staff now routinely wipes off door handles, tabletops, and other high-traffic areas. As another part of the Kansas City regional plan, the center stocks such items as N95 respirators, gloves, hand sanitizers, disinfectants, and fanny packs that include items such as ready-to-eat meals, hand-cranked flashlights, small first-aid kits, and emergency blankets. The FMS regional financial centers face some unique pandemic planning challenges. Since the centers are production facilities with large open spaces as well as enclosed office areas, pandemic planning requires different responses for different areas. An FMS official noted that employees' response and diligence in following disease containment measures in the different areas would determine the success of those measures. Scheduling of production personnel is also a challenge. 
Since the production of the checks must be done according to a deadline and internal controls must be maintained, schedules are not flexible. FMS officials had not made any arrangements for pandemic pharmaceutical interventions for the regional financial centers, in part because the relatively small number of essential employees required to be on-site, as well as the large open spaces in the regional facilities, make social distancing measures more feasible. FAA, a component of DOT, expects the National Airspace System to function throughout an influenza pandemic, in accordance with the preparedness and response goal of sustaining infrastructure and mitigating impact to the economy and the functioning of society. Maintaining the functioning of the National Airspace System will require that FAA's air traffic controllers, who ensure that aircraft remain safely separated from other aircraft, vehicles, and terrain, continue to work on-site. While FAA expects the demand for air traffic control, which manages cargo as well as passenger travel, to be reduced in the event of a severe pandemic outbreak, its contingency plans assume full air traffic levels as a starting baseline. According to an FAA official, although passenger travel may be diminished, the shipping of cargo may increase. The Air Traffic Organization, FAA's line of business responsible for the air traffic management services that air traffic controllers provide, had not directed facilities, such as its air route traffic control centers, to develop pandemic-specific plans or incorporate these pandemic plans into their all-hazards contingency plans. FAA officials said that all-hazards contingency and continuity plans are adapted to the facility level and are regularly implemented during natural disasters such as hurricanes. 
Although these plans are not specific to a pandemic, FAA officials reported that the all-hazards plans allow the Air Traffic Organization to mitigate the impact of adverse events, including reduced staffing levels, on National Airspace System operations. The Air Traffic Organization plans to direct its facilities to develop pandemic-specific plans or enhance their preexisting all-hazards contingency plans at the local field facility level after a number of actions, such as the development of an FAA workforce protection policy, are completed. Protecting air traffic controllers in the event of a pandemic outbreak is particularly challenging for several reasons. Air traffic controllers work in close proximity to one another; the 6 feet of separation recommended for social distancing during a pandemic by the Centers for Disease Control and Prevention and the Occupational Safety and Health Administration is not possible for them. In addition, air traffic controllers cannot use personal protective equipment such as N95 respirators or surgical masks, as these impede the clear verbal communication necessary to maintain aviation safety. FAA recently completed a study examining the feasibility of air traffic controllers using powered air purifying respirators. Because of a number of concerns with using the respirators, such as noise, visibility, and comfort, FAA officials concluded that their long-term use during a pandemic appears to be impractical. Moreover, cross-certification of air traffic controllers is problematic. Attaining full performance levels for the controllers takes up to 3 years, and air traffic controllers proficient in one area of airspace cannot replace controllers proficient in another airspace without training and certification. Finally, FAA regulations on medication for air traffic controllers are strict because certain medications may impair an air traffic controller's performance. 
The Office of Aviation Medicine's policy on the use of antiviral medication for prophylactic use by on-duty controllers was still in draft as of early 2009. The survey results from the 24 CFO Act agency pandemic coordinators, as well as information from the case study agencies, indicate that a wide range of pandemic planning activities are under way and that all of the agencies are taking steps to some degree to protect their workers in the event of a pandemic. However, agencies' progress is uneven, and while we recognize that the pandemic planning process is evolving and is characterized by uncertainty and constrained resources, some agencies are clearly in the earlier stages of developing their pandemic plans and of being able to provide health protection commensurate with the risk of exposure their essential employees may face. Under the HSC's Implementation Plan, DHS was charged with, among other things, monitoring and reporting to the Executive Office of the President on the readiness of departments and agencies to continue their operations while protecting their workers during an influenza pandemic. DHS officials reported that in late 2006 or early 2007 they asked HSC representatives with direct responsibility for the Implementation Plan for clarification on the issue of reporting agencies' ability to continue their operations while protecting their workers during a pandemic. DHS officials said they were informed that they did not have to prepare a report. Instead, according to White House counsel representatives, the HSC planned to take on the monitoring role through its agency pandemic plan certification process. In November 2006, the HSC issued Key Elements of Departmental Pandemic Influenza Operational Plan (Key Elements), which covered areas such as the safety and health of department employees, and essential functions and services and how agencies will maintain them in the event of significant and sustained absenteeism during a pandemic. 
The Key Elements document stated that to ensure uniform preparedness across the U.S. government, the HSC was requesting that by December 2006 the agencies certify in writing to the HSC that they were addressing applicable elements of the checklist. Subsequently, in August 2008, the HSC revised the Key Elements to reflect current federal government guidance on pandemic planning and included a request for recertification. However, the HSC's certification process, as implemented, did not provide for monitoring and reporting as envisioned in the Implementation Plan regarding agencies' abilities to continue operations in the event of a pandemic while protecting their employees. In addition, as originally envisioned in the Implementation Plan, the report was to be directed to the Executive Office of the President, with no provision in the plan for the report to be made available to the Congress. The spring 2009 outbreak of H1N1 influenza accentuates the responsibility of agencies to have pandemic plans that ensure their ability to continue operations while protecting their workers who serve the American public. As evidenced by our survey results and case studies, some agencies are not close to having operational pandemic plans, particularly at the facility level. In addition, there is no real monitoring mechanism in place to ensure that agencies' workforce pandemic plans are complete. A monitoring process should be in place to ensure that federal agencies are making progress in developing their plans to protect their workforce in the event of a pandemic and that agencies have the information and guidance they need to develop operational pandemic plans. To address this issue, our report recommended that the HSC request that the Secretary of Homeland Security monitor and report to the Executive Office of the President on the readiness of agencies to continue their operations while protecting their workers during an influenza pandemic. 
The reporting should include an assessment of the agencies’ progress in developing their plans including any key challenges and gaps in the plans. The request should also establish a specific time frame for reporting on these efforts. We also suggested that to help support its oversight responsibilities, the Congress may want to consider requiring DHS to report to it on agencies’ progress in developing and implementing their pandemic plans, including any key challenges and gaps in the plans. The HSC commented that the report makes useful points regarding opportunities for enhanced monitoring and reporting within the executive branch concerning agencies’ progress in developing plans to protect their workforce. DHS commented that our recommendations would contribute to its future efforts to ensure that government entities are well prepared for what may come next. Mr. Chairman and Members of the Subcommittee, this completes my statement. I would be pleased to respond to any questions that you might have. For further information on this testimony, please contact Bernice Steinhardt, Director, Strategic Issues, at (202) 512-6543 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony include William J. Doherty, Assistant Director, Judith C. Kordahl, Senior Analyst, and Karin Fangman, Deputy Assistant General Counsel. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
As evidenced by the spring 2009 outbreak of the H1N1 virus, an influenza pandemic remains a real threat to the nation and the world and has the potential to shut down work critical to the smooth functioning of society. This testimony addresses (1) the extent to which federal agencies have made pandemic plans to protect workers who cannot work remotely and are not first responders; (2) the pandemic plans selected agencies have for certain occupations performing essential functions other than first response; and (3) the opportunities to improve agencies' workforce pandemic plans. The issues discussed in the testimony are based on the GAO report, Influenza Pandemic: Increased Agency Accountability Could Help Protect Federal Employees Serving the Public in the Event of a Pandemic (GAO-09-404, June 12, 2009). In this report, GAO recommended that the Homeland Security Council (HSC) request that the Department of Homeland Security (DHS) monitor and report to the Executive Office of the President on the readiness of agencies to continue operations while protecting their employees in the event of a pandemic. To help carry out its oversight role, the Congress may want to consider requiring a similar report from DHS. The HSC noted that it will give serious consideration to the findings and recommendations in the report, and DHS said the report will contribute to its efforts to ensure government entities are well prepared for what may come next. GAO surveyed the 24 agencies employing nearly all federal workers to gain an overview of governmentwide pandemic influenza preparedness efforts and found that a wide range of pandemic planning activities are under way. However, as of early 2009, several agencies reported that they were still developing their pandemic plans and their measures to protect their workforce. For example, several agencies had yet to identify essential functions during a pandemic that cannot be performed remotely. 
In addition, although many of the agencies' pandemic plans rely on telework to carry out their functions, five agencies reported testing their information technology capability to little or no extent. To get a more in-depth picture of agency planning, GAO selected three case study agencies that represent essential occupations other than first response that cannot be performed remotely. The three case study occupations--correctional workers, production staff disbursing federal checks, and air traffic controllers--showed differences in the degree to which their individual facilities had operational pandemic plans. For example, the Bureau of Prisons' correctional workers had only recently been required to develop pandemic plans for their correctional facilities. Nevertheless, the Bureau of Prisons has considerable experience limiting the spread of infectious disease within its correctional facilities and had also made arrangements for antiviral medications for a portion of its workers and inmates. The Department of the Treasury's Financial Management Service, which has production staff involved in disbursing federal payments such as Social Security checks, had pandemic plans for its four regional centers and had stockpiled personal protective equipment such as respirators, gloves, and hand sanitizers at the centers. Air traffic control management facilities, where air traffic controllers work, had not yet developed facility pandemic plans or incorporated pandemic plans into their all-hazards contingency plans. The Federal Aviation Administration had recently completed a study to determine the feasibility of the use of respirators by air traffic controllers and concluded that their long-term use during a pandemic appears to be impractical. There is no mechanism in place to monitor and report on agencies' progress in developing workforce pandemic plans. 
Under the National Strategy for Pandemic Influenza Implementation Plan, DHS was required to monitor and report on the readiness of departments and agencies to continue operations while protecting their employees during an influenza pandemic. The HSC, however, informed DHS in late 2006 or early 2007 that no specific reports on this were required to be submitted. Rather, the HSC requested that agencies certify to the council that they were addressing in their plans the applicable elements of a pandemic checklist in 2006 and again in 2008. This process did not include any assessment or reporting on the status of agency plans. Given agencies' uneven progress in developing their pandemic plans, monitoring and reporting would enhance agencies' accountability for protecting their employees in the event of a pandemic.
At the end of fiscal year 2007, the number of civilian and military personnel in DOD's acquisition workforce totaled over 126,000—of which civilian personnel comprised 89 percent. According to DOD, these in-house personnel represent more than 70 percent of the total federal acquisition workforce. DOD defines its acquisition workforce to include 13 career fields, based on the Defense Acquisition Workforce Improvement Act of 1990. From fiscal years 2001 to 2007, the number of civilian and military acquisition personnel in these 13 fields declined overall by 2.5 percent; however, some career fields have increased substantially, while others have shown dramatic declines. Table 1 shows the 13 fields, the number of military and civilian personnel in each of these fields in 2001 and 2007, and the percentage change between those 2 years. During this same time period, the number of contracting actions valued at over $100,000 increased by 62 percent and dollars obligated on contracts increased by 116 percent, according to DOD. Moreover, DOD has reported that the number of major defense acquisition programs has increased from 70 to 95. To augment its declining in-house acquisition workforce, DOD has relied more heavily on contractor personnel. In addition to the overall decline in its in-house acquisition workforce and an increased workload, DOD faces shifting workforce demographics and a changing strategic environment. The U.S. workforce as a whole is aging and experiencing a shift in the labor pool away from persons with science and technical degrees. According to DOD, advances in technology, such as the ability to do jobs from almost anywhere in the world, are also driving workforce changes and increasing global competition for the most highly educated and skilled personnel. 
To address these and other challenges—including wars in Afghanistan and Iraq, an evolving mission to combat threats around the world, and an increased need to collaborate with both domestic and international partners—DOD has begun to establish a more centralized management framework for forecasting, recruiting, developing, and sustaining the talent pool needed to meet its national security mission. Several components in the Office of the Secretary of Defense (OSD) share policy and guidance responsibility for the workforce. The Under Secretary of Defense for Personnel and Readiness serves as the Chief Human Capital Officer for DOD—for both military and civilian personnel—and has overall responsibility for the development of the department's competency-based workforce planning and its civilian human capital strategic plan. Within the Office of Personnel and Readiness, the Office of Civilian Personnel Policy has overall responsibility for managing DOD's civilian workforce and has the lead role in developing and overseeing implementation of the plan. For example, the Implementation Report for the DOD Civilian Human Capital Strategic Plan 2006-2010 lists enterprisewide skills and competencies for 25 mission-critical occupations, which the department has begun to assess in terms of future needs, budget-based projections, and anticipated gaps. Another OSD component, AT&L, is responsible for managing DOD's acquisition workforce, including tailoring policies and guidance specific to the acquisition workforce and managing the training and certification of that workforce. As required by the National Defense Authorization Act for Fiscal Year 2008 (2008 NDAA), AT&L has drafted an addendum to the implementation report for the civilian human capital strategic plan to specifically address management and oversight of the acquisition workforce. 
Each military service has its own corresponding personnel and acquisition offices that develop additional service-specific guidance and provide management and oversight of the service's workforce. The services have generally delegated the determination of workforce needs to the command levels and their corresponding program offices. Although each service uses a different management structure, the commands typically make overall organizational budgetary and personnel allocations, whereas the program offices identify acquisition workforce needs; make decisions regarding the civilian, military, and contractor makeup of the workforce; and provide the day-to-day management of the workforce. In addition, each service designates organizations aligned by one or more career fields to monitor and manage career paths and training, and to identify gaps in current skill sets. DOD lacks critical departmentwide information in several areas necessary to assess, manage, and oversee the acquisition workforce and help ensure it has a sufficient acquisition workforce to meet DOD's national security mission. Specifically, AT&L does not have key pieces of information regarding its in-house acquisition workforce, such as complete data on skill sets, which are needed to accurately identify its workforce gaps. In addition, it lacks information on the use and skill sets of contractor personnel performing acquisition-related functions. Omitting these data from DOD's assessments not only skews analyses of workforce gaps, but also limits DOD's ability to make informed workforce allocation decisions. Critical success factors for human capital management include collecting data on workforce competencies and skills mix, and evaluating human capital approaches—including those for acquiring and retaining talent—for how well they support efforts to achieve program results. 
Such efforts, linked to strategic goals and objectives, can enable an agency to recognize, prepare, and obtain the knowledge, skills, abilities, and size for the workforce it needs to pursue its current and future missions. DOD has increasingly relied on contractors to perform core missions, but has yet to develop a workforce strategy for determining the appropriate mix of contractor and government personnel. Our prior work has noted the importance of effective human capital management to better ensure that agencies have the right staff who are doing the right jobs in the right place at the right time by making flexible use of their internal workforce and appropriate use of contractors. We have also reported that decisions regarding the use of contractors should be based on strategic planning regarding what types of work are best done by the agency or contracted out. While DOD planning documents state that the workforce should be managed from a "total force" perspective—which calls for contractor personnel to be managed along with civilian and military personnel—DOD does not collect departmentwide data on contractor personnel. Program offices, however, do have information about contractor personnel. Data we obtained from 66 program offices show that contractor personnel comprised more than a third of those programs' acquisition-related positions (see table 2). According to MDA officials, the agency collects and uses such data in its agency-level workforce allocation processes, which in turn has helped inform staffing and resource decisions at the program office level. Because contractor personnel likely comprise a substantial part of all personnel supporting program offices, AT&L is missing information on a key segment of the department's total acquisition workforce (in-house and contractor personnel). DOD also lacks information on factors driving program offices' decisions to use contractor personnel rather than hire in-house personnel. 
DOD guidance for determining the workforce mix outlines the basis on which officials should make decisions regarding what type of personnel— military, civilian, or contractor—should fill a given position. The guidance’s primary emphasis is on whether the work is considered to be an inherently governmental function, not on whether it is a function that is needed to ensure institutional capacity. The guidance also states that using the least costly alternative should be an important factor when determining the workforce mix. However, of the 31 program offices that reported information about the reasons for using contractor personnel, only 1 indicated that reduced cost was a key factor in the decision to use contractor personnel rather than civilian personnel. Instead, 25 cited staffing limits, the speed of hiring, or both as main factors in their decisions to use contractor personnel. Additionally, 22 program offices cited a lack of in-house expertise as a reason for using contractor personnel, and 17 of those indicated that the particular expertise sought is generally not hired by the government. In addition, at 3 of the 4 program offices we visited, officials said that they often hire contractors because they may face limits on the number of civilian personnel they can hire, and because budgetary provisions may allow program offices to use program funds to pay for additional contractor personnel, but not for hiring civilian personnel. Program officials also cited the lengthy hiring process for civilian personnel as a reason for using contractor personnel. AT&L’s lack of key pieces of information hinders its ability to determine gaps in the number and skill sets of acquisition personnel needed to meet DOD’s current and future missions. At a fundamental level, workforce gaps are determined by comparing the number and skill sets of the personnel that an organization has with what it needs. However, AT&L lacks information on both what it has and what it needs. 
With regard to information on the personnel it has, AT&L not only lacks information on contractor personnel, but it also lacks complete information on the skill sets of the current acquisition workforce and whether these skill sets are sufficient to accomplish its missions. AT&L is currently conducting a competency assessment to identify the skill sets of its current acquisition workforce. While this assessment will provide useful information regarding the skill sets of the current in-house acquisition workforce, it is not designed to determine the size, composition, and skill sets of an acquisition workforce needed to meet the department’s missions. AT&L also lacks complete information on the acquisition workforce needed to meet DOD’s mission. The personnel numbers that AT&L uses to reflect needs are derived from the budget. Because these personnel numbers are constrained by the size of the budget, they likely do not reflect the full needs of acquisition programs. Of the 66 program offices that provided data to us, 13 reported that their authorized personnel levels are lower than those they requested. In a report on DOD’s workforce management, RAND noted that the mismatch between needs and available resources means that managers have an incentive to focus on managing the budget process instead of identifying the resources needed to fulfill the mission and then allocating resources within the constraints of the budget. AT&L has begun to respond to recent legislative requirements aimed at improving DOD’s management and oversight of its acquisition workforce, including developing data, tools, and processes to more fully assess and monitor its acquisition workforce. Each service has also recently initiated, to varying degrees, additional efforts to assess its own workforce at the service level. Some recent DOD efforts aimed at improving the broader workforce may also provide information to support AT&L’s acquisition workforce efforts. 
While it is too early to determine the extent to which these efforts will improve the department’s management and oversight, the lack of information on contractor personnel raises concerns about whether AT&L will have the information it needs to adequately assess, manage, and oversee the total acquisition workforce. As required by the 2008 NDAA, AT&L plans to issue an addendum to the Implementation Report for the DOD Civilian Human Capital Strategic Plan 2006-2010. According to DOD, this addendum will lay out AT&L’s strategy for managing and overseeing the acquisition workforce. The addendum is to provide an analysis of the status of the civilian acquisition workforce and discuss AT&L’s efforts for implementing the Acquisition Workforce Development Fund, which the 2008 NDAA required DOD to establish and fund. AT&L has focused its implementation efforts in three key areas: (1) recruiting and hiring, (2) training and development, and (3) retention and recognition. AT&L has established a steering board responsible for oversight on all aspects of the fund, including the approval of the use of funds for each proposed initiative. In addition to the addendum to the implementation report, AT&L created its own human capital plan in an effort to integrate competencies, training, processes, tools, policy, and structure for improving the acquisition workforce. AT&L has also developed some tools and begun initiatives designed to help with its management of the acquisition workforce, such as its competency assessment that is scheduled to be completed in March 2010. AT&L recently established the Defense Acquisition Workforce Joint Assessment Team tasked with assessing and making recommendations regarding component workforce size, total force mix, future funding levels, and other significant workforce issues. 
According to an AT&L official, the team will also develop an estimate of the acquisition workforce needed to meet the department’s mission that is unconstrained by the budget. Table 3 provides a brief description of AT&L’s recent efforts. Each service has also begun to take a more focused look at its acquisition workforce by developing service-specific acquisition workforce plans and designating leads tasked with monitoring career paths and training, and identifying gaps in current skill sets. For example, responsibility for different aspects of the Navy’s acquisition workforce has recently been distributed among a number of corporate-level offices—such as Manpower and Reserve Affairs; Research, Development, and Acquisition; and Manpower, Personnel, Training, and Education. To illustrate, Research, Development, and Acquisition will develop and maintain acquisition strategic guidance and provide management oversight of the capabilities of the Navy’s acquisition workforce. Table 4 provides examples of service-level workforce initiatives. In addition to the AT&L and service-level initiatives, some DOD efforts aimed at improving the broader workforce may provide information that can assist AT&L in assessing, managing, and overseeing the acquisition workforce. Some promising initiatives include the following: The Office of Civilian Personnel Policy recently established a Civilian Workforce Capability and Readiness Program, and in November 2008 officially established a corresponding program management office tasked with monitoring overall civilian workforce trends and conducting competency assessments and gap analyses. DOD, through its components, is developing an annual inventory of contracts for services performed in the preceding fiscal year. 
This inventory is required to include, among other things, information identifying the missions and functions performed by contractors, the number of full-time contractor personnel equivalents that were paid for performance of the activity, and the funding source for the contracted work. The Army issued its first inventory, which determined the equivalent number of contractor personnel it used in fiscal year 2007 based on the number of hours of work paid for under its service contracts. DOD has issued guidance directing programs to consider using DOD civilian personnel to perform new functions or functions currently performed by contractor personnel in cases where those functions could be performed by DOD civilian personnel. The guidance also requires that DOD civilian personnel be given special consideration to perform certain categories of functions, including functions performed by DOD civilian personnel at any time during the previous 10 years and those closely associated with the performance of an inherently governmental function. When the inventory of contracts for services is completed, DOD is mandated by the 2008 NDAA to use the inventory as a tool to identify functions currently performed by contractor personnel that could be performed by DOD civilian personnel. DOD is developing additional guidance and a tool to assist in developing cost comparisons for evaluating the use of in-house personnel rather than contractor personnel. These initiatives have the potential to enhance DOD’s acquisition workforce management practices and oversight activities. However, these efforts may not provide the comprehensive information DOD needs to manage and oversee its acquisition workforce. For example, although the Army has issued its first inventory of its service contracts, inventories for all DOD components are not scheduled to be completed before June 2011. 
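The Army's hours-based approach described above, deriving full-time-equivalent (FTE) contractor counts from hours paid under service contracts, can be sketched roughly as follows. The 2,080-hour work year is an assumed divisor, and the contract names and hour totals are invented for illustration; the report does not specify the divisor the Army actually used.

```python
HOURS_PER_FTE_YEAR = 2080  # 40 hours/week x 52 weeks (assumed divisor)

def contractor_ftes(hours_paid_by_contract):
    """Sum hours paid across service contracts and convert to FTEs."""
    total_hours = sum(hours_paid_by_contract.values())
    return total_hours / HOURS_PER_FTE_YEAR

# Example with made-up contract data:
hours = {"logistics-support": 410_000, "engineering-services": 250_000}
print(round(contractor_ftes(hours), 1))  # 660000 / 2080 -> 317.3
```

A count produced this way reflects only the volume of paid work; as noted below, it says nothing about the skill sets or functions of the contractor personnel involved.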
Further, as currently planned, the inventory will not include information on the skill sets and functions of contractor personnel. As DOD continues to develop and implement departmentwide initiatives aimed at providing better oversight of the acquisition workforce, some of the practices employed by leading organizations for managing their workforces could provide insights for DOD’s efforts. These practices include: identifying gaps in the current workforce by assessing the overall competencies needed to achieve business objectives, compared to current competencies; establishing mechanisms to track and evaluate the effectiveness of initiatives to close workforce gaps; taking a strategic approach in deciding when and how to use contractor personnel to supplement the workforce; and tracking and analyzing data on contractor personnel. We have previously reported many of these practices as critical factors for providing good strategic human capital management. The leading organizations we reviewed develop gap analyses and workforce plans from estimates of the number and composition of personnel with specific workforce competencies needed to achieve the organization’s objectives. For example, Lockheed Martin assesses the skill mix needed to fulfill future work orders and compares this with the firm’s current skill mix to identify potential workforce gaps. An official at Lockheed Martin said one such assessment indicated that the company needed skill sets different from those needed in the past because it is receiving more proposals for logistics work associated with support and delivery contracts, rather than its traditional system development work. Table 5 provides examples of how companies we reviewed link workforce assessments to their organizational objectives. These leading organizations also assess their efforts to close workforce gaps by tracking data on specific recruiting and retention metrics. 
For example, Microsoft assesses the quality of its new hires based on the performance ratings and retention for their first 2 years with the company. According to a company official, this allows Microsoft to compare the results of using its different hiring sources, such as college recruiting and other entry-level hiring methods. Similarly, Deloitte uses performance ratings, retention data, and employee satisfaction surveys to help determine a return on investment from its college recruiting efforts and to identify schools that tend to supply high-quality talent that the company is able to retain. Table 6 provides examples of recruiting and retention metrics used by the companies we reviewed. In addition to tracking data on metrics, Deloitte uses quantitative models that analyze workforce demographics and other factors to predict actions of job candidates and employees. Data from such metrics and models can be used to inform other workforce decisions and focus limited resources for use where the greatest benefit is expected. Finally, the companies we reviewed take a strategic approach to determining when to use contractor support. Officials from Deloitte, General Electric, and Rolls Royce said they generally use contractors to facilitate flexibility and meet peak work demands without hiring additional, permanent, full-time employees. Some of the companies also place limits on their use of contractor employees. General Electric, for example, uses contractor personnel for temporary support and generally limits their use for a given operation to 1 year in order to prevent the use of temporary personnel to fill ongoing or permanent roles. Additionally, General Electric and Lockheed Martin limit the use of contractor personnel to noncore functions. An official from General Electric said that it rarely outsources essential, sophisticated, or strategic functions, or large components of its business. 
Likewise, Lockheed Martin does not outsource capabilities that are seen as discriminators that set the company apart from its market competitors. Deloitte, General Electric, Lockheed Martin, and Microsoft also maintain and analyze data on their contractor employees in order to mitigate risks, ensure compliance with in-house regulations and security requirements, or to ensure that reliance on contractor support creates value for the company. An official at Deloitte noted, for example, that if work involving contractor support continues for an extended period, the business unit might be advised to request additional full-time employee positions in its next planning cycle or streamline its process to eliminate the need for contractor support. At Rolls Royce, an official told us that one unit uses an algorithm to determine the percentage of work being outsourced by computing the number of full-time-equivalent personnel needed to complete the same level of work performed through outsourcing. This information is important because of the cost of outsourcing. According to the company official, outsourcing may be more costly—all other factors being equal—because of the profit consideration for the contractor. As a result, outsourcing decisions can become a trade-off between multiple factors, such as cost, quality, capacity, capability, and speed. Major shifts in workforce demographics and a changing strategic environment present significant challenges for DOD in assessing and overseeing an acquisition workforce that has the capacity to acquire needed goods and services, as well as monitor the work of contractors. While recent and planned actions of AT&L and other DOD components could help DOD address many of these challenges, the department has yet to determine the acquisition workforce that it needs to fulfill its mission or develop information about contractor personnel. 
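The outsourcing-share calculation the Rolls Royce official describes above can be sketched along these lines: express outsourced work as the FTEs that would be needed to perform the same work in-house, then compute the share of total work outsourced. The formula, the 2,080-hour work year, and the example figures are assumptions for illustration, not the company's actual algorithm.

```python
def outsourced_share(in_house_ftes, outsourced_hours, hours_per_fte=2080):
    """Fraction of a unit's total work performed through outsourcing."""
    outsourced_ftes = outsourced_hours / hours_per_fte
    return outsourced_ftes / (in_house_ftes + outsourced_ftes)

# 100 in-house FTEs plus 52,000 outsourced hours (25 FTE-equivalents):
print(round(outsourced_share(100, 52_000), 2))  # 25 / 125 -> 0.2
```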
While DOD has begun to estimate the number of full-time-equivalent contractor personnel through its inventory of contracts for services, this effort will not identify the skill sets and functions of contractor personnel performing acquisition-related work or the length of time for which they are used. At the same time, DOD lacks guidance on the appropriate circumstances under which contractor personnel may perform acquisition work. Without such guidance, DOD runs the risk of not maintaining sufficient institutional capacity to perform its missions. Until DOD maintains detailed departmentwide information on its contractor personnel performing acquisition-related work, it will continue to have insufficient information regarding the composition, range of skills, and the functions performed by this key component of the acquisition workforce. Without this information upon which to act, the department runs the risk of not having the right number and appropriate mix of civilian, military, and contractor personnel it needs to accomplish its missions. To better ensure that DOD’s acquisition workforce is the right size with the right skills and that the department is making the best use of its resources, we recommend that the Secretary of Defense take the following four actions: Collect and track data on contractor personnel who supplement the acquisition workforce—including their functions performed, skill sets, and length of service—and conduct analyses using these data to inform acquisition workforce decisions regarding the appropriate number and mix of civilian, military, and contractor personnel the department needs. Identify and update on an ongoing basis the number and skill sets of the total acquisition workforce—including civilian, military, and contractor personnel—that the department needs to fulfill its mission. DOD should use this information to better inform its resource allocation decisions. 
Review and revise the criteria and guidance for using contractor personnel to clarify under what circumstances and the extent to which it is appropriate to use contractor personnel to perform acquisition-related functions. Develop a tracking mechanism to determine whether the guidance has been appropriately implemented across the department. The tracking mechanism should collect information on the reasons contractor personnel are being used, such as whether they were used because of civilian staffing limits, civilian hiring time frames, a lack of in-house expertise, budgetary provisions, cost, or other reasons. DOD provided written comments on a draft of this report. DOD concurred with three recommendations and partially concurred with one recommendation. DOD’s comments appear in appendix I. DOD also provided technical comments on the draft report, which we incorporated as appropriate. DOD partially concurred with the draft recommendation to collect and track data on contractor personnel to inform the department’s acquisition workforce decisions. DOD stated that it agrees that information on contractor personnel supporting the acquisition mission is necessary for improved acquisition workforce planning, especially with regard to the number and the acquisition functions performed. The department also noted that establishing a contractual requirement to capture more detailed workforce information, such as skill sets and length of service of contractor personnel, needs to be carefully considered. We agree that the manner in which data on contractor personnel are to be collected should be carefully considered. We continue to believe that comprehensive data on contractor personnel are needed to accurately identify the department’s acquisition workforce gaps and inform its decisions on the appropriate mix of in-house or contractor personnel. 
DOD concurred with our recommendation to identify and update on an ongoing basis the number and skill sets of the total acquisition workforce that it needs to fulfill its mission and stated that it has an ongoing effort to accomplish this. DOD states that its ongoing efforts will address this recommendation; however, the efforts cited in its response improve DOD’s information only on its in-house acquisition workforce and do not identify the total acquisition workforce, including contractor personnel, the department needs to meet its missions. We revised the recommendation to clarify that DOD’s acquisition workforce management and oversight should encompass contractor as well as civilian and military personnel. DOD also concurred with our recommendations to revise the criteria and guidance for using contractor personnel to perform acquisition-related functions, and to develop a tracking mechanism to determine whether the revised guidance is being appropriately implemented across the department. We are sending copies of this report to the Secretary of Defense. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5274 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Katherine V. Schinasi, Managing Director; Ann Calvaresi-Barr, Director; Carol Dawn Petersen, Assistant Director; Ruth “Eli” DeVan; Kristine Heuwinkel; Victoria Klepacz; John Krump; Teague Lyons; Andrew H. Redd; Ron Schwenn; Karen Sloan; Brian Smith; Angela D. Thomas; and Adam Yu made key contributions to this report. Human Capital: Opportunities Exist to Build on Recent Progress to Strengthen DOD’s Civilian Human Capital Strategic Plan. GAO-09-235. 
Washington, D.C.: February 10, 2009. High Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Department of Homeland Security: A Strategic Approach Is Needed to Better Ensure the Acquisition Workforce Can Meet Mission Needs. GAO-09-30. Washington, D.C.: November 19, 2008. Human Capital: Transforming Federal Recruiting and Hiring Efforts. GAO-08-762T. Washington, D.C.: May 8, 2008. Defense Contracting: Army Case Study Delineates Concerns with Use of Contractors as Contract Specialists. GAO-08-360. Washington, D.C.: March 26, 2008. Defense Management: DOD Needs to Reexamine Its Extensive Reliance on Contractors and Continue to Improve Management and Oversight. GAO-08-572T. Washington, D.C.: March 11, 2008. Federal Acquisition: Oversight Plan Needed to Help Implement Acquisition Advisory Panel’s Recommendations. GAO-08-515T. Washington, D.C.: February 27, 2008. The Department of Defense’s Civilian Human Capital Strategic Plan Does Not Meet Most Statutory Requirements. GAO-08-439R. Washington, D.C.: February 6, 2008. Defense Acquisitions: DOD’s Increased Reliance on Service Contractors Exacerbates Long-standing Challenges. GAO-08-621T. Washington, D.C.: January 23, 2008. Department of Homeland Security: Improved Assessment and Oversight Needed to Manage Risk of Contracting for Selected Services. GAO-07-990. Washington, D.C.: September 17, 2007. Federal Acquisitions and Contracting: Systemic Challenges Need Attention. GAO-07-1098T. Washington, D.C.: July 17, 2007. Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD’s Acquisition of Services. GAO-07-832T. Washington, D.C.: May 10, 2007. Highlights of a GAO Forum: Federal Acquisition Challenges and Opportunities in the 21st Century. GAO-07-45SP. Washington, D.C.: October 2006. Framework for Assessing the Acquisition Function At Federal Agencies. GAO-05-218G. Washington, D.C.: September 2005. A Model of Strategic Human Capital Management. GAO-02-373SP. 
Washington, D.C.: March 15, 2002.
The mission of the Internal Revenue Service, a bureau within the Department of the Treasury (Treasury), is to provide America’s taxpayers top quality service by helping them understand and meet their tax responsibilities and by applying the federal tax laws with integrity and fairness to all. In carrying out its mission, IRS annually collects over $2 trillion in taxes from millions of individual taxpayers and numerous other types of taxpayers and manages the distribution of over $300 billion in refunds. To guide its future direction, the agency has two strategic goals: (1) improve taxpayer service to make voluntary compliance easier and (2) enforce the law to ensure everyone meets their obligations to pay taxes. IRS is organized into four primary operating divisions to meet the needs of specific taxpayer segments: The Wage and Investment Division serves individual taxpayers and provides the information, support, and assistance these taxpayers need to fulfill their tax obligations. The Small Business and Self-Employed Division serves all fully or partially self-employed individuals and corporations and partnerships with assets of $10 million or less. The Large Business and International Division serves corporations and partnerships with assets greater than $10 million. The Tax Exempt and Government Entities Division serves a large and unique economic sector of organizations, which include pension plans, exempt organizations, governmental entities, and tax-exempt bond issuers. IRS’s Modernization and Information Technology Services (MITS) organization is responsible for delivering IT services and solutions to support tax administration as well as the operations of the broader organization. 
MITS also supports the delivery of IRS’s business systems modernization efforts and improvement of customer service, and its responsibilities include management of all IT investments in both the development, modernization, and enhancement phase and the operations and maintenance phase. MITS is headed by the Chief Technology Officer. Within MITS, the Strategy and Planning Office, headed by the Associate Chief Information Officer for Strategy and Planning, has primary responsibility for defining and implementing the IT investment management process. The Strategy and Planning office includes a Strategy and Capital Planning (S&CP) group that focuses on IRS-wide IT strategy and capital planning and investment controls. The S&CP office also helps ensure the alignment of IT investments with Treasury’s and IRS’s strategies, as well as with best practices for investment management. It includes the following offices: Investment Planning and Selection Office—responsible for enabling the prioritization and selection of significant IT investments. IT Strategic Planning Office—responsible for determining strategic alignment between the functional areas of the Strategy and Planning office and MITS. Transition Management Office—responsible for assessing organizational readiness through an examination of people, process, assets, and financials of new, enhanced, and retired systems through procedures and tools and communication with MITS business partners. Estimation Program Office—responsible for developing and using government and industry best estimation practices in the delivery of full IT life cycle estimates. Investment Management Office—responsible for serving as the primary interface with Treasury’s capital planning and investment control organizations to coordinate actions including baseline change requests, budget formulation documents, and Office of Management and Budget (OMB) IT Dashboard reporting. 
Investment Evaluation Office—responsible for examining whether an IT investment has met its intended objectives and yielded expected benefits as projected in the business case. The office is also responsible for examining the current performance of an investment and measures the performance against baseline parameters such as cost, schedules, and performance measures, and makes recommendations to IRS senior executives to aid investment management decisions to optimize the IRS IT portfolio. The Strategy and Planning office also includes the Financial Management Services group, which has responsibility for providing guidelines and direction on federal budget and financial policy for IT investments and operations. The group provides guidance on all matters pertaining to budget and financial policy, budget formulation, and financial analysis, including the management of IT expenses across the agency. Figure 1 shows a simplified and partial organizational chart of IRS. IT plays a critical role in enabling IRS to carry out its mission and responsibilities. For example, the agency relies on information systems to process tax returns, account for tax revenues collected, send bills for taxes owed, issue refunds, assist in the selection of tax returns for audit, and provide telecommunications services for all business activities, including the public’s toll-free access to tax information. The President’s fiscal year 2012 budget request for IRS is $13.3 billion. Of this requested amount, about $2.67 billion is for IT investments. According to IRS, about $447 million, or 17 percent, is to be spent on development, modernization, or enhancement activities; $1.88 billion, or 70 percent, is to be spent on operations and maintenance activities; and the remaining $344 million, or 13 percent, is for efforts associated with implementation of the Patient Protection and Affordable Care Act. 
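The fiscal year 2012 IT budget split cited above can be checked directly; figures are in millions of dollars as reported, and the category labels are shorthand for development, modernization, or enhancement (DME); operations and maintenance (O&M); and Patient Protection and Affordable Care Act implementation (ACA).

```python
# IRS fiscal year 2012 IT request, in millions of dollars (as reported).
it_request = {"DME": 447, "O&M": 1880, "ACA": 344}

total = sum(it_request.values())  # about $2.67 billion
shares = {k: round(100 * v / total) for k, v in it_request.items()}
print(total, shares)  # 2671 {'DME': 17, 'O&M': 70, 'ACA': 13}
```

The rounded shares reproduce the 17, 70, and 13 percent figures in the text.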
IRS expects to fund 31 major systems representing about $1.68 billion, or 63 percent, of the total IT request, and 124 nonmajor systems representing $1 billion, or 37 percent, of the total request. Over the years, we have reviewed IRS’s Business Systems Modernization (BSM) program, the agency’s ongoing effort to modernize its tax administration and internal management systems, on an annual basis and also performed other work relevant to investment management at IRS: Since 1999, we have reviewed and reported on IRS’s Business Systems Modernization program. In particular, we have reported on program management capabilities and controls that are critical to the effective management of this program, such as cost and schedule estimates, requirements development and management, and postimplementation reviews of deployed projects. Accordingly, we have made numerous recommendations aimed at strengthening these controls and capabilities. Most recently, in our May 2010 review of the Business Systems Modernization program, we reported that while IRS had done much to define the phases of its Customer Account Data Engine 2 strategy for managing individual taxpayer accounts, the agency had not defined specific time frames for addressing key planning activities for the second phase, including defining core requirements. We recommended that IRS take several actions to improve program management capabilities and controls, including defining specific time frames for planning activities for the second phase to guide progress. In commenting on a draft of this report, IRS stated it would review the recommendations and provide a detailed corrective action plan to address them. As part of our annual audit of IRS’s financial statements, we assess the effectiveness of the agency’s information security controls over its key financial and tax processing systems, information, and interconnected networks. 
In March 2011, we reported that although IRS had made progress in correcting information security weaknesses that we have reported previously, many weaknesses had not been corrected, and we identified many new weaknesses during our audit of its fiscal year 2010 financial statements. Specifically, 65 out of 88 previously reported weaknesses—about 74 percent—had not yet been corrected. In addition, we identified 37 new weaknesses. These weaknesses relate to access controls, configuration management, and segregation of duties. Weaknesses in these areas increase the likelihood of errors in financial data that result in misstatement and expose sensitive information and systems to unauthorized use, disclosure, modification, and loss. An underlying reason for these weaknesses—both old and new—is that IRS has not yet fully implemented key components of a comprehensive information security program. These weaknesses continue to jeopardize the confidentiality, integrity, and availability of the financial and sensitive taxpayer information processed by IRS’s systems and, considered collectively, were the basis of our determination that IRS had a material weakness in internal control over its financial reporting related to information security in fiscal year 2010. In March 2011, we provided an update on IRS’s implementation of its Customer Account Data Engine 2 strategy for managing individual taxpayer accounts, noting weaknesses in the agency’s efforts to improve the credibility of cost estimates and that IRS had not yet finalized expected benefits or set related quantitative targets for the second phase. We recommended that IRS (1) improve the credibility of revised cost estimates by including all costs or provide a rationale for excluding costs, and adjust costs for inflation, and (2) identify all of the second phase benefits, set the related targets, and identify how systems and business process might be affected. IRS agreed with our recommendations. 
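The arithmetic behind the fiscal year 2010 information security figures cited earlier in this section can be verified as follows; the combined count of open weaknesses is our derivation for illustration, not a figure stated in the audit.

```python
# 65 of 88 previously reported weaknesses remained uncorrected,
# and 37 new weaknesses were identified during the audit.
previously_reported, still_open, newly_found = 88, 65, 37

uncorrected_pct = round(100 * still_open / previously_reported)
total_open = still_open + newly_found  # derived, not from the audit
print(uncorrected_pct, total_open)  # 74 102
```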
Treasury’s Inspector General for Tax Administration has also recently reported on investment management issues at IRS: In July 2010, the organization reported on IRS’s process to manage and control IT investments. It reported that IRS had recently merged its investment management activities into the Strategy and Capital Planning office, and stated that this office was in the process of updating IRS’s Capital Planning and Investment Control Process Guide, developing desk guides for business cases and data calls, and identifying the steps for implementing a systematic investment selection, monitoring, and review process. It also reported that it concurred with the Strategy and Capital Planning office’s November 2008 self-assessment that IRS was at the ITIM Stage 2 maturity level, and was moving toward the Stage 3 level of developing a complete investment portfolio. In addition to the groups within the MITS Strategy and Planning office mentioned above, several groups and individuals play a role in IRS’s process to manage its IT investments. Involvement from these groups and individuals is necessary to complete aspects of the process including reviewing, approving, and selecting proposed investments; monitoring the investments through their implementation; and evaluating the results once they have become operational. Table 1 identifies the groups that have a role in this process and shows their composition and responsibilities. IRS’s investment management process consists of four phases: preselect, select, control, and evaluate. Each phase is to be completed before beginning the subsequent phase. The preselect phase, which IRS began using during the summer of 2009, is to determine which proposals for new investments can move into the select phase and be considered for inclusion in the IRS IT portfolio. The process is intended to identify the specific business need an investment is expected to address and determine its alignment with the IRS strategic plan. 
Only investments that best support IRS’s strategic plan and priorities are to be promoted through the preselect process and progress to the following phases. During this phase, a business owner prepares a two-page business case summary that, among other things, documents alignment with the agency priorities established by IRS’s Senior Executive Team. In addition, a preliminary economic analysis accompanies the business case for each proposal. The Strategy and Capital Planning office is to provide the ERT with an initial overview of the submissions, ensuring the data are complete and consistent with Senior Executive Team priorities and the IRS strategic vision. The ERT is to review these documents and determine whether the proposals can move forward.

The select phase is the process by which proposals approved during the preselect phase are further reviewed by the ERT and selected for inclusion in IRS’s budget submission. Business cases are expanded from the two-page summary prepared during the preselect phase to include added information such as three technical alternatives, a risk analysis, and performance measures. A solution concept and cost estimate document that further refines the investment proposal and strengthens the business case is also developed. The investment summary is to be provided to the Deputy Commissioners, who use it to determine which investments are to be considered for inclusion in the agency’s portfolio. The ERT makes recommendations based on an investment’s strategic value assessment, benefits, economic/risk assessments, standards, performance measures, and major project milestones and deliverables, and works with the Deputy Commissioners to reach consensus on the proposals to recommend for the agency’s budget submission, which then go to the Commissioner for final approval. The investments selected by the Commissioner are forwarded to the Department of the Treasury and then to OMB for funding approval.
Once IRS’s budget appropriation is funded, the investments proceed to the control phase. The purpose of the control phase is to provide oversight of projects that have been selected or are already under way. Prior to entering the control phase, an investment must have a developed project plan that includes objectives, an acquisition plan, a risk management plan, a schedule, deliverables, and projected/actual costs and benefits. Additionally, the investment must have established a governance board investment review schedule and obtained governance approval to enter the control phase.

During the control phase, Organizational Level Governance Boards and Management Level Governance Boards are to oversee nonmajor projects within their respective areas of responsibility and lend support to the ESCs. The ESCs serve as advisory boards to the MEG, IRS’s highest-level governance board for overseeing IT projects. The ESCs are to monitor and track the progress and performance of ongoing IT investments against projected cost, schedule, and performance measures, and against quantitative and qualitative measures delivered through various mechanisms, including health assessments, reviews of corrective action plans, and milestone exit reviews. Specifically, a monthly health assessment is conducted to determine the extent to which investments are being effectively managed by reviewing key indicators such as cost and schedule. The health assessments are submitted to the ESCs for review and used by project managers to manage the project. A corrective action plan or baseline change request must be submitted to and approved by the appropriate governance boards for investments that vary more than 10 percent from their original baseline in cost, schedule, or scope. The Strategy and Capital Planning office’s Investment Management Office works with the project managers to validate all data used in investment reviews for accuracy and completeness.
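The control-phase trigger described above (a variance of more than 10 percent from the original baseline in cost, schedule, or scope requires a corrective action plan or baseline change request) can be sketched as a simple check. This is an illustrative sketch, not IRS code; the dimension names and sample figures are assumptions for the example.

```python
# Illustrative sketch of the control-phase rule described above:
# an investment that varies more than 10 percent from its original
# baseline in cost, schedule, or scope requires a corrective action
# plan or baseline change request. Names and values are hypothetical.

VARIANCE_THRESHOLD = 0.10  # 10 percent, per the control-phase rule

def variance(baseline: float, actual: float) -> float:
    """Relative variance of an actual value against its baseline."""
    return abs(actual - baseline) / baseline

def needs_corrective_action(baseline: dict, actual: dict) -> bool:
    """True if any tracked dimension exceeds the 10 percent threshold."""
    return any(
        variance(baseline[dim], actual[dim]) > VARIANCE_THRESHOLD
        for dim in ("cost", "schedule", "scope")
    )

# Hypothetical investment whose cost has slipped 15 percent.
baseline = {"cost": 100.0, "schedule": 24.0, "scope": 50.0}
actual = {"cost": 115.0, "schedule": 24.0, "scope": 50.0}
print(needs_corrective_action(baseline, actual))  # True: cost variance is 15%
```

In practice the monthly health assessments would feed actual figures into a check of this kind so that variances surface to the governance boards at the first breach of the threshold.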
During the control phase, the ESCs conduct milestone reviews to determine whether an investment is ready to proceed to the next stage of development. The IRS Chief Technology Officer is provided with summary IT portfolio cost and schedule reports, which include information on relevant performance measures. After an investment is deemed ready for deployment based on the decision of the Chief Technology Officer and governance bodies, it proceeds to the evaluate phase.

The evaluate phase involves an annual process to determine the extent to which a major IT investment has met its intended objectives and yielded expected benefits. Once the investment has been implemented, it should be continually monitored for performance, reliability, maintenance activities, cost, resource allocation, defects, problems, and changes. Two subprocesses are undertaken depending on the age and life-cycle stage of the investment: the postimplementation review and the operational analysis. Nonmajor investments are not required to undergo either of these processes.

A postimplementation review is done to identify an IT investment’s impact on mission performance, focusing on the investment’s impact on stakeholders and customers as well as its ability to deliver results and meet baseline goals. It is intended to identify potential improvements to IT project management practices and is performed by completing an assessment that compares the expected performance goals established during the select phase with actual results, and by identifying lessons learned for both the investment and the investment management process. The postimplementation review is required annually for all major IT investments that (1) fully exited the acquisition phase and moved into operations and maintenance in the past 6-12 months, (2) implemented a major release or modification, or (3) were retired or terminated during either development or operations.
Once the postimplementation review data have been collected and reviewed, the project sponsor is to give a formal presentation to the Chief Technology Officer that summarizes the investment evaluation and provides recommendations. According to IRS, because of resource constraints, postimplementation reviews are being performed only for Business Systems Modernization projects.

An operational analysis is to be conducted once an investment or meaningful project segment has moved into the operations and maintenance stage and has had a postimplementation review conducted. The purpose of an operational analysis is to identify investments that are potential candidates for modification, acceleration, replacement, or retirement. It is to be done by assessing the ability of a mature system or application to continue meeting user needs and performance goals, based on the performance of the system relative to the cost of replacing it. If the system is determined to be a potential candidate for replacement or modification, a business case will need to be developed in the preselect phase. The operational analysis is to be performed biannually for all major investments in operations and maintenance, but not for any major investments already identified as requiring replacement. If any changes to the investment’s acquisition baseline goals are required, the appropriate governance authority must approve them. Project managers are to report the operational analysis results on an annual basis as part of their budget submission. These results may also be fed back as lessons learned into the other phases of the investment management process.

To provide a method for evaluating and assessing how well an agency is selecting and managing its IT resources, GAO developed the ITIM framework. The ITIM framework is a maturity model composed of five progressive stages of maturity that an agency can achieve in its investment management capabilities.
It was developed on the basis of our research into the IT investment management practices of leading private- and public-sector organizations. In each of the five stages, the framework identifies critical processes for making successful IT investments. The maturity stages are cumulative; that is, in order to attain a higher stage, the agency must have institutionalized all of the critical processes at the lower stages. The framework can be used to assess the maturity of an agency’s investment management processes and as a tool for organizational improvement. The overriding purpose of the framework is to encourage investment processes that increase business value and mission performance, reduce risk, and increase accountability and transparency in the decision process. We have used the framework in several of our evaluations, and a number of agencies have adopted it. These agencies have used ITIM for purposes ranging from self-assessment to the redesign of their IT investment management processes.

ITIM’s five maturity stages represent steps toward achieving stable and mature processes for managing IT investments. Each stage builds on the lower stages; the successful attainment of each stage leads to improvement in the organization’s ability to manage its investments. With the exception of the first stage, each maturity stage is composed of critical processes that must be implemented and institutionalized in order for the organization to achieve that stage. These critical processes are further broken down into key practices that describe the types of activities that an organization should be performing to successfully implement each critical process. It is not unusual for an organization to be performing key practices from more than one maturity stage at the same time, but efforts to improve investment management capabilities should focus on implementing all lower-stage practices before addressing higher-stage practices.
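The cumulative nature of the maturity stages can be expressed as a simple rule: an organization attains a stage only when the critical processes of that stage and every lower stage are institutionalized, so a gap at any lower stage caps the rating. A minimal sketch of that rule, with Stage 1 attained by default because it imposes no critical processes:

```python
# Minimal sketch of the ITIM cumulative-maturity rule described above.
# Input maps stage number (2-5) to whether that stage's critical
# processes are fully institutionalized; Stage 1 has none and is
# attained by default.

def attained_stage(stage_complete: dict) -> int:
    """Return the highest maturity stage attained (1-5)."""
    stage = 1  # Stage 1 imposes no critical processes
    for s in range(2, 6):
        if stage_complete.get(s, False):
            stage = s
        else:
            break  # stages are cumulative: a gap caps the rating
    return stage

# Example: Stage 2 and Stage 4 processes are done, but Stage 3 is
# incomplete; the Stage 3 gap caps the organization at Stage 2.
print(attained_stage({2: True, 3: False, 4: True}))  # 2
```

This mirrors the point in the text that performing some higher-stage key practices does not raise the rating until all lower-stage practices are in place.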
In the ITIM framework, Stage 2 critical processes lay the foundation for sound IT investment processes by helping the agency attain successful, predictable, and repeatable investment control processes at the project level. Specifically, Stage 2 encompasses building a sound investment management foundation by establishing basic capabilities for selecting new IT projects. It involves developing the capability to control projects so that they finish predictably within established cost and schedule expectations, and the capability to identify potential exposures to risk and put in place strategies to mitigate that risk. It also involves instituting one or more IT investment boards within the organization, which includes defining each board’s membership, guidance policies, operations, roles, responsibilities, and authorities and, if appropriate, each board’s support staff.

The basic selection processes established in Stage 2 lay the foundation for more mature selection capabilities in Stage 3, which represents a major step forward in maturity: the agency moves from project-centric processes to a portfolio approach, evaluating potential investments by how well they support the agency’s mission, strategies, and goals. Stage 3 requires that an organization continually assess both proposed and ongoing projects as parts of a complete investment portfolio, an integrated and competing set of investment options. It focuses on establishing a consistent, well-defined perspective on the IT investment portfolio and maintaining mature, integrated selection (and reselection), control, and evaluation processes, which are to be evaluated during postimplementation reviews.
This portfolio perspective allows decision makers to consider the interaction among investments and the contributions to organizational mission goals and strategies that could be made by alternative portfolio selections, rather than to focus exclusively on the balance between the costs and benefits of individual investments. Stages 4 and 5 require the use of evaluation techniques to continuously improve both the investment portfolio and the investment processes in order to better achieve strategic outcomes. At Stage 4 maturity, an organization has the capacity to conduct IT succession activities and, therefore, can plan and implement the deselection of obsolete, high-risk, or low-value IT investments. An organization with Stage 5 maturity conducts proactive monitoring for breakthrough information technologies that will enable it to change and improve its business performance. Organizations implementing Stages 2 and 3 have in place the selection, control, and evaluation processes that are consistent with the Clinger-Cohen Act. Stages 4 and 5 define key attributes that are associated with the most capable organizations. Figure 2 shows the five ITIM stages of maturity and the critical processes associated with each stage. As defined by the model, each critical process consists of key practices that must be executed to implement the critical process.

In December 2010, OMB issued its 25 Point Implementation Plan to Reform Federal Information Technology Management, a plan spanning 18 months to reform IT management throughout the federal government. A key goal of the plan is to foster more effective management of large-scale IT programs. One way the plan recommends this be done is through streamlining governance and improving accountability.
According to the plan, this involves reforming and strengthening investment review boards to enable them to more adequately manage agency IT portfolios; redefining the role of agency chief information officers and the federal Chief Information Officers Council to focus on portfolio management; and rolling out “TechStat” reviews at the agency and bureau levels to focus attention on IT investments, including those that are poorly performing or may need to be retired if they no longer meet the needs of the organization.

In order to have the capabilities to effectively manage IT investments, an agency should (1) build an investment foundation by putting basic, project-level control and selection practices in place (Stage 2 capabilities) and (2) manage its projects as a portfolio of investments, treating them as an integrated package of competing investment options and pursuing those that best meet the strategic goals, objectives, and mission of the agency (Stage 3 capabilities).

IRS has established most of the foundational practices needed to manage its IT investments. Specifically, the agency has executed 30 of the 38 key practices identified by the ITIM as foundational for successful IT management (Stage 2), including all the practices needed to provide investment oversight and capture investment information, and most of those needed to ensure that projects support business needs. In addition, IRS has initiated efforts to manage its investments as a portfolio, which, if fully executed, will provide IRS with the capability to determine whether it is selecting the mix of investments that best meets the agency’s mission needs. Despite these strengths, weaknesses remain in IRS’s execution of certain critical Stage 2 processes.
Specifically, IRS does not have an enterprisewide IT investment board with sufficient representation from IT and business units that is responsible for the entire investment management process, and the agency has not fully documented its investment management process. In addition, IRS does not have a process, including defined criteria, for reselecting ongoing investments. Until it addresses these weaknesses, IRS cannot be assured that it is making the best decisions regarding whether its investments support ongoing and future business needs.

At the ITIM Stage 2 level of maturity, an organization has attained repeatable, successful IT project-level investment control and basic selection processes. Through these processes, the organization can identify expectation gaps early and take the appropriate steps to address them. According to ITIM, critical processes at Stage 2 include (1) defining IT investment board operations, (2) identifying the business needs for each IT investment, (3) developing a basic process for selecting new IT proposals and reselecting ongoing investments, (4) developing project-level investment control processes, and (5) collecting information about existing investments to inform investment management decisions. Table 2 describes the purpose of each of these Stage 2 critical processes.

IRS has executed most of the key practices associated with the Stage 2 processes. These include all of the key practices associated with providing investment oversight and capturing investment information and most of the practices associated with meeting business needs. However, IRS can improve the practices associated with the critical processes for instituting the investment board and selecting investments. Table 3 summarizes the status of IRS’s Stage 2 critical processes, showing how many associated key practices the agency has executed.

The establishment of decision-making bodies or boards is a key component of the IT investment management process.
At the Stage 2 level of maturity, organizations define one or more boards, provide resources to support the boards’ operations, and appoint members who have expertise in both operational and technical aspects of proposed investments. The boards should operate according to a written IT investment process guide that is tailored to the organization’s unique characteristics, thus ensuring that consistent and effective management practices are implemented across the organization. The organization selects board members who are knowledgeable about policies and procedures for managing investments. Organizations at the Stage 2 level of maturity also take steps to ensure that executives and line managers support and carry out the decisions of the investment board. According to the ITIM, organizations should, among other things, (1) establish an enterprisewide IT investment board composed of senior executives from IT and business units that is responsible for defining and implementing the organization’s IT governance process, (2) have a documented IT investment process that directs each investment board’s operations, and (3) establish management controls for ensuring that investment boards’ decisions are carried out. (The complete list of key practices is provided in table 4.) IRS has executed six of the eight key practices for this critical process. For example, the agency has adequate resources for supporting the investment management process. These include the Strategy and Capital Planning office, which supports the ERT in ensuring proposed investments align with the agency’s Senior Executive Team priorities, and lower-level governance boards, which support the MEG in overseeing projects once selected. IRS also has a portfolio management tool that supports the process. 
In addition, to ensure investment boards’ decisions are carried out, the agency has established for the MEG, as well as for the lower-level governance boards supporting it, a coordinator position responsible for recording and tracking all board action items until closure. Despite these strengths, IRS has not fully documented its investment management process. Specifically, while IRS has several documents defining various aspects of its investment management process, none fully describe the preselect phase, which IRS began using during the summer of 2009; the select phase; or the role of the Executive Review Team. In addition, the guidance does not specify the manner in which IT investment-related processes will be coordinated with other organizational plans, processes, and documents—including, at a minimum, the strategic plan, budget, and enterprise architecture. IRS’s Associate Chief Information Officer for Strategy and Planning acknowledged the shortcomings in its documentation and stated that the agency intends to update it by the end of the fiscal year. Until this happens, IRS cannot be assured that its investment management process will be carried out in a consistent manner or coordinated with other relevant processes to ensure investment decisions are fully informed. In addition, IRS does not have an enterprisewide investment board with sufficient representation from both IT and business units that is responsible for the entire investment management process. Specifically, the select phase is primarily carried out by two senior executives (the Executive Review Team), working with several individuals, rather than a larger body composed of representatives from IRS’s IT and business units, and as a result, the perspective and expertise represented are not as broad as they would be with a larger board. 
Further, the responsibility for the select and control phases lies with two different groups rather than a single body, and it is not clear whether or how these groups are coordinating to ensure that the results of one phase are used to inform decisions made in the other, as would happen with a single board responsible for implementing all phases of the investment management process. IRS officials recognized the need for this coordination and stated they would address it by briefing the MEG (and later the ESCs) semiannually on the results of the select phase. In addition, the Associate Chief Information Officer for Strategy and Planning stated that “touch-points” between the investment management phases would be included in the investment management guidance that is expected to be updated by the end of the fiscal year. However, until IRS takes these actions and provides for broader business and IT representation among the groups responsible for carrying out the selection phase, it will have less assurance that its decision-making process is being optimized. Table 4 shows the rating for each key practice required to implement the critical process for instituting the investment board at the Stage 2 level of maturity and summarizes the evidence that supports these ratings.

Defining business needs for each IT project helps to ensure that projects and systems support an organization’s business needs and meet users’ needs. This critical process ensures that an organization’s business objectives and its IT management strategy are linked.
According to the ITIM, effectively meeting business needs requires, among other things, (1) documenting business needs with stated goals and objectives, (2) identifying specific users and other beneficiaries of IT projects and systems, (3) providing adequate resources to ensure that projects and systems support the organization’s business needs and meet users’ needs, and (4) periodically evaluating the alignment of IT projects and systems with the organization’s strategic goals and objectives. (The complete list of key practices is provided in table 5.) IRS has executed five of the seven key practices for ensuring business needs are met. Specifically, IRS has documented its business mission, with stated goals and objectives, in its IRS Strategic Plan for fiscal years 2009-2013. In addition, resources are devoted to ensuring that IT projects and systems support the organization’s business needs and meet users’ needs, including a portfolio management tool, several investment support groups, and a business case template in which new project proposals are required to show alignment with strategic goals and Senior Executive Team priorities. Further, IRS defines and documents business needs for both proposed and ongoing IT projects in its portfolio management tool. In addition, IRS’s enterprise life-cycle guidance calls for users to participate in project management throughout each project’s life cycle. For the four projects we reviewed, we verified that business needs and specific users and other beneficiaries were identified and documented in the portfolio management tool. In addition, we verified that users are involved in project management throughout the life cycle of the projects. 
Finally, IRS has several processes for defining and documenting business needs for proposed and ongoing projects and systems, including the preselect process in which proposed investments are aligned with the Senior Executive Team priorities that reflect strategic goals and objectives and the annual update of IRS’s Enterprise Transition Plan. This document, which provides a 3- to 5-year road map for deploying IT investments, among other things, aligns investments with IRS’s business domains (i.e., functions). Last year, IRS also initiated a Business-Technology Alignment initiative to align business units’ strategic focus areas with key technologies. We verified that the four projects we reviewed were aligned with strategic goals and objectives. However, while IRS has documented procedures for ensuring that IT projects and systems support IRS’s business needs, these procedures do not address actions to be taken when ongoing projects no longer support business needs. In addition, while IRS stated that proposed projects that do not align with the Senior Executive Team priorities are not accepted, the agency did not describe a process for taking corrective actions when ongoing projects are not aligned with business needs or provide supporting examples. Until IRS performs all the key practices associated with the Meeting Business Needs critical process, it will have less assurance that it is investing in only those projects that are needed to meet the agency’s business needs. Table 5 shows the rating for each key practice required to implement the critical process for meeting business needs at the Stage 2 level of maturity and summarizes the evidence that supports these ratings. 
Selecting new IT proposals and reselecting ongoing investments require a well-defined and disciplined process to provide the agency’s investment boards, business units, and developers with a common understanding of the process and of the cost, benefit, schedule, and risk criteria that will be used both to select new projects and to reselect ongoing projects for continued funding. According to the ITIM, this critical process requires, among other things, (1) providing adequate resources for investment selection activities, (2) making funding decisions for new proposals according to an established process, and (3) using a defined selection process to select new investments and reselect ongoing investments. (The complete list of key practices is provided in table 6.)

IRS has executed 6 of the 10 key practices associated with selecting an investment. The agency has aligned its funding decisions with its selection process for new and ongoing investments by having the Financial Management Services group issue guidance that integrates the funding initiatives with the investment selection process. IRS’s portfolio management tool contains forms for entering information related to the select phase. We verified that the four systems we reviewed—the Integrated Customer Communication Environment system, the Integrated Collection System, the Integrated Data Retrieval System, and the Security Audit and Analysis System—used the forms in the portfolio management tool for entering select data. IRS has also documented criteria for analyzing, prioritizing, and selecting new investments in its capital planning guide that address its strategic goals.

However, weaknesses remain in the organization’s ability to select investments. Although IRS has documentation that addresses the investment selection process, the guidance does not fully document the current process being used.
For example, the guidance does not specify the roles and responsibilities of the ERT, which has been involved in the selection process over the last 2 years. As previously noted, IRS recognizes this shortcoming in its documentation and stated that it plans to address it by the end of the fiscal year. Until IRS has documented policies and procedures that reflect the current process for selecting new investments, there is a risk that projects will not be selected in a consistent manner and IRS will not have the transparency that is needed to increase effectiveness.

In addition, IRS has not established a process, including supporting criteria, for analyzing, prioritizing, and reselecting ongoing investments. MITS senior managers are expected to use a series of questions to evaluate their continued need for IT investments, in particular those in operations and maintenance. However, these questions are more focused on identifying savings and efficiencies than on evaluating the need for continued funding. Examples of these questions include the following: (1) Is there a less expensive option to provide maintenance support? (2) Can multiple project resources be combined to reduce costs? Until IRS establishes a process, including criteria, for reselecting investments, it will not be adequately assured that it is objectively continuing to fund the right projects. Considering that investments in operations and maintenance represent $1.88 billion, or 70 percent, of IRS’s total IT budget request of $2.67 billion for fiscal year 2012, IRS could be funding millions of dollars in investments that are no longer needed and whose funding could be made available for investments that better support the agency’s needs. Table 6 shows the rating for each key practice required to implement the critical process for selecting an investment at the Stage 2 level of maturity and summarizes the evidence that supports these ratings.
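The scale of the operations and maintenance portfolio cited above follows directly from the budget figures; a quick check of the arithmetic:

```python
# Quick check of the budget figures cited above: operations and
# maintenance as a share of IRS's fiscal year 2012 IT budget request.
oandm = 1.88e9   # operations and maintenance, dollars
total = 2.67e9   # total IT budget request, dollars
share = oandm / total
print(f"{share:.0%}")  # about 70 percent, matching the share cited
```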
An organization should effectively oversee its IT projects throughout all phases of their life cycles. An investment board should observe each project’s performance and progress toward predefined cost and schedule expectations as well as each project’s anticipated benefits and risk exposure. Providing effective oversight does not mean that a departmental board should micromanage each project; rather, the board should be actively involved in all IT investments and proposals that are high cost or high risk or have significant scope and duration and, at a minimum, should have a mechanism for maintaining visibility of other investments. The board should also employ early-warning systems that enable it to take corrective actions at the first sign of cost, schedule, and performance slippages. According to the ITIM, effective project oversight requires, among other things, (1) having written policies and procedures for management oversight; (2) developing and maintaining an approved management plan for each IT project; (3) making up-to-date cost and schedule data for each project available to the oversight boards; (4) having regular reviews by each investment board of each project’s performance against stated expectations; and (5) ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved. (The complete list of key practices is provided in table 7.)

IRS has executed all seven key practices associated with effective project oversight. The agency has developed written policies and procedures for management oversight of its investments.
These include (1) a tiered escalation guide that outlines the process for elevating a project to a higher level of control or governance for review, mitigation, and resolution when resolution cannot be reached at the project’s respective level of control or governance, and (2) written procedures and a template for conducting milestone exit reviews to assess a project’s readiness for moving to the next phase of its life cycle or exiting a milestone. In addition, the agency has adequate resources for overseeing IT projects, which lend support to the MEG, IRS’s highest governance board for overseeing projects during the control phase. To support the MEG, IRS has lower-level governance bodies—ESCs, Organizational Level Governance Boards, and Management Level Governance Boards—for overseeing the agency’s IT investments. For example, each quarter, the ESC cochairs review projects that are experiencing significant cost variances and schedule slippages. The agency also maintains an automated system for tracking project action items assigned during governance board meetings until they are resolved. IRS also requires project management plans that document cost, schedule, benefit, and risk expectations. We verified that these project management plans were developed for the four projects we reviewed. Table 7 shows the rating for each key practice required to provide investment oversight and summarizes the evidence that supports these ratings.

To make informed decisions regarding IT investments, an organization must be able to acquire, store, and retrieve pertinent information about each investment. During this critical process, the organization identifies its IT assets and uses a comprehensive repository to store pertinent investment information. This repository of IT investment information is used to track the organization’s IT resources to provide insights and trends about major IT cost and management drivers.
The information in the repository serves to highlight lessons learned and to support current and future investment decisions. According to the ITIM framework, effectively capturing investment information requires, among other things, (1) developing documented policies and procedures for identifying and collecting information about IT projects and systems to support the investment management process, (2) assigning an official with responsibility for ensuring that the investment information collected meets the needs of the investment management process, (3) collecting and retaining easily accessible relevant investment information relating to identified IT investments, and (4) ensuring that information repositories are used by decision makers to support investment management and related decisions. IRS has executed all six practices associated with capturing investment information. For example, according to IRS officials, the Chief Technology Officer is responsible for ensuring that collected investment information meets the needs of the investment management process. Also, the agency has adequate resources for supporting the process, including the Investment Planning and Selection Office, the Estimation Program Office, and the Investment Management Office, which work together in the development and compilation of relevant investment information. Additionally, IRS has a number of tools to identify and collect investment information, including a portfolio management tool and project and action item tracking systems. Captured investment information is easily accessible to decision makers through reports generated by IRS’s portfolio management tool, quarterly briefings, and monthly health assessments that use six key performance indicators to determine an investment’s status. Table 8 shows the rating for each key practice required to implement this Stage 2 critical process and summarizes the evidence that supports these ratings. 
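The monthly health assessments mentioned above roll six key performance indicators into a single investment status. The sketch below illustrates one plausible roll-up rule; the indicator names and the rule itself are assumptions for illustration, since the report states only that six indicators are used.

```python
# Illustrative roll-up of six key performance indicators into one monthly
# health status. Indicator names and the roll-up rule are assumptions;
# the source states only that six indicators are used.

INDICATORS = ["cost", "schedule", "scope", "risk", "benefits", "quality"]

def health_status(ratings: dict) -> str:
    """ratings maps each indicator to 'green', 'yellow', or 'red';
    the worst individual rating drives the overall status."""
    values = [ratings[name] for name in INDICATORS]
    if "red" in values:
        return "red"
    if "yellow" in values:
        return "yellow"
    return "green"
```

A "worst rating wins" rule keeps the assessment conservative: a single red indicator flags the whole investment for governance attention.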
Once an agency has attained Stage 2 maturity, it needs to implement critical processes for managing its investments as a portfolio (Stage 3). Such capabilities enable an agency to consider its investments comprehensively, so that collectively the investments optimally address the organization’s mission, strategic goals, and objectives. Managing IT investments as a portfolio also allows an organization to determine its priorities and make decisions about which projects to fund and continue to fund based on analyses of the relative organizational value and risks of all projects, including projects that are proposed, under development, and in operation. Although investments may initially be organized into subordinate portfolios—based on, for example, business lines or life-cycle stages—and managed by subordinate investment boards, they should ultimately be aggregated into this enterprise-level portfolio. According to the ITIM framework, Stage 3 maturity includes (1) defining the portfolio criteria, (2) creating the portfolio, (3) evaluating the portfolio, and (4) conducting postimplementation reviews. During our review, we noted activities the agency had performed to manage its investments as a portfolio. For example, under the critical process for creating the portfolio, the agency provided evidence that it was capturing and maintaining investment information for future reference and that it had developed an Enterprise Portfolio and Sequencing Plan to guide its IT investments. IRS also has begun addressing the critical process for conducting postimplementation reviews. The agency has developed guidance that (1) specifies that the review should be conducted 6-12 months after a project’s deployment, (2) defines roles and responsibilities for conducting the review, and (3) identifies templates for supporting the process. IRS provided examples of the results of two such reviews. 
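The postimplementation review timing rule above (a review 6 to 12 months after a project's deployment) can be expressed directly. This is a minimal sketch that approximates a month as 30 days; the function names are ours, not IRS's.

```python
from datetime import date, timedelta

def pir_window(deployment: date) -> tuple:
    """Return the 6-to-12-month window after deployment in which a
    postimplementation review should be conducted (a month is
    approximated as 30 days for simplicity)."""
    return (deployment + timedelta(days=6 * 30),
            deployment + timedelta(days=12 * 30))

def pir_due(deployment: date, today: date) -> bool:
    """True if today falls inside the review window."""
    start, end = pir_window(deployment)
    return start <= today <= end
```

For a project deployed January 1, 2010, the window opens at the end of June 2010 and closes in late December 2010.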
According to IRS officials, the agency has not concentrated on implementing Stage 3 key practices because it has focused its resources on establishing the Stage 2 practices associated with building the IT investment management foundation. Full implementation of the Stage 3 critical processes associated with portfolio management will provide IRS with the capability to determine whether it is selecting the mix of investments that best meets the agency's mission needs. Given the importance of IT to IRS's mission, it is critical that the agency adopt an effective institutional approach to IT investment management. To its credit, IRS has implemented most of the key practices for such an approach, laying the groundwork for greater maturity. Most notably, the agency has established a strong process for overseeing its investments, implementing all the key practices associated with providing investment oversight. This should provide greater assurance that projects' progress in meeting cost, schedule, risk, and benefit expectations is tracked and that corrective actions are taken when these expectations are not being met. However, IRS has yet to fully document its investment management process, which increases the risk that the process will not be implemented consistently or institutionalized. In addition, because of the Executive Review Team's composition and the manner in which responsibilities for the select and control phases are assigned, IRS may not be optimizing its investment decision-making process. Finally, IRS has not established a structured process, including supporting criteria, for reselecting ongoing projects. Considering the size of IRS's IT budget, not having a process for reselecting ongoing projects could result in potentially millions of dollars being spent with no assurance that the funds are being used wisely. We recommend that the Commissioner of Internal Revenue direct the appropriate officials to take the following four actions.
ensure that the investment management guidance that is expected to be updated by the end of the fiscal year fully documents the preselect and select phases and the role of the Executive Review Team, and specifies the manner in which IT investment-related processes will be coordinated;

assign investment management responsibilities to optimize the decision-making process by ensuring that (1) selection decisions are made by a group that includes sufficient representation from business and IT units to provide broad perspective and expertise, and (2) investment decisions are fully informed by the results of relevant phases of the investment management process;

define and implement a process for taking corrective actions when ongoing projects are not aligned with strategic goals and objectives; and

define and implement a process, including defined criteria, for reselecting ongoing projects.

In written comments on a draft of this report, IRS's Commissioner concurred with our recommendations and stated that the agency would provide a detailed corrective action plan addressing each recommendation. The Commissioner further stated that IRS appreciated that the report recognized the progress the agency has made in providing investment oversight and capturing investment information. He also noted that IRS is reviewing its existing governance structure and the accountabilities of the various boards and is in the process of creating an Investment Review Board with broad senior-level representation. IRS's comments are reprinted in appendix II. We are sending copies of this report to interested congressional committees and the Commissioner of Internal Revenue. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objective of our review was to assess the Internal Revenue Service's (IRS) capabilities for managing its information technology (IT) investments. Our analysis was based on practices contained in GAO's Information Technology Investment Management (ITIM) framework and the framework's associated evaluation methodology, and focused on the agency's implementation of critical processes and key practices for managing its business systems investments. To address this objective, we asked IRS to complete a self-assessment of its investment management process and provide supporting documentation. We reviewed the results of this self-assessment of Stage 2 practices, compared them against our ITIM framework, and validated and updated the results through document reviews and interviews with agency officials. We reviewed written policies, procedures, guidance, and other documentation that provided evidence of executed practices, including IRS's Capital Planning and Investment Control Guide, Enterprise Transition Plan, Tiered Program Management Escalation Guide, Enterprise and Domain Processes and Procedures Manual–Release 1.3, Program Governance Office Procedure Guide v.1.0, Post Implementation Review Process Guide, Exhibit 300 Scoring Guide, portfolio management tool guidance, and various memorandums. We also reviewed Modernization and Information Technology Services Enterprise Governance committee, Executive Steering Committee, Organization Level Governance Board, and Management Level Governance Board meeting materials and other documentation. In addition, we conducted interviews with officials from IRS's Modernization and Information Technology Services organization, Strategy and Capital Planning office, and Financial Management Services group.
Together, these three organizations have the responsibility to oversee and ensure that IRS's IT investment management process is implemented and followed. In comparing the evidence collected from our document reviews and interviews with the key practices in our ITIM framework, we rated the key practices as "executed" on the basis of whether the agency demonstrated (by providing evidence of performance) that it had met the criteria of the key practice. A key practice was rated as "not executed" when we found insufficient evidence of a practice during the review or when we determined that there were significant weaknesses in IRS's execution of the key practice. In addition, IRS was provided with the opportunity to produce evidence for key practices rated as not executed. We did not assess progress in establishing the capabilities found in Stages 3, 4, and 5 because agency officials acknowledged that IRS had not executed the key practices in these higher-maturity stages. We confirmed our analysis of IRS's investment management process by examining supporting documentation. However, it was not within our scope to evaluate the outputs or outcomes of this process. As part of our analysis, we selected four projects as case studies to verify that the critical processes and key practices were being applied. The projects selected (1) are in different life-cycle phases, (2) represent a mix of major and nonmajor investments (different levels of funding), and (3) support different business domains. The four projects are described below: The Integrated Collection System is a major information system within IRS's filing and payment compliance business domain that is to improve revenue collections by providing electronic case processing to revenue officers and their managers. The Integrated Collection System is to give field revenue officers access to the most current taxpayer information using laptop computers for quicker case resolution and improved customer service.
The system has investments in development and operations and maintenance. It is a major system and had a fiscal year 2010 cost of approximately $9.1 million. The Integrated Customer Communication Environment, within the customer service business domain, is to support issue resolution by providing taxpayers with fast and efficient access to the information they need for pre- and postfiling. These applications use voice response, Internet, and other computer technology to provide quick, accurate, and convenient service to taxpayers 24 hours a day in real time. The system has investments in development and operations and maintenance. It is a major system and had a fiscal year 2010 cost of approximately $16.6 million. The Integrated Data Retrieval System is a mission-critical system within IRS’s managing taxpayer accounts business domain, consisting of databases and operating programs that support IRS employees working active tax cases within each business function across the entire IRS. This system manages data that have been retrieved from the Tax Master Files, allowing IRS employees to take specific actions on taxpayer account issues, track status, and post transaction updates back to the Master Files. The system has investments in development and operations and maintenance. It is a major system and had a fiscal year 2010 cost of approximately $19.6 million. The Security Audit and Analysis System, within the security services and privacy business domain, implements a data warehousing solution to provide online analytical processing of audit trail data. The system is to enable IRS to detect potential unauthorized accesses to IRS systems and provide analysis capabilities and reporting on data for all modernized and some current processing environment applications. The system has investments in development and operations and maintenance. It is a nonmajor investment and had a fiscal year 2010 cost of approximately $1.6 million. 
For these four projects, we reviewed the portfolio management tool documentation associated with each project and status reports. We also obtained investment information from the boards responsible for managing the projects. We conducted this performance audit from January 2010 to July 2011 at IRS's offices in the Washington, D.C., area. Our work was done in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the individual named above, Sabine R. Paul, Assistant Director; William G. Barrick; James M. Crimmer; Lee A. McCracken; and Tomas Ramirez made key contributions to this report.

The Internal Revenue Service (IRS) relies extensively on information technology (IT) to carry out its mission. For fiscal year 2012, IRS requested about $2.67 billion for IT. Given the size and significance of these investments, GAO was asked to evaluate IRS's capabilities for managing its IT investments. To address this objective, GAO reviewed IRS policies and procedures and assessed them using GAO's IT investment management (ITIM) framework and associated methodology, focusing on the framework's stage relevant to building a foundation for investment management (Stage 2). GAO also interviewed officials responsible for IRS's investment management process. IRS has established most of the foundational practices needed to manage its IT investments. Specifically, the agency has executed 30 of the 38 key practices identified by the ITIM framework as foundational for successful IT investment management, including all the practices needed to provide investment oversight and capture investment information.
For example, IRS has defined and implemented a tiered governance structure to oversee its projects and has several mechanisms for the boards to regularly review IT investments' performance. The agency has also established procedures for identifying and collecting information about its investments to inform decision making. Despite these strengths, IRS can improve its investment management process in two key areas. First, IRS does not have an enterprisewide IT investment board with sufficient representation from IT and business units that is responsible for the entire investment management process, and as a result may not be optimizing its decision-making process. Specifically, project selection is carried out by a team of two senior executives representing IRS's deputy commissioners, rather than a larger body composed of representatives from both IT and business units, and as a result, the perspective and expertise represented are not as broad as they would be with a larger board. Further, because the responsibility for the select and control phases lies with different groups rather than a single body, results of one process are not used to inform decisions made in the other, as would happen with a single board responsible for implementing all phases of the investment management process. IRS stated that it plans to address this coordination issue. Second, IRS does not have a process, including defined criteria, for reselecting (i.e., deciding whether to continue funding) ongoing projects. Given the size of its IT budget, IRS could be spending millions of dollars with no assurance that the funds are being used wisely. GAO is making recommendations to the Commissioner of Internal Revenue, including assigning responsibilities for implementing the investment management process to optimize decision making, and defining and implementing a process for deciding whether to continue funding ongoing projects. 
In commenting on a draft of this report, IRS concurred with GAO's recommendations. |
The Tariff Act of 1930, as amended, generally requires imported articles—such as clothing, appliances, and canned and frozen goods—to be marked by country of origin. Under the statute, however, certain articles, including fresh produce, are not required to be marked individually. For these items, the container holding the article must be marked by the country of origin. U.S. Customs Service rulings provide that when fresh produce is taken out of its container and put into an open bin or display rack, there is no obligation to identify the items by the country of origin. Three states—Florida, Maine, and Texas—have enacted country-of-origin labeling laws for fresh produce. Florida requires all imported fresh produce to be identified by country of origin by, for example, marking each produce item or placing a sign or label adjacent to the bin. Maine requires country-of-origin labeling for fresh produce at the retail level when it has been imported from countries identified as having specific pesticide violations. Texas requires country-of-origin labeling for fresh grapefruit. In addition, labeling laws for fresh produce have been proposed in at least five other states: California, Connecticut, Oregon, Rhode Island, and Virginia. Most large grocery stores carry over 200 produce items. Fresh produce is often imported to fill seasonal needs when U.S. production is not sufficient to cover demand or to satisfy the demand for tropical fruits not normally grown in the United States. Two-thirds of imported fresh produce arrives between December and April, when U.S. production is low and limited to the southern portions of the country. The majority of these imports are warm-season vegetables like peppers, squash, and cucumbers, although some imports, such as tomatoes, occur year round. Total U.S. consumption of fresh produce has increased 43 percent since 1980, from about 56 billion pounds to nearly 80 billion pounds in 1997, the latest year for which the U.S.
Department of Agriculture (USDA) has compiled such data. During this same period, the amount of fresh produce the United States imported more than doubled—from 7.5 billion pounds to 16 billion pounds. Domestic production increased by one-third, from about 48 billion to about 64 billion pounds. In 1997, most imported produce came from Mexico, Canada, and Chile, as shown in figure 1. The United States is also the world's largest exporter of fresh produce, valued at $2.9 billion in 1998. Three-fourths of exported U.S. produce goes to Canada, the European Union, Japan, Hong Kong, and Mexico. Complying with mandatory country-of-origin labeling for fresh produce could change the way retailers and others involved in the production and distribution of produce do business, thereby affecting their costs and consumers' choices. Furthermore, such a law could be difficult to enforce. The fresh produce industry and retailers will have to incur costs to comply with a mandatory country-of-origin labeling law. The additional efforts and associated costs for compliance would depend on the specific requirements of the law and the extent to which current practices would have to be changed. For example, some produce is already labeled with a brand sticker. In these cases, compliance would require adding the name of the country to the sticker. For unlabeled produce, the additional effort would be more significant. Associations we spoke with representing grocery retailers are particularly concerned that a labeling law would be unduly burdensome for a number of reasons. First, retailers would have to display the same produce items from different countries separately if each individual item is not marked, which in some cases would result in only partially filled bins. According to these retailers, consumers are less likely to buy from such bins because they are less appealing, causing the retailers to lose sales.
Second, retailers report that they do not have sufficient display space to separate produce and still stock all the different varieties consumers want. Large grocery stores usually carry over 200 produce items. Third, because the country of origin of retailers’ produce shipments may vary each week, retailers would incur costs to change store signs and labels to reflect the origins of the different shipments. According to the Food Marketing Institute, an association representing grocery retailers, it would take about 2 staff hours per store per week to ensure that imported produce is properly labeled. Costs would also be incurred if retailers were required to maintain paperwork at each store as evidence of the origin of these multiple shipments. Florida does not require its retail stores to maintain paperwork documenting the country of origin. It is unclear who would bear the burden of compliance. A law requiring retailers to ensure that produce is properly labeled would initially place at least some of the compliance costs on retailers. However, retailers would not necessarily bear all these costs. Retailers could raise prices to pass their costs to consumers. However, if consumers reduce their purchases of fresh produce in response, retailers will absorb part of the cost through lower sales volume. For produce that does not have close substitutes, and for which consumer demand is relatively insensitive to price changes, retailers are likely to be more successful in passing costs on to consumers through price increases without experiencing significant declines in sales volume. Retailers may decide to require their suppliers to either package produce or label individual produce items. If retailers can impose this requirement without paying more for the same quantity and quality, they will have shifted the labeling costs to their suppliers. Consumer responses may also influence the eventual effect of a country-of-origin law. 
If consumers prefer domestic produce, they may buy more domestic and less imported produce, which would allow domestic producers to gain market share and/or raise their prices. However, if foreign countries respond by imposing their own labeling requirements, and if this resulted in foreign consumers’ buying less U.S. produce, then U.S. exports could suffer. It is also possible that a country-of-origin labeling requirement would result in fewer choices for consumers. This would occur if retailers decide to stock more prepackaged produce, which would already be labeled, and fewer bulk items, which would have to be labeled. Furthermore, if a law required labeling for imported produce only, retailers could decide to stock fewer imported produce items in order to avoid the compliance burden. An additional cost would be borne by restaurants and other food service providers if the labeling law applies to them. They would have to let their customers know the country of origin of the produce they use, which could involve, for example, changing information on menus each time the source of the produce changed. According to the National Restaurant Association, the cost of changing menus would be “prohibitive.” According to Food and Drug Administration (FDA) and USDA officials we spoke with, enforcing a labeling law would require significant additional resources. The agency enforcing such a law would have to implement a system to ensure that the identity of produce is maintained throughout the distribution chain. While inspectors could ensure that retailers have signs or labels in place and could review documentation—if it were available—they might not be able to determine from a visual inspection that produce in a particular bin was from the country designated on the sign or label. Such documentation is often unavailable at the retail store. It is also unclear who would be responsible for these inspections. 
Grocery store inspections for compliance with federal health and safety laws are now generally conducted by state and local officials, often under memorandums of understanding with the Food and Drug Administration. USDA officials pointed out that if state and local governments were to carry out the inspections required by a federal country-of-origin labeling law, such a law would have to specify the states’ enforcement role and provide funding for enforcement activities. In commenting on a Senate amendment to the fiscal year 1999 appropriations bill regarding country-of-origin labeling, FDA expressed “reservations about its priority as a public health issue, its cost to administer, and [FDA’s] ability to enforce it.” FDA further noted that the cost of enforcement “would be significant,” and “it is unclear that enforcement would even be possible.” Among other enforcement problems, FDA cited the need for accompanying paperwork to verify country-of-origin labels and said this would place “an enormous burden” on industry. FDA estimated that the federal cost for 1-year’s monitoring under this proposed amendment would be about $56 million. The three states that have labeling laws vary in their degree of enforcement. In Florida, which has a mandatory labeling law for all imported produce, enforcement occurs during the course of routine state health inspections that are conducted about twice each year in every store. During the routine inspections, officials check the shipping boxes and packages in the store against the display signs or labels—a task they estimate requires about 15 minutes per visit. However, they said they sometimes have no reliable means to verify the accuracy of these signs and labels. When violations are found, Florida officials said that it takes 5 minutes to process paperwork for new violations and 30 minutes for repeat violations. Figure 2 shows produce labeled in Florida grocery stores. 
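Florida's figures imply a modest per-store enforcement burden, which can be roughed out as follows. Only the twice-yearly visits and roughly 15 minutes per visit come from the report; the store count in the usage example is a placeholder assumption, not a Florida statistic.

```python
# Back-of-envelope estimate of statewide inspection effort, using the
# twice-yearly visits and ~15 minutes per visit reported for Florida.
# The number of stores is a placeholder assumption for illustration.

def annual_inspection_hours(stores: int,
                            visits_per_year: int = 2,
                            minutes_per_visit: int = 15) -> float:
    """Total annual staff hours spent checking country-of-origin labels."""
    return stores * visits_per_year * minutes_per_visit / 60

# e.g., for a hypothetical 4,000 stores:
# annual_inspection_hours(4000) -> 2000.0 staff hours per year
```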
According to the Inspection Manager for Maine’s Department of Agriculture, Maine does not enforce its country-of-origin labeling requirements because the list of countries to be identified keeps changing and paperwork to verify the country of origin is often unavailable. In Texas, the labeling law applies only to grapefruit. According to a Texas Department of Agriculture official, grapefruit is rarely imported into Texas, and the labeling law is not currently being enforced. Depending on what it might require and how it might be implemented, a law mandating country-of-origin labeling for fresh produce could have adverse trade implications. U.S. trading partners might challenge the law’s consistency with international trade obligations or take steps to increase their own country-of-origin labeling requirements. Moreover, according to USDA officials, enacting a labeling law could make it more difficult for the United States to oppose foreign countries’ labeling requirements that it finds objectionable. Any labeling law would need to be consistent with U.S. international trade obligations in order to withstand potential challenges from U.S. trading partners. International trade rules that the United States has agreed to, such as those embodied in the World Trade Organization (WTO) and the North American Free Trade Agreement (NAFTA), permit country-of-origin labeling. For example, WTO provisions recognize the need to protect consumers from inaccurate information while minimizing the difficulties and inconveniences labeling measures may cause to commerce. WTO rules require, among other things, that the labeling of imported products must not result in serious damage to the product, a material reduction in its value, or an unreasonable increase in its cost. Correspondence from the Office of the U.S. Trade Representative (USTR) stated that our trading partners could raise concerns that country-of-origin labeling requirements adversely affect their exports by raising costs. 
Similarly, NAFTA requires that any country-of-origin marking requirement must be applied in a manner that would minimize difficulties, costs, and inconveniences to a country’s commerce. USTR and Department of State officials stated that Mexico requested consultations to discuss its concerns that one recently proposed U.S. country-of-origin labeling bill would violate certain NAFTA provisions on country-of-origin marking. USDA officials and food industry representatives expressed concern that mandatory country-of-origin labeling at the retail level could be viewed as a trade barrier and might lead to actions that could hurt U.S. exports. For example, a country currently exporting produce to the United States may be concerned about additional costs if its exporters are required to label loose produce. Such a country could respond by enacting or more strictly enforcing retail labeling laws that could hinder U.S. exports. The officials were also concerned that adopting mandatory country-of-origin labeling at the retail level could complicate U.S. efforts to address other countries’ labeling laws that the United States found objectionable. According to USDA officials, the United States has opposed certain country-of-origin labeling in other countries for various reasons, including concerns about the potential of those laws to raise the costs of U.S. exports and discourage consumers from purchasing imported goods. While U.S. representatives have worked informally and cooperatively to oppose certain foreign country-of-origin labeling requirements, the United States has not formally challenged any such requirements within the WTO. WTO officials said they were unaware of any formal challenges to any country’s country-of-origin labeling requirement. However, USDA and WTO officials agreed that the absence of any formal challenge does not necessarily indicate that existing country-of-origin labeling requirements are consistent with WTO rules. 
Moreover, the absence of formal challenges to existing laws does not preclude these laws from being challenged in the future. Finally, because the United States is such a large importer and exporter of fresh produce, officials with USDA and the Department of State pointed out that a U.S. labeling law is more likely to be formally challenged than are other countries' laws. In February and March 1999, we surveyed U.S. embassy agricultural attachés in 45 countries with which the United States trades agricultural products to determine which countries have and enforce country-of-origin labeling requirements for fresh produce at the retail level. Our survey included 28 countries that account for most of the U.S. produce imports and exports and 17 countries that USDA identified as having produce labeling requirements. Of the 28 countries, 13 (46 percent) require country-of-origin labeling for bulk produce at the retail level, and 15 require such labeling for packaged produce. Attachés reported that the countries with requirements generally have a high level of compliance and moderate to high levels of enforcement. Appendix I identifies the U.S. trading partners that require country-of-origin labeling for fresh produce and the scope of their requirements. Considerable time—several weeks or months—generally passes between the outbreak of a produce-related illness, the identification of the cause, and a warning to the public about the risks of eating a specific produce item, according to the Centers for Disease Control and Prevention (CDC) and FDA officials. By the time a warning is issued, country-of-origin labeling would benefit consumers only if they remembered the country of origin or still had the produce, or if the produce were still in the store. Consequently, country-of-origin labeling would be of limited value in helping consumers respond to a warning of an outbreak.
Several factors contribute to the delays in identifying causes of foodborne illness, including how quickly consumers become ill after purchasing and eating the food and whether they seek medical attention. State and local agencies report known or suspected foodborne illnesses to CDC, which uses this information to identify patterns of related illnesses—outbreaks—and to work with state, local, and FDA officials to identify the source. Once the source is identified, state and local public health officials generally issue a warning to the public if the product is still available in the marketplace. In most cases of foodborne illness, however, officials are not able to identify the specific point at which the food associated with the outbreak became contaminated. Between 1990 and 1998, CDC identified 98 outbreaks of foodborne illnesses linked to fresh produce. In 86 of these cases, the point of contamination was never identified. The remaining 12 cases were traced to contamination during food handling or to contaminated seed. Appendix II provides information on outbreaks of illnesses related to contaminated fresh produce since 1990. Because of the time needed to identify the cause of an outbreak, country-of-origin labeling would not generally be useful in preventing more consumers from becoming ill. For example, when cyclospora-contaminated raspberries from Guatemala caused outbreaks of illnesses in 1996 and 1997, many individuals did not become ill until a week or more after they ate the fruit. CDC officials said that country-of-origin labeling might be a starting point in tracing the source of contamination if a person who had eaten a contaminated product remembered the source for that product. However, they said that more detailed information identifying every step from farm to table—for both domestically grown and imported produce—would be of greater use in tracing the source of an outbreak and identifying the practices that resulted in the contamination.
Identifying such practices may enable officials to devise control measures that could be used throughout the industry to decrease the potential for additional illnesses. CDC officials also pointed out that a country-of-origin labeling law would be more useful to them if it required retailers to keep better records, including invoices and shipping documents. Such records would allow investigators to identify the source of produce that was in grocery stores at a particular time in the past. Finally, FDA and CDC officials observed that a law exempting food service establishments from country-of-origin labeling would be of limited value because many identified outbreaks have been traced to food served in restaurants or at catered meals. U.S. consumers are eating more meals, including more fresh produce, outside the home. Indeed, a significant portion of the illnesses that were traced to Guatemalan raspberries were contracted from meals eaten outside the home.

Surveys representing households nationwide, sponsored by the produce industry between 1990 and 1998, showed that between 74 and 83 percent of consumers favor mandatory country-of-origin labeling for fresh produce at the retail level. However, when asked to rate the importance of several types of labeling information, households reported information on freshness as most important, followed by information on nutrition, storage and handling, and preparation tips. Information on country of origin was ranked fifth, as shown in figure 3. In addition, most consumers would prefer to buy U.S. produce if all other factors—price, taste, and appearance—were equal. And about half of all consumers would be willing to pay “a little more to get U.S. produce.” However, the survey did not specify the additional amount that consumers would be willing to pay. Furthermore, according to a 1998 industry-sponsored nationwide survey, 70 percent of consumers believe that domestically grown produce is safer.
In the same survey, about half of consumers reported having concerns about health and safety and growing conditions, and about one-third had concerns about cleanliness and handling, when buying imported produce. Despite these concerns, officials with USDA, CDC, and FDA told us that sufficient data are not available to compare the safety of domestic and imported produce. However, CDC officials told us that, in the absence of specific food production controls, the potential for contaminated produce increases where poor sanitary conditions and polluted water are more prevalent. In addition, Consumers Union—a nationally recognized consumer group—used data collected by USDA’s Agricultural Marketing Service to compare the extent to which multiple pesticide residues were found in selected domestic and imported fresh produce. For its analysis, Consumers Union developed a toxicity index, which it used to compare the pesticide residues. According to this analysis, pesticide residues on imported peaches, winter squash, apples, and green beans had lower toxicity levels than those found on their domestically grown counterparts. In contrast, the pesticide residues on domestically grown tomatoes and grapes were less toxic than their imported counterparts. The study acknowledges that almost all of the pesticide residues on the samples were within the tolerance levels allowed by the Environmental Protection Agency (EPA). We did not independently determine the validity of the toxicity index developed by Consumers Union or verify its analysis or results. However, according to FDA officials, pesticide residues present a lower health risk than the disease-causing bacteria that can be found on food.

We provided the departments of Agriculture and State, Office of the U.S. Trade Representative, CDC, U.S. Customs Service, EPA, and FDA with a draft of this report for their review and comment.
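Consumers Union’s actual index and data are not reproduced in this report. As a rough sketch of how a toxicity-weighted residue comparison of this general kind could work, the fragment below weights each detected residue by a relative toxicity factor and averages across samples; all pesticide names, weights, and residue levels are hypothetical, not Consumers Union’s figures.

```python
# Hypothetical sketch of a toxicity-weighted residue index.
# Not Consumers Union's actual method; names and numbers are invented
# for illustration only.
def toxicity_index(samples, toxicity_weights):
    """Average toxicity-weighted residue score across samples.

    samples: list of dicts mapping pesticide name -> residue level (ppm)
    toxicity_weights: dict mapping pesticide name -> relative toxicity factor
    """
    scores = []
    for sample in samples:
        score = sum(level * toxicity_weights.get(pesticide, 0.0)
                    for pesticide, level in sample.items())
        scores.append(score)
    return sum(scores) / len(scores) if scores else 0.0

# Hypothetical residue data for domestic vs. imported samples of one commodity.
domestic = [{"captan": 0.5, "iprodione": 0.2}, {"captan": 0.3}]
imported = [{"captan": 0.1}, {"iprodione": 0.1}]
weights = {"captan": 1.0, "iprodione": 4.0}  # invented relative toxicity factors

print(toxicity_index(domestic, weights))  # higher index -> more toxic residue mix
print(toxicity_index(imported, weights))
```

The design point such an index captures is that a simple average of residue levels would treat all pesticides alike; weighting by relative toxicity is what allows a commodity with fewer but more toxic residues to score higher than one with more but milder residues.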
These agencies generally agreed with the facts presented in the report and provided technical comments, which we incorporated as appropriate. Officials commenting on the report included the Deputy Administrator, Fruit and Vegetable Programs, Agricultural Marketing Service, USDA; the Economic/Commercial Officer in the Agricultural Trade Policy Division, Department of State; the Director of Agricultural Affairs and Technical Barriers to Trade, Office of the U.S. Trade Representative; the Director of Food Safety Initiative Activities, Division of Bacterial and Mycotic Diseases, National Center for Infectious Diseases, CDC; a Senior Attorney, Office of Regulations and Rulings, U.S. Customs Service; and the Interim Associate Commissioner for Legislative Affairs, FDA. We performed our review from November 1998 through March 1999 in accordance with generally accepted government auditing standards. Our scope and methodology are discussed in appendix III.

Copies of this report will be sent to Senator Richard Lugar, Chairman, and Senator Tom Harkin, Ranking Minority Member, Senate Committee on Agriculture, Nutrition, and Forestry; and Representative Larry Combest, Chairman, and Representative Charles Stenholm, Ranking Minority Member, House Committee on Agriculture. We are also sending copies to the Honorable Dan Glickman, Secretary of Agriculture; the Honorable Madeleine Korbel Albright, Secretary of State; the Honorable Jane Henney, M.D., Commissioner, Food and Drug Administration; the Honorable Jeffrey P. Koplan, M.D., Director, Centers for Disease Control and Prevention; the Honorable Raymond W. Kelly, Commissioner of the U.S. Customs Service; the Honorable Jacob J. Lew, Office of Management and Budget; and Ambassador Charlene Barshefsky, the U.S. Trade Representative. We will also make copies available to others upon request. If you would like more information on this report, please contact me at (202) 512-5138. Major contributors to this report are listed in appendix IV.
This appendix identifies the U.S. trading partners that have country-of-origin labeling requirements for fresh produce at the retail level, the nature and scope of these requirements, and the record of U.S. challenges to those requirements. Table I.1 identifies U.S. trading partner countries, their requirements for loose or packaged fresh produce to be labeled at the retail level, and the degree of compliance with and enforcement of those requirements. This information is based on our survey of U.S. agricultural attachés for 45 countries. Of the 45 countries, 28 account for most of U.S. trade in produce. We also surveyed the 17 countries that were not among the largest produce trading partners but were identified in the Foreign Agricultural Service’s 1998 Foreign Country of Origin Labeling Survey as having produce labeling requirements. As the table indicates, 13 of the 28 major produce trading partners require country-of-origin labeling for loose produce at the retail level, and 15 require labeling for packaged produce. Attachés reported that these countries generally have a high level of compliance and a moderate to high level of enforcement. Officials of the World Trade Organization, the departments of Agriculture and State, the Office of the U.S. Trade Representative, and U.S. agricultural attachés were not able to identify any formal U.S. challenges to country-of-origin labeling requirements for fresh produce.

[Table I.1, Trading Partner Countries’ Requirements for Country-of-Origin Labeling of Fresh Produce at the Retail Level, is not reproduced here; a table note indicates that agricultural attachés were uncertain about some of this information.]

Table II.1 provides information on the 98 outbreaks of produce-related illnesses that were identified between 1990 and 1998 by the Centers for Disease Control and Prevention (CDC). Contamination may occur when fresh produce is grown, harvested, washed, sorted, packed, transported, or prepared.
As the table shows, food safety officials could not identify the source of the contamination in 86 of these cases. Food safety experts believe that there is not sufficient information to assess the relative safety of fresh produce from the United States and foreign countries.

[Table II.1, which lists each outbreak’s food item, pathogen, country or state of origin, and known or suspected point of contamination (for example, contaminated seed; wash water or ice used in packing; nonpotable water in pesticide spray mixes; field contamination; food handlers; and cross contamination from raw meat, ground beef, or turkey), is not reproduced here; for many outbreaks the point of contamination was unknown or the information was unavailable.]

As requested by the Senate and House conferees for the Omnibus Consolidated and Emergency Supplemental Appropriations Act, 1999, we reviewed a number of issues associated with the potential costs and benefits of a mandatory labeling requirement.
Specifically, this report provides information on (1) the potential costs associated with compliance and enforcement of a mandatory country-of-origin labeling requirement at the retail level for fresh produce, (2) the potential trade issues associated with such a requirement, (3) the potential impact of such a requirement on the ability of the federal government and the public to respond to outbreaks of illness caused by contaminated fresh produce, and (4) consumers’ views of country-of-origin labeling. Finally, appendix I identifies U.S. trading partners that have country-of-origin labeling requirements for fresh produce, the nature and scope of those requirements, and the record of U.S. challenges to those requirements. To determine the potential costs associated with compliance and enforcement, we interviewed officials and reviewed documents from USDA’s Agricultural Marketing Service and the Foreign Agricultural Service; the U.S. Customs Service; the Food and Drug Administration; and the International Trade Commission. We also interviewed officials from the Food Marketing Institute and the Florida Retail Federation and visited several Florida groceries—both large chains and small independent stores—to examine how imported produce is labeled and how inspections are conducted. We interviewed officials from the United Fresh Fruit and Vegetable Association; the Food Industry Trade Coalition, which included representatives from the Food Distributors International, the National Grocers Association, ConAgra, Inc., the Chilean Fresh Fruit Association, the National Fisheries Institute, the Meat Importers Council of America Inc., the American Food Institute, and the National Food Processors Association; the Fresh Produce Association of the Americas; the Florida Fruit and Vegetable Association; the Northwest Horticultural Council; the Western Growers Association; and Chiquita Brands, Inc. 
To determine compliance with and enforcement of state labeling laws, we interviewed officials from agricultural departments in Maine, Texas, and Florida. To determine the potential trade implications, we reviewed documents and interviewed officials from the Office of the U.S. Trade Representative, the Foreign Agricultural Service, the Department of State, and the World Trade Organization. We also examined international trade agreements. To identify U.S. trading partners that have country-of-origin labeling requirements for fresh produce, we reviewed the survey conducted by the Foreign Agricultural Service, 1998 Foreign Country of Origin Labeling Survey, February 4, 1998. In addition, we developed a questionnaire to determine the nature and scope of other countries’ labeling requirements, which the Service sent electronically to the U.S. embassy agricultural attachés for 45 countries. Twenty-eight of the countries were selected because the United States imports from or exports to them significant dollar volumes of fresh produce. The remaining 17 countries we surveyed were included because they were identified as requiring country-of-origin labeling in the Foreign Agricultural Service’s 1998 survey. We received responses for all 45 countries. The survey was conducted in February and March 1999. To determine the potential impact on the federal government’s and consumers’ ability to respond to outbreaks of illness from fresh produce, we interviewed officials and obtained documents from the CDC, FDA, the U.S. Department of Agriculture, and Florida’s Department of Health. We also discussed these issues with consumer groups. To determine the potential impact of mandatory country-of-origin labeling on consumers, we reviewed the Tariff Act of 1930 and related regulations and rulings and discussed these issues with Customs officials.
We also examined documents and interviewed officials with consumer groups, including the National Consumers League, the Center for Science in the Public Interest, and the Safe Food Coalition. We also analyzed the results of eight consumer surveys conducted from 1990 to 1998 to determine consumer opinions regarding mandatory country-of-origin labeling. The surveys were identified by industry experts and through literature searches. For the data we included in our report, we obtained frequency counts, survey instruments, and other documents in order to review the wording of questions, sampling, mode of administration, research strategies, and the effects of sponsorship. We used only data that we judged to be reliable and valid. Five surveys, conducted between 1990 and 1998, represented households nationwide that had purchased fresh produce in the past year; these surveys were conducted for The Packer newspaper and published by Vance Publishing Corporation in its annual supplement, Fresh Trends. Another nationwide survey was conducted by the Charlton Research Group in 1996 for the Desert Grape Growers League. Two surveys of Florida consumers were conducted by the University of South Florida’s Agriculture Institute in 1997 and the University of Florida in 1998. We also spoke with officials and obtained documents from CDC, FDA, the U.S. Department of Agriculture’s Agricultural Marketing Service, Florida’s Department of Health, the Environmental Working Group, and Consumers Union about the relative safety of imported and U.S. produce. We conducted our review from November 1998 through March 1999 in accordance with generally accepted government auditing standards.

Major contributors to this report: Erin Lansburgh, Assistant Director; Beverly A. Peterson, Evaluator-in-Charge; Daniel F. Alspaugh; Erin K. Barlow; Shirley Brothwell; Richard Burkard; Daniel E. Coates; Oliver Easterwood; Fran Featherston; Alice Feldesman; Paul Pansini; Carol Herrnstadt Shulman; and Janice M. Turner.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013. Orders in person: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by faxing (202) 512-6061, or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a legislative requirement, GAO provided information on the: (1) potential costs associated with the compliance and enforcement of a mandatory country-of-origin labeling requirement at the retail level for fresh produce; (2) potential trade issues associated with such a requirement; (3) potential impact of such a requirement on the ability of the federal government and the public to respond to outbreaks of illness caused by contaminated fresh produce; and (4) consumers' views of country-of-origin labeling.
GAO noted that: (1) the magnitude of compliance and enforcement costs for a country-of-origin labeling requirement at the retail level would depend on several factors, including the extent to which labeling practices would have to be changed; (2) according to an association representing grocery retailers, changing store signs to ensure that produce is properly labeled would cost about 2 staff hours per store per week; (3) however, it is unclear who would bear the burden of any such additional labeling costs--retailers could absorb some or all of the costs or pass them to consumers or to their suppliers; (4) regarding enforcement, the Food and Drug Administration, in commenting on a recently proposed bill, estimated that federal monitoring would cost about $56 million annually and said that enforcement would be difficult; (5) inspectors would need documentary evidence to determine the country-of-origin of the many produce items on display, and this documentation is often not available at each retail store; (6) enforcement is carried out in only one of the three states with labeling laws; (7) Florida inspectors told GAO that they sometimes have no reliable means to verify the accuracy of labels; (8) according to Department of Agriculture officials and industry representatives, mandatory labeling at the retail level could be viewed by other countries as a trade barrier; (9) officials also noted that countries concerned with a labeling law could take actions that could adversely affect U.S. exports; (10) about half of the countries that account for most of the U.S. 
trade in produce require country-of-origin labeling for fresh produce at the retail level; (11) when outbreaks of foodborne illness occur, country-of-origin labeling for fresh produce would be of limited benefit to food safety agencies in tracing the source of contamination and to the public in responding to a warning of an outbreak; (12) it can take weeks or months for food safety agencies to identify an outbreak, determine the type of food involved, identify the source of the food contamination, and issue a warning; (13) retail labeling would help consumers only if they remembered the country of origin or still had the produce, or if the produce were still in the store; and (14) according to nationwide surveys sponsored by the fresh produce industry, between 74 and 83 percent of consumers favor mandatory country-of-origin labeling for fresh produce, although they rated information on freshness, nutrition, and handling and storage as more important.
The department is facing near- and long-term internal fiscal pressures as it attempts to balance competing demands to support ongoing operations, rebuild readiness following extended military operations, and manage increasing personnel and health care costs as well as significant cost growth in its weapon systems programs. For more than a decade, DOD has dominated GAO’s list of federal programs and operations at high risk of being vulnerable to fraud, waste, and abuse. In fact, all of the DOD programs on GAO’s High-Risk List relate to business operations, including systems and processes related to management of contracts, finances, supply chain, and support infrastructure, as well as weapon systems acquisition. Long-standing and pervasive weaknesses in DOD’s financial management and related business processes and systems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the financial status and cost of DOD activities to Congress and DOD decision makers; (2) adversely impacted its operational efficiency and mission performance in areas of major weapons system support and logistics; and (3) left the department vulnerable to fraud, waste, and abuse. Because of the complexity and long-term nature of DOD’s transformation efforts, GAO has reported the need for a chief management officer (CMO) position and a comprehensive, enterprisewide business transformation plan. In May 2007, DOD designated the Deputy Secretary of Defense as the CMO. In addition, the National Defense Authorization Acts for Fiscal Years 2008 and 2009 contained provisions that codified the CMO and Deputy CMO (DCMO) positions, required DOD to develop a strategic management plan, and required the Secretaries of the military departments to designate their Undersecretaries as CMOs and to develop business transformation plans.
DOD financial managers are responsible for the functions of budgeting, financing, accounting for transactions and events, and reporting of financial and budgetary information. To maintain accountability over the use of public funds, DOD must carry out financial management functions such as recording, tracking, and reporting its budgeted spending, actual spending, and the value of its assets and liabilities. DOD relies on a complex network of organizations and personnel to execute these functions. Also, its financial managers must work closely with other departmental personnel to ensure that transactions and events with financial consequences, such as awarding and administering contracts, managing military and civilian personnel, and authorizing employee travel, are properly monitored, controlled, and reported, in part, to ensure that DOD does not violate spending limitations established in legislation or other legal provisions regarding the use of funds. Before fiscal year 1991, the military services and defense agencies independently managed their finance and accounting operations. According to DOD, these decentralized operations were highly inefficient and failed to produce reliable information. On November 26, 1990, DOD created the Defense Finance and Accounting Service (DFAS) as its accounting agency to consolidate, standardize, and integrate finance and accounting requirements, functions, procedures, operations, and systems. The military services and defense agencies pay for finance and accounting services provided by DFAS using their operations and maintenance appropriations. The military services continue to perform certain finance and accounting activities at each military installation. These activities vary by military service depending on what the services wanted to maintain in-house and the number of personnel they were willing to transfer to DFAS. 
As DOD’s accounting agency, DFAS records these transactions in the accounting records, prepares thousands of reports used by managers throughout DOD and by the Congress, and prepares DOD-wide and service-specific financial statements. The military services play a vital role in that they authorize the expenditure of funds and are the source of most of the financial information that allows DFAS to make payroll and contractor payments. The military services also have responsibility for most of DOD’s assets and the related information needed by DFAS to prepare annual financial statements required under the Chief Financial Officers Act. DOD accounting personnel are responsible for accounting for funds received through congressional appropriations, the sale of goods and services by working capital fund businesses, revenue generated through nonappropriated fund activities, and the sales of military systems and equipment to foreign governments or international organizations. DOD’s finance activities generally involve paying the salaries of its employees, paying retirees and annuitants, reimbursing its employees for travel-related expenses, paying contractors and vendors for goods and services, and collecting debts owed to DOD. DOD defines its accounting activities to include accumulating and recording operating and capital expenses as well as appropriations, revenues, and other receipts. According to DOD’s fiscal year 2012 budget request, in fiscal year 2010 DFAS processed approximately 198 million payment-related transactions and disbursed over $578 billion; accounted for 1,129 active DOD appropriation accounts; and processed more than 11 million commercial invoices. DOD financial management was designated as a high-risk area by GAO in 1995.
Pervasive deficiencies in financial management processes, systems, and controls, and the resulting lack of data reliability, continue to impair management’s ability to assess the resources needed for DOD operations; track and control costs; ensure basic accountability; anticipate future costs; measure performance; maintain funds control; and reduce the risk of loss from fraud, waste, and abuse. Other business operations, including the high-risk areas of contract management, supply chain management, support infrastructure management, and weapon systems acquisition, are directly impacted by the problems in financial management. We have reported that continuing weaknesses in these business operations result in billions of dollars of wasted resources, reduced efficiency, ineffective performance, and inadequate accountability. Examples of the pervasive weaknesses in the department’s business operations are highlighted below. DOD invests billions of dollars to acquire weapon systems, but it lacks the financial management processes and capabilities it needs to track and report on the cost of weapon systems in a reliable manner. We reported on this issue over 20 years ago, but the problems persist. In July 2010, we reported that although DOD and the military departments have efforts underway to begin addressing these financial management weaknesses, problems continue to exist, and remediation and improvement efforts would require the support of other business areas beyond the financial community before they could be fully addressed. DOD also requests billions of dollars each year to maintain its weapon systems, but it has limited ability to identify, aggregate, and use financial management information for managing and controlling operating and support costs. Operating and support costs can account for a significant portion of a weapon system’s total life-cycle costs, including costs for repair parts, maintenance, and contract services.
In July 2010, we reported that the department lacked key information needed to manage and reduce operating and support costs for most of the weapon systems we reviewed—including cost estimates and historical data on actual operating and support costs. For acquiring and maintaining weapon systems, the lack of complete and reliable financial information hampers DOD officials in analyzing the rate of cost growth, identifying cost drivers, and developing plans for managing and controlling these costs. Without timely, reliable, and useful financial information on cost, DOD management lacks the information needed to accurately report on acquisition costs, allocate resources to programs, or evaluate program performance. In June 2010, we reported that the Army Budget Office lacked an adequate funds control process to provide it with ongoing assurance that obligations and expenditures do not exceed the funds available in the Military Personnel–Army (MPA) appropriation. We found that an obligation of $200 million in excess of available funds in the Army’s military personnel account violated the Antideficiency Act. The overobligation likely stemmed, in part, from a lack of communication between Army Budget and program managers: Army Budget’s accounting records reflected estimates instead of actual amounts until it was too late to control the incurrence of excessive obligations in violation of the act. Thus, at any given time in the fiscal year, Army Budget did not know the actual obligation and expenditure levels of the account. Army Budget explained that it relies on estimated obligations—despite the availability of actual data from program managers—because of inadequate financial management systems. The lack of adequate process and system controls to maintain effective funds control impaired the Army’s ability to prevent, identify, correct, and report potential violations of the Antideficiency Act.
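The funds control concept at issue, namely ongoing assurance that cumulative obligations never exceed the amount available in an appropriation, can be illustrated with a minimal sketch. This is not DOD's or the Army's actual system; the account name and dollar amounts below are hypothetical.

```python
# Minimal illustration of administrative funds control: reject any
# obligation that would exceed an appropriation's available balance.
# Account name and amounts are hypothetical.
class AppropriationAccount:
    def __init__(self, name, authority):
        self.name = name
        self.authority = authority  # budget authority provided by law
        self.obligated = 0          # cumulative obligations recorded to date

    @property
    def available(self):
        return self.authority - self.obligated

    def obligate(self, amount):
        """Record an obligation only if funds remain; otherwise reject it."""
        if amount > self.available:
            # This is the condition an Antideficiency Act funds control
            # is meant to prevent from ever occurring.
            raise ValueError(
                f"{self.name}: obligation of {amount} exceeds "
                f"available balance of {self.available}")
        self.obligated += amount

mpa = AppropriationAccount("Military Personnel (example)", authority=1_000)
mpa.obligate(900)        # allowed: 100 remains available
try:
    mpa.obligate(300)    # would overobligate by 200 -> rejected
except ValueError as err:
    print(err)
```

The sketch also shows why recording estimates instead of actual obligations defeats the control: the check is only as good as the obligated balance it compares against, so an account fed with estimates can appear to have funds available after they are in fact exhausted.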
In our February 2011 report on the Defense Centers of Excellence (DCOE), we found that DOD’s TRICARE Management Activity (TMA) had misclassified $102.7 million of the nearly $112 million in DCOE advisory and assistance contract obligations. The proper classification and recording of costs are basic financial management functions that are also key in analyzing areas for potential future savings. Without adequate financial management processes, systems, and controls, DOD components are at risk of reporting inaccurate, inconsistent, and unreliable data for financial reporting and management decision making and of potentially exceeding authorized spending limits. The lack of effective internal controls hinders management’s ability to have reasonable assurance that allocated resources are used effectively, properly, and in compliance with budget and appropriations law.

Over the years, DOD has initiated several broad-based reform efforts to address its long-standing financial management weaknesses. However, as we have reported, those efforts did not achieve their intended purpose of improving the department’s financial management operations. In 2005, the DOD Comptroller established the DOD FIAR Directorate to develop, manage, and implement a strategic approach for addressing the department’s financial management weaknesses, achieving auditability, and integrating these efforts with other improvement activities, such as the department’s business system modernization efforts. In May 2009, we identified several concerns with the adequacy of the FIAR Plan as a strategic and management tool to resolve DOD’s financial management difficulties and thereby position the department to be able to produce auditable financial statements. Overall, since the issuance of the first FIAR Plan in December 2005, improvement efforts have not resulted in the fundamental transformation of operations necessary to resolve the department’s long-standing financial management deficiencies.
However, DOD has made significant improvements to the FIAR Plan that, if implemented effectively, could markedly improve the department’s financial management and its progress toward auditability; still, progress in taking corrective actions and resolving deficiencies remains slow. While none of the military services has obtained an unqualified (clean) audit opinion, some DOD organizations, such as the Army Corps of Engineers, DFAS, the Defense Contract Audit Agency, and the DOD Inspector General, have achieved this goal. Moreover, some DOD components that have not yet received clean audit opinions are beginning to reap the benefits of strengthened controls and processes gained through ongoing efforts to improve their financial management operations and reporting capabilities. Lessons learned from the Marine Corps’ Statement of Budgetary Resources audit can provide a roadmap to help other components better stage their audit readiness efforts by strengthening their financial management processes to increase data reliability as they develop action plans to become audit ready.

In August 2009, the DOD Comptroller sought to further focus the efforts of the department and its components on achieving certain short- and long-term results by giving priority to improving the processes and controls that support the financial information most often used to manage the department. Accordingly, DOD revised its FIAR strategy and methodology to focus on the DOD Comptroller’s two priorities—budgetary information and asset accountability. The first priority is to strengthen the processes, controls, and systems that produce DOD’s budgetary information and the department’s Statements of Budgetary Resources. The second priority is to improve the accuracy and reliability of management information pertaining to the department’s mission-critical assets—including military equipment, real property, and general equipment—and to validate improvement through existence and completeness testing.
The DOD Comptroller directed the DOD components participating in the FIAR Plan—the departments of the Army, Navy, and Air Force and the Defense Logistics Agency—to use a standard process and aggressively modify their activities to support and emphasize achievement of the priorities. GAO supports DOD’s current approach of focusing and prioritizing efforts in order to achieve incremental progress in addressing weaknesses and making progress toward audit readiness. Budgetary and asset information is widely used by DOD managers at all levels, so its reliability is vital to daily operations and management. DOD needs to provide accountability over the existence and completeness of its assets. Problems with asset accountability can further complicate critical functions, such as planning for the current troop withdrawals.

In May 2010, DOD introduced a new phased approach that divides progress toward achieving financial statement auditability into five waves (or phases) of concerted improvement activities (see appendix I). According to DOD, the components’ implementation of the methodology described in the 2010 FIAR Plan is essential to the success of the department’s efforts to ultimately achieve full financial statement auditability. To assist the components in their efforts, the FIAR Guidance, issued along with the revised plan, details the implementation of the methodology with an emphasis on internal controls and supporting documentation, recognizing both the challenge of resolving the many internal control weaknesses and the fundamental importance of establishing effective and efficient financial management. The FIAR Guidance provides the process for the components to follow, through their individual Financial Improvement Plans, in assessing processes, controls, and systems; identifying and correcting weaknesses; assessing, validating, and sustaining corrective actions; and achieving full auditability.
The guidance directs the components to identify responsible organizations and personnel and resource requirements for improvement work. In developing their plans, components use a standard template that comprises data fields aligned to the methodology. The consistent application of a standard methodology for assessing the components’ current financial management capabilities can help establish valid baselines against which to measure, sustain, and report progress.

Improving the department’s financial management operations and thereby providing DOD management and the Congress more accurate and reliable information on the results of its business operations will not be an easy task. It is critical that the current initiatives being led by the DOD Deputy Chief Management Officer and the DOD Comptroller be continued and provided with sufficient resources and ongoing monitoring in the future. Absent continued momentum and necessary future investments, the current initiatives may falter, similar to previous efforts. Below are some of the key challenges the department must address for its financial management operations to improve to the point where DOD may be able to produce auditable financial statements.

Committed and sustained leadership. The FIAR Plan is in its sixth year and continues to evolve based on lessons learned, corrective actions, and policy changes that refine and build on the plan. The DOD Comptroller has expressed commitment to the FIAR goals and established a focused approach that is intended to help DOD achieve successes in the near term. But the financial transformation needed at DOD, and its removal from GAO’s high-risk list, is a long-term endeavor. Improving financial management will need to be a cross-functional endeavor.
It requires the involvement of DOD operations performing other business functions that interact with financial management—including those in the high-risk areas of contract management, supply chain management, support infrastructure management, and weapon systems acquisition. As acknowledged by DOD officials, sustained and active involvement of the department’s Chief Management Officer, the Deputy Chief Management Officer, the military departments’ Chief Management Officers, the DOD Comptroller, and other senior leaders is critical. Every administration brings changes in senior leadership; therefore, it is paramount that the current initiative be institutionalized throughout the department—at all working levels—in order for success to be achieved.

Effective plan to correct internal control weaknesses. In May 2009, we reported that the FIAR Plan did not establish a baseline of the department’s state of internal control and financial management weaknesses as its starting point. Such a baseline could be used to assess and plan for the necessary improvements and remediation and to measure incremental progress toward achieving estimated milestones for each DOD component and the department. DOD currently has efforts underway to address known internal control weaknesses through three interrelated programs: (1) the Internal Controls over Financial Reporting (ICOFR) program, (2) enterprise resource planning (ERP) system implementation, and (3) the FIAR Plan. However, the effectiveness of these three interrelated efforts at establishing a baseline remains to be seen. Furthermore, DOD has yet to identify the specific control actions that need to be taken in Waves 4 and 5 of the FIAR Plan, which deal with asset accountability and other financial reporting matters. Because of the department’s complexity and magnitude, developing and implementing a comprehensive plan that identifies DOD’s internal control weaknesses will not be an easy task.
But it is a task that is critical to resolving the long-standing weaknesses, and it will require consistent management oversight and monitoring to be successful.

Competent financial management workforce. Effective financial management in DOD will require a knowledgeable and skilled workforce that includes individuals who are trained and certified in accounting, well versed in government accounting practices and standards, and experienced in information technology. Hiring and retaining such a skilled workforce is a challenge DOD must meet to succeed in its transformation to efficient, effective, and accountable business operations. The National Defense Authorization Act for Fiscal Year 2006 directed DOD to develop a strategic plan to shape and improve the department’s civilian workforce. The plan was to, among other things, include assessments of (1) existing critical skills and competencies in DOD’s civilian workforce, (2) future critical skills and competencies needed over the next decade, and (3) any gaps in the existing or future critical skills and competencies identified. In addition, DOD was to submit a plan of action for developing and reshaping the civilian employee workforce to address any identified gaps, as well as specific recruiting and retention goals and strategies on how to train, compensate, and motivate civilian employees. In developing the plan, the department identified financial management as one of its enterprisewide mission-critical occupations. In July 2011, we reported that DOD’s 2009 overall civilian workforce plan had addressed some legislative requirements, including assessing the critical skills of its existing civilian workforce. Although some aspects of the legislative requirements were addressed, DOD still has significant work to do.
For example, while the plan included gap analyses related to the number of personnel needed for some of the mission-critical occupations, the department had discussed competency gap analyses for only three mission-critical occupations—language, logistics management, and information technology management. A competency gap analysis for financial management was not included in the department’s analysis. Until DOD analyzes personnel needs and gaps in the financial management area, it will not be in a position to develop an effective financial management recruitment, retention, and investment strategy to successfully address its financial management challenges.

Accountability and effective oversight. The department established a governance structure for the FIAR Plan, which includes review bodies for governance and oversight. The governance structure is intended to provide the vision and oversight necessary to align financial improvement and audit readiness efforts across the department. To monitor progress and hold individuals accountable for progress, DOD managers and oversight bodies need reliable, valid, meaningful metrics to measure performance and the results of corrective actions. In May 2009, we reported that the FIAR Plan did not have clear results-oriented metrics. To its credit, DOD has taken action to begin defining results-oriented FIAR metrics it intends to use to provide visibility into component-level progress in assessment, testing, and remediation activities, including progress in identifying and addressing supporting documentation issues. We have not yet had an opportunity to assess implementation of these metrics—including the components’ control over the accuracy of supporting data—or their usefulness in monitoring and redirecting actions.
Ensuring effective monitoring and oversight of progress—especially by the leadership in the components—will be key to bringing about effective implementation, through the components’ Financial Improvement Plans, of the department’s financial management and related business process reform. If the department’s future FIAR Plan updates provide a comprehensive strategy for completing Waves 4 and 5, the plan can serve as an effective tool to help guide and direct the department’s financial management reform efforts. Effective oversight holds individuals accountable for carrying out their responsibilities. DOD has introduced incentives such as including FIAR goals in Senior Executive Service performance plans, increasing reprogramming thresholds for components that receive a positive audit opinion on their Statement of Budgetary Resources, funding audit costs from the Office of the Secretary of Defense after a successful audit, and publicizing and rewarding components for successful audits. The challenge now is to evaluate and validate these and other incentives to determine their effectiveness and whether the right mix of incentives has been established.

Well-defined enterprise architecture. For decades, DOD has been challenged in modernizing its timeworn business systems. Since 1995, we have designated DOD’s business systems modernization program as high risk. Between 2001 and 2005, we reported that the modernization program had spent hundreds of millions of dollars on an enterprise architecture and investment management structures that had limited value. Accordingly, we made explicit architecture- and investment management-related recommendations. Congress included provisions in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 that were consistent with our recommendations. In response, DOD continues to take steps to comply with the act’s provisions and to satisfy relevant system modernization management guidance.
Collectively, these steps address best practices in implementing the statutory provisions concerning the business enterprise architecture and the review of systems costing in excess of $1 million. However, long-standing challenges that we previously identified remain to be addressed. Specifically, while DOD continues to release updates to its corporate enterprise architecture, the architecture has yet to be federated through the development of aligned subordinate architectures for each of the military departments. In this regard, each of the military departments has made progress in managing its respective architecture program, but there are still limitations in the scope, completeness, and maturity of the military departments’ architecture programs. For example, while each department has established or is in the process of establishing an executive committee with responsibility and accountability for the enterprise architecture, none has fully developed an enterprise architecture methodology or a well-defined business enterprise architecture and transition plan to guide and constrain business transformation initiatives. In addition, while DOD continues to establish investment management processes, the DOD enterprise and the military departments’ approaches to business systems investment management still lack the defined policies and procedures needed to be considered effective investment selection, control, and evaluation mechanisms. Until DOD fully implements these long-standing institutional modernization management controls, its business systems modernization will likely remain a high-risk program.

Successful implementation of the ERPs. The department has invested billions of dollars and will invest billions more to implement the ERPs.
DOD officials have said that successful implementation of ERPs is key to transforming the department’s business operations, including financial management, and to improving the department’s capability to provide DOD management and Congress with accurate and reliable information on the results of DOD’s operations. DOD has stated that the ERPs will replace over 500 legacy systems. The successful implementation of the ERPs is not only critical for addressing long-standing weaknesses in financial management, but equally important for helping to resolve weaknesses in other high-risk areas such as business transformation, business system modernization, and supply chain management. Over the years, we have reported that the department has not effectively employed acquisition management controls to help ensure that the ERPs deliver the promised capabilities on time and within budget. Delays in the successful implementation of ERPs have extended the use of existing duplicative, stovepiped systems and required funding of the existing legacy systems longer than anticipated. Additionally, the continued implementation problems can erode the savings that were estimated to accrue to DOD as a result of modernizing its business systems and thereby reduce funds that could be used for other DOD priorities.

To help improve the department’s management oversight of its ERPs, we have recommended that DOD define success for ERP implementation in the context of business operations and in a way that is measurable. Accepted practices in system development include testing the system in terms of the organization’s mission and operations—whether the system performs as envisioned at expected levels of cost and risk when implemented within the organization’s business operations. Developing and using specific performance measures to evaluate a system effort should help management understand whether the expected benefits are being realized.
Without performance measures to evaluate how well these systems are accomplishing their desired goals, DOD decision makers, including program managers, do not have all the information they need to evaluate their investments to determine whether the individual programs are helping DOD achieve business transformation and thereby improve upon its primary mission of supporting the warfighter.

Another key element in DOD’s efforts to modernize its business systems is investment management policies and procedures. We reported in June 2011 that DOD’s oversight process does not provide sufficient visibility into the military departments’ investment management activities, including their reviews of systems that are in operations and maintenance mode and of smaller investments. As discussed in our information technology investment management framework and previous reports on DOD’s investment management of its business systems, adequately documenting both the policies and the associated procedures that govern how an organization manages its information technology projects and investment portfolios is important because doing so provides the basis for rigor, discipline, and repeatability in how investments are selected and controlled across the entire organization. Until DOD fully defines the missing policies and procedures, it is unlikely that the department’s over 2,200 business systems will be managed in a consistent, repeatable, and effective manner that, among other things, maximizes mission performance while minimizing or eliminating system overlap and duplication. To this point, there is evidence showing that DOD is not managing its systems in this manner. For example, DOD reported that of its 79 major business and other IT investments, about a third are encountering cost, schedule, and performance shortfalls requiring immediate and sustained management attention.
In addition, we have previously reported that DOD’s business system environment has been characterized by (1) little standardization, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) manual data entry into multiple systems. Because DOD spends billions of dollars annually on its business systems and related IT infrastructure, the potential for identifying and avoiding the costs associated with duplicative functionality across its business system investments is significant.

In closing, I am encouraged by the recent efforts and commitment DOD’s leaders have shown toward improving the department’s financial management. Progress we have seen includes recently issued guidance to aid DOD components in their efforts to address their financial management weaknesses and achieve audit readiness; standardized component financial improvement plans to facilitate oversight and monitoring; and the sharing of lessons learned. In addition, the DCMO and the DOD Comptroller have shown commitment and leadership in moving DOD’s financial management improvement efforts forward. The revised FIAR strategy is still in the early stages of implementation, and DOD has a long way to go and many long-standing challenges to overcome, particularly with regard to sustained commitment, leadership, and oversight, before the department and its military components are fully auditable and DOD financial management is no longer considered high risk. However, the department is heading in the right direction and making progress. Some of the most difficult challenges ahead lie in the effective implementation of the department’s strategy by the Army, Navy, Air Force, and DLA, including successful implementation of ERP systems and integration of financial management improvement efforts with other DOD initiatives. GAO will continue to monitor the progress of and provide feedback on the status of DOD’s financial management improvement efforts.
We currently have work in progress to assess implementation of the department’s FIAR strategy and efforts toward auditability. As a final point, I want to emphasize the value of sustained congressional interest in the department’s financial management improvement efforts, as demonstrated by this Subcommittee’s leadership. Chairman McCaskill and Ranking Member Ayotte, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time.

For further information regarding this testimony, please contact Asif A. Khan, (202) 512-9095 or [email protected]. Key contributors to this testimony include J. Christopher Martin, Senior-Level Technologist; F. Abe Dymond, Assistant Director; Gayle Fischer, Assistant Director; Greg Pugnetti, Assistant Director; Darby Smith, Assistant Director; Beatrice Alff; Steve Donahue; Keith McDaniel; Maxine Hattery; Hal Santarelli; and Sandy Silzer.

The first three waves focus on achieving the DOD Comptroller’s interim budgetary and asset accountability priorities, while the remaining two waves are intended to complete the actions needed to achieve full financial statement auditability. However, the department has not yet fully defined its strategy for completing Waves 4 and 5. Each wave focuses on assessing and strengthening the internal controls and business systems related to the stage of auditability addressed in the wave.

Wave 1—Appropriations Received Audit focuses on the appropriations receipt and distribution process, including funding appropriated by Congress for the current fiscal year and related apportionment/reapportionment activity by OMB, as well as allotment and sub-allotment activity within the department.
Wave 2—Statement of Budgetary Resources Audit focuses on supporting the budget-related data (e.g., status of funds received, obligated, and expended) used for management decision making and reporting, including the Statement of Budgetary Resources. In addition to fund balance with Treasury reporting and reconciliation, other significant end-to-end business processes in this wave include procure-to-pay, hire-to-retire, order-to-cash, and budget-to-report.

Wave 3—Mission Critical Assets Existence and Completeness Audit focuses on ensuring that all assets (including military equipment, general equipment, real property, inventory, and operating materials and supplies) recorded in the department’s accountable property systems of record exist; that all of the reporting entities’ assets are recorded in those systems of record; that the reporting entities have the right (ownership) to report these assets; and that the assets are consistently categorized, summarized, and reported.

Wave 4—Full Audit Except for Legacy Asset Valuation includes the valuation assertion over new asset acquisitions and validation of management’s assertion regarding new asset acquisitions, and it depends on remediation of the existence and completeness assertions in Wave 3. Also, a proper contract structure for cost accumulation and cost accounting data must be in place prior to completion of the valuation assertion for new acquisitions. This wave also involves the budgetary transactions covered by the Statement of Budgetary Resources effort in Wave 2, including accounts receivable, revenue, accounts payable, expenses, environmental liabilities, and other liabilities.
Wave 5—Full Financial Statement Audit focuses efforts on assessing and strengthening, as necessary, internal controls, processes, and business systems involved in supporting the valuations reported for legacy assets once efforts to ensure control over the valuation of new assets acquired and the existence and completeness of all mission assets are deemed effective on a go-forward basis. Given the lack of documentation to support the values of the department’s legacy assets, federal accounting standards allow for the use of alternative methods to provide reasonable estimates for the cost of these assets. In the context of this phased approach, DOD’s dual focus on budgetary and asset information offers the potential to obtain preliminary assessments regarding the effectiveness of current processes and controls and identify potential issues that may adversely impact subsequent waves.

As one of the largest and most complex organizations in the world, the Department of Defense (DOD) faces many challenges in resolving serious problems in its financial management and related business operations and systems. DOD is required by various statutes to (1) improve its financial management processes, controls, and systems to ensure that complete, reliable, consistent, and timely information is prepared and responsive to the financial information needs of agency management and oversight bodies, and (2) produce audited financial statements.
Over the years, DOD has initiated numerous efforts to improve the department's financial management operations and achieve an unqualified (clean) opinion on the reliability of its reported financial information. These efforts have fallen short of sustained improvement in financial management or financial statement auditability. The Subcommittee has asked GAO to provide its perspective on the status of DOD's financial management weaknesses and its efforts to resolve them; the challenges DOD continues to face in improving its financial management and operations; and the status of its efforts to implement automated business systems as a critical element of DOD's Financial Improvement and Audit Readiness strategy.

DOD financial management has been on GAO's high-risk list since 1995 and, despite several reform initiatives, remains on the list today. Pervasive deficiencies in financial management processes, systems, and controls, and the resulting lack of data reliability, continue to impair management's ability to assess the resources needed for DOD operations; track and control costs; ensure basic accountability; anticipate future costs; measure performance; maintain funds control; and reduce the risk of loss from fraud, waste, and abuse. DOD spends billions of dollars each year to maintain key business operations intended to support the warfighter, including systems and processes related to the management of contracts, finances, supply chain, support infrastructure, and weapon systems acquisition. These operations are directly impacted by the problems in financial management. In addition, the long-standing financial management weaknesses have precluded DOD from being able to undergo the scrutiny of a financial statement audit.

DOD's past strategies for improving financial management were ineffective, but recent initiatives are encouraging. In 2005, DOD issued its Financial Improvement and Audit Readiness (FIAR) Plan for improving financial management and reporting.
In 2009, the DOD Comptroller directed that FIAR efforts focus on financial information in two priority areas: budget and mission-critical assets. The FIAR Plan also has a new phased approach that comprises five waves of concerted improvement activities. The first three waves focus on the two priority areas, and the last two on working toward full auditability. The plan is being implemented largely through the Army, Navy, and Air Force military departments and the Defense Logistics Agency, lending increased importance to the committed leadership in these components. Improving the department's financial management operations and thereby providing DOD management and Congress more accurate and reliable information on the results of its business operations will not be an easy task. It is critical that current initiatives related to improving the efficiency and effectiveness of financial management that have the support of the DOD's Deputy Chief Management Officer and Comptroller continue with sustained leadership and monitoring. Absent continued momentum and necessary future investments, current initiatives may falter. Below are some of the key challenges that DOD must address for its financial management to improve to the point where DOD is able to produce auditable financial statements: (1) committed and sustained leadership, (2) effective plan to correct internal control weaknesses, (3) competent financial management workforce, (4) accountability and effective oversight, (5) well-defined enterprise architecture, and (6) successful implementation of the enterprise resource planning systems.
DOE’s missions include developing, maintaining, and securing the nation’s nuclear weapons capability; cleaning up the environmental legacy resulting from over 50 years of producing nuclear weapons; and conducting basic energy and science research and development. The department carries out these diverse missions at over 50 major installations in 35 states. With a DOE workforce of about 16,000 employees and over 100,000 contractor staff, the department relies on its contractors to manage and operate its facilities and accomplish its missions. DOE manages these functions through its program offices at DOE headquarters and its field offices. The three largest program offices—Environmental Management, Defense Programs, and Science—accounted for over 70 percent of DOE’s budget for fiscal year 2001.

DOE’s reliance on contractors to carry out its missions and the department’s history of both inadequate management and oversight and failure to hold its contractors accountable for results led us to designate DOE contract management as a high-risk area vulnerable to fraud, waste, abuse, and mismanagement. In response to these and other criticisms, DOE began evaluating its contracting practices and, in February 1994, issued a report—Making Contracting Work Better and Cost Less—that contained 48 recommendations. The recommendations included three key areas: selecting alternatives to traditional contracting arrangements used for management and operation of its sites, increasing competition to improve performance, and developing and using performance-based contracting tools.

To facilitate and oversee the implementation of the contract reform recommendations, in June 1994, DOE established the Contract Reform Project Office, which became the Office of Contract Reform and Privatization in 1997. This office, which monitored and assessed the progress of DOE’s contract reform initiative, was disbanded in late 2001 as part of the department’s reorganization of its support offices.
DOE’s Office of Management, Budget, and Evaluation/Chief Financial Officer is now responsible for oversight of DOE’s contract reform efforts. Since 1996, the department has made progress in implementing three key contract reform initiatives—developing alternative contracting approaches, increasing competition, and converting to performance-based contracts, although DOE continues to address challenges in implementing these initiatives. Concerning alternative contracting approaches, DOE encouraged the use of different types of contracts aimed at improving contractor performance and results. However, DOE did not use a systematic approach to determine the best contract type for a given situation and experienced problems with implementation. To become more systematic in making this contract selection decision, DOE has been developing a formal strategy to evaluate contract and financing alternatives and the risks associated with various approaches. In the second reform area—increasing competition—DOE changed its contracting rules to set competition as the standard approach to awarding contracts. Under these rules, the percentage of major site contracts awarded competitively (competed) increased to 56 percent as of 2001, up from 38 percent as of 1996. All but one of the 11 contracts that had not been competed were for managing research and development centers exempted by statute from mandatory competition. The department evaluates these contracts to determine whether they should be extended or competed. DOE has thus far decided on non-competitive extensions for these contracts, including some for contractors that have experienced performance problems. DOE opted to address these performance problems with specific contract provisions, but it remains to be seen whether this approach will succeed. 
Finally, all of DOE’s major site contracts are now performance-based, incorporating results-oriented statements of work and the performance objectives and measures used to evaluate contractor performance. To further emphasize the importance of the performance-based approach, DOE has increased the proportion of contractor fees tied to achieving the performance objectives to 70 percent in fiscal year 2001 from 34 percent in fiscal year 1996. However, development of good performance measures has continued to be a challenge, and DOE acknowledges that it must make further progress in this area. One of the major focuses of DOE’s contract reform initiative has been developing alternatives to the traditional contracts used for the management and operation of its major sites and facilities. Under these “management and operating” contracts, one primary contractor performed almost all of the work at a site, the contractor had broadly defined statements of work, and DOE reimbursed the contractor for virtually all costs. As a result, work under these contracts focused more on annual work plans and budgets rather than on specific schedule and cost targets for accomplishing work. In implementing alternatives to its traditional contracting arrangements, DOE’s intent was to use the best contracting alternative given the required work and the objectives and risks associated with that work. DOE implemented four main actions as alternatives to these management and operating contracts, but has experienced problems with implementation, in part due to difficulties in determining the most appropriate approach for a given situation, as follows: Reducing the number of large, cost-reimbursement contracts that cover virtually all of the activities at a DOE site. DOE has modified a total of 20 site contracts since 1994, so that no single contractor manages and operates those sites. 
Some of these management and operating contracts were divided into smaller service contracts, such as for guard services. Other management and operating contracts were changed to integration contracts (commonly called management and integration contracts). According to DOE officials, integration contracts were used to better reflect the changing mission of the site and to better tailor the contract scope to the program requirements. Under a management and integration contract, one contractor is responsible for integrating the work of a variety of subcontractors that carry out most of the actual work at the sites. The integrating contractor is responsible for selecting “best-in-class” subcontractors for specific work activities, overseeing the work done by the subcontractors, and ensuring that activities at the site are effectively coordinated. DOE has used this integration contract approach at sites such as Oak Ridge in Tennessee for environmental restoration work. However, DOE’s Office of Inspector General reported in March 2001 that the integrating contractor at Oak Ridge has subcontracted out a third less work than originally proposed, resulting in lower cost savings to the government. Implementing a more disciplined approach to “make-or-buy” decisions by site contractors. DOE revised its regulations in 1997 to require that its major site contractors develop make-or-buy plans instead of having most of the work at a site performed by the primary contractor. Under these plans, the primary contractor must identify work functions that could be performed at less cost or more effectively through subcontracts. Although all of its major contractors have approved make-or-buy plans, DOE acknowledges that it does not routinely gather information on how much work is done by subcontractors, making it difficult to determine the extent to which this approach was implemented.
In addition, DOE’s Office of Inspector General reported in February 2000 that three of the four contractors that it reviewed had either not included all functions in their make-or-buy plans or had not done the required cost-benefit analysis on work functions that could have been subcontracted. Implementing an alternative contracting and financing approach called privatization. DOE started its “privatization initiative” in 1995 as a way to reduce the cost and speed the cleanup of its contaminated sites. This initiative was primarily an alternative contracting and financing strategy to foster open competition for fixed-price contracts; to require the contractor to design, finance, build, own, and operate the facilities necessary to meet waste treatment requirements; and to pay the contractor for units of successfully treated waste. DOE’s experiences with this approach showed that privatization could achieve cost savings on projects with a well-defined scope of work and few uncertainties, such as laundry facilities for contaminated uniforms and other items at the Hanford site. However, on complex cleanup projects such as the effort at Idaho Falls to clean up Pit 9, privatization had little success in achieving cost savings, keeping the project moving forward on schedule, or getting improved contractor performance. Establishing “closure contracts” that tie performance incentives to contract completion, not to annual activities. DOE has used closure contracts at several sites that are scheduled for cleanup and closure, including the Rocky Flats site in Colorado and the Fernald site in Ohio. These contracts emphasize completing all work at a site or a portion of a site by a target date and at a target cost. Most of the fee or profit to be earned by the contractor depends upon meeting the schedule and cost targets. If the contractor can complete all work on time or sooner and below the target cost, then the contractor can earn additional fee. 
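The closure-contract fee mechanics described above amount to a sliding scale between a floor and a ceiling fee, with the full fee earned only when schedule and cost targets are met. A minimal sketch of such a schedule follows; the linear interpolation and the target/ceiling cost parameters are illustrative assumptions, since the report does not give DOE's actual fee formula.

```python
def closure_fee(actual_cost, target_cost, ceiling_cost,
                floor_fee, ceiling_fee):
    """Hypothetical sliding-scale fee for a closure contract.

    The contractor earns ceiling_fee for finishing at or below
    target_cost, floor_fee for reaching ceiling_cost, and a
    linearly interpolated amount in between. Illustrative only;
    the report does not specify DOE's actual formula.
    """
    if actual_cost <= target_cost:
        return ceiling_fee
    if actual_cost >= ceiling_cost:
        return floor_fee
    overrun = (actual_cost - target_cost) / (ceiling_cost - target_cost)
    return ceiling_fee - overrun * (ceiling_fee - floor_fee)

# With a hypothetical $3.5B target and $4.5B ceiling cost, and the
# $130M-$460M fee range the report cites for Rocky Flats:
fee = closure_fee(4.0e9, 3.5e9, 4.5e9, 130e6, 460e6)
```

Under this sketch, completing the work at or below the target cost earns the full fee, consistent with the report's observation that most of the fee depends on meeting the schedule and cost targets.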
For example, under the Rocky Flats closure contract, the amount of incentive fee that the contractor can earn ranges from $130 million to $460 million, depending on cost and schedule performance against the targets. Since the target closure date for this contract is December 2006, it remains to be seen whether this approach will be effective in completing the work on time and at lower costs to the government. These problems reflected the lack of a systematic approach to deciding which contract type was best for a given situation. For example, we reported in May 1998 that DOE’s use of fixed-price contracting was appropriate when projects were well-defined, when uncertainties could be allocated between DOE and the contractor, and when either adequate cost information or multiple competing bidders were available to determine a fair and reasonable price for the work. However, when these conditions did not exist, cost overruns and schedule delays could occur on these fixed-price contracts. DOE has begun to develop a more systematic approach to determining the best contract type for a given situation. For example, in October 2000, DOE issued new policy and guidance for the acquisition of capital assets such as waste treatment facilities. The guidance includes developing an acquisition plan that considers the financial, technical, and performance risks associated with a new project. This policy is consistent with DOE’s overall goal of tailoring the contract type to the work to be performed and the business and technical risks associated with that work. In addition, to strengthen oversight of major acquisitions, in November 2001 DOE issued additional guidance that requires approval of acquisition plans for projects of $5 million and above at the assistant secretary level or higher.
Despite these initial steps, DOE is still developing and implementing its formal acquisition strategy, and it is too soon to tell whether this new strategy will help DOE make better decisions about how to acquire capital assets. DOE has increased the proportion of major site contracts awarded competitively, but still extends a number of these site contracts non-competitively, as allowed by procurement law, including contracts for some sites that have experienced contractor performance problems. DOE competed 56 percent of its major site contracts that were up for award or renewal from 1997 through 2001, a significant increase over the 38 percent it had competed from 1991 through 1996 (see table 1). During the 1997 through 2001 period, DOE selected new contractors for 10 of the 14 competitively awarded contracts, compared to 9 new contractors for the 11 competitive awards from 1991 through 1996. (Appendix I contains a listing of DOE’s major site contracts in 2001 and the extent to which they have been competed.) The growth in competition at major DOE sites is largely a result of new regulations the department issued under contract reform. The new rules generally require competition for major site contracts and allow a contract period consisting of an initial term of up to 5 years with options to extend the contract provided that the total contract period does not exceed 10 years. Many of the contracts that DOE did not compete have been for its federally funded research and development centers for which DOE may extend contracts non-competitively under the Competition in Contracting Act of 1984. By 2001, all but one of the 11 contracts extended without competition fell under this exemption for research and development centers. The exception was the major site contract for the management of DOE’s West Valley Demonstration Project in New York. DOE extended the contract in 1998 and recently announced plans for another extension.
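The competition rates above can be checked against the award counts the report gives: 14 competitive awards from 1997 through 2001 and 11 from 1991 through 1996. The denominators below (25 and 29 contracts up for award or renewal) are inferred from the reported percentages rather than stated directly, so treat them as assumptions.

```python
# Competitive awards per period: (competed, total up for award/renewal).
# Totals are inferred from the reported 56 and 38 percent figures.
periods = {
    "1991-1996": (11, 29),
    "1997-2001": (14, 25),
}

for period, (competed, total) in periods.items():
    print(f"{period}: {round(100 * competed / total)}% competed")
```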
According to DOE procurement officials, this recent extension was because of the limited amount of cleanup work remaining at the site and the lack of interest by other contractors to compete for the work. As part of its overall effort to increase competition for site contracts, DOE also reassessed which sites it should continue to designate as federally funded research and development centers. As a result of the reassessment, DOE has removed six of 22 sites from the federally funded research and development center designation. The department subsequently competed the contracts for two of these, the Knolls and Bettis Atomic Power Laboratories in New York and Pennsylvania. The department restructured the other four contracts and no longer regards them as major site contracts. In six other instances, although DOE has thus far decided the sites should remain designated as federally funded research and development centers, the department has competed the contracts even though federal law and regulations allow DOE to extend the contracts non-competitively. These six competed contracts included those for the Oak Ridge National Laboratory in Tennessee and the Idaho National Engineering and Environmental Laboratory. In addition to its reassessment effort, in 1996 the department issued guidance that it must follow to support any recommendation for a non-competitive extension of any major site contract. Among other things, the guidance called for DOE to provide a certification that competition is not in the best interest of the department, a description of the incumbent contractor’s past performance, an outline of the principal issues and/or significant changes to be negotiated in the contract extension, and in the case of a federally funded research and development center, a showing of the continued need for the research and development center. Based on such documentation, the agency head can authorize a contract extension of up to 5 years.
Table 2 lists the ten federally funded research and development centers for which DOE has awarded contracts non-competitively since this guidance was issued. DOE’s decision not to compete some of the federally funded research and development center contracts has not been without controversy. For example, in 2001, DOE extended the management and operating contracts with the University of California for the Los Alamos and Lawrence Livermore National Laboratories. The University of California has operated these sites for 50 years or more and is the only contractor ever to have operated them. In recent years, we and other organizations have documented significant problems with laboratory operations and management at these two laboratories—particularly in the areas of safeguards, security, and project management. Congressional committees and others have called for DOE to compete these contracts. Even with these problems and concerns, however, DOE chose not to compete these contracts. This decision was made at the highest levels in the department and was based on national security considerations. Rather than compete these contracts, DOE intends to address these performance problems using contract mechanisms. In the 2001 contract extension, DOE required the university to focus on strengthening management performance in five areas, including initiatives for safety and project management. For the first 2 years of the 5-year contract period, the University of California must meet specific requirements before it can earn any of the $17 million in incentive fees available under the contract. DOE is to assess the university’s performance on these specific requirements on a pass/fail basis. After the first 2 years of the contract, performance in these five areas will be assessed as part of the regular performance measures in the contract.
The department’s first (2001) annual assessment found that the contractor was meeting the required milestones for all of the improvement initiatives. However, many of the milestones in the first year involved evaluating existing systems or developing action plans. For other objectives that focus on results, such as demonstrating improved performance in nuclear facility operations, the final outcomes will not be known for several years. Therefore, it remains to be seen whether DOE will be successful in improving the University of California’s performance using these contracting tools. If the University of California does not make significant improvements in its performance, DOE may need to reconsider its decision not to compete the contracts. DOE has reported that all of its major site contracts incorporate performance-based techniques to define requirements and measure results. Before DOE initiated its contract reforms, major site contracts generally had broad statements of work that focused more on annual budgets and work plans rather than specific results to be achieved. Fees under these contracts usually consisted of a base fee that was guaranteed (fixed) plus an award fee that was paid if the contractor met general performance expectations. In the mid-1990s, DOE began restructuring its major site contracts to use results-oriented statements of work and, for most of the major site contracts, to incorporate performance incentive fees that were designed to reward the contractor if it met or exceeded specific performance expectations in priority areas. These fees may be tied to either subjective or objective performance measures, but DOE regulations suggest the use of specific and quantifiable measures whenever possible. In 1999, DOE issued additional regulations that limited the use of base fee and established a clear preference for contracts where all of the fee was based on a contractor’s performance.
Since DOE changed its policy in favor of using incentive fees, there has been a substantial shift in the type of fees available on DOE contracts. As shown in figure 1, between fiscal years 1996 and 2001, DOE decreased the total aggregate amount of base and award fee available to its contractors and substantially increased the amount of fee that is based on performance incentives. For individual contracts, the percentage of each fee type varied widely. For example, in fiscal year 2001, the Sandia National Laboratories contract had 100 percent base fee, and the Oak Ridge National Laboratory contract had 100 percent performance incentive fee. In addition to shifting most of the fee available to incentive fee, in 1999, DOE also established a new contract clause making payment of fee conditional on meeting certain safety requirements and other minimum requirements in the contract. According to language in this clause, in order to receive all of the earned fee, the contractor must meet, among other requirements, minimum environment, safety, and health requirements and avoid any “catastrophic” events such as a fatality or serious workplace-related injury. Since 1999, DOE has withheld over $5 million in fees from six contractors under this conditional payment of fee clause. The largest fee withheld—$2 million—was from CH2M Hill Hanford Group, Inc., for “failures to meet the contractually imposed minimum environment, safety, and health performance requirements” as defined by the contractor’s integrated safety management system. Although these changes reflect a marked shift in DOE’s approach, the lack of good performance measures blunted their effect. Since 1997, numerous studies and reports—both internal and external to the department—criticized DOE’s performance-based contracts for ineffective performance measures.
Examples include the following: DOE’s Office of Inspector General has issued 11 reports since 1997 that found multiple problems with DOE’s performance measures. In 2001, the Inspector General reported, after reviewing the Office of River Protection Tank Farm Management, Oak Ridge Y-12 Plant, and Kansas City Plant contracts, that DOE was not focusing on high priority outcomes, was loosening performance requirements over time without adequate justification, and was failing to match appropriately challenging contract requirements with fee amounts. The department disagreed with this report, stating that it was not appropriate to evaluate the overall success of performance-based contracts by looking at individual performance measures. In 1999, reporting on a self-assessment of its performance-based contracting practices, DOE concluded that while significant improvements had been made in the management of performance-based contracts, several issues had arisen. These issues included difficulties with measuring the results of basic science activities, establishing performance measures that were consistent with project baselines, determining the appropriate use of incentive fees for non-profit contractors, and balancing incentives that both challenge the contractor and continue to reward performance that has been sustained at an excellent level. In its 1999 review of project management at DOE, the National Research Council found that DOE did not always take advantage of the performance-based incentive approach and did not have standard methods for measuring project performance. The council’s 2001 follow-up assessment stressed the importance of using methods such as performance-based contracting to focus contractors on achieving desired results. The council added that success would be determined by how well these methods are followed and recommended that DOE strengthen its performance-based contracting guidance and practices.
In response to these and other criticisms of its performance-based incentives, DOE has taken several actions that include issuing criteria for a performance incentive development process at the field office level and focusing on developing performance incentives more directly linked to a site’s strategic objectives. For example, DOE officials said that multi-year incentives in the Hanford contract and multi-site incentives that tie together activities at four production sites—Kansas City in Missouri, Savannah River in South Carolina, Pantex in Texas, and Y-12 at Oak Ridge, Tennessee—strive to establish the strategic focus that was absent from performance incentives in earlier contracts. DOE officials pointed out that, with these new incentives, greater progress was being made. For example, the Hanford site had reached its cleanup goals for fiscal year 2001. However, it remains to be seen whether contractors will meet milestones throughout the contracts’ full length and, if they do not, whether DOE will require contractors to forfeit the provisional fee payments as allowed under the contracts. Although DOE has made strides in implementing its contract reform initiatives and has reviewed the performance measures in many of its contracts, the department has developed little objective information to demonstrate whether the reforms have resulted in improved contractor performance. In the early years of contract reform, DOE measured progress in terms of developing and issuing new contracting policies and guidance. As new policies were established, the department also focused on assessing its progress in implementing these policies in key areas of competition and performance-based contracting. More recently, DOE has reviewed many of its site contracts to determine, among other things, whether the performance incentives are working properly.
While these steps are useful, this information does not help DOE determine outcomes—whether, for example, competing more contracts resulted in more favorable contract terms for the government or better performance from its contractors. DOE program managers and procurement officials at DOE headquarters and several sites believe that contract reforms have resulted in improved contractor performance, and they cite a number of examples where they believe contractor performance has improved. However, there are also numerous examples of contractors who performed poorly. Furthermore, DOE’s February 2002 review of its Environmental Management program observed that significant progress in cleanup and risk reduction had not been achieved despite the performance-based contracting approach. Since DOE does not have measures to determine whether the contract reform initiatives had resulted in improved performance, we examined the extent of cost overruns and schedule delays on a number of DOE’s major projects as a partial indicator of success. For these projects, cost and schedule data showed no improvement when compared to similar data in 1996. While this performance information provides only a limited view of department-wide contractor performance, it does raise questions regarding the overall effectiveness of the reform initiatives. At the outset of contract reform, DOE established specific action steps and related time frames for changing its contracting practices. For example, DOE set a goal of developing guidance by August 1994 for increasing competition in awarding contracts. Subsequently, DOE proposed new regulations concerning contract reforms in the areas of competition, performance-based contracting, and fee policies. As the department’s contract reform activities shifted from issuing guidance to restructuring actual contracts, officials began to monitor the extent to which its contracting organizations adopted DOE’s contracting policy changes in key reform areas.
Because the contract cycle for the large site contracts was so long—typically contracts were renewed about every 5 years—DOE encouraged early incorporation of contract reform principles as each contract came up for renewal. Over the 8 years since the contract reform initiative was introduced, DOE has primarily gauged its progress by monitoring implementation of the reforms and reviewing individual contracts rather than by developing objective measures to determine whether the reforms have resulted in improved contractor performance. In addition to tracking the number of contracts that incorporated the new requirements to use competition and performance-based features, the department reviewed the implementation of performance-based contracting for many of its major contracts. Some examples of DOE’s monitoring activities include: DOE’s annual performance reports required under the Government Performance and Results Act contained measures for both competing major site contracts and converting them to performance-based contracts. In 1999, DOE reported that it exceeded the goal of awarding at least 50 percent of the major site contracts using competitive procedures. In the reports for the years 1999, 2000, and 2001, DOE met its performance goals to convert all major site contracts awarded in each year to performance-based contracts. DOE’s Office of Procurement and Assistance Management monitored the contracts awarded at major sites. For the years 1997 through 2000, the office reported that DOE met its annual goal of awarding contracts that were performance-based at all of the major sites. DOE maintains a Web site that provides information on the status of its procurement goals. These goals include increasing the use of competition in awarding contracts and of performance-based concepts in those contracts. DOE’s Web site reports that as of 2001, 26 of its major site and facility contracts were competed and that 100 percent of these major contracts are performance-based. 
In 1997, the department’s self-assessment of contract reform determined that progress had been made in implementing contract reforms across the complex. However, the report noted difficulties in identifying and quantifying contract reform data and recommended ongoing analysis of key reform areas such as the effectiveness of fixed-price contracting. In both 1997 and 1999, the department reported on its use of performance-based incentives in major site contracts. The department documented considerable progress in developing guidance and in incorporating performance-based incentives but also found that early incorporation of performance-based concepts had resulted in some poorly structured incentives. For example, performance incentives were sometimes overly focused on process milestones rather than outcomes. The 1997 report recommended issuing guidance on how to restructure performance objectives, but not on how to assess the effectiveness of the restructured incentives. The 1999 report concluded that the quality of contractor performance incentives had improved and that the performance incentives were incorporated into contracts in a more timely manner. The report further stated that the best measure of the effectiveness of the incentives was improvement in contractor performance. The report discussed specific contracts but did not present overall data on contractor performance. Procurement and program officials in headquarters continue to be actively involved in developing and reviewing performance measures in major site contracts. DOE officials said this oversight is improving the quality of performance incentives and providing valuable information on lessons learned. They acknowledged, however, that DOE has not developed objective information on the outcomes associated with the reforms. Such results-oriented information is important to determine the extent to which the contract changes have resulted in improved contractor performance.
Although objective performance information focusing on results is not available, DOE program managers and procurement officials at both DOE headquarters and field operations offices believe that contract reforms have made a difference. In support of this view, DOE officials generally provide examples that they believe demonstrate improved contractor performance. For example, officials at DOE’s Albuquerque operations office pointed out that after competing the contract for the Pantex site, the new contractor met required production levels that were not achieved by the previous contractor. These officials also mentioned that the poor performance by the previous contractor was one of the deciding factors in competing the contract for the Pantex site. In addition to the examples of improved performance provided by DOE officials, DOE’s 1999 review of its performance-based contracting practices reported that “anecdotal evidence supports that the proper use of well-structured performance-based incentives is leading to improvements in performance at some DOE sites.” One of the examples cited in this internal review was improved performance at the Rocky Flats site under a performance-based contract established in 1995. Under the previous contract with a broad statement of work, the contractor was primarily safeguarding and maintaining facilities at the site, and no buildings had been decontaminated, demolished, and removed. When DOE competed the contract in 1995 and selected a new contractor, DOE also incorporated performance measures into the contract. Consistent with these measures, the new contractor decontaminated, demolished, and removed six buildings during fiscal year 1996 and 12 during fiscal year 1998. Other examples demonstrate, however, that the instances DOE cites are not necessarily representative of the overall performance of DOE’s contractors. 
Examples of poor performance by DOE’s contractors include the following: DOE has experienced major cost overruns and schedule delays on the National Ignition Facility at Lawrence Livermore National Laboratory in California. This facility, the size of a football stadium, is designed to produce intense pressures and temperatures to simulate in a laboratory the thermonuclear conditions created in nuclear explosions. DOE considers the facility to be an essential component of the program to ensure the safety and reliability of the nuclear weapons stockpile in the absence of nuclear testing. Although DOE had incorporated performance-based measures and incentives into the overall contract with the University of California, which operates the laboratory and manages the construction project, performance problems still occurred. We reported in August 2000 that the estimated cost of this facility had increased from $2.1 billion to $3.3 billion and that the scheduled completion date had been extended by 6 years to 2008. We attributed these major cost and schedule changes to inadequate management by the contractor and DOE oversight failures. We also found that the performance-based contract placed little emphasis on the National Ignition Facility project even though it dominated the laboratory’s budget and mission. DOE withheld $2 million of the fiscal year 1999 performance fee in recognition of the “significant mission disruption” caused by problems with this project. DOE officials said that the department has since modified the performance-based contract to increase the emphasis on this project and has taken additional steps to improve both contractor management and DOE oversight. DOE has had problems with cost and schedule performance on its contract for the Mound site in Ohio. In August 1997, DOE awarded a cost-plus-award fee performance-based contract for the accelerated cleanup of the Mound site. 
This contract called for cleaning up the site and transferring facilities to the local community by no later than September 2005 at a total estimated cost of $427 million. In May 2001, DOE’s Office of Inspector General reported that the department and the contractor had committed to that schedule without knowing whether the date was achievable and that the cost and schedule had been established with limited knowledge of the soil and building contamination. The report added that completion of this work was estimated for December 2009 at a cost of over $1 billion.

DOE is becoming aware of the problems with relying heavily on anecdotal information when trying to assess outcomes. Officials in one of DOE’s largest program offices—Environmental Management, representing almost a third of the department’s overall budget—recently reported fundamental problems with their program, and with the department’s ability to manage for results. In a February 2002 review, the office stated that although the Environmental Management program had spent over $60 billion since 1989, little progress had been made toward cleaning up radioactive and hazardous wastes resulting from over 50 years of producing nuclear weapons, or toward reducing risks to the public and the environment. During fiscal years 2000 and 2001, however, most of the contractors at Environmental Management sites had earned more than 90 percent of their available performance incentive fee, indicating that the contractors were successfully achieving the performance goals established in their contracts. The Assistant Secretary for Environmental Management reported that if such “successes” can take place without significant progress in cleanup and risk reduction, the program has been using the wrong set of indicators to measure success.
She added that Environmental Management program indicators “measured process, not progress, opinions, not results.” Among the conclusions in the report was that the Environmental Management program needed to significantly improve its management of performance-based contracts, focus on accomplishing measurable results, and align contractors’ performance fees with end points rather than intermediate milestones. Based on our review of the performance of selected projects, it does not appear that DOE’s contractors have significantly improved their performance since 1996. Because we could not determine whether DOE’s contract reform initiatives had resulted in improved performance using the department’s measures, we reviewed DOE’s ongoing projects to assess whether they were experiencing cost overruns or schedule delays. We compared current ongoing DOE projects with estimated total costs exceeding $200 million with similar information we developed in 1996 on projects with estimated total costs exceeding $100 million. In both 1996 and 2001, over half of the projects we reviewed had both schedule delays and cost increases. Furthermore, as shown in table 3, the proportion of projects experiencing cost increases of more than double the initial cost estimates or schedule delays of 5 years or more increased during the 6-year period. For example, the initial cost estimate in 1998 for the spent nuclear fuels dry storage project at Idaho Falls, Idaho, was $123.8 million with a completion date of 2001. Currently, the cost estimate for this project is $273 million with a completion date of 2006. Appendix II contains additional information on DOE’s ongoing major projects as of December 2001. The projects we reviewed—with estimated costs ranging from $270 million to $8.4 billion—may not be representative of all DOE projects. 
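The screening thresholds used in the comparison above (cost increases of more than double the initial estimate, or schedule delays of 5 years or more) can be sketched as a simple check. This is an illustrative sketch, not GAO's actual methodology; the function name and return format are assumptions.

```python
# Illustrative sketch of the screening criteria described above: a project
# counts if its current cost estimate is more than double the initial
# estimate, or if its completion date has slipped by 5 years or more.
# The function name and return format are assumptions, not from the report.

def flag_project(initial_cost_m, current_cost_m, initial_year, current_year):
    """Return which criteria a project meets: 'cost', 'schedule', or both."""
    flags = []
    if current_cost_m > 2 * initial_cost_m:
        flags.append("cost")        # cost estimate more than doubled
    if current_year - initial_year >= 5:
        flags.append("schedule")    # completion delayed 5 years or more
    return flags

# The Idaho Falls spent nuclear fuels dry storage project cited above:
# $123.8 million with a 2001 completion date initially, $273 million and
# 2006 currently -- it meets both criteria.
print(flag_project(123.8, 273.0, 2001, 2006))
```

Applied across the set of ongoing projects, a check like this yields the proportions reported in table 3.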
Although this comparison provides only a limited measure of contractor performance, it does raise questions about the overall impact of DOE’s contract reform initiative on improving contractor performance. The problems with DOE’s ability to track the results of contract reform reflect a broader need to develop an approach to managing its initiatives that is more consistent with best practices. As part of our review, we looked at best practices for managing improvement initiatives. We found that high-performing organizations use a systematic results-oriented management approach that includes defining goals for the initiative and gauging progress towards those goals. They also use information on results to continuously adjust the implementation of the initiative and sustain improvements. DOE’s approach to contract reform did not incorporate these best practices, and its emphasis on measuring progress in terms of implementation indicated a focus primarily on contract reform itself as a goal rather than improved performance. Furthermore, DOE faces the same fundamental challenge—lack of a results-oriented approach—in several other management improvement initiatives that, if successful, could enhance its contract reform efforts.

DOE’s approach to implementing its contract reform initiatives has not followed best management practices. In our review of authoritative literature, we found that leading organizations were able to sustain such management improvement initiatives by using a systematic, results-oriented approach that incorporated a rigorous measurement of progress. Such an approach typically included the following steps: (1) define clear goals for the initiative, (2) develop an implementation strategy that sets milestones and establishes responsibility, (3) establish results-oriented outcome measures to gauge progress toward the goals, and (4) use results-oriented data to evaluate the effectiveness of the initiative and make additional changes where warranted.
While DOE followed an implementation strategy for its contract reform initiatives, it implemented those initiatives largely without clearly defining goals, gauging progress toward those goals with results-oriented measures, or using results-oriented data to evaluate the effectiveness of its reforms. Although DOE had set general, overarching goals for its contract reform efforts, the department did not further define those goals. As stated in the 1994 report of the Contract Reform Team, the overall goal of contract reform was to make the department’s contracting process “…work better and cost less.” The secretary’s preface to the report presented the fundamental problem: “DOE is not adequately in control of its contractors. As a result, the contractors are not sufficiently accountable to the department, and we are not in a position to ensure prudent expenditure of taxpayer dollars in pursuit of our principle missions.” However, DOE did not link those broad goals to its specific contract reform efforts. For example, the department did not frame its contract reform initiatives to increase competition in terms of improved contractor accountability, better performance, or reduced costs. While increasing the number of competitively awarded contracts is a positive development, it does not by itself indicate that the department’s contracting processes work better or cost less.

DOE was effective at establishing an implementation strategy that set milestones and assigned responsibility for carrying it out. For example, DOE’s February 1994 report by its contract reform team contained 48 specific reform actions, each containing a required action, establishing a deadline, and assigning a specific DOE office with responsibility for developing the reform action. These reform actions, for the most part, involved developing policies, procedures, guidance, and plans to implement reforms such as competitive procurements and performance incentives.
Our 1996 assessment of DOE’s progress toward implementing those goals found that DOE had completed 47 of 48 reform actions. Since that time, DOE has continued to set milestones and assign responsibility for its reform initiatives. For example, following an internal review in 1997, the department developed another series of actions to improve its implementation of reform initiatives pertaining to performance-based incentives. Those actions also had milestones for completion and assigned responsibility for carrying them out.

DOE did not establish results-oriented outcome measures for its contract reform initiatives. Instead, as discussed earlier, DOE generally focused on measuring the progress of implementing its reform initiatives and reviewing individual contracts, but did not develop ways to gauge progress towards its overarching reform goals of making contracting work better and cost less. A shortcoming of goals defined so generally is the lack of objective ways in which to measure progress in meeting those goals. Translating the general goal of “working better” into a more specific objective, such as having contractors complete a greater number of their projects on time and within budget, would have helped the department to identify ways it could measure results and, therefore, gauge progress towards the goals of contract reform.

Finally, DOE does not have the results-oriented data to evaluate the effectiveness of its contract reform initiatives. Because the department did not develop clear goals and results-oriented measures, it does not have the results-oriented data necessary to systematically review progress, take corrective action, and reinforce success. Although DOE has received feedback on its reform efforts from internal reviews such as self-assessment reports and external reports by the DOE Inspector General, GAO, and others, these outside reviews are not a substitute for a systematic feedback process.
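A results-oriented measure of the kind discussed above could be as simple as the share of completed projects that finished on time and within budget. The sketch below is illustrative only; the function name and all project data are hypothetical, not drawn from DOE records.

```python
# Illustrative sketch of a results-oriented outcome measure: the fraction
# of completed projects finished on time and within budget. All project
# data here are hypothetical.

def on_time_within_budget_rate(projects):
    """projects: list of dicts with planned/actual cost and completion year."""
    meeting_goal = [
        p for p in projects
        if p["actual_cost"] <= p["planned_cost"]
        and p["actual_year"] <= p["planned_year"]
    ]
    return len(meeting_goal) / len(projects)

sample = [
    {"planned_cost": 100, "actual_cost": 95, "planned_year": 2000, "actual_year": 2000},
    {"planned_cost": 200, "actual_cost": 260, "planned_year": 2001, "actual_year": 2003},
    {"planned_cost": 150, "actual_cost": 150, "planned_year": 2002, "actual_year": 2002},
    {"planned_cost": 300, "actual_cost": 310, "planned_year": 2001, "actual_year": 2001},
]
print(on_time_within_budget_rate(sample))  # 0.5: two of four met both targets
```

Tracking such a rate over time would give the department objective data with which to gauge whether contracting actually "works better."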
Despite not following best practices for reform initiatives, DOE has taken steps to strengthen the management and oversight of its activities. For example, DOE has recently taken steps to integrate contract, project, and financial management functions under a single office—the Office of Management, Budget, and Evaluation/Chief Financial Officer. DOE officials believe that this action will improve the coordination, oversight, and control of these important activities.

Although DOE’s contract reform initiative has focused on increasing competition and holding contractors more accountable for results, DOE recognizes that contract reform by itself is not enough to ensure that improved contractor performance actually occurs. DOE has begun several other initiatives that, if successfully implemented, could enhance its contract reform efforts. These initiatives include efforts to strengthen its management of projects, develop and use information systems for oversight and control, and improve the training and expertise of the DOE staff overseeing contractor activities. We conducted only a limited review of these initiatives and did not fully assess DOE’s implementation against all four steps in a “best practices” approach. Nevertheless, we identified instances where, as with the contract reform initiative, DOE’s management of the initiative fell short of best management practices in one or more areas. Table 4 below outlines these initiatives, how they could enhance the contract reform efforts, and the potential management weaknesses that could limit their effectiveness. Although none of these initiatives has been fully implemented, their effectiveness may be limited by the same lack of a results-oriented approach to managing the initiative and sustaining improvement that has hampered the department’s contract reform efforts.
Poor performance by DOE contractors and inadequate DOE management and oversight of those contractors led us to conclude in 1990 that DOE’s contracting practices were at high risk for fraud, waste, abuse, and mismanagement. Subsequently, DOE began its contract reform initiative to improve the performance and accountability of its contractors. Although DOE has undertaken a number of reforms over the years and has monitored its progress in implementing those reforms, it has no good measure of the results of the reforms. Aside from individual examples of good or poor performance on specific projects, DOE cannot tell, for example, if the contract reforms have resulted in better performance by its contractors or more favorable contract terms for the government. Limited evidence we developed suggests that contractors managing DOE’s major projects are performing no better in 2001 than on similar projects in 1996. DOE faces a fundamental challenge to ensuring the effectiveness of its contract reform initiative—developing an approach to managing the initiative that is more consistent with the best practices of high-performing organizations. DOE’s practices in managing its contract reform initiative, as well as its other initiatives such as project management, that could also help to improve contractor performance, fall short of the best practices followed by high-performing organizations. Unless DOE strengthens the way in which it manages initiatives such as contract reform, DOE may not be able to fully realize the benefits of these initiatives and ensure that its programs are adequately protected from fraud, waste, abuse, and mismanagement. 
To improve the effectiveness of DOE’s contract reform initiative, as well as other management improvement initiatives, we recommend that the department develop an approach to implementing its initiatives that incorporates best practices including the key elements of (1) clearly defined goals, (2) an implementation strategy that sets milestones and establishes responsibility, (3) results-oriented outcome measures, and (4) a mechanism that uses results-oriented data to evaluate the effectiveness of the department’s initiatives and to take corrective actions as needed.

We provided a draft of this report to the Department of Energy for its review and comment. DOE’s Director, Office of Management, Budget, and Evaluation/Chief Financial Officer responded that DOE had three main concerns about our report but agreed with our recommendation that DOE develop an approach to its management improvement initiatives, such as contract reform, that is more consistent with the practices of high-performing organizations.

DOE’s first concern was that the report characterizes contract reform as DOE’s fundamental management challenge, but the report also discusses program and project management issues. DOE believes this creates the misperception that the procurement system can be used to address the myriad of issues facing the department. We believe that our report fairly and accurately describes the context of contract management in DOE. Our report identifies contract management as a major management challenge for DOE, and one that we have reported on for over 10 years. The report does not suggest that contract management is DOE’s primary or most fundamental management challenge. In fact, we have issued other reports such as our December 2001 report on DOE’s major mission, structure, and accountability problems that discuss more fundamental management issues.
However, within the context of those more fundamental management challenges, DOE can and should strive to effectively manage its contracts. Our report does not imply that effective contract management will solve the other problems facing the department. In fact, the report discusses initiatives other than contract reform that are under way at DOE, including the project management initiative, because those initiatives could also have an impact on the results of the contract reform initiative. DOE’s second concern was that our report concluded that its contract reform initiative was not managed in a systematic manner. DOE said that its 1994 contract reform initiative was managed systematically and included top management oversight, a matrixed implementation team, clearly defined goals and objectives, an implementation strategy, and identified outcomes. DOE also said it used internal assessments of the effectiveness of specific reform initiatives. Our analysis involved comparing DOE’s approach to contract reform with the best practices for managing improvement initiatives followed by high-performing organizations. That comparison showed that DOE’s approach to contract reform, and to several other management improvement initiatives, was not consistent with those best practices, particularly in the areas of defining measurable goals, establishing results-oriented outcome measures, or developing results-oriented data with which to measure the effectiveness of the initiatives. We revised our report to clarify this point. DOE also questioned how we could criticize its approach to contract reform when we had recommended in earlier reports that it pursue contract reform. Our report does not question the need for contract reform in DOE or the components of DOE’s reform initiative, such as increasing competition and the use of performance-based contracts. 
Rather, our report assesses what progress DOE has made in implementing the initiatives, whether the initiatives have resulted in improved contractor performance, and any challenges DOE faces in ensuring that its contract reform initiatives are effective. DOE’s third concern was that the report identifies a limited number of projects to support a conclusion that DOE’s contract management system is in trouble. DOE believes the problems are more likely due to program and project management issues and the risks generally associated with unique, technically complex projects and DOE’s funding and political environment. We believe that our report fairly characterizes DOE’s contract management system. Our report clearly states that DOE has developed little objective information to demonstrate whether its contract reforms have improved contractor performance. We pointed out that anecdotal examples can be used to illustrate both improved contractor performance and continued poor contractor performance. And we identify other evidence to suggest that contractor performance may not have improved. We also acknowledged that other factors, such as DOE’s approach to managing projects, could also affect the outcome of DOE’s contract reform efforts. Regarding our recommendation that DOE develop an approach to implementing its management improvement initiatives that includes the key elements found in the best practices of high-performing organizations, DOE agreed with the recommendation and said that it would incorporate our observations and recommendation into its future improvement efforts. DOE also provided technical corrections, which we incorporated as appropriate. DOE’s written comments on our draft report are included in appendix III. We conducted our review from October 2001 through August 2002, in accordance with generally accepted government auditing standards. Appendix IV provides details on our scope and methodology. This report contains a recommendation to you. 
As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and to the House Committee on Government Reform not later than 60 days from the date of this letter and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this letter.

Bechtel Hanford Inc.; Fluor Hanford Inc.; Bechtel BWXT Idaho, LLC; Southeastern Universities Research Association; Honeywell Federal Manufacturing and Technologies; KAPL, Inc.; BWXT of Ohio; Midwest Research Institute; Bechtel Nevada Corp.; Bechtel Jacobs Company, LLC; BWXT Pantex, LLC; Princeton University; Sandia Corporation; Westinghouse Savannah River Co.

The following table shows the original and current cost estimates and completion dates for ongoing DOE projects with estimated costs greater than $200 million. The table does not include 10 additional DOE projects with estimated costs greater than $200 million because the projects were suspended or only recently started as of December 2001.

To assess the progress that DOE has made since 1996 in implementing contract reform initiatives in the key areas of developing alternative contracting approaches, increasing competition, and using performance-based contracts, we reviewed DOE’s three self-assessment reports on contract reform efforts and GAO and DOE Office of Inspector General reports on DOE contract and project management since 1996. We also interviewed officials from DOE’s Offices of Contract Management and Procurement and Assistance Policy, and procurement officials with the National Nuclear Security Administration. The National Nuclear Security Administration, a semi-autonomous agency within DOE, has its own procurement organization.
However, since both entities follow the same policies, regulations, and guidance, we have not made a distinction in this report between contracts and projects of the two organizations. To assess the extent to which DOE had incorporated the key contract reforms into its major facility contracts, we obtained information on 33 contracts that DOE’s headquarters procurement office identified as site or facility management contracts. We reviewed the contract award history of these major facility contracts to determine which contracts had been competed as of 1996 and as of 2001. To qualify as a competitively awarded contract, DOE must have issued a request for proposals and a public announcement inviting proposals. We also obtained data on annual budgets and fees available and earned for these same contractors for fiscal years 1996 through 2001. We did not attempt to validate this information provided by DOE. In addition, we reviewed documentation for major facility contracts obtained from DOE’s Albuquerque Operations Office, Richland Operations Office, and the Office of River Protection.

To determine the extent to which these initiatives have resulted in improved contractor performance, we interviewed DOE officials from the Office of Contract Management and the three largest program offices—Environmental Management, Defense Programs, and Science. In addition, we interviewed procurement and program office officials at DOE’s Albuquerque Operations Office, Richland Operations Office, and the Office of River Protection. We reviewed documents they provided, including the procurement organization’s balanced scorecard. In addition, we reviewed DOE’s February 2002 review of the Environmental Management program, and numerous GAO and Inspector General reports.
Because DOE did not have objective results-oriented measures of contractor performance, as a potential indicator of that performance, we developed information as of December 2001 on the cost and schedule performance of DOE’s ongoing projects and compared that information with similar information we developed in 1996 on DOE major system acquisitions. In 1996, DOE categorized a “major system acquisition” as a project with a total project cost greater than $100 million. When we began our review in January 2002, we learned that DOE had since raised the threshold of “major project” to $400 million. Since our compilation of DOE-reported data revealed only 19 ongoing projects that meet the current $400 million threshold (nine of which had recently started or were on hold), we expanded our scope to projects with total project costs greater than $200 million, in order to compare results on a similar number of projects. Those projects were managed by DOE’s site contractors or carried out as privatization projects under DOE’s oversight. There may be other projects with total project costs greater than $200 million, but they were not identified by DOE during our review. Because DOE does not maintain centralized data on its projects, we obtained information from project management offices within DOE and its National Nuclear Security Administration. We did not verify the data obtained from DOE, but we did examine the reasonableness of these data based on information in prior GAO reports and audits. For consistency, we used, when available, preliminary budget estimates submitted to the Congress as the basis for original cost estimates and completion dates, comparing those to current cost estimates and completion dates as of December 2001. For this report, we used, wherever possible, the projects’ “total project cost,” which includes construction and operating funds.
Where these costs are not available, we used the “total estimated cost,” which includes construction costs. We have footnoted the latter. (See appendix II.)

To identify the challenges, if any, that DOE faces in ensuring the effectiveness of its contract reform initiatives, we reviewed the reports of the National Research Council on improving DOE project management. In addition, we reviewed reports and other documentation from the National Academy of Public Administration, the Project Management Institute, and prior GAO work to develop best practices criteria for managing improvement initiatives. We compared DOE’s implementation of its contract reform initiative to these best practices criteria to determine areas of concern. To identify the other management improvement initiatives that could impact contract reform, we reviewed the reports of the National Research Council, GAO and Inspector General; the President’s Management Agenda for fiscal year 2002; and DOE’s 5-year workforce restructuring plans. We also interviewed DOE officials in the Office of Engineering and Construction Management and the Office of Program Analysis and Evaluation. We conducted our review from October 2001 through August 2002 in accordance with generally accepted government auditing standards.

In addition to those named above, Carole Blackwell, Robert Crystal, Doreen Feldman, Molly Laster, Patricia Rennie, Carol Shulman, Stan Stenersen, and Arvin Wu made key contributions to this report.

The Department of Energy (DOE), the largest civilian contracting agency in the federal government, relies primarily on contractors to operate its sites and carry out its diverse missions, such as maintaining the nuclear weapons stockpile, cleaning up radioactive and hazardous wastes, and performing research.
Although federal law generally requires federal agencies to use competition in selecting a contractor, until the mid-1990s, DOE contracts for the management and operation of its sites generally fit within an exception that allowed for the use of noncompetitive procedures. Since 1996, DOE has made progress toward implementing contract reform initiatives in three key areas--developing alternative contracting approaches, increasing competition, and using performance-based contracts. However, DOE continues to encounter challenges in implementing these initiatives. Although DOE has made strides in implementing contract reform initiatives, it is difficult to determine whether contractors' performance has improved because objective performance information is scarce. Over the past 8 years, DOE has primarily gauged progress by measuring its implementation of the reforms, such as the number of contracts competed each year, and by reviewing individual contract performance incentives. DOE faces a fundamental challenge to ensuring the effectiveness of its contract reform initiatives--developing an approach to managing its initiatives and sustaining improvements that would incorporate the best management practices of high-performing organizations. These practices include four key elements: (1) clearly defined goals; (2) an implementation strategy that sets milestones and establishes responsibility; (3) results-oriented outcome measures, established early in the process; and (4) systematic use of results-oriented data to evaluate the effectiveness of the initiative and make additional changes where warranted.
The Magnuson-Stevens Fishery Conservation and Management Act provides for the conservation and management of fishery resources in the United States. Under the act, eight regional fishery management councils—the New England, Mid-Atlantic, South Atlantic, Gulf of Mexico, Caribbean, Pacific, North Pacific, and Western Pacific councils—are responsible for developing plans for managing fisheries in federal waters. To develop their plans, the councils each use a collaborative process that involves advisory committees, public hearings, and other means to ensure that interested parties have an opportunity to provide input. Council staff then analyze the information for use in plan development. Once a council adopts a plan, NMFS drafts regulations to implement the plan. The council then submits the plan and regulations to the Secretary of Commerce for approval. The Secretary reviews the plan and proposed regulations for consistency with U.S. law and with each other. The plan and proposed regulations may then be published for public comment. Plans may be fully or partially approved, or disapproved and returned to the council for revision. If approved, regulations must be issued for implementation. Once a fishery management plan is approved, NMFS is responsible for implementing it. In the case of an IFQ program, NMFS must set up the systems for collecting annual permit, logbook, and fish dealer data; obtain records of qualifying catches and other information to determine eligibility to hold quota share; process initial requests for quota; and issue the initial quota share. The quota share represents a percentage of the total allowable catch for the fishery, which a fishery management council sets—typically each year—subject to NMFS’s confirmation. To set the total allowable catch, the council relies on stock assessments performed by one of the NMFS regional fisheries science centers. 
In the case of the halibut fishery, the International Pacific Halibut Commission performs the stock assessment and sets the total allowable catch. Once a fishery management plan becomes operational, NMFS is responsible for administering it. Administrative activities unique to an IFQ program include, among others, calculating and distributing the annual quota allocations, approving and processing quota transfers, and monitoring compliance with program requirements. In addition, administrative activities in early IFQ program years may include adjudicating appeals of the initial allocation. Both NMFS and the councils have responsibility for monitoring existing plans and proposing any changes for approval and implementation by NMFS. NMFS shares responsibility with the U.S. Coast Guard and state agencies for enforcing the rules of a fishery management plan. For an IFQ program, the Coast Guard generally conducts at-sea and aerial surveillance of fishing activities, and NMFS contracts with state agencies to assist its Office for Law Enforcement with inshore activities, such as monitoring the landings for compliance with individual catch limits. NMFS also audits the paper trail (consisting of logbook, landings, and buyer records) created by the IFQ program. The 1996 Sustainable Fisheries Act amended the Magnuson-Stevens Act to require the Secretary of Commerce to recover “actual costs directly related to the management and enforcement” of IFQ programs. The act limits cost recovery fees to 3 percent of the ex-vessel value of fish harvested under any IFQ program and further requires that the fees be collected at the time of landing, at the time of filing a landing report, at the time of sale during a fishing season, or during the final quarter of the year when the fish is harvested. 
In addition, the Secretary is authorized to reserve up to 25 percent of the fees collected for use in an IFQ loan program to help finance the purchase of quota share by entry-level fishermen and fishermen who fish from small boats. Estimated IFQ management costs for fiscal year 2003 varied by program and, according to fishery managers, when compared with pre-IFQ management costs, were higher for the halibut and sablefish program and lower for the surfclam/ocean quahog program. Whether management costs were higher or lower than under the previous fishery management system depended, in part, on the characteristics of the fishery, as well as program complexity. Also, according to fishery managers, both the fishery management councils and NMFS incurred additional costs associated with the development and implementation of the halibut and sablefish and surfclam/ocean quahog IFQ programs. We aggregated cost estimates for each IFQ program on the basis of information provided by various organizations and estimated that the management costs for fiscal year 2003 ranged from a high of at least $3.2 million for the halibut and sablefish program to a low of $7,600 for the wreckfish program. Since NMFS does not systematically track the costs of IFQ programs or the time spent on IFQ activities, we requested cost information from NMFS and other organizations that performed IFQ-related activities during fiscal year 2003. However, these organizations did not or could not provide cost information for all of their IFQ-related activities. (See app. I for information on the organizations that provided data.) The estimated management costs shown in table 1 varied significantly by program, in part, because of differences in the number of program participants and program design. 
For example, the halibut and sablefish program had the largest number of quota holders—about 4,300—and a complex set of rules designed, in part, to protect the owner-operator character of the fleet, such as limits on the amount of quota an individual could hold and restrictions on who could receive quota transfers. In contrast, the surfclam/ocean quahog program had no more than 120 quota holders and a simpler set of rules designed, in part, to minimize government regulation. On the basis of information provided to us by NMFS and other organizations involved in IFQ-related activities, we determined that the $3.2 million spent in fiscal year 2003 to manage the halibut and sablefish program represented about 1.4 percent of the $236.5 million ex-vessel value of the halibut and sablefish catch. Of the total spent to manage the program, about 51.6 percent, or $1.7 million, was spent on NMFS enforcement activities, such as dockside monitoring, and 42.7 percent, or $1.4 million, was spent on NMFS administrative activities, such as managing IFQ permits and quota share transfers. The remaining 5.8 percent, or $186,100, was spent by the International Pacific Halibut Commission to conduct halibut stock assessments, among other things, and the North Pacific Fishery Management Council to perform IFQ-related management activities, such as reviewing and revising the program. The reported fiscal year 2003 management costs for the surfclam/ocean quahog IFQ program totaled about $274,000 and represented about 0.45 percent of the $60 million ex-vessel value of the surfclam and ocean quahog catch. NMFS administrative and review activities constituted about 71.5 percent, or $196,000, of the cost, whereas NMFS enforcement activities amounted to about 5.3 percent, or $14,400.
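The cost-to-value ratios quoted above can be reproduced with a few lines of arithmetic. The sketch below is purely illustrative; the dollar figures come from the report, the function name is ours, and small differences from the published percentages reflect rounding in the source data.

```python
# Illustrative check of the fiscal year 2003 cost-to-value ratios quoted
# in the report. Dollar figures are the report's rounded estimates.

def cost_share(management_cost, ex_vessel_value):
    """Management cost as a fraction of the fleet's ex-vessel value."""
    return management_cost / ex_vessel_value

halibut_sablefish = cost_share(3_200_000, 236_500_000)
surfclam_quahog = cost_share(274_000, 60_000_000)

print(f"halibut/sablefish: {halibut_sablefish:.2%}")    # about 1.4 percent
print(f"surfclam/ocean quahog: {surfclam_quahog:.2%}")  # about 0.45 percent
```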
The remaining 23.2 percent, or $64,800, consisted of costs incurred by the Mid-Atlantic Fishery Management Council to review and amend the program and by NOAA’s Northeast Regional Counsel to provide legal advice on measures considered by NMFS and the Mid-Atlantic Council. The wreckfish IFQ program cost estimates totaled about $7,600 for fiscal year 2003. Only two boats fished wreckfish during the 2003 fishing season. However, since NMFS cannot disclose ex-vessel value for fewer than three participants for confidentiality reasons, estimated wreckfish costs as a percentage of ex-vessel value were not available. The estimated costs consisted entirely of NMFS administrative costs associated with managing IFQ permits and quota shares for the wreckfish IFQ program. According to NMFS officials, NMFS incurred no other costs associated with the program’s management during fiscal year 2003, and cost information from the South Atlantic Fishery Management Council was not available. IFQ management costs were higher than pre-IFQ costs for the halibut and sablefish program but lower for the surfclam/ocean quahog program, according to fishery managers. Since information on how wreckfish management costs changed with the introduction of the IFQ program was not available, we did not include wreckfish in our analysis of comparative costs. While NMFS does not systematically track IFQ management costs and cost data on fishery management activities prior to the IFQ program are incomplete, fishery managers said the overall costs of managing the halibut and sablefish fisheries were higher under the IFQ program than under the previous management system. Before implementation of the IFQ program, both the halibut and sablefish fisheries were managed by setting an annual catch limit for the entire fishery by fishing area, as well as restricting the times when fishing could occur and the type of gear that could be used—for example, hooks, pots, and nets.
However, there were no restrictions on the number of people that could fish. Over time, as more boats entered the fishery and the catch limits were reached sooner, the fishing seasons became shorter; in some areas, fishing was limited to less than 48 hours a year, resulting in so-called fishing derbies—that is, fishermen trying to catch as much fish as they could within the time allotted. With the implementation of the IFQ program, the fisheries were managed under a complex set of rules designed, in part, to protect the owner-operator character of the fleet. For example, the rules limited the amount of quota an individual could hold, restricted who could receive quota transfers, and required that quota be issued by vessel categories with quota transfers prohibited across vessel categories—for example, larger boats could not buy quota from smaller boats. In addition, the IFQ program allowed fishery managers to extend the fishing season to 8 months. The IFQ program’s complexity and longer fishing season required NMFS to devote more staff time to administrative, monitoring, and enforcement activities than previously needed. 
More specifically:

- NMFS created a Restricted Access Management division to handle the administrative activities of the IFQ program, such as issuing annual quota allocations, handling quota transfers, and maintaining the IFQ landings database;
- NMFS created an Office of Administrative Appeals to handle appeals related to the IFQ program, such as appeals of the initial quota allocation determinations and subsequent decisions regarding quota transfers;
- NMFS hired 20 additional staff (16 enforcement officers and 4 agents) to monitor the individual catch limits of the more than 3,000 halibut fishermen who now, with an 8-month fishing season, could land their catch at any one of more than 35 ports along the coasts of Alaska, Oregon, and Washington; and
- the International Pacific Halibut Commission, which conducts halibut stock assessments and annually establishes halibut catch limits by geographic area, determined that the IFQ program’s extended season increased the resources needed for the U.S. portion of its halibut sampling program.

In contrast to the halibut and sablefish program, fishery managers reported that overall management costs for the surfclam and ocean quahog fisheries were lower following the implementation of the IFQ program. Fishery managers primarily attributed the lower costs to the simplicity of the IFQ program as compared with the previous management system. Before the IFQ program, the fisheries were managed through a combination of tools, such as minimum size limits for harvested clams; annual and quarterly quotas; and, in the case of surfclams, fishing time restrictions. Fishery managers said that the pre-IFQ time management system, which required NMFS to set and monitor an allowable fishing time for each vessel in the fishery, was very labor-intensive for the Mid-Atlantic Council and the following offices: NMFS Sustainable Fisheries, NMFS Enforcement, NOAA Northeast Regional Counsel, and NOAA Northeast General Counsel for Enforcement and Litigation.
Further, as overfishing continued, the length of time each vessel was allowed to fish was repeatedly reduced until, by the mid-1980s, it had decreased to six 6-hour trips per fishing quarter. According to NMFS officials, the continual changes in policy required NMFS to spend significant staff time monitoring the status of the fishery, as well as drafting revisions to fishery regulations. After implementation of the surfclam/ocean quahog IFQ program, fishery managers reported that the amount of management time the council and NMFS spent on the surfclam and ocean quahog fisheries decreased dramatically. For example, council staff estimated that the IFQ program reduced the amount of time they spent on surfclam/ocean quahog activities from 3 or 4 staff-years annually to less than half a staff-year during fiscal year 2003. This decrease occurred because the surfclam/ocean quahog population had stabilized, and fishery managers no longer had to micromanage the fisheries. In addition, NMFS officials reported that enforcement costs were substantially lower after implementation of the surfclam/ocean quahog IFQ program. Before IFQ implementation, enforcement under the time management system required the use of Coast Guard boats and helicopters to monitor boats for compliance with their fishing time restrictions. Enforcement also required monitoring offloads to ensure that minimum clam sizes were being met. With the implementation of the IFQ program and its reliance on individual catch limits, NMFS changed its enforcement efforts from the costly at-sea monitoring of boats to monitoring the amount of clams coming ashore and making sure all landings were reported accurately.
The council and NMFS generally believe that the surfclam/ocean quahog fisheries are ideally suited to dockside enforcement because the fisheries have a small number of vessels that can offload their clam cages only at docks with cranes and sell their product to one of a few processors with a canning facility. For this reason, fishery managers said that the surfclam and ocean quahog fisheries required substantially less enforcement effort than before the IFQ program was implemented. According to fishery managers, the fishery councils and NMFS incurred additional costs associated with developing and implementing the halibut and sablefish and surfclam/ocean quahog IFQ programs. IFQ program development, which includes developing the fishery management plan and the regulations and infrastructure to implement it, was time-consuming and costly for fishery management councils and NMFS because of the complexity and controversy of designing a fishery program based on individual quota shares and the need to develop infrastructures to manage the program. In addition to development costs, NMFS reported that it also incurred additional implementation costs during the initial years of the halibut and sablefish and surfclam/ocean quahog IFQ programs, as fishery managers and participants adjusted to a new management system. Both the fishery management councils and NMFS incurred additional costs during the development phase of the halibut and sablefish and surfclam/ocean quahog IFQ programs, according to fishery managers. As shown below, staff from the North Pacific and Mid-Atlantic Councils—the councils responsible for the halibut and sablefish and surfclam/ocean quahog fisheries, respectively—said that the costs the councils incurred annually to develop the IFQ programs were much higher than the annual costs they now incur to monitor and review the programs. 
North Pacific Council staff estimated that the council devoted 25 percent of its staff time and 20 percent of its budget to the development of the halibut and sablefish IFQ program for 3 years until the program was adopted in 1991. In contrast, they said the council spent less than 10 percent of 1 staff-year on management activities related to the halibut and sablefish program during fiscal year 2003. Mid-Atlantic Council staff said that it took the equivalent of about one full-time council staff member between 2 and 3 years to develop the fishery plan amendment that created the surfclam/ocean quahog IFQ program. In contrast, they estimated that they spent about 40 percent of 1 staff-year on the program during fiscal year 2003. Similarly, NMFS reported incurring the following additional costs during the development phase of both IFQ programs. NMFS Sustainable Fisheries staff estimated that it took the equivalent of two and one-half staff almost 2 years to write the regulations for the halibut and sablefish IFQ program, significantly more than the time it now spends annually on program regulations. A NOAA Northeast Regional Counsel attorney estimated that providing legal input on the development of the surfclam/ocean quahog program required 30 to 50 percent of one attorney’s time, in contrast to the 5 percent of one attorney’s time spent on the IFQ program during fiscal year 2003, because the surfclam/ocean quahog IFQ program raised legal issues that NMFS had not previously addressed. NMFS Restricted Access Management officials estimated that over a 6-month period, they devoted the equivalent of four full-time staff, in addition to supervisory and clerical staff, to the halibut and sablefish quota application and allocation process.
NMFS Restricted Access Management officials also said the Alaska Region spent over $1.2 million on personnel, contractual services related to the establishment of computer technology, and the computerized transaction terminals used to record halibut and sablefish IFQ landings. NMFS Law Enforcement officials estimated that NMFS spent about $2 million during fiscal year 1994 to hire and train 16 new enforcement officers and 4 agents for the halibut and sablefish program and to establish an enforcement presence in a variety of ports around the state of Alaska and the Pacific Northwest. In addition to development costs, NMFS also reported incurring additional implementation costs during the initial years of the halibut and sablefish and surfclam/ocean quahog IFQ programs. According to fishery managers, management costs for the halibut and sablefish IFQ program were higher during its first years as NMFS and industry adjusted to the new program. For example, as shown below, NMFS incurred additional costs in the areas of adjudicating appeals, learning and enforcing new program rules, and handling many minor legal issues related to the halibut and sablefish IFQ program. A NMFS official from the Alaska Region’s Office of Administrative Appeals said the costs associated with appeals from industry related to quota were much higher during the initial years of the halibut and sablefish program than they are today. By the end of the program’s second year, for example, NMFS had received 170 appeals, requiring the equivalent of five or six full-time staff, whereas the region currently receives just one or two appeals each year. According to NMFS enforcement data, staff in the Alaska Division of NMFS’s Office for Law Enforcement spent almost twice as much time on IFQ activities during the first year of the IFQ program as during the program’s second year.
NMFS officials said that in addition to their customary enforcement activities, agents and officers spent a significant amount of time learning new policies and procedures for enforcing IFQ program rules. In addition, the number of written warnings and summary settlements increased from 192 in 1994 to 404 in 1995, the first year of the IFQ program, and then dropped to 260 in 1996 as industry adjusted to the new program rules. Attorneys from NOAA’s Alaska General Counsel for Enforcement and Litigation reported that they received many minor cases resulting from participant misunderstandings about program rules. Also, attorneys needed time to develop their knowledge and familiarity with IFQ case management. As the program matured, however, the number of violations declined, and attorneys became more skilled at handling IFQ violations. Over time, enforcement attorneys have also been able to reduce their workload by handing over clear-cut violations to NMFS enforcement officers for resolution by summary settlement. As a result, the amount of enforcement attorney time spent on the IFQ program has decreased. The surfclam/ocean quahog IFQ program incurred additional costs in several management areas during implementation but also experienced some cost reductions in others. For example, program managers reported that learning to manage transfers and leases of quota shares was very time-consuming for NMFS staff, particularly because the program was the first one with transferable quotas in the country. In addition, management of quota allocations and annual distribution of cage tags was time-consuming until NMFS officials developed a more efficient procedure for producing and distributing tags. A NMFS official estimated that during the program’s first years, these activities required the time of two Sustainable Fisheries’ staff during the first month of each year and 25 percent of their time for the remainder of the year. 
While some offices incurred additional costs during initial program implementation, NOAA Regional Counsel staff said that they spent considerably less time on the surfclam/ocean quahog fisheries once the IFQ program was implemented. Also, in contrast to the halibut and sablefish IFQ program, there were very few appeals of the initial quota allocation, because the allocation was based on landings and vessel ownership data that already had been recorded. For this reason, according to NOAA Northeast Regional Counsel, it was difficult for fishermen to contest the validity of these data. In 1996, the Magnuson-Stevens Act was amended by the Sustainable Fisheries Act, requiring NMFS to collect a fee to recover the “actual costs directly related to the management and enforcement of any individual fishing quota program” and limiting the fee to 3 percent of the ex-vessel value of the fish harvested. Further, the amendment prohibited NMFS from collecting such fees in the surfclam/ocean quahog and wreckfish fisheries until after January 1, 2000. NMFS implemented cost recovery for the halibut and sablefish program in 2000, 5 years after the IFQ program became operational. However, at the time of our review, NMFS had not implemented cost recovery for the surfclam/ocean quahog and wreckfish IFQ programs. According to NMFS officials, they had not recovered surfclam/ocean quahog or wreckfish management costs as required under the act, because (1) cost recovery has not been a priority for the surfclam/ocean quahog program and (2) very few people were fishing wreckfish, and they believe that recovering program management costs would be an economic burden for these fishermen. Although NMFS is recovering some costs for the halibut and sablefish program, it may not be recovering full costs associated with the program. 
The Magnuson-Stevens Act does not define “actual costs directly related to the management and enforcement” of an IFQ program, and the legislative history is also silent as to the meaning of this term. However, NMFS has interpreted the term to be limited to the costs that would not have been incurred but for the IFQ program (i.e., the incremental costs). Under this interpretation, at the end of each fiscal year, offices in NMFS’s Alaska Region, including Restricted Access Management, Sustainable Fisheries, and Law Enforcement, as well as the International Pacific Halibut Commission, submit their incremental cost estimates to the Restricted Access Management office. The Restricted Access Management office uses these estimates and the total ex-vessel value of the two fisheries to calculate an annual fee to be levied on halibut and sablefish program participants. NMFS relies on cost estimates provided by these various offices because it does not systematically track the costs of IFQ programs or the time spent on IFQ activities. NMFS officials told us that developing the cost estimates is challenging because most staff work on more than one program at a time, and it is difficult to isolate the costs attributable to the IFQ program. While NMFS requests cost estimates for nine budget categories—personnel compensation, personnel benefits, travel, transportation, rent, printing, other contractual services, supplies, and equipment—NMFS does not have a standard procedure for estimating these costs. Instead, each organization develops its cost estimates independently using its own methodology. For example, the Restricted Access Management office prepares year-end estimates of the amount of time each staff person spent on IFQ work, an average percentage of all staff time spent on IFQ work, and a percentage of its overhead costs to be charged to the IFQ program. 
In contrast, the International Pacific Halibut Commission prepares its incremental cost estimates by adjusting the U.S. portion of its pre-IFQ (1994) costs upward by 5 percent per year and then subtracting that amount from the U.S. portion of the commission’s total annual costs. Nonetheless, NMFS officials believe that their cost estimates represent the best available information on the incremental costs of the IFQ program. Applying the “incremental costs” definition and using the cost estimates submitted by the various offices, NMFS reported recovering about $3.2 million in halibut and sablefish IFQ program costs for fiscal year 2003. However, “actual costs directly related to” an IFQ program can also be interpreted to mean full costs. Under a “full cost” approach, NMFS could have recovered more than the $3.2 million it recovered for fiscal year 2003. For example, NMFS could have recovered the costs associated with the sablefish stock assessment, which would be done regardless of whether or not the fishery was managed under an IFQ program. It also could have recovered the IFQ-related costs of the North Pacific Fishery Management Council and the U.S. Coast Guard, which perform activities needed to manage the halibut and sablefish IFQ program. Several methods are used for sharing IFQ management costs between government and industry; each method has advantages and disadvantages. These methods principally fall into three categories—user fees, quota set-asides, and devolution of services from government to industry. Sharing costs between government and industry can help alleviate concerns about fishery management costs and the equity of giving away a public resource in the form of individual fishing quota to a select group of beneficiaries. Table 2 shows the types of cost-sharing methods used in selected countries that manage fisheries under individual fishing quotas.
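The two calculations described above (the commission's incremental-cost estimate and the capped cost recovery fee) can be sketched in a few lines. This is a hedged illustration only: the numeric inputs are hypothetical and the function names are ours, not NMFS's; the 3 percent cap is the Magnuson-Stevens Act limit discussed in the report.

```python
# Sketch of two cost recovery calculations described in the report.
# All inputs below are illustrative, not actual NMFS or IPHC figures.

def iphc_incremental_cost(total_annual_cost, pre_ifq_1994_cost, year):
    """IPHC method: grow the 1994 (pre-IFQ) cost baseline by 5 percent
    per year, then treat anything above that baseline as IFQ-related."""
    baseline = pre_ifq_1994_cost * 1.05 ** (year - 1994)
    return max(total_annual_cost - baseline, 0.0)

def fee_rate(recoverable_costs, ex_vessel_value, cap=0.03):
    """Annual fee rate: recoverable costs spread over the fleet's
    ex-vessel value, capped at 3 percent per the Magnuson-Stevens Act."""
    return min(recoverable_costs / ex_vessel_value, cap)

# Hypothetical commission costs for 2003 against a $1.0 million 1994 baseline.
incr = iphc_incremental_cost(total_annual_cost=1_800_000,
                             pre_ifq_1994_cost=1_000_000, year=2003)

# Fee rate implied by the report's 2003 halibut/sablefish figures.
rate = fee_rate(recoverable_costs=3_200_000, ex_vessel_value=236_500_000)
print(round(incr), round(rate, 4))
```

Because the reported recoverable costs were well under 3 percent of ex-vessel value, the cap does not bind in this example; it would bind only in a low-value or high-cost year.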
Under the user fee method, government recovers costs by collecting a fee from those who benefit from using the resource. In the case of an IFQ program, the beneficiary is generally the quota holder or fisherman. Among the advantages, user fees promote equity, because they distribute management costs to those who benefit from having exclusive access to a public resource. Further, government can select the method for collecting fees that best reflects the extent to which program participants have benefited. For example, in the Alaskan halibut and sablefish IFQ program, fishermen pay their fees after the fishing season closes on the basis of the amount of fish caught. Fishermen who have not caught any fish do not pay a fee. By collecting fees after the end of the season, government also has better cost information for the program. Charging fees also creates an incentive for users to evaluate which management services have benefits that exceed their costs and communicate this information to government. Among the disadvantages, user fees directly affect a fishing firm’s profitability and its ability to compete. In cases where participants pay a flat fee regardless of the extent to which they benefit from using the resource, user fees could be disproportionately borne by the smaller fishing firms. Also, user fees have administrative costs to government for determining the total amount of recoverable costs, as well as for billing, tracking, collecting, and enforcing the fee payments of each individual quota holder or fisherman. User fee programs that base their fees on ex-vessel value may require additional recordkeeping. In the United States, for example, NMFS must keep records on IFQ fish prices and IFQ landings by species, month, and port in order to calculate the annual fee charged for halibut and sablefish IFQ management costs. Several countries recover IFQ management costs through user fees. 
However, the features of each user fee program vary by which costs are recovered and how fees are assessed. As previously discussed, in the United States, NMFS collects fees to recover the incremental costs of the Alaskan halibut and sablefish IFQ program, and it does not recover stock assessment costs. In contrast, other countries, such as Australia and New Zealand, do not limit recovery to incremental costs. Australia recovers all domestic commercial fisheries’ licensing, data management, and logbook management costs; 50 percent of monitoring and enforcement costs; and 80 percent of research and data collection costs, which include stock assessment research. New Zealand recovers all research, compliance, and administrative costs. Moreover, both Australia and New Zealand, unlike the United States, base their fees on the amount of quota shares an individual holds, with no limit on the amount of the fee charged. Under the quota set-aside method, the government sets aside (i.e., does not allocate) a certain amount of quota each year, leases it to fishermen, and then uses the revenue to pay for IFQ program management costs. An advantage of the quota set-aside method is that it does not necessitate the collection of fees from each quota holder, thus avoiding late or nonpayment concerns and reducing collection costs to government. Another advantage of quota set-asides is that government eliminates the possibility that those who do not pay their fees might continue to benefit from the public resource. A disadvantage of the set-aside method is that if the value of the quota is too low, the government may not raise enough funds to cover the IFQ program’s management costs. Therefore, government needs to accurately estimate the value of the quota for the upcoming season and the cost of managing the fishery when determining the amount of quota to withhold. 
Canada uses a method similar to a quota set-aside (known as quota reallocation) to collect costs of its halibut IFQ fishery. In that fishery, a portion of each quota holder’s annual quota—not to exceed 15 percent of the total allowable catch—is allocated to an industry association for redistribution. The original quota holder has the right to lease back his or her shares. If he or she declines, the industry association makes the shares available for purchase by other quota holders. In either case, the representative industry association uses the revenue raised from the quota reallocation to defray the costs of the halibut IFQ program. Under the devolution of services method, responsibility for providing selected fishery management services is transferred to the fishing industry. Since government is no longer responsible for providing some fisheries management services, industry must obtain these services and pay for them itself. Even though responsibility for making some fishery management decisions is devolved to industry, government must ensure that industry acts in accordance with government standards and specifications and complies with program rules. This approach could also reduce concerns about potential government inefficiencies in providing such services. Also, devolving services to industry means that the government can avoid future investments in fisheries management infrastructure, such as computer systems to track individual catch amounts. Regarding disadvantages of devolving services to industry, government may be further removed from enforcement, making it a greater challenge to ensure that industry is complying with the program rules. Also, devolving services may raise legal concerns regarding who is ultimately responsible should a service fail to be provided. Another disadvantage is that government could face some resistance from industry when it wants to change program rules. 
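The sizing problem behind a quota set-aside can be illustrated with simple arithmetic: the government must withhold enough quota that expected lease revenue covers management costs, without exceeding the program's ceiling. The sketch below is hypothetical (function, figures, and units are ours); the 15 percent ceiling mirrors the cap in the Canadian halibut program described above.

```python
# Hypothetical sketch of quota set-aside sizing. If the expected lease
# price is too low, the required set-aside exceeds the program's cap and
# the method cannot cover management costs (the disadvantage noted above).

def setaside_share(management_cost, total_allowable_catch_kg,
                   expected_lease_price_per_kg, cap=0.15):
    """Fraction of the total allowable catch to withhold so that lease
    revenue covers management costs, subject to the program's cap."""
    needed_kg = management_cost / expected_lease_price_per_kg
    share = needed_kg / total_allowable_catch_kg
    if share > cap:
        raise ValueError("lease price too low: set-aside would exceed cap")
    return share

share = setaside_share(management_cost=500_000,
                       total_allowable_catch_kg=5_000_000,
                       expected_lease_price_per_kg=2.0)
print(f"{share:.1%}")  # 5.0% of the TAC withheld
```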
Both New Zealand and Canada have devolved some of their IFQ management responsibilities to industry. In New Zealand, the government has devolved responsibility for certain services to industry, including maintaining the quota share database, registering quota shares, monitoring landings data for compliance with quota limits, and issuing permits, while retaining responsibility for developing standards, specifications, and regulatory proposals. In Canada, the government provides a baseline of fishery management services, but it has devolved to industry the responsibility for hiring and paying for government-certified at-sea and dockside observers to monitor fishing activities. Canada also gives industry associations the option to select and pay the government for additional fishery management services through service contracts. Canada currently has 15 service contracts with industry, including several involving IFQ programs. IFQ programs bring special benefits to quota holders, who receive exclusive access to a public trust resource. With the enactment of the Sustainable Fisheries Act, NMFS is required to recover actual costs directly related to the management and enforcement of all IFQ programs. While NMFS recovers some costs for the halibut and sablefish IFQ program, it does not recover any management costs for the surfclam/ocean quahog and wreckfish IFQ programs. Such a situation not only raises concerns regarding noncompliance with the law, but it also raises concerns about fairness because a select group of beneficiaries is receiving exclusive access to a public resource without compensation to the public. Also, quota holders in the halibut and sablefish fisheries are paying fees, while quota holders in the surfclam/ocean quahog and wreckfish fisheries are not. Moreover, because NMFS does not provide guidance on how to estimate costs for IFQ programs, each organizational unit with IFQ-related costs uses its own methodology to estimate recoverable costs. 
Without a standard cost estimation process, NMFS has no credible basis for knowing whether it is charging the appropriate fees and whether it is recovering all required costs. Finally, since the Magnuson-Stevens Act does not define “actual costs directly related to the management and enforcement” of an IFQ program and NMFS has interpreted the term to mean incremental costs, NMFS may be recovering fewer costs than the Congress intended. Another interpretation, that is, a “full cost” approach, could result in greater cost recovery by NMFS. If the Congress would like NMFS to recover other than incremental costs, it may wish to clarify the IFQ cost recovery fee provision of the Magnuson-Stevens Act. To comply with the cost recovery requirements of the Magnuson-Stevens Act, we recommend that the Secretary of Commerce direct the Director of NMFS to take the following two actions: implement cost recovery for all IFQ programs and develop guidance regarding which costs are to be recovered and, when actual cost information is unavailable, how to estimate these costs. We provided a draft copy of this report to the Department of Commerce for review and comment. We received a written response from the Under Secretary of Commerce for Oceans and Atmosphere that includes comments from the National Oceanic and Atmospheric Administration (NOAA). Overall, NOAA stated that our report was well researched and presented, and was responsive to the specific request made by the Congress. NOAA agreed with our recommendation to implement cost recovery for all IFQ programs. NOAA agreed that the IFQ cost recovery provision of the Magnuson-Stevens Act applies to all IFQ programs. NOAA said that it would work with the Mid-Atlantic and South Atlantic Fishery Management Councils on adding cost recovery to the surfclam/ocean quahog and wreckfish IFQ plans. 
It also said that the costs of collecting these fees should be taken into account when determining whether cost recovery is required in a particular IFQ fishery. To that end, NOAA suggested that we may want to recommend that the Congress consider adding a rule exempting IFQ programs from the cost recovery requirement if those costs fall below some reasonable threshold. Since the scope of our work did not include an evaluation of the cost recovery provisions of the Magnuson-Stevens Act, we believe that it would be premature to make a recommendation to the Congress at this time. NOAA also agreed with our recommendation to develop guidance regarding which costs are to be recovered and, when actual cost information is unavailable, how to estimate these costs. Specifically, it said that NOAA will develop guidance on how to identify activities directly attributable to an IFQ program and on how the costs associated with these activities can be measured. NOAA also raised some questions about specific issues covered in the report. For example, NOAA suggested that we should have looked at the net benefits of IFQ programs and the circumstances and general cost recovery policies in selected foreign countries, but doing so was beyond the scope of our work. Also, NOAA believes that the recovery of incremental costs is more consistent with the requirements of the Magnuson-Stevens Act than an interpretation requiring the recovery of full costs. Because the act does not define “actual costs directly related to the management and enforcement” of an IFQ program, which we believe can be interpreted in more than one way, our report suggests that the Congress may wish to clarify this provision if it would like NMFS to recover other than incremental costs. NOAA’s specific comments and our detailed responses are presented in appendix IV of this report. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees, the Secretary of Commerce, and the Director of the National Marine Fisheries Service. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841 or Stephen Secrist at (415) 904-2236. Key contributors to this report are listed in appendix V.

This is the third in a series of reports on individual fishing quota (IFQ) programs requested by the Chairman and Ranking Minority Member of the former Subcommittee on Oceans, Fisheries, and Coast Guard, Senate Committee on Commerce, Science, and Transportation. For this report, we reviewed domestic quota programs to (1) determine the costs of managing (i.e., administering, monitoring, and enforcing) IFQ programs and how these costs differ from pre-IFQ management costs; (2) determine what, if any, IFQ management costs are currently being recovered by the Department of Commerce’s National Marine Fisheries Service (NMFS); and (3) assess ways to share the costs of IFQ programs between government and industry. The term “individual fishing quota” as used in this appendix includes individual transferable quota and individual vessel quota. For all three objectives, we visited locations in Alaska, Florida, Massachusetts, New Jersey, and South Carolina. We selected these sites to obtain broad geographic coverage for the three domestic IFQ programs.
In these locations and elsewhere, we interviewed agency officials at the headquarters office of NMFS as well as its Northeast, Southeast, and Alaska regional offices; representatives of the Gulf of Mexico, Mid-Atlantic, North Pacific, and South Atlantic Fishery Management Councils; representatives of the International Pacific Halibut Commission; officials at the headquarters office of the U.S. Coast Guard and the 1st, 7th, and 17th Districts; officers from the Alaska State Troopers and the New Jersey Division of Fish and Wildlife; and others. We also visited ports in Juneau, Homer, and Seward, Alaska, and Point Pleasant and Wildwood, New Jersey, where we observed offloads of IFQ fish. To determine the costs of managing IFQ programs, because NMFS does not systematically track this information, we developed a data collection instrument and asked organizations that perform IFQ-related activities to provide information on their IFQ-related costs for fiscal year 2003. For the halibut and sablefish IFQ program, the following organizations provided cost information: the Restricted Access Management Program and the Sustainable Fisheries Division of NMFS’s Alaska Region, the Alaska Division of NMFS’s Office for Law Enforcement, the International Pacific Halibut Commission, and the North Pacific Fishery Management Council. The following organizations did not provide cost information although we requested it: the National Oceanic and Atmospheric Administration’s (NOAA) Office of the Alaska Regional Counsel (information regarding IFQ-related legal activities) and NMFS’s Alaska Fisheries Science Center (information regarding the sablefish stock assessment). Although NOAA’s Office of General Counsel for Enforcement and Litigation, Alaska Region, provided estimates of staff hours spent on IFQ work, it could not provide the associated costs.
For the surfclam/ocean quahog IFQ program, the following organizations provided cost information: the Sustainable Fisheries Division, the Fishery Statistics Office, and the Information Resource Management of NMFS’s Northeast Region; NOAA’s Northeast Regional Counsel; the Northeast Division of NMFS’s Office for Law Enforcement; and the Mid-Atlantic Fishery Management Council. NMFS’s Northeast Fisheries Science Center did not provide cost information regarding the surfclam and ocean quahog stock assessments, although we asked it to do so. For the wreckfish IFQ program, the Constituency Services Branch of the Management, Budget and Operations Division of NMFS’s Southeast Region provided cost information, but the Southeast Division of NMFS’s Office for Law Enforcement (information regarding IFQ-related enforcement activities) and the South Atlantic Fishery Management Council (information regarding wreckfish management) did not. For all three IFQ programs, the U.S. Coast Guard could not provide any cost information because it does not track the costs associated with IFQ-related enforcement activities. Using the cost information received, we prepared estimates of the management costs incurred in fiscal year 2003 for each IFQ program. We obtained the views of fishery managers on how halibut and sablefish and surfclam and ocean quahog management costs changed after the two IFQ programs were implemented. We also obtained views and supporting information, where possible, on the costs incurred during the development and implementation of each IFQ program. To assess the reliability of the data we received, we interviewed officials most knowledgeable about each IFQ program and its probable costs. When we reviewed the data, they appeared reasonable, given differences among the programs. Consequently, we concluded that the reported data were sufficiently reliable for purposes of this report.
To determine what costs, if any, are currently being recovered by NMFS, we reviewed laws and regulations, including the Magnuson-Stevens Act and the Sustainable Fisheries Act and their legislative histories, which set out the cost recovery requirements for IFQ programs. We also interviewed NMFS officials and fishery council representatives to determine which IFQ programs are recovering management costs; what costs they are recovering; and, if costs are not being recovered, the reasons why. To assess ways to share the costs of IFQ programs between government and industry, we identified domestic and foreign programs that share IFQ costs between government and the fishing industry. We interviewed and obtained the views of government officials from the United States, Australia, Canada, and New Zealand and academicians on cost-sharing methods that are being used or could be used to share costs and their advantages and disadvantages. We also reviewed studies related to existing and potential cost-sharing methods. For purposes of this report, we did not examine foreign laws and regulations, relying instead on foreign fishery managers for the legal requirements of their programs and how they operated. We conducted our review from February through December 2004 in accordance with generally accepted government auditing standards.

This appendix describes the three IFQ programs in the United States. The term “individual fishing quota” as used in this appendix includes individual transferable quota. Surfclams and ocean quahogs are mollusks found along the East Coast, primarily from Maine to Virginia, with commercial concentrations off the Mid-Atlantic Coast. While ocean quahogs are found farther offshore than surfclams, the same vessels are largely used in each fishery. These vessels tow hydraulic clam dredges that extract clams from the ocean floor.
The catch is emptied into metal cages holding roughly 32 bushels, off-loaded at one of a small number of landing sites, and sold to processing facilities. Surfclams are used in strip form for fried clams and in chopped or ground form for soups and chowders. Ocean quahogs are used in soups, chowders, and white sauces. The fishery consists of a few large, vertically integrated firms, small processors, and independent fishermen. The surfclam fishery developed after World War II. When the surfclam fishery declined in the mid-1970s, the ocean quahog fishery arose as a substitute. Disease and industry overfishing led the Mid-Atlantic Fishery Management Council to develop a management plan for surfclams and ocean quahogs, the first such plan in the United States. Between 1977 and 1990, the council and NMFS used a variety of effort controls to limit the harvest to sustainable levels, such as restrictions on fishing times, areas fished, clam sizes, gear, vessels, who fished, and how fishing occurred. IFQs were established for the surfclam/ocean quahog fishery in 1990—the first IFQ program approved under the Magnuson-Stevens Act. The program was designed to help stabilize the fishery, reduce excessive investment in fishing capacity, and simplify the regulatory requirements of the fishery to minimize the government and industry cost of administering and complying with program requirements. Wreckfish are found in the deep waters far off the South Atlantic coast, primarily from Florida to South Carolina. They were first discovered in the southern Atlantic in the 1980s by a fisherman recovering lost gear. Wreckfish are fished by vessels over 50 feet in length using specialized gear. These vessels are used primarily in other fisheries. Within 3 years of the discovery of wreckfish, wreckfish landings increased to more than 3 million pounds, and the number of vessels used for wreckfish increased from 2 to 40. 
Because of concerns that the resource could not support unlimited expansion, the South Atlantic Fishery Management Council added wreckfish to the snapper-grouper fishery management plan and set the catch limit at 2 million pounds per year. The council developed an IFQ program for wreckfish in 1991. After the IFQ program was implemented in 1992, wreckfish landings declined rapidly, in part because of the difficulty and costs associated with fishing wreckfish in relation to their market value, and quota holders started participating in easier, less costly fisheries with higher market values. Today, the wreckfish fishing fleet is small, with only 2 vessels reporting wreckfish landings in 2003. Wreckfish are sold fresh or frozen as a market substitute for snapper and grouper. Pacific halibut and sablefish (black cod) are found off the coast of Alaska, among other areas. The fishing fleets are primarily owner-operated vessels of various lengths that use hook-and-line gear for halibut and hook-and-line or pot (fish trap) gear for sablefish. Some vessels catch both halibut and sablefish. The International Pacific Halibut Commission manages the halibut fishery under a treaty between the United States and Canada. The Halibut Commission adopts conservation regulations, such as seasons and area catch limits, which it forwards to the United States and Canada for approval. NMFS, in consultation with the North Pacific Fishery Management Council, has the authority to develop other regulations that do not conflict with the Halibut Commission’s regulations. Historically, there was no limit on the number of people who could participate in the halibut and sablefish fisheries, and, starting in the mid-1970s, the number of boats in these fisheries began to increase rapidly.
By the late 1980s, overcapitalization of the halibut and sablefish fleets led to seasons that lasted less than 2 days in some areas and a race for fish that put boats and fishermen at risk and resulted in gear loss, excessive bycatch of nontarget species, and poor product quality, among other things. In response to these conditions, the North Pacific Council developed an IFQ program that was implemented by NMFS in 1995. The program was designed, in part, to help improve safety for fishermen, enhance efficiency, reduce excessive investment in fishing capacity, and protect the owner-operator character of the fleet. The program set caps on the amount of quota that any one person may hold, limited transfers to bona fide fishermen, issued quota in four vessel categories, and prohibited quota transfers across vessel categories.

This appendix describes IFQ cost-sharing programs in Australia, Canada, and New Zealand. The term “individual fishing quota” as used in this appendix includes individual transferable quota and individual vessel quota. Australia’s fishing zone, the third largest in the world, supports many high-value fisheries. The gross value of Australia’s commercial fisheries production was an estimated AU$2.3 billion in fiscal year 2003. Australia introduced IFQs in the early 1980s and currently has at least 20 federal and state fisheries under IFQ management. These fisheries account for about 22 percent of the total value of Australia’s commercial fisheries. Australia began recovering fishery management costs in the mid-1980s as part of a governmentwide initiative to introduce user charges for government services. The fishing industry (i.e., fishing permit holders) pays for services that directly benefit fishermen, while the government pays for management activities that may benefit the general public.
According to an Australian government official, in commercial fisheries managed by the federal government, Australia recovers 50 percent of compliance costs, 80 percent of research and data collection costs, and 100 percent of all other management costs. The recoverable costs are collected through levies, license fees, and observer fees. The amount of the levy for each quota holder is generally based on the amount of quota held and the fishery’s budgeted costs for the year, with an adjustment made the following year if actual costs differ from the budgeted costs. In fiscal year 2003, the Australia Fisheries Management Authority, the government group that manages commercial fisheries, received AU$11.3 million from levies and license fees and AU$609,000 from observer and other fees. These fees are paid to the general treasury but are then transferred to the Australia Fisheries Management Authority to finance fisheries management costs.

Canada, the fifth largest exporter of fish and seafood products in the world, exported CA$4.7 billion worth of fish and seafood products in 2002. In the early 1990s, Canada started using IFQs to manage several of its commercial fisheries, including western Canadian sablefish, Pacific halibut, and groundfish. In an effort to eliminate its budget deficit and promote government efficiency, the Canadian government cut spending and made cost sharing with industry a priority in 1994. Under Canada’s system, fishermen pay three kinds of fees: an access fee to the government, a cost-sharing fee to industry associations, and observer fees to private companies. The access fee, paid to the Canadian government’s general treasury, is considered a form of rent to the government and Canadian people for the right to use a public resource. Canada’s Department of Fisheries and Oceans does not receive funding to support program delivery from this fee. Canada provides a baseline level of fishery management services at no cost to industry.
However, if fishermen want additional services, they must pay for them. Examples of additional services include adding enforcement officers, adding stock assessment reports, and running an IFQ program. Industry associations representing fishermen negotiate with the Department of Fisheries and Oceans on the costs to be shared to provide for the additional services. The associations then collect payments from the fishermen through various methods. For example, in the groundfish fishery, the association asks individual license holders to voluntarily contribute funds. For the halibut fishery, the industry association raises funds by setting aside a portion of the total commercial quota, not to exceed 15 percent, and leases it back to individual fishermen. The association then uses these funds to share IFQ program costs with the government. In addition to user fees and cost-sharing fees, fishermen pay observer fees. Canada requires fishermen to hire government-certified at-sea and dockside observers from the private sector to monitor fishing activities. Seafood is New Zealand’s fourth largest export, after dairy, meat, and forest products. In 2000, seafood exports were worth about NZ$1.43 billion and accounted for 90 percent of industry revenue. New Zealand introduced IFQs in 1986, and about 50 species are now managed under the IFQ system. New Zealand’s IFQ fish accounted for about 95 percent of the fishing industry’s value in 2003. A provision for cost recovery for fisheries and conservation services was added into fishing legislation in 1994 to enable the government to recover costs associated with the commercial fishing industry. Recoverable costs include conservation costs and costs that can be attributed to a beneficiary of the resource. Costs of services that also benefit the general public are not recoverable. The 1996 Fisheries Act encouraged government to give industry a greater role in the quota management system. 
As a result, since 2001, New Zealand has transferred, or devolved, responsibility to industry for specified services, while retaining responsibility for developing standards and specifications for industry to follow. Currently, New Zealand has devolved to industry responsibility for the quota registry system and collecting fishing activity information.

The following are GAO’s comments on NOAA’s written comments provided by the Under Secretary of Commerce for Oceans and Atmosphere in a letter dated February 11, 2005.

1. As NOAA acknowledged, we were asked to report on the costs of IFQ programs. An analysis of the net benefits of IFQ programs was beyond the scope of our work.

2. We noted several times in the report that management costs changed with IFQ implementation, in part, due to the characteristics of the fishery and the complexity of the program. We believe that we have given this point sufficient emphasis and, for this reason, we made no changes to the report.

3. We disagree with NOAA’s comments that the report exaggerates the problems of NMFS’s noncompliance with the cost recovery requirements of the Magnuson-Stevens Act. NOAA does not believe that noncompliance is a general problem because NMFS is recovering costs for the largest and costliest IFQ program. However, the act requires NMFS to recover the costs of all IFQ programs, regardless of their size and cost. Our report title reflects our finding that NMFS is only recovering costs for one of the three programs. Not only does such a situation raise concerns regarding compliance with the law, it also raises concerns about fairness because halibut and sablefish quota holders are paying fees, while surfclam/ocean quahog and wreckfish quota holders are not. For these reasons, we made no changes to the report.

4. We disagree with NOAA’s comment that our report suggests that all IFQ management and enforcement costs should be recovered.
We said that the Magnuson-Stevens Act does not define “actual costs directly related to the management and enforcement” of an IFQ program. We also said that NMFS has defined the term to mean incremental costs and noted that there is another way to interpret costs, that is, full costs. We did not suggest that all IFQ management and enforcement costs should be recovered. Rather, we said that if the Congress would like NMFS to recover other than incremental costs, it may wish to clarify the IFQ cost recovery fee provision of the act. For this reason, we made no changes to the report.

5. Our report reviews different methods for sharing IFQ costs between government and industry in the United States as well as in other countries. We clarified that under U.S. law, the sole approach provided in the Magnuson-Stevens Act is user fees.

6. In our review of cost-sharing methods, we found that auctions were seen as an option for distributing quota shares and for other uses; they were not viewed as one of the principal methods for sharing IFQ costs. For this reason, we did not include auctions in our discussion.

7. The purpose of appendix III is to provide additional background information about cost-sharing programs for fisheries management in Australia, Canada, and New Zealand. We did not review the legal circumstances and options available to each country because an audit of each country’s cost-sharing program was beyond the scope of this report.

8. The scope of our work did not include an evaluation of the IFQ cost recovery provision of the Magnuson-Stevens Act. Therefore, we think that it would be premature to make a recommendation to Congress at this time.

In addition to those named above, Allen T. Chan, Nancy L. Crothers, Robert G. Crystal, Doreen S. Feldman, Curtis L. Groves, Julian P. Klazkin, Susan J. Malone, Keith W. Oleson, and Rebecca A. Sandulli made key contributions to this report.

Overfishing may have significant environmental and economic consequences.
One tool used to maintain fisheries at sustainable levels is the individual fishing quota (IFQ), which sets individual catch limits for eligible vessel owners or operators. This is GAO's third study on IFQ programs. For this study, GAO determined (1) the costs of managing (i.e., administering, monitoring, and enforcing) IFQ programs and how these costs differ from pre-IFQ management costs; (2) what, if any, IFQ management costs are currently being recovered by the National Marine Fisheries Service (NMFS); and (3) ways to share the costs of IFQ programs between government and industry. Fiscal year 2003 management costs varied considerably among IFQ programs. According to fishery managers, halibut and sablefish program costs were higher and surfclam/ocean quahog program costs were lower, when compared with pre-IFQ management costs. Although complete cost information was not available, GAO aggregated cost estimates from information provided by NMFS and other organizations involved in IFQ-related activities and estimated that fiscal year 2003 IFQ management costs were at least $3.2 million for the Alaska halibut and sablefish program, $274,000 for the surfclam/ocean quahog program, and $7,600 for the wreckfish program. While NMFS does not systematically track the costs of managing IFQ programs and does not have complete information on pre-IFQ management costs, fishery managers said management costs were greater under the halibut and sablefish IFQ program than under pre-IFQ management, in part, because of the IFQ program's complex rules. In contrast, fishery managers said costs were less under the surfclam/ocean quahog IFQ program than under pre-IFQ management, in part, because the simplicity of the program's design made it easier to monitor compliance. Moreover, according to fishery managers, NMFS incurred additional costs for the development and initial implementation of both programs. 
NMFS is not recovering management costs as required by the Magnuson-Stevens Act for two of the three IFQ programs. Under the act, as amended by the 1996 Sustainable Fisheries Act, NMFS is required to recover the "actual costs directly related to the management and enforcement" of all IFQ programs. NMFS has implemented cost recovery for the halibut and sablefish program, but it has not done so for the surfclam/ocean quahog or wreckfish programs. NMFS officials said that cost recovery for the surfclam/ocean quahog program has been a low priority and very few people were fishing wreckfish. Also, the Magnuson-Stevens Act does not define "actual costs directly related to the management and enforcement" of an IFQ program. NMFS has interpreted the term to mean those costs that would not have been incurred but for the IFQ program (i.e., the incremental costs). However, another way to interpret the term "actual costs directly related to" is full costs. Under a "full cost" approach, NMFS could have recovered more costs of managing the IFQ program. Several methods are used for sharing IFQ management costs between government and industry. These methods principally fall into three categories: user fees, quota set-asides, and devolution of services. Under user fees, government recovers costs by collecting a fee from the quota holder or fisherman. Under a quota set-aside, government can set aside (i.e., not allocate) a certain amount of quota each year, lease the set-aside quota to fishermen, and use the revenue to pay for program management costs. Finally, under devolution of services, management services previously performed by government, such as monitoring compliance with individual catch limits, are transferred to industry.
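The quota set-aside mechanism described above can be illustrated with a minimal sketch. All quantities are hypothetical except the 15-percent cap, which the report notes for Canada's halibut fishery; the function name is ours, not part of any actual program.

```python
# Illustrative sketch of a quota set-aside used to fund IFQ management
# costs. The quota amount and lease price are hypothetical; only the
# 15% cap comes from the report's description of Canada's halibut fishery.

SET_ASIDE_CAP = 0.15  # maximum share of the commercial quota set aside


def set_aside_revenue(total_quota_lbs, set_aside_fraction, lease_price_per_lb):
    """Revenue raised by leasing set-aside quota back to fishermen."""
    if not 0.0 <= set_aside_fraction <= SET_ASIDE_CAP:
        raise ValueError("set-aside fraction must be between 0 and the cap")
    return total_quota_lbs * set_aside_fraction * lease_price_per_lb


# Example: a 10-million-pound quota with 5 percent set aside, leased at
# $0.40 per pound, yields roughly $200,000 toward management costs.
revenue = set_aside_revenue(10_000_000, 0.05, 0.40)
```

Under this arrangement the fee is implicit: fishermen pay through forgone quota rather than a direct charge, which is one reason the report treats set-asides as a distinct category from user fees.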
In fiscal year 1996, students and their families used federal student loan programs to borrow approximately $30 billion to pay for postsecondary education. The William D. Ford Federal Direct Loan Program (FDLP) is one of two main approaches the federal government has taken to make loans available for college. Under this program, students or their parents borrow money directly from the government through the schools the students attend. The other major program, the Federal Family Education Loan Program (FFELP), provides loans through private lenders, and the federal government guarantees repayment if borrowers default. According to a Department official, FDLP accounted for about 32.1 percent of student loan volume in fiscal year 1996. Most FDLP borrowers can select one of four repayment options, as illustrated in figure 1. These four options differ by the amount of time allowed to repay loans and the flexibility of the payment schedule. The income contingent repayment (ICR) option is the most flexible. It allows borrowers to pay relatively small or no monthly payments when their incomes are low and to pay more when their incomes rise. For example, a married borrower with a loan balance of $20,000 and an annual family income of $15,000 would initially pay about $77 a month. If the borrower’s annual income were $45,000, the initial monthly payment would be about $225. (Figure 1: repayment terms range from a maximum of 10 years under the standard plan to a maximum of 25 years under ICR. Under ICR, monthly payments can be as low as $0; if the minimum payment does not cover the monthly interest, the unpaid interest is added to the principal balance for later repayment, and if the loan is not repaid after 25 years, the remaining balance is canceled, with the unpaid amount considered income for tax purposes.)

For our analysis, we classified FDLP loans into three main categories.

Direct nonconsolidated loans: These are the basic FDLP loans with which students or their parents can help finance postsecondary education. There are three kinds: subsidized and unsubsidized direct Stafford loans and direct PLUS loans.
Direct subsidized Stafford loans, available only to students with a demonstrated financial need, are subsidized in that the federal government does not charge interest while the student is in school at least half-time, during a 6-month grace period after the student graduates or otherwise leaves school, and during periods in which loan repayment is deferred (such as when the borrower is seeking but unable to find full-time employment). In contrast, direct unsubsidized Stafford loans, which are available to all students regardless of financial need, do not include an interest subsidy. If the borrower does not make interest payments while in school, the interest is added to the principal balance to be repaid as part of the total loan amount. Direct PLUS loans are available to parents of dependent students to help pay for their children’s education; they are unsubsidized because parents are responsible for paying all interest charges.

Direct consolidation loans: During the course of their education, students can obtain loans from more than one program. By obtaining a direct consolidation loan, borrowers can combine their loans and make only one monthly payment. Borrowers can consolidate their loans while they are in school or afterward, and the interest on their consolidation loans can be subsidized or unsubsidized, depending on the kind of original loans they consolidated. Borrowers in default on a student loan who have made satisfactory arrangements to repay the defaulted loan, or who agree to repay under the ICR plan, can also obtain direct consolidation loans. Parents with multiple PLUS loans can combine them into a single direct PLUS consolidation loan.

Debt Collection Service (DCS) consolidation loans: These are direct consolidation loans to borrowers who previously defaulted on their FFELP loans and whose loans were assigned to the Department’s DCS for collection.
In fiscal year 1995, the Department began to increase collections on defaulted FFELP loans by offering direct consolidation loans to these borrowers so they could make more affordable payments through the ICR plan. As shown in table 1, the vast majority (83.6 percent) of FDLP borrowers in repayment had nonconsolidated loans as of March 31, 1997. These borrowers represented about 69 percent of the total direct loan volume in repayment. However, borrowers with direct consolidation loans had average loan amounts that were much higher than those of the two other kinds of borrowers—$21,807 compared with $6,611 and $5,453. Such borrowers had more than 26 percent of total loan volume, even though they were only about 10 percent of all borrowers. As of March 31, 1997, slightly more than 56,000 borrowers in repayment were using ICR—about 9 percent of the total (see fig. 2). Collectively, these borrowers accounted for about $831 million in outstanding loans, or about 16 percent of the $5.3 billion of FDLP loans in repayment. Borrowers using the standard plan were the largest in number and loan volume among the four plans. However, the average size of their loans (about $6,530) was considerably smaller. By comparison, loans held by ICR users averaged about $14,770. Borrowers using the extended plan had the highest average balance (about $17,000). Borrowers using ICR differed from most other FDLP loan borrowers in repayment in several important ways. More than half (51 percent) were borrowers with direct consolidation loans (see fig. 3). In contrast, only about 8.5 percent of all borrowers in FDLP had such loans. Borrowers with direct consolidation loans held nearly 80 percent of total dollar volume of loans being repaid under ICR. Another large portion (about 42 percent) of borrowers using the ICR plan were those with DCS consolidation loans. 
However, these borrowers had relatively small average loan amounts ($6,100 compared with $23,000 for direct consolidation loans) and held only 17 percent of the total loan volume being repaid under ICR. Only about 7 percent of borrowers using ICR held nonconsolidated loans. Information on the kinds of schools that ICR users attended is limited to borrowers who had nonconsolidated loans. According to a Department official, the Department does not track repayment plan data by school for direct and DCS consolidation loans. Because students whose previous loans were combined into either a direct or DCS consolidation loan sometimes have attended more than one school, classifying loans by kind of school is difficult and not very meaningful. Data on FDLP borrowers with nonconsolidated loans show little relationship between the type of school attended and a borrower’s selection of ICR as a repayment plan. For the most part, there was little variation between the various repayment plans when compared by type of school, such as public and private or 2-year and 4-year. The data did show that borrowers from 2-year public schools were somewhat more inclined to select the ICR plan than were borrowers from other kinds of schools. However, since nonconsolidated loan recipients represented less than 10 percent of ICR users, it is unclear whether they were representative of ICR users as a whole. Across all four types of repayment plans, 14.4 percent of FDLP borrowers were delinquent and 1.7 percent were in default, according to the Department data in our analysis. (See table 2.) About 70 percent of borrowers were current on their loan payments, and another 13.7 percent were currently not paying because their payments had been postponed through statutorily provided deferment or forbearance procedures. The data we analyzed generated an understated percentage of loans in default because only defaulted loans in arrears for 181 to 270 days are included.
According to a Department official, loans in arrears for longer than 270 days had been transferred to the Department’s DCS and, therefore, data on these loans were not contained in the database we used for our analysis. This official said that, as of March 31, 1997, about $34.6 million in such defaulted loans had been transferred to DCS. Thus, when these defaulted loans are combined with the $71 million in loans that were in default for 181 to 270 days, the total of defaulted direct loans is about $105.6 million. It is important to note that the percentage of FDLP loans in default we computed in our analysis is different from the default rate the Department computes. There are two major differences. First, our computation of loans in default reflects only borrowers who have not made a payment for 181 to 270 days, but the Department’s default rates include borrowers who have not made a payment for more than 270 days. Second, the percentage of borrowers in default that we computed for FDLP is a simple percentage (number of borrowers in default divided by the total number of borrowers in repayment at a single point in time). In contrast, the Department’s default rates are computed for a cohort of borrowers over a period of time. (This is explained in app. I.) Compared with the three other repayment plans, the overall percentages of loans that were delinquent or in default under ICR were higher (see fig. 4). The delinquency rate among ICR users was 16.1 percent, and the percentage of loans in default was 5.0. By comparison, the next highest delinquency rate was 14.8 percent (for borrowers using standard repayment), and the next highest percentage of loans in default was 1.4 (also for borrowers using standard repayment). There appear to be two possible explanations for why borrowers using ICR, as a group, have overall higher delinquencies and defaults than borrowers using the other repayment plans.
First, borrowers with DCS consolidation loans are more heavily concentrated in the ICR plan than in the other repayment plans (53 percent, or 23,678, of the 44,407 DCS consolidation loan borrowers are using ICR), and, as we discuss below, borrowers with these loans have the highest percentage of loans that are delinquent and in default. Second, PLUS loan borrowers, who according to Department officials tend to have lower delinquency and default rates than student borrowers, are excluded from the ICR plan; their presence in the other plans could make those plans’ delinquency and default rates lower relative to ICR’s. Among borrowers using ICR, there is considerable variance in delinquency rates, depending on the type of loan (see fig. 5). Of the three categories of loans in repayment (nonconsolidated, direct consolidation, and DCS consolidation), the highest delinquency rate was for borrowers with DCS consolidation loans (about 19 percent). ICR users with direct consolidation and nonconsolidated loans had significantly lower delinquency rates (14.6 percent and 9.6 percent, respectively). Given that the majority (53.3 percent) of borrowers with DCS consolidation loans are ICR users, the overall higher delinquency rate for ICR compared with the other repayment plans could be partly the result of considerably greater involvement of DCS consolidation loan borrowers (borrowers who previously defaulted on FFELP loans) in the ICR plan compared with the other repayment plans. (See app. II.) A comparison of individual types of loans shows that ICR users do not have higher delinquency rates than users of all other repayment plans (see fig. 6). For example, for nonconsolidated loans alone, the delinquency rate among ICR users was below that among users of the standard plan and about the same as that among users of the extended and graduated plans. Even for DCS consolidation loans, ICR users had a lower delinquency rate compared with those in the three other plans.
However, with over half of DCS consolidation loans under the ICR plan, the influence of these loans’ high delinquency rate is felt primarily by ICR. FDLP loan default patterns are similar to those for delinquencies. Among borrowers using ICR, the percentage of loans in default is much higher for DCS consolidation loans than for nonconsolidated or direct consolidated loans (see fig. 7). ICR users who had DCS consolidation loans defaulted at a rate of 8.8 percent, compared with rates of 0.9 percent and 2.5 percent for ICR users with nonconsolidated and direct consolidation loans, respectively. Again, given the concentration of borrowers with DCS consolidation loans in the ICR plan, ICR’s overall high percentage of loans in default is strongly affected by this one type of loan. As with delinquencies, a comparison of individual types of loans shows that ICR users did not have higher percentages of loans in default across the board than users of other repayment plans (see fig. 8). However, ICR users did have the highest percentage of loans in default for two of the three loan types. There is no single answer to whether a borrower will pay more or less under ICR compared with standard, extended, or graduated plans. Borrowers for whom ICR was primarily designed (that is, borrowers with a limited ability to pay) could face relatively higher total payments in the form of larger total interest costs and tax liability—on amounts they were not able to repay within the 25-year loan repayment limit. In contrast, ICR may be less costly than the extended or graduated plans for borrowers with considerably greater ability to repay their loans. To provide some indication of how the type of repayment plan affects a borrower’s initial monthly payment amount and total loan payments, we compared the four plans with two different starting incomes—$15,000 and $45,000. 
This scenario assumes that (1) the borrower and spouse have an initial annual combined income at the beginning of the repayment period of $15,000 and receive annual income increases of 5 percent over the repayment period, (2) the borrower is married throughout with no children, and (3) the loan interest rate is 8.25 percent. Table 3 shows how the size of a loan affects the initial monthly payment amounts under the ICR plan compared with the other repayment plans. The initial monthly payments for a borrower using ICR are substantially less than the initial monthly payments for the other repayment plans for loans of $20,000 and higher. Although initial payment amounts under the other plans increase for larger loan amounts, the payments under ICR increase to a much lesser extent and stop increasing at loans above $20,000. Under ICR, borrowers’ payment amounts are capped at 20 percent of their discretionary income. Thus, a borrower with an income of $15,000 and $100,000 in loans would pay no more per month under ICR than a borrower with the same income and an initial loan amount of $20,000. The size of a borrower’s monthly payment has a direct effect on his or her total loan payments. Those payments include the amounts to repay principal and interest, and ICR users can also incur a cost for the potential tax liability on the loan balance that remains unpaid after 25 years. Unpaid loan balances are forgiven at the end of the 25-year period but must be treated as taxable income. Whether the lower income borrower under ICR actually pays more or less than borrowers using alternatives depends in part on the amount borrowed (see table 4). A borrower with an initial income of $15,000 and loans ranging from $5,000 to $10,000 would pay more under ICR than under the other plans. 
In contrast, a borrower with $40,000 or more in loans would repay far less under ICR than under the extended and graduated alternatives because under these two plans the borrower pays off the total loan; the borrower using ICR would not. However, a borrower using ICR must declare the unpaid balance of the loans as income and may therefore incur a tax liability. As loan amounts increase, the potential tax liability rises substantially for borrowers at this income level. The second scenario makes the same assumptions as the first, except that calculations are based on a starting income of $45,000. For these higher income borrowers, ICR does not provide the same monthly payment advantage over the other plans as it does for lower income borrowers. Initially, as table 5 illustrates, ICR has consistently lower monthly payments than the standard plan but higher monthly payments than the extended and graduated plans, except for loans at the $100,000 level. Table 6 compares total loan payments that a borrower and spouse with a starting combined income of $45,000 would pay under each of the four repayment plans for loan amounts ranging from $5,000 to $100,000. As it shows, payments for principal and interest under ICR are always higher than under standard repayment but always lower than under the extended or graduated plans. Unlike the borrower who begins with a $15,000 income, the borrower with an initial income of $45,000 has no unpaid balance after 25 years for any of the loan amounts illustrated. Information on borrower income for computing monthly payment amounts for the ICR plan is obtained either from documentation provided by the borrower or from IRS information on the borrower’s AGI as reported on his or her federal income tax return. The monthly payment amount for borrowers in their first year of repayment is based on documentation and other information submitted by borrowers to the Department’s direct loan servicing center.
This documentation, referred to as “alternative documentation of income,” can be recent pay stubs, dividend statements, canceled checks, or a statement signed by the borrowers explaining their source of income. According to a Department official, the Department uses alternative documentation for borrowers in their first year of repayment because, in most cases, AGI information from IRS is zero or close to zero. AGI reflects prior-year income when borrowers were generally in school or not working full time and were reporting little or no taxable income. However, most borrowers have income, and the alternative documentation captures it. This kind of documentation is also used in other situations when borrowers’ AGI does not reflect their current income, such as when a borrower becomes unemployed. According to Department officials, service center personnel do not conduct credit checks or contact employers to verify the accuracy of borrowers’ information. However, when borrowers submit this documentation, they also certify that they are providing accurate and complete income information. After ICR users have been out of school for at least 1 year, their monthly payment amount is based on their AGI as reported on their federal income tax returns. To obtain this information, the service center sends computer tapes containing borrower identification information to IRS, which matches this information against its records. IRS then sends computer tapes containing borrower AGI information directly to the service center. After receiving the IRS tapes, service center personnel run edit checks for quality assurance. According to Department officials, the Department does not verify the accuracy of the information the borrowers provide IRS on their tax returns. Rather, it relies on the IRS’ own audits, edits, and verifications to make sure borrowers’ AGI is accurate. 
However, other measures are taken in certain circumstances to ensure the accuracy and reasonableness of borrowers’ income information. For example, if a borrower is required to provide alternative documentation of income because his or her AGI would reflect an in-school period, the servicer still obtains AGI information from IRS to see how accurately borrower-reported information from the previous year reflected IRS-reported information for that year. According to Department officials, borrowers falsifying their income to reduce their monthly payments lengthen the time required to pay off their loans, which ultimately costs them more money. The officials also said that borrowers who do not cooperate in providing accurate income information are automatically removed from the ICR plan and placed into the standard repayment plan. The Department of Education reviewed a draft of this report and had no written comments, although it provided technical suggestions that we incorporated as appropriate. Copies of this report are being sent to the Chairman of the Senate Committee on Labor and Human Resources, the Secretary of Education, appropriate congressional committees and Members, and others who are interested. If you have any questions about this report, please call me or Joseph J. Eglin, Jr., Assistant Director, at (202) 512-7014. Major contributors to this report include Joan A. Denomme, Charles M. Novak, and Charles H. Shervey. To determine the extent to which borrowers are using the income contingent repayment (ICR) plan compared with other repayment plans, we obtained and analyzed data from the Department of Education on Federal Direct Loan Programs (FDLP) loans being repaid as of March 31, 1997. To determine the extent to which borrowers at the various kinds of schools used the different types of repayment plans, we obtained and analyzed data on nonconsolidated loans. 
Data on direct consolidation and DCS consolidation loans categorized by kind of school were not available. A Department official said that such data are not captured in the Department databases we used for our analysis. To determine the extent to which loans being repaid under ICR and other repayment plans were delinquent or in default, we computed simple percentages that reflect the proportion of total borrowers or dollar amounts of loans in repayment classified as delinquent or in default on March 31, 1997. The percentages we computed are not comparable to the annual cohort default rates the Department computes in accordance with the Higher Education Act of 1965, as amended, and its Default Reduction Initiative. The cohort default rate is computed to determine whether to allow schools to participate in federal student loan programs—schools with cohort default rates above certain statutory thresholds can be dropped or prevented from participating in these programs. In general, cohort default rates reflect the percentage of a school’s borrowers who enter repayment in one fiscal year and default by the end of the next fiscal year. To compare borrowers’ total payments under ICR and other repayment plans, we used information from selected hypothetical examples contained in the Department’s 1996 Repayment Book. Data on unpaid loan balances remaining at the end of the repayment period for loans being repaid under the ICR plan—for the various hypothetical scenarios we used—were not contained in the Repayment Book. Therefore, we asked the Department to compute these figures, and we used them in our analyses. To determine the extent to which the Department or its FDLP service center verifies the accuracy of borrowers’ income information, we reviewed Department regulations and guidelines. We also interviewed Department officials to obtain additional information on these procedures.
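The simple percentage described above can be sketched directly. This is an illustrative computation, not the Department's cohort methodology; the function name is ours, and the figures are drawn from this report (2,832 ICR users in default out of roughly 56,000 ICR users in repayment, and the $71 million plus $34.6 million in defaulted direct loans):

```python
# GAO's "simple percentage": borrowers in a given status divided by all
# borrowers in repayment on a single date. This is NOT the Department's
# cohort default rate, which follows a cohort of borrowers over time.

def simple_rate(in_status: int, total_in_repayment: int) -> float:
    return 100.0 * in_status / total_in_repayment

# ICR users as of March 31, 1997: 2,832 in default out of roughly
# 56,000 in repayment yields the reported rate of about 5 percent.
icr_default_pct = simple_rate(2_832, 56_000)

# Defaults of 181-270 days ($71 million) plus older defaults already
# transferred to DCS ($34.6 million) give the report's total of about
# $105.6 million in defaulted direct loans.
total_defaulted = 71_000_000 + 34_600_000
```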
Our work was conducted from February to June 1997 in accordance with generally accepted government auditing standards.

Appendix II contains the following tables (amounts in millions):
Table II.1: Repayment Plans Selected by Borrowers of All Kinds of FDLP Loans, as of March 31, 1997
Table II.2: Repayment Plans Selected by Borrowers With Nonconsolidated Loans, as of March 31, 1997
Table II.3: Repayment Plans Selected by Borrowers With Direct Consolidation Loans, as of March 31, 1997
Table II.4: Repayment Plans Selected by Borrowers With DCS Consolidation Loans, as of March 31, 1997

Pursuant to a congressional request, GAO reviewed borrowers’ use of the William D.
Ford Federal Direct Loan Program’s (FDLP) income contingent repayment (ICR) plan, focusing on: (1) the extent to which borrowers are using ICR compared with other repayment plans available under FDLP; (2) how loan delinquencies and defaults under ICR compare with delinquencies and defaults under other FDLP repayment plans; (3) how loan payments under ICR compare with payments under other FDLP repayment plans; and (4) how the Department of Education, which administers the program, verifies the accuracy of income reported by borrowers using ICR. GAO noted that: (1) as of March 31, 1997, about 663,000 borrowers owing about $5.3 billion in FDLP loans were repaying loans; (2) about 9 percent of these borrowers were using ICR; (3) GAO found that 80 percent of borrowers using ICR either were current in their monthly payments or had their payments suspended because they were in school or for other reasons; (4) borrowers using ICR tended to be delinquent or in default at higher percentages than borrowers using other repayment plans; (5) borrowers who have been placed into the ICR plan because they have defaulted on a Federal Family Education Loan Program (FFELP) loan are a major factor in the higher percentage of defaults for ICR users; (6) of the 2,832 borrowers using ICR and in default, 2,083, or 73.6 percent, had defaulted on an FFELP loan; (7) comparing estimated total loan payments for ICR users and borrowers who use the three other repayment plans is complicated; (8) compared with borrowers who use the standard repayment plan, ICR users and those using extended and graduated plans generally face higher total payments; (9) compared with borrowers who use the extended or graduated repayment plans, ICR users face comparatively higher total payments if their incomes are low but comparatively lower total payments if their incomes are high; (10) the Department of Education checks the reported income of borrowers using ICR in one of two ways; (11) for borrowers who are in
their first year of repayment or who may have recently lost their jobs, the Department relies primarily on documentation submitted by the borrower, such as pay stubs, dividend statements, or canceled checks; (12) the Department does not verify the accuracy of this documentation when it is submitted; rather, it relies on a signed certification from the borrower that the information is complete and accurate; (13) for borrowers who have been out of school for a year or more, the Department obtains income information directly from the Internal Revenue Service (IRS); (14) the Department does not verify the accuracy of information borrowers provide IRS but relies on IRS’ verification process; (15) however, during the transition from using borrower documentation to using IRS information, the Department compares the income amounts from the two sources for discrepancies; and (16) if there are significant discrepancies or if borrowers do not cooperate in providing correct income information, they are removed from the ICR plan and placed into another repayment plan.
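The ICR dynamic the report describes (a low-income borrower with a large loan can reach the 25-year limit with an unpaid balance that is forgiven but taxable) can be sketched with a simplified amortization loop. The 20-percent-of-discretionary-income cap, the 8.25 percent interest rate, and the 5 percent annual income growth come from the report's scenario; the $10,000 income exclusion and the once-a-year payment adjustment are illustrative assumptions of ours, not the Department's actual ICR formula:

```python
# Simplified sketch of capped income contingent repayment. When the
# capped payment is below monthly interest, the balance grows
# (negative amortization) and an unpaid balance can remain at year 25.

def remaining_balance(principal: float, start_income: float,
                      years: int = 25, rate: float = 0.0825,
                      income_growth: float = 0.05,
                      exclusion: float = 10_000.0) -> float:
    balance = principal
    income = start_income
    monthly_rate = rate / 12
    for _ in range(years):
        # Assumed monthly cap: 20% of (income - exclusion), spread over 12 months.
        cap = max(income - exclusion, 0.0) * 0.20 / 12
        for _ in range(12):
            balance += balance * monthly_rate   # accrue interest
            balance -= min(cap, balance)        # pay up to the cap
            if balance <= 0:
                return 0.0                      # loan fully repaid
        income *= 1 + income_growth             # 5% annual raise
    return balance                              # unpaid (taxable) balance
```

Under these assumptions, a $15,000-income borrower with a $100,000 loan never catches up with interest and ends the 25 years owing more than the original principal, while a $45,000-income borrower retires a $5,000 loan within the first year.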
On September 14, 2001, President Bush proclaimed a national emergency in the wake of the September 11, 2001, terrorist attacks. In his proclamation, he said he would use various sections of Title 10 of the United States Code to mobilize additional forces. Section 12302, in particular, authorizes the President to call up National Guard and Reserve members to active duty for up to 2 years. Since September 2001, DOD has activated about 300,000 of the 1.2 million National Guard and Reserve personnel. As of October 8, 2003, about 166,000 Reserve and National Guard members remained on active duty. Some of the reservists were assigned to domestic military installations to provide, for example, base security. When reserve members are mobilized to serve on active duty at military installations in the United States, the installations where they serve arrange lodging for them. If lodging is not available on base, installations may provide activated reservists with Certificates of Non-Availability enabling them to acquire off-base lodging in the local area at prevailing GSA rates. Because of the size and length of the current mobilization, some installations, like MacDill Air Force Base, made arrangements with local hotels and apartment vendors to provide reservists with off-base lodging. The 6th Contracting Squadron at MacDill was responsible for developing the BPAs, and the 6th Services Squadron/Military Lodging was in charge of assigning reservists to available lodging. Because mobilized National Guard and Reserve personnel are considered to be in temporary duty status, their per-diem, travel, and transportation allowances are governed by DOD’s Joint Federal Travel Regulations. A per-diem allowance is designed to offset the cost of lodging, meals, and incidental expenses incurred by reservists while they are on travel status or on temporary duty away from their permanent duty station. 
DOD’s regulations state that within the continental United States, travelers are entitled to the per diem set by GSA for a particular location. Specifically, if a contracting officer contracts for rooms and/or meals for members traveling on temporary duty, the total daily amount paid by the government for the member’s lodging, meals, and incidental expenses may not exceed the applicable GSA per-diem rate. In December 2002, CENTCOM established plans for providing working quarters at MacDill for coalition partners supporting Operation Iraqi Freedom and titled the project Coalition Village II. The project was modeled after similar working quarters established at MacDill for coalition partners supporting the war on terrorism, Coalition Village I. Representatives from CENTCOM and Civil Engineering supported the 6th Contracting Squadron, which provides contracting support to MacDill’s base tenant units, in its efforts to establish Coalition Village II. The 6th Contracting Squadron is a part of the 6th Air Mobility Wing, which reports to the Air Mobility Command. The Air Mobility Command is a component of the United States Transportation Command. During the summer of 2003, public concerns were raised in the Tampa area about the practices used at MacDill to acquire off-base lodging for reservists and temporary office space for coalition partners in the war against Iraq. Specifically, these concerns questioned whether MacDill officials paid above-market rates for apartments; used competition in awarding BPAs for off-base lodging; and advertised for bids for lodging services. Questions were also raised about whether the contract providing office space for coalition partners supporting military operations in Iraq was adequately managed to avoid excessive costs. In order to reduce the cost of off-base lodging for 1,700 military personnel and reservists on short-term and long-term temporary duty, MacDill Air Force Base officials instituted two procedures. 
MacDill used BPAs as a flexible procurement method to obtain lodging at prices that were at or below the maximum allowable GSA rate of $93 per day for Tampa. MacDill also implemented installation guidance that required reservists at certain ranks to share two-bedroom apartment units, which further reduced costs on a per-person basis. MacDill officials estimate that these procedures saved about $12.6 million in off-base lodging costs in fiscal year 2003. Our review showed that the prices paid by MacDill were similar to those paid by corporate entities in Tampa for comparable lodging units, but were lower on a per-person basis due to lodging sharing arrangements. Our work showed that practices used at other military installations to provide off-base lodging varied but did not reveal any one approach that resulted in significantly greater cost savings than other approaches where shared lodging was required. Alternative approaches for obtaining off-base lodging, such as obtaining long-term leases for blocks of properties, could be considered but would require that various factors be weighed in considering their use. MacDill Air Force Base contracting officials used BPAs to acquire off-base lodging to handle the large influx of reservists who were mobilized following the September 11, 2001, terrorist attacks. According to the Federal Acquisition Regulation (FAR), a BPA is a simplified method of filling anticipated, repetitive needs for supplies or services by establishing “charge accounts” with qualified sources of supply. Air Force officials had used this method to acquire off-base lodging for several years. We have no basis to conclude that the Air Force’s use of BPAs was inconsistent with the FAR. MacDill contracting officials told us that these agreements provide them with greater flexibility than contracts would in arranging temporary lodging. BPAs permit either party to walk away from the agreement without a penalty.
The agreements allow federal travelers to use their government-issued travel cards to obtain lodging at hotels and apartments at reduced prices and favorable contract terms. The costs for reservists who do not have government-issued travel cards are billed to MacDill under a purchase order. MacDill officials indicated that they go through an established process to set up an agreement with an apartment vendor or hotel. The process begins when either MacDill contacts a lodging facility or a facility contacts MacDill. As part of this initial contact, MacDill schedules an inspection to ensure that the facility meets its cleanliness, safety, health, and fire standards. If the facility passes the inspection, MacDill sets up an agreement with the facility and lists the facility as a source of lodging for reservists at an agreed-upon daily rate. MacDill officials told us they review BPAs annually to ensure that their needs are still being met and to determine if the facility still meets standards. At the time of our review, MacDill had agreements with 35 vendors (29 hotels and 6 apartment providers) and was housing an average of about 1,700 personnel a day in off-base lodging facilities. Of these, about 900 were in hotels and 800 were in apartments. In September 2003, the prices that MacDill had obtained for hotel rooms ranged from $44 to $93 per person per day, and for apartment units from $55 to $93 per person per day (see table 1). The agreements with apartment vendors do not require security deposits and also allow reservists to leave earlier than their scheduled departure dates without paying penalties. Apartment rental officials told us that, in contrast, other apartment renters must give a 30-day notice before leaving or incur penalties, such as the loss of 1 month’s rent, forfeiture of the security deposit, or being held liable for the cost of the remaining term of the lease. The apartments acquired by MacDill are fully furnished. 
The daily rate for the apartment covers the cost of utilities, amenities (kitchenware, linens, vacuum cleaners, microwave ovens, and cable television service), and weekly maid service. Apartment vendors also do not charge reservists the 12 percent Florida tax for leases of less than 6 months, which private renters typically pay. In addition to using BPAs to procure off-base lodging for reservists, MacDill used installation-specific guidance on sharing lodging to further reduce off-base lodging costs in two-bedroom apartments. The guidance requires officers at or below the rank of Lieutenant Colonel and enlisted personnel at or below the rank of Chief Master Sergeant or Sergeant Major to share two-bedroom apartments. This practice allowed MacDill to achieve cost savings of up to 55 percent of the GSA rate (see table 1). For example, if two reservists were sharing a two-bedroom apartment that costs $93 per day, each would pay half of that amount, significantly less than the GSA daily rate of $93 per person. Of a total of 800 reservists housed in apartments, about 600 shared two-bedroom units. MacDill officials responsible for lodging operations told us that they try to place military personnel who are on temporary duty for 45 days or longer in apartments. This allows personnel to have access to cooking facilities, as well as more room than they would have in a hotel room. MacDill officials indicated that they consider two criteria in placing personnel in apartments: (1) whether personnel have access to transportation to get to the base and (2) whether they are compatible in terms of rank and gender to fill a vacancy in a two-bedroom apartment; if both criteria are met, officials randomly assign personnel to a unit. However, the officials also must consider such factors as security or the ability of a particular apartment complex to accommodate an entire reserve unit.
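The per-person effect of the sharing requirement is straightforward arithmetic; a minimal sketch using the report's Tampa figures (the function names are ours):

```python
# Per-person daily cost when reservists share a unit, and the fractional
# saving each occupant realizes against the maximum allowable GSA rate
# ($93 per person per day for Tampa, per the report).

GSA_RATE_TAMPA = 93.0  # dollars per person per day

def per_person_cost(unit_daily_rate: float, occupants: int) -> float:
    return unit_daily_rate / occupants

def saving_vs_gsa(unit_daily_rate: float, occupants: int) -> float:
    return 1.0 - per_person_cost(unit_daily_rate, occupants) / GSA_RATE_TAMPA

# The report's example: two reservists sharing a $93-per-day
# two-bedroom apartment pay $46.50 each, half the GSA daily rate.
shared_cost = per_person_cost(93.0, 2)
shared_saving = saving_vs_gsa(93.0, 2)
```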
Based on data that we received from MacDill lodging officials, the base spent about $23.3 million for 386,466 bed-nights in off-base lodging, including both short- and long-term stays, in fiscal year 2003. However, had MacDill paid the maximum allowable GSA rate of $93 per day for the same number of days, the costs would have amounted to $35.9 million. As a result, the installation reported that it saved an estimated $12.6 million in off-base lodging costs by using blanket purchase agreements and requiring apartment sharing. Of the $23.3 million spent in fiscal year 2003, MacDill paid about $13.9 million for apartment rentals and about $9.3 million for hotels. The estimated savings attributable to apartments is about $7.6 million, and about $5 million in savings is attributable to hotels. In our limited review of local rental prices in the Tampa area, we found that MacDill’s lodging costs were comparable with those paid by corporate entities for the same types of units but were higher than prices for typical furnished apartments cited in media reports. These reports compared MacDill’s apartment costs with the cost of furnished apartments that ranged, for example, from $1,820 to $1,880 per month ($60.66 to $62.66 per day) for a two-bedroom unit with maid service and utilities. In a search of Internet sites listing housing prices in the Tampa area, we found that individually furnished two-bedroom apartments ranged from $623 to $1,655 per month ($20.77 to $55.17 per day)—but typically would not include the full range of services obtained by MacDill. However, according to apartment brokers that we contacted in the Tampa area who provide services to corporate entities and private sector renters as well as MacDill, corporate-style facilities may be the most appropriate to compare to MacDill’s costs.
Corporate apartments offer essentially the same provisions as the apartments that MacDill obtains: they are fully furnished and the prices include amenities (i.e., kitchenware, linens, microwave ovens, vacuum cleaners, and cable television service), maid service, and utilities. The main difference is that corporate apartments generally require a minimum 3-month lease and a 30-day notice to break the lease while MacDill’s BPA arrangements do not require a minimum length of stay or have any penalties if reservists leave earlier than scheduled. We found that prices paid per unit by MacDill are comparable to those paid by corporate entities, but MacDill’s prices are generally much lower on a per-person basis due to lodging sharing arrangements. According to one apartment broker we interviewed, the price of a corporate apartment ranged from $46.50 to $114.60 per day. The price that MacDill pays for a similar apartment at the same complex ranges from $71 to $93 per day. Another apartment broker we contacted told us that the corporate rates for his apartments ranged from $76 to $100 per day, depending on the location of the apartment. The price that MacDill pays for a similar unit ranges from $71 to $93 per day, with the actual cost per person in both examples being lower depending on the number of occupants. Public concerns were raised about the absence of advertising and competition in creating BPAs to provide off-base lodging, suggesting that increased competition and advertising would help control costs. However, because a BPA is not a contract, competition and advertising were not required to establish these BPAs. In any event, while MacDill did not hold a competition or advertise for bids, it did establish BPAs with multiple vendors. According to MacDill officials, contracts over $25,000 require 15 days to advertise, 30 days for the vendor to respond, and 15 days to negotiate. 
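MacDill's reported fiscal year 2003 savings estimate can be reproduced from the figures above; a sketch (variable names are ours, all figures are from the report):

```python
# Savings = what 386,466 FY2003 bed-nights would have cost at the
# maximum allowable GSA rate ($93 per day) minus the roughly
# $23.3 million MacDill actually spent.

GSA_RATE = 93                  # dollars per person per day (Tampa)
BED_NIGHTS = 386_466           # FY2003 off-base bed-nights
ACTUAL_SPENT = 23_300_000      # approximate dollars actually paid

max_allowable_cost = GSA_RATE * BED_NIGHTS        # about $35.9 million
estimated_savings = max_allowable_cost - ACTUAL_SPENT  # about $12.6 million
```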
MacDill officials told us that they used BPAs because they could be arranged in a shorter time frame than solicited contracts. They stated that they were under extreme time pressures to acquire immediate housing in February 2003 when 325 reservists arrived at MacDill to provide force protection services. Other DOD installations that we contacted during our review either provided lodging for reservists on base or used similar practices to reduce off-base lodging costs. In the few selected instances where we identified the use of off-base lodging, housing officials used a variety of procurement methods (BPAs, contracts, and purchase orders) to obtain prices at or below the allowable GSA lodging rate for those locations. In addition, they required reservists to share hotel rooms and apartment units. However, our review did not identify any one approach that stood out as offering more significant cost benefits than other approaches where shared lodging was required. In general, the Army installations that we surveyed used purchase orders or requirements contracts to procure off-base lodging for temporary duty reservists. At the time of our review, Fort Bragg housed about 2,400 reservists off base. Fort Bragg had awarded contracts to 25 vendors (20 hotels and 5 apartment providers) to supply lodging for reservists and had spent an estimated $35 million between October 2002 and November 2003 for this lodging. The contracted lodging rates were at or below the maximum allowable GSA lodging rate of $63 per day for Fayetteville. Fort Bragg had also implemented an installation policy requiring reservists at the rank of sergeant and below to share hotel rooms as well as apartment bedrooms. This sharing resulted in average savings of up to 56 percent in relation to the GSA lodging rate (see table 3)—savings similar to those realized at MacDill. 
Although Fort Bragg used purchase orders immediately after September 11, 2001, the base switched to contracts to obtain off-base lodging soon thereafter to streamline the process. When they used purchase orders, for example, contracting officials had to issue a modification each time a reserve unit increased or decreased its numbers or changed its length of stay. Two full-time contracting specialists and one part-time contracting officer were needed to handle the paperwork. According to Fort Bragg officials, the change to contracts made the process more economical because contracts require less paperwork and less manpower to administer. Unlike MacDill’s BPAs, however, Fort Bragg’s contracts were based on the number of bedrooms being rented, irrespective of whether they were in a hotel or an apartment. Bedrooms were defined as single- or double-occupancy. Fort Bragg’s contracted rates were below the GSA lodging rate of $63 per day and ranged from $32 to $60 per day for single-occupancy rooms and from $20 to $30 per day for double-occupancy rooms (see table 3). Thus, at Fort Bragg, two reservists sharing a two-bedroom apartment with single-occupancy rooms could cost $60 per room or up to $120 per day. However, if the bedrooms were double-occupancy, up to four reservists could be housed for $120 per day. The contract terms required a 72-hour to 2-week notice to vacate the lodging unit earlier than scheduled. Fort Bragg’s sharing policy required enlisted personnel at the rank of sergeant and below to share rooms. When a bedroom was to be shared, Fort Bragg required that each reservist have sufficient space and be provided with a dresser or chest of drawers in the room. In contrast to Fort Bragg, Army officials at Fort Hood, Texas, and Fort Dix, New Jersey, told us that they were able to accommodate most of their temporary duty reservists on base. 
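The Fort Bragg occupancy arithmetic above can be sketched as follows. This is an illustrative calculation using the $60-per-room figure cited in the report; the resulting 52 percent figure for double occupancy is close to, but not the same as, the up-to-56-percent average savings the report cites, which also reflects lower contracted rates.

```python
# Illustrative per-person cost under Fort Bragg's room-sharing example:
# a two-bedroom unit at $60 per room, per day (figures from the report).
GSA_RATE = 63           # maximum allowable GSA lodging rate for Fayetteville
ROOMS, ROOM_RATE = 2, 60
daily_total = ROOMS * ROOM_RATE   # $120 per day for the whole unit

per_person_cost = {}
for occupants in (2, 4):          # single- vs. double-occupancy bedrooms
    per_person_cost[occupants] = daily_total / occupants

for occupants, cost in per_person_cost.items():
    pct_below_gsa = (GSA_RATE - cost) / GSA_RATE * 100
    print(f"{occupants} occupants: ${cost:.0f}/person/day, "
          f"{pct_below_gsa:.0f}% below the GSA rate")
```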
In the few cases when off-site lodging had to be procured, the installation’s contracting personnel used purchase orders to obtain the needed facilities. Officials told us that, in general, these off-base stays were for 3 to 4 days at Fort Hood and a maximum of 60 days at Fort Dix. At both bases, enlisted personnel below the rank of Sergeant First Class were required to share hotel rooms. Unlike at MacDill Air Force Base, reservists on long-term temporary duty at Pope, Dover, and McGuire Air Force bases were accommodated on site. According to an Air Force official, most reservists did not have transportation and, thus, were given priority for on-site lodging. As a result, some non-reserve service members had to be placed in off-base lodging. Like MacDill, these Air Force bases used BPAs to procure their off-site lodging needs, which were generally for short periods of time. At the time of our review, Pope Air Force Base had 12 BPAs with hotel vendors. Pope officials said that they do not use apartments because most stays off base are less than a week, and personnel are not required to share rooms. We were told that, in general, service members or reservists who are assigned to Pope for extended duty are housed on base. Under the terms of the BPAs, personnel accommodated in hotels may vacate the hotel at any time without a penalty. The first priority in selecting a hotel for off-base lodging is the distance from the base to the hotel because aircrews sometimes have to leave on short notice. According to a Pope official, in general about 300 airmen are housed in off-base lodging facilities each month. Prices for a one-bedroom hotel room for Pope ranged from $48 to $63, for savings of up to $15 per day compared with the GSA lodging rate of $63 per day for Fayetteville. According to a lodging official, Pope spent an estimated $1.825 million on off-base lodging in fiscal year 2003. 
Our analysis of data provided by Navy officials indicates that the Navy spent a total of $14.8 million in fiscal year 2003 on contracted and leased lodging facilities. However, a Navy official told us that, in most cases, the temporary-duty reservists were accommodated in on-site lodging. The major exceptions are reservists mobilized in the Washington, D.C., area. These reservists are provided with Certificates of Non-Availability, which enable them to acquire lodging in local area hotels, and they are reimbursed for their lodging costs up to the maximum GSA rate allowed for the Washington, D.C., area, which currently is $150 per day. About $11.3 million of the $14.8 million the Navy spent on contracted and leased lodging facilities was used to acquire lodging in local markets with Certificates of Non-Availability. Marine Corps reservists were accommodated in existing on-site facilities. The extent and length of the current mobilization have created some long-term, off-base lodging requirements and associated costs that appear high when considered on a monthly basis and when compared with private sector prices, which, however, typically cover fewer amenities. Whether other alternatives for obtaining off-base lodging should be considered or whether they would be cost effective is unclear. Much would depend on individual circumstances, local market conditions and costs, the number of personnel requiring lodging, and the length of the lodging requirement. One alternative approach that could be explored might be to obtain long-term leases for blocks of properties to provide lodging for reservists on extended temporary duty during times of high mobilizations. However, MacDill lodging officials told us that this approach would require them to obtain furnishings, utility hook-ups, and amenities (i.e., vacuum cleaners, kitchenware, linens) as well as staffs to manage property inventories and reservation systems. 
Government management of such inventories could be viewed as counter to recent defense initiatives to rely on the private sector for the provision of commercially available services. MacDill lodging officials also pointed out that the need for long-term lodging could vanish as quickly as it materialized, leaving them committed to long-term leases, property inventories, and the attendant costs. Under the approach MacDill currently uses, apartment units and hotels assume these risks. This approach would also need to consider potential force protection issues that might be of concern with large concentrations of personnel lodged together off base. From project initiation to settlement of the contractor’s claim, the management of Coalition Village II suffered from questionable acceptance of the winning offer, poor record keeping, undocumented decisions regarding changes to the contract, and changes to contract requirements that were not properly coordinated with contracting officials. As a result of these weaknesses, we were unable to assess the basis for significant cost increases in the contract. These weaknesses also made it difficult for us to determine whether the government paid for costs that otherwise might have been avoided or disallowed. Coalition Village II was implemented under tight time constraints that presented unique challenges for the 6th Contracting Squadron in the solicitation, award, and pricing of the contract. 
MacDill contracting officials reference a March 21, 2003, memorandum from the Air Force’s Deputy Assistant Secretary (Contracting)/Assistant Secretary (Acquisition) whose subject was, “Rapid, Agile Contracting Support During Operation Iraqi Freedom.” The memorandum encourages, “…every contracting professional to lean way forward, proactively plan for known and anticipated customer needs, and put the necessary contract vehicles and supporting documents in place as soon as possible.” The memorandum further calls for Air Force contracting officers to be a “community of innovative, even daring risk takers.” CENTCOM initiated its urgent request for temporary office space to the 6th Contracting Squadron in February 2003. It requested 14 temporary office trailers to house additional coalition partners that were supporting the United States in Operation Iraqi Freedom. CENTCOM said it needed the trailers in 30 days, and the 6th Contracting Squadron used a provision of the FAR, entitled “Unusual and Compelling Urgency,” to meet the tight timeline. Under this provision, the government is allowed to limit the number of sources and approve written justifications after the contract is awarded within a reasonable time, if preparation and approval prior to the award would unreasonably delay the acquisition. Consistent with the authority for an urgent and compelling acquisition, MacDill’s contracting office developed a list of three potential contractors. According to a MacDill contracting official, the office contacted only those contractors who had proven records of timely and satisfactory performance for similar work at the base. MacDill issued the solicitation for leasing the trailers on February 14, 2003, and established 12:00 p.m. Eastern Standard Time on February 18, 2003, as the deadline for receipt of proposals. One contractor, the Warrior Group, did not submit a proposal in time to meet the deadline, and its proposal was not considered. 
Two other contractors were judged to have met the deadline for submitting their proposals, although acceptance of the winning proposal was controversial. William Scotsman, the incumbent contractor for the Coalition Village I project, hand-carried its proposal to the 6th Contracting Squadron at 11:31 a.m. on February 18, 2003, and there was no question that it had met the deadline. The third contractor and winning offeror, Resun Leasing, faxed its proposal at 3 minutes past 12:00 p.m., according to the time stamp on the fax machine. However, MacDill contracting officials determined that the fax machine clock was 3 minutes fast, and that the first page of Resun’s proposal was received by the 12:00 p.m. deadline. Although not all the pages of Resun’s proposal were received by the deadline, the contracting officer determined that because the first page had been received in time, the entire proposal was timely. Although Resun’s proposal was arguably late, MacDill contracting officials determined that Resun Leasing was the “lowest price, technically acceptable offeror” and verbally notified the contractor on February 18, 2003, to proceed with the project. Resun’s initial offer for the contract was $111,000, but a MacDill contracting official subsequently noted a computation error, which increased the offer to $142,755. The offer submitted by William Scotsman was for $196,000. William Scotsman subsequently questioned MacDill officials about the propriety of considering Resun’s apparently late offer. Nevertheless, although William Scotsman submitted a timely offer and therefore could have protested to GAO, it did not protest the award to Resun Leasing and MacDill’s handling of the Resun offer. A contracting official told us that MacDill has now instituted a policy clearly stating that all pages of a faxed proposal must be received by the deadline for it to be considered timely. Numerous modifications to the contract were made after work began on February 19, 2003. 
On April 22, 2003, Resun filed a claim for additional work, including six additional flagpoles, electrical and wiring changes, interior and exterior trailer modifications, revised grounding/lightning protection, interior and exterior locks, and additional air conditioning units totaling $467,000, but revised the amount several times. Resun submitted another revision on June 9, 2003, claiming an amount of $372,172. On May 20, 2003, MacDill validated $134,000 of the claim, leaving $238,172 to be negotiated. On July 20, 2003, the contractor acknowledged that it owed the government $4,977 because of erroneous billing, which left a total of $233,196 to be negotiated. MacDill officials agreed to pay this amount and issued a contract modification on July 31, 2003, to capture this change. The total amount paid for the project was, therefore, $509,951 (see table 4). However, as discussed subsequently, the contract file did not contain adequate documentation for us to determine how MacDill officials arrived at this settlement. Our efforts to assess contract costs for Coalition Village II were hampered by missing documents in the contract file, undocumented decisions for properly authorized changes to the contract, and changes to contract requirements by on-site personnel that were not properly coordinated with contracting officials. Because of these weaknesses in contract management, we were unable to determine if the government paid costs that otherwise might have been avoided or minimized. Our review of the Resun contract file showed that it was missing several key documents needed to assess the appropriateness of contract costs. The file did not contain documentation that the winning proposal represented a technically acceptable offer or an assessment that the price was reasonable. MacDill contracting officials agreed that poor record keeping was a problem with the Coalition Village II contract. 
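As a rough, illustrative check, the payment figures reported above reconcile to the $509,951 total shown in the report's table 4.

```python
# Illustrative reconciliation of the Coalition Village II payments,
# using the figures reported in the text (the report's table 4).
original_award   = 142_755   # Resun's corrected winning offer
validated_claim  = 134_000   # portion of the claim validated May 20, 2003
negotiated_claim = 233_196   # amount settled after the $4,977 billing credit

total_paid = original_award + validated_claim + negotiated_claim
print(f"Total paid for the project: ${total_paid:,}")  # $509,951
```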
The contract file also did not contain documentation to fully validate the contractor’s entire claim. While validation of $134,000 of the initial claim was documented, there was no documentation indicating how MacDill officials determined that the remaining amount of the claim was valid and reasonable. Further, the file did not contain sufficient documentation regarding authorized changes to the contract. Modifications to the contract were made during twice-weekly meetings between representatives of the contractor, the customer (CENTCOM), technical advisors (civil engineers), and contracting staff, but no official minutes were maintained to document the agreements that were reached. In a memorandum for the record, the contract administrator acknowledged that a written log of contract changes was not developed. The absence of documentation of authorized contract modifications makes it difficult to assess contract costs. The Resun contract file also did not contain sufficient documentation to indicate who authorized some contract changes or the cost estimates for some changes. MacDill officials told us that they were surprised when the contractor submitted the claim for $467,000 to cover additional work performed under the contract. They said that the contracting officer and contract administrator were not aware of all changes that had been made because unauthorized personnel inappropriately authorized changes to the contract on site without informing contracting officials. During the rush to get the project completed, involved parties including representatives of the customer and technical advisors made on-site changes that were not always coordinated with the contracting officer. In a memorandum for the record dated June 29, 2003, the contract administrator wrote that he did not know about many of the changes, nor did the CENTCOM point of contact or the representative from civil engineering, who assisted with contract oversight. 
The price negotiation memorandum written to document the final settlement of the claim also notes a lack of adequate documentation to determine who authorized the extra work. The absence of these documents, along with inadequate documentation of contract changes, makes it difficult to retrospectively assess the appropriateness of contract costs. MacDill Air Force Base and other installations we identified that provide lodging for reservists on extended temporary duty are often making efforts to reduce off-base lodging costs by (1) obtaining prices that are below the maximum allowable rate for lodging established by GSA and (2) requiring military personnel below specified ranks to share apartments and/or hotel rooms. While public concerns in the Tampa area were accurate in citing MacDill’s monthly rental costs for some two-bedroom apartment units of $2,400, these concerns failed to recognize that GSA establishes lodging rates for travelers on official government business based on daily per-person rates. Therefore, a two-bedroom apartment renting for $2,400 per month ($80 per day) shared by two people results in a daily lodging rate of $40 per person, well below the maximum allowable GSA rate of $93 per day in the Tampa area. On a unit basis, these rates are also comparable to corporate housing rates in the Tampa area, which generally provide furnished units with similar amenities to those provided to military personnel, though MacDill’s per-person costs were usually lower due to lodging sharing arrangements. Each installation we visited had different methods for providing extended temporary lodging. The majority of installations contacted had sufficient capacity to provide lodging for reservists on base or made arrangements to provide lodging off base for other military travelers on a short-term basis. 
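The monthly-to-daily conversion above can be illustrated with a short sketch, assuming a 30-day month, which yields the $80-per-day figure used in the report.

```python
# Illustrative conversion of the $2,400 monthly rent cited in the report
# to a daily per-person rate (assumes a 30-day month and two occupants).
GSA_RATE = 93          # maximum allowable daily GSA lodging rate, Tampa area
monthly_rent = 2_400
occupants = 2          # two reservists sharing a two-bedroom unit

daily_unit_rate = monthly_rent / 30            # $80 per day for the unit
per_person_rate = daily_unit_rate / occupants  # $40 per person per day

print(f"${daily_unit_rate:.0f}/day per unit, "
      f"${per_person_rate:.0f}/day per person (GSA ceiling: ${GSA_RATE})")
```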
Installations providing off-base lodging used different procurement tools (BPAs, purchase orders, and contracts) but obtained comparable savings regardless of the procurement instrument used. Local GSA lodging rates are public knowledge and generally represent the ceiling for acceptable offers. Significant savings over GSA daily rates were also obtained through the implementation of installation-specific guidance requiring reservists at specific ranks to share rooms and/or apartments, but the ranks required to share units varied by installation. Installations also obtained varying terms in their agreements with hotels and apartment vendors, primarily regarding penalties for early departures. The primary factors affecting off-base lodging prices are local market conditions (the inventory of vacant hotel rooms and apartment units) and the prevailing GSA lodging rate. An alternative approach to providing off-base lodging, such as direct leasing of apartment properties, might be considered but would need to consider other factors such as the added costs of government management and the provision of additional services comparable to those now being provided. Although Coalition Village II was implemented under extreme time constraints, effective contract management suffered from questionable acceptance of the winning offer, poor record keeping, undocumented decisions, and changes to contract requirements that were not properly coordinated with contracting officials. We were not able to assess the basis for additional costs paid to the contractor or the extent to which costs might have been avoided or minimized because of these contract management weaknesses. We recommend that the Secretary of Defense direct the Secretary of the Air Force to direct the Commander of the Air Mobility Command to emphasize to MacDill personnel the importance of adhering to sound contract management procedures that exist to protect the interests of the government. 
Communications should reemphasize that contract files should be properly maintained and only authorized personnel should initiate changes to contract requirements, even during time sensitive procurements. In addition to contracting officials, such communications should also be provided to contractors, base customers of contracting services, and contract support personnel. In commenting on a draft of this report, the office of the Director, Defense Procurement and Acquisition Policy, did not dispute the GAO audit findings regarding the Coalition Village II procurement and partially concurred with our recommendation. The office suggested that the recommendation is not needed because the 6th Contracting Squadron at MacDill Air Force Base had already taken corrective actions, including an internal review of Coalition Village II contract files that resulted in letters of reprimand for a contracting officer and contract administrator. However, as noted in DOD’s response, some of the more significant actions that relate to the specifics of our recommendation are planned but not yet completed. Accordingly, we believe it appropriate to retain the recommendation pending completion of all indicated corrective actions. We expect to follow up to determine the extent to which planned actions have been taken. The comments from the office of the Director, Defense Procurement and Acquisition Policy, are included in appendix II of this report. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this letter, please contact me at (202) 512-5581. 
Key contributors to this letter were George Poindexter, Vijay Barnabas, Nelsie Alcoser, Kenneth Patton, Tanisha Stewart, and Nancy Benco. To describe the extent to which MacDill Air Force Base used cost-effective measures to provide long-term, off-base lodging for reservists on extended temporary duty, we visited and interviewed officials from the 6th Contracting Squadron and 6th Services Squadron at MacDill Air Force Base, and we interviewed apartment managers and brokers in the Tampa, Florida, area. We analyzed records on temporary lodging rates paid for military personnel housed off-site at MacDill Air Force Base and the numbers of National Guard and Reserve service members on extended temporary duty at this installation. We identified the allowable GSA lodging rate for the Tampa, Florida, area and compared this amount to the amounts paid for off-base lodging. We determined whether MacDill Air Force Base used contracts or BPAs to provide off-site lodging for service members on extended temporary duty and reviewed the processes followed in developing these procurement instruments for acquiring off-base lodging. We reviewed the BPAs MacDill had with hotel and apartment vendors in the Tampa area. To compare the practices used at MacDill Air Force Base to acquire off-base lodging to practices at other installations, we visited and interviewed contracting and lodging officials at Fort Bragg and Pope Air Force Base. These installations were selected based on our review of Reserve and National Guard deployment data for force protection activities and follow-up phone calls to establish that the bases procured off-base lodging. In addition, we obtained information on lodging practices at Fort Myer, Dover Air Force Base, McGuire Air Force Base, Fort Hood, and Fort Dix. We also contacted Navy and Marine Corps officials at the headquarters level to determine their practices for providing lodging for reservists on extended temporary duty. 
We identified the allowable GSA lodging rates for Fort Bragg and Pope Air Force Base and compared these amounts to the amounts paid for off-base lodging. We determined whether these installations used contracts, purchase orders, or BPAs to provide off-site lodging for service members on extended temporary duty and the processes followed in developing these procurement instruments. We met with officials from the Under Secretary of Defense (Personnel and Readiness), U.S. Air Force (Installations and Logistics Contracting), and DOD’s Per Diem, Travel and Transportation Allowance Committee to collect information on Department of Defense lodging regulations and procedures. At each of the installations we visited, we collected and reviewed lodging policies, procedures, and practices regarding temporary duty personnel. In addition, we reviewed the requirements in the Joint Federal Travel Regulations regarding temporary duty travel. We reviewed all data that we received, but we did not verify the accuracy of the data provided by DOD or the installations. To determine if MacDill followed proper procedures in contracting for the lease of temporary office trailers for Coalition Village II, we interviewed officials from the 6th Contracting Squadron, including the commander, the current contracting officer, the contract administrator, and other contract staff familiar with the procurement process. In these discussions, we sought information on the actions taken to implement the project, the timing of such actions, and the justification for contracting procedures followed. We reviewed documents prepared by contracting officials to explain procedures followed in administering the contract, including a Talking Paper and Acquisition Timeline of Events for Coalition Village II. In addition, we reviewed the contract and other documentation in the contract file, including correspondence, memorandums for the record, and the contractor’s claims for payment. 
We also reviewed relevant provisions of the Federal Acquisition Regulation (FAR) related to this procurement. Specifically, we researched FAR authorities related to the use of “Unusual and Compelling Urgency” in government procurements, including competition and documentation requirements under such circumstances. We also researched and analyzed prior GAO bid protest decisions regarding determinations of timeliness in the acceptance of electronic submissions of proposals. We conducted our review from June 2003 through December 2003 in accordance with generally accepted government auditing standards. | Since the September 11, 2001, attacks and the beginning of Operation Iraqi Freedom, thousands of National Guard and Reserve members have been activated and mobilized to military installations across the country. Some installations, like MacDill Air Force Base in Tampa, Florida, where more than 3,000 reservists have been mobilized, have had to arrange for off-base lodging in local hotels and apartment buildings. In addition, MacDill, which serves as U.S. Central Command headquarters, has had to set up temporary office space for staffs of coalition partner nations. Public concerns have been raised about these arrangements. GAO was asked to review (1) the extent to which MacDill used cost-effective measures to provide off-base lodging for reservists and (2) whether a contract providing office space for coalition partners was adequately managed to control costs. During recent mobilizations, MacDill contracting officials used two practices that effectively reduced the overall cost of off-base lodging for reservists on extended temporary duty to below that allowed by the General Services Administration's (GSA) lodging rate. Officials used a simplified acquisition procedure--Blanket Purchase Agreements (BPA)--to obtain prices that were at or below the maximum allowable GSA rate of $93 per day for Tampa, Florida. 
MacDill officials obtained daily lodging rates of $71 to $93 per unit for two-bedroom apartments. The BPAs also provided greater flexibility in vacating units without incurring penalties. In addition, MacDill officials reduced per-person lodging costs further by implementing a room-sharing policy for personnel at certain ranks. When two reservists shared a two-bedroom unit (about 600 reservists), the cost dropped by up to 55 percent of the daily GSA rate. Overall, during fiscal year 2003, MacDill reported that it saved about $12.6 million using these practices. Our review of local rental costs showed that BPA prices were similar to those paid by corporate entities for comparable lodging units, but were lower on a per-person basis because of lodging sharing arrangements. From project initiation to settlement of the contractor's claim, the Coalition Village II contract suffered from questionable acceptance of the winning offer, poor record keeping, undocumented contracting decisions, and changes to contract requirements that were not properly coordinated with contracting officials. Although MacDill officials determined that the winning offer was received on time, only the first page of the proposal was received by the established deadline. Contract costs for the project, which was implemented under tight time constraints, increased by more than $367,000 over the winning offer of $142,755. However, due to the absence of proper documentation in the contract files, we were unable to fully assess the basis for additional costs paid to the contractor or the extent to which costs might have been avoided or minimized. 
The WTO Agreement on Subsidies and Countervailing Measures establishes general international rules regarding the types of subsidies that exporting countries may and may not maintain and procedures that importing countries may employ to counter injurious subsidy practices. U.S. trade law generally reflects the agreement’s provisions. The United States applies CVDs with some regularity—almost always in tandem with the other major mechanism for providing relief from unfairly traded imports: antidumping duties. However, CVDs are requested and applied much less frequently than antidumping duties. Appendix II provides additional background information on WTO subsidy rules and relevant U.S. laws, explains antidumping actions in more detail, and provides more detail about how frequently CVD and antidumping actions are sought and duties actually imposed. The U.S. government does not apply its CVD laws against China because the Department of Commerce classifies China as an NME country and has adopted a policy against taking CVD actions against countries so designated. This policy rests upon two principles, first advanced in two 1984 Department of Commerce decisions and subsequently upheld by the U.S. Court of Appeals for the Federal Circuit. These principles are (1) from a legal standpoint, Commerce does not have explicit authority to apply CVDs against NME countries; and (2) as a practical matter, Commerce cannot arrive at economically meaningful conclusions regarding subsidies in such countries. The Department of Commerce classifies China, as well as Vietnam and a number of former Soviet republics, as NME countries. Under U.S. trade law, Commerce may classify any country that does not operate on market principles “so that sales of merchandise in such country do not reflect the fair value of the merchandise” as an NME country. Commerce has classified China as an NME country since 1981. Figure 4 shows the countries that Commerce currently classifies as NMEs. U.S. 
trade law does not contain any explicit prohibition against applying CVDs to NME countries. Nonetheless, the Department of Commerce determined in 1984 that it did not have explicit legal authority to apply CVDs to such countries. Commerce set forth its conclusions on this matter in rulings denying CVD protection against carbon steel wire rods from Poland and Czechoslovakia (which were then considered NME countries). Commerce observed that, even though Congress had addressed unfair trade remedies in both the Trade Act of 1974 and the Trade Agreements Act of 1979 and revised U.S. countervailing duty law on both occasions, it had not given any indication that CVD law should be applied against these countries. Instead, Congress provided two other remedies—antidumping duties and safeguard measures—to address unfair trade practices by NME countries. In those rulings, Commerce also explained the practical basis for its position: “We believe a subsidy (or bounty or grant) is definitionally any action that distorts or subverts the market process and results in a misallocation of resources. . . . In NMEs resources are not allocated by a market. With varying degrees of control, allocation is achieved by central planning. Without a market, it is obviously meaningless to look for misallocation of resources caused by subsidies. There is no market process to distort or subvert. . . . It is this fundamental distinction—that in an NME system the government does not interfere in the market process but supplants it—that has led us to conclude that subsidies have no meaning outside the context of a market economy.” The U.S. Court of Appeals for the Federal Circuit upheld Commerce’s decision in Georgetown Steel Corp. v. United States. In upholding Commerce’s position in this matter, the Court of Appeals found that in nonmarket economies the governments control their trading entities by determining where, when, and what they will sell, and upon what terms. When no market exists, subsidies cannot be found to distort market decisions.
Commerce could take either of two paths to applying U.S. CVD law to China. First, Commerce could use its administrative authority to change China’s NME status in whole or in part. This would allow Commerce to apply U.S. CVD law to China on a country or industry basis. However, Commerce officials observed that it may be difficult for China to meet the criteria for such reclassification in the near term. Alternatively, Commerce could reverse its 1984 position and decide that CVD law could be applied to China while it remains classified as an NME country. However, absent a clear grant of authority from Congress, such a reversal could be challenged in court. The results of such a challenge are uncertain. WTO rules, including relevant provisions of China’s WTO accession agreement, do not explicitly preclude the United States from pursuing either alternative. Moreover, China’s WTO accession commitments (1) permit use of third-country information in CVD cases and (2) could facilitate Commerce adjudication of CVD actions against state-owned enterprises in that country. The Department of Commerce has administrative authority to reclassify NME countries as market economy countries, or individual NME country industries as “market oriented” in character, provided that the country as a whole or the industries in question meet certain criteria. Department of Commerce officials explained that countries classified as NMEs may ask that their status be reviewed either within the context of an ongoing import relief case or as an independent matter. Commerce has responded to a number of requests for such reviews, granting some countries (such as Russia and Estonia) market economy status while classifying others (such as Vietnam) as nonmarket economies. Table 1 shows former NME countries that Commerce has determined merit reclassification as market economy countries. In making decisions on such matters, U.S.
trade law specifies that Commerce shall take into account the following factors:

- the extent to which the country’s currency is convertible into the currency of other countries,
- the extent to which wage rates are determined by free bargaining between labor and management,
- the extent to which joint ventures or other investments by foreign firms are permitted,
- the extent of government ownership over the means of production,
- the extent of government control over the allocation of resources and enterprises’ price and output decisions, and
- other factors that Commerce considers appropriate.

In April 2004, the United States and China established a Structural Issues Working Group under the auspices of the U.S.-China Joint Commission on Commerce and Trade. Among other things, this group is examining issues relevant to China’s desire to be classified as a market economy country under the criteria set forth in U.S. antidumping law. U.S. officials involved with the group have observed that substantial additional reform will have to take place (e.g., in improving respect for labor rights and reducing or abandoning controls on currency convertibility) before China can expect to be declared a market economy country under the criteria listed above. The Chinese government regards recognition as a market economy among its trading partners as a desirable diplomatic goal. While acknowledging that this change in status may result in the United States (and other countries) applying countervailing duties, Chinese officials we spoke with emphasized the political value of their country being officially declared a “market economy.” Other trade experts pointed out that Chinese officials may also be seeking this change because they believe it would generally result in lower antidumping duties being assessed against Chinese products. China has actively sought change in its status among its trading partners. A number of them, including Singapore and Malaysia, have declared China to be a market economy.
However, in June 2004, the European Union officially declined a Chinese request for designation as a market economy. In making this decision, the EU acknowledged that China had made progress, but concluded that much remained to be done in reducing state interference in the economy, improving corporate governance and the rule of law, and bringing the banking sector under market rules. Department of Commerce officials informed us that Chinese representatives have not yet officially requested that Commerce review their country’s NME status under U.S. law. The Department of Commerce could also designate individual Chinese industries as “market oriented” and thus as eligible for application of CVDs. In a 1992 CVD decision involving imported oscillating and ceiling fans from China, Commerce determined that, short of finding that an entire country merits designation as a market economy, Commerce can find specific industries within such countries to be “market oriented” in character. Commerce stated that certain criteria, developed earlier in the context of an antidumping case (also against China), would have to be met for an industry to be found market oriented. The industry in question must be characterized by the following:

- virtually no government involvement in setting prices or amounts to be produced,
- private or collective ownership, and
- market-determined prices being paid for all significant inputs, whether material or nonmaterial, and for all but insignificant proportions of all the inputs accounting for the total value of the product.

Commerce justified application of these criteria to determine whether a CVD investigation should proceed by observing that if the Chinese fan industry met the criteria just described, then Commerce could rely on prices and costs to producers in that industry to provide accurate measures of value.
Commerce concluded that, if the prices and costs in a sector of an NME were market determined, then the practical concerns cited by the Court of Appeals in Georgetown Steel would not arise and Commerce could apply U.S. CVD law. To date, Commerce has not accepted any claim that an NME country industry should be designated as market-oriented in character. Commerce officials observed that, as a practical matter, the criteria for designation as a market-oriented industry may be difficult for producers operating in a nonmarket economy to satisfy. In any event, Commerce has not received a CVD petition involving a market-oriented industry claim since the early 1990s. In a few cases, Chinese companies have responded to antidumping cases, in part, by requesting designation as a market-oriented industry. Commerce has denied these requests—primarily on the grounds that the Chinese companies in question submitted information that was insufficient or provided too late in Commerce’s process to allow an informed decision. Since there is no explicit statutory bar to applying CVDs to NME country products, Commerce could make an administrative determination to apply such duties to China and other NME countries. Some legal experts have taken the position that Georgetown Steel merely upheld Commerce’s decision that it could not apply CVD law to NME countries, and that Commerce could therefore change its policy so long as the change could be defended as reasonable. Commerce officials told us that it might be possible for parties in a particular case to present new legal positions that would permit it to apply CVDs against an NME country product without a change in current law. They added, however, that in the absence of an actual case, it was hard to say whether or how this would occur. 
While Commerce could reverse its 1984 position, the Court of Appeals’ Georgetown Steel ruling raises serious doubt about Commerce’s ability to make such a change without a clear grant of authority from Congress. The Court of Appeals did uphold Commerce, but the court also appeared to make its own findings. The court emphasized that recent trade legislation showed that Congress had intended that any selling by NME countries at unreasonably low prices should be dealt with under the antidumping law, and that there was no indication that Congress had intended or understood that the CVD law would also apply. The court stated, in addition, that “[i]f [the antidumping remedy] is inadequate to protect American industry from such foreign competition (resulting from sales in the United States of merchandise that is priced below its fair value) . . . it is up to Congress to provide any additional remedies it deems appropriate.” The Uruguay Round Agreements Act, adopted in 1994, made important changes in U.S. CVD law but did not add any language authorizing CVD actions against NME countries. Moreover, the Statement of Administrative Action accompanying the Act acknowledged that the Georgetown Steel ruling stood for “the reasonable proposition that the CVD law cannot be applied to imports from nonmarket economy countries.” Although Members of Congress introduced legislation to make U.S. CVD law explicitly applicable to NME countries in 2004, and again in 2005, these bills did not gain approval. Consequently, a Commerce decision to reverse the position it adopted in 1984 and allow CVD actions against NME countries could very well be challenged in court. The results of such a challenge are uncertain. WTO subsidy and countervailing duty rules do not address the issue of NME status in CVD proceedings.
The WTO Agreement on Subsidies and Countervailing Measures does not discuss market/nonmarket economy designations in general and, more specifically, does not address the question of whether members can bring CVD actions against NME countries. The CVD provisions in China’s WTO accession agreement are similarly silent. Thus, we believe that these rules (1) do not explicitly restrict the United States from continuing or ceasing to apply NME status to China on either a countrywide or industry-specific basis and (2) do not explicitly preclude bringing CVD actions against countries that are classified as NMEs. While WTO rules allow members to apply alternate methodologies—not based strictly on information from within the exporting country—to calculate antidumping duties in certain cases, the organization’s rules do not make explicit provision for applying third-country information in CVD cases. However, China’s WTO accession agreement specifically permits application of third-country information in CVD determinations. The agreement states that countries attempting to identify and quantify subsidy benefits in China may encounter special difficulties because “prevailing terms and conditions in China may not always be available as appropriate benchmarks.” In such situations, the agreement allows other member countries to employ “terms and conditions prevailing outside China” to generate benchmarks that can be used to measure subsidy benefits and establish appropriate CVDs. The agreement does require, however, that before considering application of information from outside China, member countries should first seek to use adjusted information from China itself. This provision has no expiration date and does not differentiate between China as a market or a nonmarket economy. China’s WTO accession agreement contains another provision that may facilitate application of CVDs in some cases involving state-owned enterprises. 
WTO members may only apply CVDs when the subsidies in question can be shown to be “specific to an enterprise or industry or group of enterprises or industries.” Determining whether a particular subsidy meets this test can be challenging. For example, a government loan program directed specifically at providing below-market financing to enable fishermen to acquire boats and equipment might be considered specific, and thus actionable. On the other hand, a program that provides below-market financing to many types of small businesses, including some fishermen, might not be considered specific, and thus not open to application of CVDs. China’s accession agreement provides that subsidies benefiting state-owned enterprises will be regarded as specific if, among other things, such enterprises are the “predominant” recipients or receive “disproportionately large amounts” of such subsidies. This may facilitate application of CVDs in some circumstances because it may make it difficult for China to argue that such subsidies are generally available, and thus not actionable. Instead, members may regard them as specific to state-owned businesses without regard for the sector in which they operate. While Commerce could proceed with CVD actions against China, it would continue to face substantial practical challenges in identifying Chinese subsidies and determining appropriate CVD levels. Commerce could employ third-country information or “facts available” to complete China CVD actions. However, these approaches would not eliminate the challenges that such actions would present. Moreover, Commerce lacks explicit legal authority to implement China’s WTO commitment allowing other members to employ third-country information in CVD actions against China. As USTR has reported: “It is difficult to identify and quantify possible export subsidies in China because of the lack of transparency in China’s subsidy regime.
Chinese subsidies are often the result of internal administrative measures and are not publicized. U.S. subsidy experts are currently seeking more information about several Chinese programs and policies that may confer export subsidies. Their efforts have been frustrated in part because China has failed to make any of its required subsidy notifications since becoming a member of the WTO.” Commerce officials told us that even though there has been substantial reform in China, underlying features of the Chinese economy continue to make it difficult to identify appropriate benchmarks for measuring subsidies. For example, according to USTR, most Chinese subsidies are believed to be provided through that country’s financial system. However, some trade experts stated that government control over the banking system in China makes it difficult to identify market-determined rates of interest that could be used as benchmarks to determine whether, or to what extent, particular companies or industries are benefiting from credit subsidies. U.S. government and private sector analysts added that while the Chinese government heavily influences allocation of credit—favoring some industries over others—it is uncertain how to quantify the subsidy benefits conferred through this process. In addition, while some attorneys who have represented Chinese companies disagreed, Commerce officials and attorneys who have represented U.S. firms said that lack of adherence to generally recognized accounting standards and unreliable bookkeeping among Chinese companies can make accurate identification and accurate measurement of subsidy benefits extremely difficult. Some Commerce officials and trade experts also said that unlike most market economies, which are national, China’s economy is fragmented into five or six regions, each with its own pricing. 
Thus, even if an industry were declared to be market oriented, it would be difficult to evaluate the subsidy benefits accruing to the national industry as a whole. Commerce may find employing third-country information or “facts available” helpful in completing China CVD actions. However, these approaches would not fully resolve the challenges that would face Commerce. Commerce has not attempted to develop methodologies or procedures for determining CVDs against products from nonmarket economies—based either on information from within the country itself or from a third country. Nonetheless, Commerce officials stated that, if required, they would endeavor to apply existing guidance and conduct an investigation that would withstand analytical and legal scrutiny. While the United States employs “surrogate” or third-country information to calculate antidumping duties on imports from China and other NME countries, CVD cases against China would raise issues that Commerce analysts do not face in antidumping cases and that may not be resolved by use of third-country information. For example, it may be difficult to separate specific (and therefore countervailable) subsidies from those that are generally available (and therefore not countervailable). In addition, identifying reasonable benchmarks (such as market-determined capital costs) in third countries will only provide Commerce with a starting point for calculating CVD rates that should be applied to Chinese products. After establishing such benchmarks, Commerce would then face significant challenges in quantifying, for example, the capital or utility costs that are actually being paid by Chinese companies under investigation, so that analysts can determine the difference between unsubsidized and subsidized costs. Commerce also has the authority to employ facts available to overcome difficulties in calculating subsidy benefits and corresponding CVD rates. Commerce normally obtains information from U.S. 
companies seeking relief and also from foreign companies and government agencies alleged to be benefiting from or providing subsidies. However, U.S. law grants Commerce authority to make determinations based on facts otherwise available when foreign sources cannot or will not provide needed information. Commerce might be able to complete some China CVD cases by applying this approach. However, Commerce officials pointed out that their authority to employ facts available is subject to certain limitations. The extent to which Commerce would employ this approach in China CVD cases is uncertain. Existing U.S. laws do not provide Commerce with clear authority to fully implement China’s WTO commitment allowing members to use third-country information to identify and measure Chinese subsidy benefits. In joining the WTO, China made commitments regarding four import relief measures that other members may apply against imports from China. As already noted, even before China joined the WTO, U.S. trade law specifically allowed for implementation of the first of these commitments—application of third-country information in antidumping cases. Congress has passed legislation—commonly referred to as section 421—implementing the second (involving application of safeguard measures). While Congress did not adopt legislation to implement China’s third import-relief commitment (regarding textile safeguards), the U.S. interagency group responsible for processing textile safeguard cases believes that existing legislation provides it with authority to implement such measures. In contrast, U.S. trade law was not amended to explicitly authorize Commerce to implement China’s fourth commitment, regarding application of third-country information in CVD cases, and does not otherwise clearly state that Commerce may apply third-country information in such cases against foreign countries in general.
Commerce regulations do provide for application of third-country information to CVD cases—but only in some circumstances. The most explicit provision covers only subsidies that impact goods and services used in producing the allegedly subsidized imports. This lack of clarity raises a question about whether Commerce could currently apply this commitment, even if it were to decide to reclassify China as a market economy or specific Chinese industries as market oriented in character. Department of Commerce officials said they had not yet decided whether Commerce could fully apply the commitment in the absence of authorizing legislation. Making CVD procedures available to U.S. producers that believe they are injured as a result of unfairly subsidized Chinese imports would provide a mechanism for taking actions that specifically target Chinese government subsidies. However, it is unclear whether, on a net basis, applying CVDs to China would result in overall levels of protection for U.S. products that are higher than those already applied through antidumping duties. CVDs could be applied alone or in tandem with antidumping duties. CVDs alone generally tend to be lower than antidumping duties. For two reasons, simultaneous application of both types of duties could well result in reduced antidumping duties, and it is unclear whether application of CVDs would compensate for such reductions. First, designating China as a market economy would require a change in the methodology used to calculate companion antidumping duties that is widely expected to lead to lower duty rates. Second, regardless of whether China is designated as a market economy, some companion antidumping duties might need to be reduced to avoid double counting subsidy benefits. 
Each of these considerations introduces an element of uncertainty about the magnitude of the total level of protection that would be applied to Chinese products; each may result in combined rates that are lower than might be expected. U.S. CVDs tend to be lower than companion antidumping duties. This may, in part, explain why U.S. producers seek CVDs less often than antidumping duties. Figure 2 compares CVDs and antidumping duties imposed on the same products over the last decade. As the figure shows, CVDs imposed on these products varied from less than 2 percent to more than 60 percent. However, CVDs were lower than companion antidumping duties in nearly 70 percent of the 36 cases where the United States imposed CVDs. The average CVD rate imposed in these cases was about 13 percent, while the average antidumping duty rate imposed was about 26 percent. Under the WTO subsidies agreement and U.S. law, CVD rates are limited to the levels required to offset the amount of the subsidies. For example, a company may be receiving government credit subsidies that reduce its capital costs by 20 percent. This advantage may make a real difference in the company’s ability to compete in the international market. However, Commerce stated that CVD rates are calculated by dividing the total value of subsidy benefits by the total value of an exporting company’s sales. Since the subsidy just mentioned affects only one portion of the company’s balance sheet (capital costs), the countervailing duty applied to offset this benefit may be much lower than 20 percent. In some instances, past government intervention and support may have been critical to an exporting industry’s start-up or survival. However, loans and nonrecurring benefits, such as equity infusions or grants, are generally amortized over a period of years. After several years have passed, the current value of these subsidies may have declined to a comparatively insignificant level. U.S.
companies, therefore, may experience substantial difficulty in competing with Chinese companies that owe their existence to favorable government actions in the past, but find that legitimately applied CVDs are minimal. Administrative actions reclassifying China as a market economy (in whole or in part) would require Commerce to cease applying its NME methodology for calculating antidumping duties on affected Chinese products. This is significant because, as noted earlier, CVD actions usually have a companion antidumping action. U.S. law allows Commerce to employ its third-country-based methodology to calculate antidumping duties only when the merchandise in question is being produced in countries that it classifies as NMEs. Therefore, once Commerce reclassified China as a market economy it could no longer apply this methodology. Similarly, Commerce could no longer apply this methodology to individual Chinese industries after it found them to be market-oriented in character. After either finding, Commerce would apply its market economy approach to calculate antidumping duties. Commerce has never attempted to calculate antidumping duties on Chinese products based on prices charged within China. Whether these antidumping duties would be higher, lower, or approximately the same as those derived through Commerce’s NME approach remains uncertain. However, if—as trade experts commonly expect—they prove to be significantly lower than antidumping duty rates derived through Commerce’s NME methodology, then even in combination with companion CVDs, the total level of protection applied may not be as high as that currently applied against Chinese products. If called upon to apply CVDs against Chinese products, Commerce would have to adjust companion antidumping duty rates downward in some cases in order to avoid “double counting”—imposing two sets of duties to compensate for the same unfair trade practice. 
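The ad valorem arithmetic described earlier (dividing total subsidy benefits by total sales) helps explain why CVDs can be small even when the underlying cost advantage is large. The sketch below illustrates the idea with purely hypothetical figures; the loan size, interest rates, and sales values are invented for illustration and are not drawn from any actual case.

```python
# Hypothetical sketch of the ad valorem CVD arithmetic described above.
# All figures (loan size, rates, sales) are invented for illustration.

def credit_subsidy_benefit(loan_amount, benchmark_rate, subsidized_rate):
    """Annual benefit of below-market financing: interest saved relative
    to a market-determined benchmark rate."""
    return loan_amount * (benchmark_rate - subsidized_rate)

def ad_valorem_cvd_rate(total_subsidy_benefit, total_sales):
    """CVD rate, in percent: total countervailable benefit divided by the
    exporting company's total sales."""
    return 100.0 * total_subsidy_benefit / total_sales

# A company borrows $50 million at 4 percent where the benchmark is
# 5 percent, a 20 percent reduction in its interest (capital) costs.
benefit = credit_subsidy_benefit(50_000_000, 0.05, 0.04)

# Spread over $100 million in annual sales, the offsetting duty is far
# smaller than the 20 percent cost advantage.
rate = ad_valorem_cvd_rate(benefit, 100_000_000)
print(f"Annual benefit: ${benefit:,.0f}; CVD rate: {rate:.2f}%")
```

A nonrecurring benefit such as a grant or equity infusion would enter the same numerator, except that its value is first amortized over a period of years, so the per-year benefit (and the resulting duty) shrinks as time passes.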
However, the extent of these adjustments—and their net impact on combined duty rates to be applied—remains uncertain. Both WTO rules and U.S. laws require adjustments in combined duty rates to avoid double counting of export subsidies. WTO rules specify that no product can be subjected to both antidumping and countervailing duties “to compensate for the same situation of dumping or export subsidization.” U.S. law echoes this provision, in effect, by requiring adjustments in antidumping duties in the event that CVDs are applied simultaneously to counter export subsidies on the same products. The rationale behind these provisions is that since antidumping duties are calculated by comparing domestic prices with export prices, such duties already offset the price advantage that export subsidies confer over the prices charged in the exporter’s domestic market. When imposing both countervailing and antidumping duties, Commerce adjusts antidumping duty rates downward by any amount that is attributable to export subsidies. Commerce would be obliged to make such adjustments when applying both types of duties to China, regardless of whether China remains an NME country under U.S. law. The extent to which Commerce would have to reduce antidumping duty rates in order to avoid double counting Chinese export subsidies is unknown. As already noted, China agreed to cease providing export subsidies upon joining the WTO. Some trade experts allege that China has nonetheless continued to provide such subsidies. However, no industry group has petitioned for application of countervailing duties against Chinese subsidies, and U.S. officials have not attempted to quantify the benefits provided by Chinese subsidy programs in general, or export subsidies in particular. Another potential source of double counting could emerge if Commerce were to apply CVDs to China while it retains its NME status.
In such circumstances, Commerce would continue to use third-country information to calculate antidumping duties. While, in principle, double counting of actionable domestic subsidies generally does not occur when analysts employ information from exporting countries themselves to determine duty rates, it may occur when analysts use third-country information. However, current trade law does not make any specific provision for adjusting antidumping duties in such situations, and the implications of such situations arising are therefore unclear. When an antidumping duty is calculated using the third-country-based methodology that Commerce applies to NME countries, the normal value of the product (the basis for calculating an antidumping duty) is based not on Chinese prices (which might be artificially low as a result of domestic subsidies) but on information from a country where prices are determined by free markets. Thus, when the normal value is compared with the export price, the difference will, at least in theory, reflect the price advantages that the exporting company has obtained from both export and domestic subsidies. Economists, trade law practitioners, and Commerce officials we consulted disagreed on whether, in practice, antidumping duties derived through the third-country-based methodology effectively offset all of the subsidy benefits enjoyed by Chinese exporters. However, they generally agreed that, in theory, antidumping duties derived in this way do offset much of the value of both export and domestic subsidies. As a result, it appears that some double counting of actionable domestic subsidies could occur if Commerce used third-country information to calculate antidumping duties on the same products against which it also applied CVDs. Because the United States has never attempted to apply both countervailing and antidumping duties against an NME country, the implications of taking such an action are unknown. 
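The double-counting concern described above can be illustrated with a simple sketch. All figures below are hypothetical; the point is only that a surrogate-based dumping margin already reflects, at least in theory, the price advantage a subsidy confers, so a separate CVD sized to the same benefit would offset it a second time.

```python
# Hypothetical illustration of double counting when a CVD is applied
# alongside an antidumping duty calculated with the NME (third-country)
# methodology. All figures are invented for illustration.

surrogate_normal_value = 100.0  # per-unit value built from third-country data
export_price = 80.0             # per-unit price charged in the U.S. market
subsidy_per_unit = 10.0         # per-unit benefit of a domestic subsidy

# Dumping margin as a percent of export price. Because normal value is
# taken from a surrogate country rather than from (possibly subsidized)
# Chinese prices, the gap already reflects, in theory, price advantages
# conferred by both export and domestic subsidies.
ad_margin = 100.0 * (surrogate_normal_value - export_price) / export_price

# A CVD sized to the same domestic subsidy would offset that benefit again.
cvd_rate = 100.0 * subsidy_per_unit / export_price

combined = ad_margin + cvd_rate
print(f"AD margin {ad_margin:.1f}% + CVD {cvd_rate:.1f}% = {combined:.1f}%")
```

U.S. law directs Commerce to back the export-subsidy portion out of the antidumping rate, but it makes no comparable provision for domestic subsidies captured through a surrogate normal value, which is the gap at issue here.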
The relevant WTO agreements are silent with regard to making adjustments to avoid double counting actionable domestic subsidies, and U.S. law does not provide Commerce with any specific authority to avoid double counting in such situations. Therefore, Commerce officials observed that they would have no choice but to apply both duties without making any adjustments. While at least two U.S. courts have suggested that double counting to compensate for the same unfair trade practice is generally considered improper, they have not ruled on the specific question of whether double counting of actionable domestic subsidies, in particular, is improper. Commerce officials told us that, theoretical arguments aside, interested parties finding fault with Commerce’s decision making would have to prove that there was actual double counting. Despite increasing concern about Chinese government subsidies and their adverse impacts on U.S. producers, U.S. producers may not currently avail themselves of the U.S. government’s primary tool for countering unfair subsidies—CVDs. While the methodology that Commerce currently employs to calculate antidumping duties on Chinese products already results in duty rates that offset subsidy benefits to some degree, Commerce could act to make CVDs available against China as well. It could do this either by changing China’s NME status, or by changing its current policy and determining that it may apply CVD law against China regardless of its NME status. However, Commerce appears unlikely to employ the first alternative in the near future, and the Georgetown Steel ruling raises an obstacle to employing the second, without clear authority from Congress. While Congress is considering legislation that would authorize Commerce to apply CVDs to China as an NME country, substantial practical questions about how such cases would proceed remain unanswered, and the results that they would produce remain uncertain. 
The absence of such information makes it difficult for interested Members of Congress, prospective participants in CVD cases, and Commerce itself to gain perspective on the implications of taking such actions. Commerce has had no experience in attempting to complete CVD investigations on Chinese products and has no specific guidance in place for how to proceed. In particular, Commerce lacks guidance or experience in applying third-country information to calculate CVD rates—an approach that is explicitly permitted under the terms of China’s accession to the WTO and that Commerce may very well find necessary to employ given the lack of transparency regarding China’s subsidy practices. The CVD rates that would result from these investigations are uncertain, as are the net effects of applying both CVDs and antidumping duties to Chinese products. Furthermore, Commerce lacks clear authority under U.S. law to either fully implement China’s WTO commitment regarding the use of third-country information in CVD cases or adjust antidumping duty rates to avoid double counting of Chinese domestic subsidy benefits. Given this lack of clarity, it is reasonable to expect that parties objecting to Commerce’s decisions on these issues would challenge relevant aspects of CVD decisions against China, complicating and delaying application of such duties to products from that country. Until these issues are clarified, policymakers will not be fully informed about the implications of applying U.S. CVD laws to China, and Commerce will not be prepared to implement such a change in policy. In order to provide Congress and the Department of Commerce with better information about the implications of taking actions that would result in application of U.S. 
CVD laws to China, we recommend that the Secretary of Commerce analyze and report to Congress on Commerce’s ability to identify and measure subsidy benefits at the present time, based on its knowledge of significant Chinese subsidy programs; and broadly applicable methodologies that Commerce might employ to complete CVD actions against Chinese products, if called upon, including how it might respond to potential double counting of domestic subsidy benefits when applying both countervailing and antidumping duties to the same products. In the event that (1) Commerce changes China’s NME status or (2) Congress decides to adopt proposed legislation that would authorize Commerce to apply U.S. CVD laws to NME countries, including China, Congress may wish to consider adopting legislation to provide Commerce clear authority to fully implement China’s WTO commitment regarding use of third-country information in CVD cases, and make corrections to avoid double counting domestic subsidy benefits when applying both CVDs and antidumping duties to the same products from NME countries, in situations where Commerce finds that double counting has in fact occurred, taking into account Commerce’s analyses of this issue prepared in response to our recommendation above. The Department of Commerce provided written comments on a draft of this report. These comments are reprinted in appendix III. Commerce provided a different characterization of our finding that it did not have clear legal authority to apply CVD law to China, taking the position that there is no explicit statutory bar to taking such an action, and stating that Commerce would carefully consider any CVD petition. We modified our report to clarify that Commerce could decide, in response to a petition, that circumstances warrant and permit a change in its policy. 
However, given that Commerce determined in 1984 that it did not have explicit legal authority to take such an action, and this was subsequently upheld and affirmed by a federal appeals court, and later confirmed by a 1994 statement of administrative action, we continue to believe that there would be legal obstacles to Commerce changing its policy. With regard to our recommendations, Commerce did not comment on our recommendation that it analyze and report on its ability to identify and measure subsidy benefits in China. Commerce believed our recommended report on the methodologies the department would employ if called upon to apply CVDs to China would be too speculative, and therefore not meaningful or appropriate before an actual case was filed, and that such a report could prejudge the outcome of future actions. We agree that specific decisions on how best to complete individual CVD actions against China would depend upon the facts in particular cases. We did not intend that Commerce provide detailed discussions of how it would respond to particular sets of circumstances. Rather, this report would provide Commerce, Members of Congress, and potential parties to CVD cases with some general-level guidance about how such actions might proceed. For example, such a report could address Commerce’s use of benchmark information from within or outside China to measure subsidy benefits and application of China’s WTO commitment regarding CVD actions involving state-owned enterprises. Providing broad commentary on such points would be consistent with Commerce making general guidance on its antidumping practices publicly available. Regarding our matters for congressional consideration, Commerce cited some legal authority for using external benchmarks in CVD cases. We evaluated this information and added a discussion in our report. 
We were not convinced that the cited authority would clearly provide for full implementation of the special methodology in China’s WTO accession agreement. An explicit grant of authority by Congress would remove doubt and lessen the chances of legal disputes, and therefore we continue to believe our suggestion is prudent. Commerce also said our suggestion that Congress provide Commerce with authority to correct any double counting of domestic subsidies in companion CVD and antidumping actions was not warranted or appropriate because Commerce had not yet encountered this situation, such corrections might be too difficult, and it would put China in a special category distinct from all other countries. We maintain that our analysis shows that there is substantial potential for double counting of domestic subsidies if Commerce applies CVDs to China while continuing to use its current NME methodology to determine antidumping duties. We believe that, in such a situation, Commerce should be provided authority to proactively address potential double counting, rather than waiting for it to occur and create methodological and legal problems. Finally, we intended that our suggestion on double counting apply to all NME countries, and have clarified our language on this point. The Department of Commerce also provided technical comments, as did USTR and ITC. We took these comments into consideration and made revisions throughout the report as appropriate to make it more accurate and clear. We are sending copies of this report to the Secretary of Commerce and the United States Trade Representative, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or any of your staff have any questions about this report, please contact me at (202) 512-4347 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In May 2003, the House Appropriations Committee’s Subcommittee on Commerce, Justice, and State, the Judiciary, and Related Agencies held hearings regarding U.S. government efforts to support American businesses adversely affected by imports from China. In light of concerns expressed at this hearing, the conference report on fiscal year 2004 appropriations legislation requested that GAO monitor the efforts of U.S. government agencies responsible for ensuring free and fair trade with China. In subsequent discussions with your staff, we agreed to respond by providing a number of reports on relief mechanisms available to U.S. producers adversely affected by unfair or surging imports, and the manner in which they have been applied to China. In this report, we (1) explain why the United States does not currently apply CVDs to imports from China, (2) describe available alternatives for applying CVDs to Chinese-origin imports, (3) explore the challenges that the Department of Commerce would face in applying these alternatives, and (4) examine the potential impact that applying these alternatives would have on the rates of duty applied to Chinese products. To address our objectives, we reviewed applicable U.S. laws and regulations and World Trade Organization (WTO) agreements, including the Agreement on Subsidies and Countervailing Measures, and China’s WTO accession agreement. We conducted a literature search and reviewed relevant scholarly and legal analyses, Department of Commerce determinations, and decisions by U.S. courts and the WTO Dispute Settlement Body. We consulted with trade and legal policy experts from the U.S. 
government, private sector trade associations, consulting firms, and academic institutions; law firms with broad experience in trade actions against China; as well as representatives of the WTO, the government of China, and other governments concerned about Chinese trade practices, including the European Union, Canada, New Zealand, and Mexico. In addition, to address our fourth objective, we obtained information on U.S. countervailing and antidumping duty actions from 1995 through 2004 from the Department of Commerce and the U.S. International Trade Commission. We used these data to construct our own database on countervailing duty determinations and antidumping duties applied on similar products over the same period. We included all countervailing duty cases over this time period, as well as all antidumping cases in which a petition was filed by U.S. industry for an antidumping investigation against a similar product from the same country (e.g., honey from Argentina). Of the 72 countervailing duty cases from 1995 through 2004, we found only 3 in which a similar antidumping petition was not also filed. Our database includes information on the outcome of the investigations (e.g., whether an order was issued), the status of the orders as of the end of 2004, the duty rates imposed in each case that resulted in a CVD order, and the antidumping duty rates imposed on similar products. For each countervailing or antidumping duty order, the Department of Commerce may issue several different duty rates. These may include separate duty rates for large individual companies (suppliers), as well as weighted average “all others” rates for smaller suppliers. We collected all of these rates and compared the lowest and highest separate rates, the average of all separate duty rates, and the “all others” rates. 
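The rate comparisons described above can be sketched in a few lines. The cases and rates below are hypothetical stand-ins for the Federal Register data GAO compiled; the sketch only illustrates how each order's rates were reduced to the four figures compared in the report:

```python
# Sketch of the duty-rate comparison methodology described above.
# The cases and rates are hypothetical; the actual database covered all
# U.S. CVD cases from 1995 through 2004 and antidumping cases on
# similar products from the same countries.

def summarize(separate_rates, all_others_rate):
    """Reduce one order's duty rates to the four figures compared in the report."""
    return {
        "lowest": min(separate_rates),
        "highest": max(separate_rates),
        "average": sum(separate_rates) / len(separate_rates),
        "all_others": all_others_rate,
    }

# Hypothetical companion orders on the same product from the same country:
# separate company-specific rates (%) plus a weighted-average "all others" rate.
cvd_order = summarize([2.5, 6.0, 11.0], all_others_rate=5.5)
ad_order = summarize([14.0, 33.0, 60.0], all_others_rate=28.0)

# Compare the two orders measure by measure (AD minus CVD, in percentage points).
comparison = {key: ad_order[key] - cvd_order[key] for key in cvd_order}
```

With these invented numbers the antidumping order exceeds the CVD order on every measure, mirroring the report's overall finding, though, as the report notes, individual real cases sometimes ran the other way.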
As we report above, we found that the average or median rates for countervailing duty orders are smaller than similar antidumping rates, whether comparing the lowest rates, the highest rates, the average rates, or the “all others” rates. However, as shown in figure 2, in some individual cases countervailing duty rates were higher than antidumping rates. Also, future investigations may yield different results depending on the types of products, countries, and activities investigated. Having verified these data with the original Federal Register notices, which provide the official U.S. government notification of investigations and orders, we find the data to be sufficiently reliable for analyzing the number, status, and duties (if imposed) on U.S. countervailing duty cases from 1995 through 2004, as well as U.S. antidumping cases on similar products. In addition, to provide information on the growth of U.S. imports from China, we examined official U.S. import data from the Department of Commerce, Bureau of the Census, which we adjusted for inflation using the end-use import price index published by Commerce’s Bureau of Economic Analysis. While U.S. data on imports from China have some acknowledged limitations, we found them to be sufficiently reliable for the purpose of establishing that there has been rapid growth in these imports in recent years. We performed our work from January 2004 through June 2005 in accordance with generally accepted government auditing standards. The WTO Agreement on Subsidies and Countervailing Measures defines a subsidy as a financial contribution by a government or any public body within a WTO member that confers a benefit. While the agreement imposes an outright ban on some types of subsidies, most types are not completely prohibited but are classified as actionable under certain conditions. 
Actionable subsidies are those that are specific—i.e., benefit a specific enterprise, industry, or group of enterprises or industries—and cause adverse effects to the interests of another WTO member, such as injury to their domestic industries. According to the WTO, members may impose CVDs when they (1) identify subsidized imports, (2) determine that a domestic industry is suffering injury, and (3) establish a causal link between the subsidized imports and the injury being suffered. These duties are intended to offset the price advantages that the subsidy confers on the imported product and, more broadly, encourage governments that maintain subsidies to eliminate them. The subsidies agreement requires that the investigating authorities quantify the value of the subsidies being provided and limits the level of duty imposed to that value. To facilitate identification of subsidies and evaluation of their trade effects, the agreement requires WTO members to provide the organization with annual notifications on all of the specific subsidies they maintain and to provide additional information on any of these programs when requested. The agreement specifies that member states should provide sufficient information “to enable other Members to evaluate the trade effects and to understand the operation of notified subsidy programs.” Under U.S. law, CVDs may be imposed against subsidized imports from other WTO members when a U.S. industry is materially injured or threatened with injury or the establishment of an industry in the United States is materially retarded. The ITC and the Department of Commerce share investigative and decision-making responsibility in CVD cases. The ITC determines whether there is material injury or threat thereof to the domestic industry by reason of the subject imports. 
Commerce determines whether the foreign country is providing a countervailable subsidy, and, if so, the size of the subsidy and (consequently) the size of the CVD that should be imposed. To make these determinations, Commerce solicits information from exporting country governments and from individual producers and exporters of the subject merchandise and applies this information to establish appropriate duty rates for each known exporter or producer. The United States has imposed CVDs with some regularity, on a variety of products from a variety of countries. From 1995 through 2004, U.S. domestic industries petitioned for 72 CVD investigations against 43 different products from 25 countries. Thirty-six of these investigations (50 percent) resulted in application of CVDs. Figure 4 shows the results of these 72 petitions. Generally, when petitioners seek imposition of CVDs, they also seek imposition of antidumping duties on the same product from the same country. In 69 of the 72 CVD cases, petitioners also requested a companion antidumping investigation. Dumping occurs when a foreign company sells merchandise in a given export market (for example, the United States) at prices that are lower than the prices charged in the producers’ home market or another export market. When this occurs, and when the imports have been found to materially injure, or threaten to materially injure, U.S. producers, WTO rules and U.S. laws permit application of antidumping duties to offset the price advantage enjoyed by the imported product. As in CVD cases, Commerce analysts establish antidumping duties for each known producer or exporter. Figure 5 illustrates how antidumping duties are determined. Petitioners requesting antidumping investigations do not always request CVD investigations, and CVDs are, in fact, sought and imposed much less frequently than are antidumping duties. From 1995 through 2004, U.S. 
industry groups petitioned for nearly five times as many antidumping as countervailing duty investigations (354 compared with 72). Similarly, the United States put in place over four times as many antidumping duty orders (156) as it did CVD orders (36). Figure 6 shows the distribution of these countervailing and antidumping duty orders by year for 1996 through 2004. For antidumping orders, these are further broken down into orders against market economies, China, and other nonmarket economies. The number of CVD orders imposed might have been higher, and the contrast with antidumping duty orders less pronounced, if CVDs had been available against nonmarket economies during this period. Nonetheless, figure 6 shows that even among market economy countries, the United States imposes CVDs much less frequently than antidumping duties. The following are GAO’s comments on the Department of Commerce’s letter dated June 1, 2005. 1. We agree that U.S. trade law does not explicitly bar CVD actions against NME countries. Also, we acknowledge that the Department of Commerce remains open to considering petitions for CVD action against such countries, and that Commerce could conceivably decide that the facts in a particular case warrant and permit applying CVDs in an NME context. We have revised the text to ensure that these points are clearly stated. Nonetheless, while not explicitly barring CVD actions against NME countries, U.S. trade law also does not explicitly authorize such actions, and both Commerce and a U.S. Court of Appeals decision have indicated that U.S. CVD law was not intended to be applied to NME countries. This position was also supported by the Statement of Administrative Action accompanying the 1994 Uruguay Round Agreements Act. Accordingly, we conclude that there would be legal obstacles to Commerce reversing its policy and allowing CVD actions against NME countries, including China. 
It is likely that, absent a clear grant of authority, such a policy change would result in court challenges. 2. We do not presume that applying CVD law to China would require that China be designated a market economy under U.S. antidumping law. We assume that if Commerce applied CVDs to China without changing its status as an NME country, it would continue to apply its NME methodology in antidumping cases against that country. 3. We agree that completing CVD actions against China would be a challenging exercise, and that specific decisions on how best to complete such actions would depend on the facts at hand in particular cases. We do not intend to suggest that Commerce provide detailed analyses of how it would respond in case-specific circumstances. Nonetheless, a Commerce study evaluating how it might generally proceed in such cases would be helpful to Commerce itself and Members of Congress in considering whether to take actions that would lead to CVD cases against NME countries, as well as to potential parties to such actions concerned about how to proceed in such cases. For example, such a report could address (1) benchmark information from within or outside China that Commerce would consider in measuring subsidy benefits, (2) methods and approaches that could be employed to respond to potential double counting of domestic subsidy benefits, and (3) how China’s WTO commitment regarding subsidies and state-owned enterprises might affect specificity determinations. We have revised the report text to make these points. 4. We acknowledge that Department of Commerce regulations do provide for applying information from outside a subsidizing country to assist in assessing subsidy benefits—in some circumstances—and that Commerce has applied them in a number of cases. We have revised our report to include information on these provisions. Nonetheless, current U.S. 
law does not explicitly authorize Commerce to fully apply China’s commitment regarding the use of information from outside China to complete CVD actions. Also, as discussed in more detail in the body of the report, the methodologies set forth in regulatory provisions cited by Commerce do not apply to the full range of subsidies that might arise in a CVD case. Moreover, the methodology in the more specific of these provisions has been questioned in a dispute settlement case under the North American Free Trade Agreement. 5. We agree that any legislation authorizing Commerce to adjust duty rates to avoid double counting in applying countervailing and antidumping duties to products from NME countries should not apply only to China. We have modified the report text to make this clear. We disagree with Commerce’s comment that legislative action on this matter is not warranted or appropriate. We believe that sound economic reasoning suggests that there is substantial potential for domestic subsidies to be double counted in the event that Commerce applies CVDs to NME country products while continuing to use third-country information to calculate antidumping duties on those same products. Therefore, congressional action to provide Commerce with authority to avoid double counting in these instances would be prudent. We agree that making such adjustments could raise complex methodological issues. It is for this reason that we recommend that, in reporting on methodologies for completing CVD actions against China, Commerce include a discussion of responding to potential double counting of domestic subsidy benefits. This would allow Commerce to evaluate, among other things, the feasibility and cost of making such adjustments and their likely impact. In addition to those named above, the following individuals made significant contributions to this report: Adam R. Cowles, R. Gifford Howland, Michael McAtee, Richard Seldin, Ross Tuttelman, and Timothy Wedding.

Some U.S. 
companies allege that unfair subsidies are a factor in Chinese success in U.S. markets. U.S. producers injured by subsidized imports may normally seek countervailing duties (CVD) to offset subsidies, but the United States does not apply CVDs against countries, including China, that the Department of Commerce classifies as "nonmarket economies" (NME). In this report, GAO (1) explains why the United States does not apply CVDs to China, (2) describes alternatives for changing this policy, (3) explores challenges that would arise in applying CVDs, and (4) examines the implications for duty rates on Chinese products. The current Commerce policy of not applying CVDs to NME countries (including China) rests on two principles advanced in 1984 and confirmed by a federal appeals court. These are that Commerce (1) lacks explicit authority to do so, and (2) cannot arrive at meaningful conclusions regarding subsidies in such countries due to government intervention in the economy. Commerce could reclassify China as a market economy or individual Chinese industries as "market oriented" and apply CVDs against China as a market economy. Commerce has criteria for such determinations, but said that China is unlikely to satisfy them in the near term. It could also reverse its 1984 position and apply CVDs without any change in China's NME status. However, absent a congressional grant of authority, such a decision could be challenged in court, with uncertain results. World Trade Organization (WTO) rules do not explicitly preclude either alternative. Commerce would face challenges, regardless of the alternative adopted. Chinese subsidies remain difficult to identify and measure. Employing third-country information or "facts available" may help, but would not eliminate these difficulties. Commerce lacks clear authority to fully implement China's WTO commitment on use of third-country information in CVD cases. 
It is unclear whether, on a net basis, applying CVDs would provide greater protection than U.S. producers already obtain from antidumping duties. CVDs alone tend to be lower than antidumping duties. If Commerce grants China market economy status, both CVDs and antidumping duties could be applied simultaneously, but required methodological changes could well reduce antidumping duties. It is not clear whether CVDs would compensate for these reductions. Regardless of China's status, some duties might need to be reduced to avoid double counting of subsidies. Commerce lacks clear authority to make such corrections when domestic subsidies are involved. |
The Corps is an agency in the DOD that has military and civilian responsibilities. The military program provides engineering, construction, and environmental management services to DOD agencies. Under its civil works program, at the direction of the Congress, the Corps plans, constructs, operates, and maintains a wide range of water resources projects. A military Chief of Engineers oversees the Corps’ civil and military operations and reports on civil works matters to an Assistant Secretary of the Army for Civil Works. The Corps operates as a military organization with a largely civilian workforce (34,600 civilian and 650 military personnel). The Corps is organized geographically into its headquarters, located in Washington, D.C.; eight divisions across the country; and 41 subordinate districts throughout the United States, Asia, and Europe (see fig. 1). Corps headquarters creates policy and plans the future direction for the organization. The eight divisions coordinate the work carried out by the 41 districts, and individual projects are largely planned and implemented at the district level after they have been approved at the division and headquarters level. To assist in its human capital planning efforts, in September 2002, the Corps issued a human capital planning document entitled The Strategic Management of Human Capital in the U.S. Army Corps of Engineers. The human capital plan was focused on recruiting and retaining a world- class workforce, and in order for this to happen the Corps recognized that it needed to become a learning organization and develop leaders at all levels. The plan also documented the human capital challenges the Corps faced as well as past, current, and future responses to those challenges. The plan incorporated and was driven by, among other things, the agency’s 2002 strategic plan, called the Campaign Plan, and its accompanying vision statement. 
In developing the human capital plan, the Corps incorporated the three strategic goals contained in the Campaign Plan: (1) people—being recognized for the technical and professional excellence of its world class workforce, functioning as teams delivering projects and services; (2) process—using the project management business process to operate as one Corps, regionally delivering quality goods and services; and (3) communication—communicating effectively to build synergistic relationships that serve the nation. Each incoming Commander of the Corps has the opportunity to redraft the strategic plan for the agency, which last occurred in June 2005. Specifically, the 2005 strategic plan incorporated the Corps’ increased responsibilities for various contingency operations, such as Iraq and Afghanistan, and responding to natural disasters like Hurricane Katrina. The strategic plan also describes the agency’s responsibilities under the 2004 National Response Plan—responding to the Department of Homeland Security domestically, and to the U.S. Agency for International Development globally, for non-DOD contingency operations. Additionally, the 2005 strategic plan contained three new strategic goals not contained in the agency’s 2002 strategic plan: (1) support stability, reconstruction, and homeland security operations; (2) develop sound water resources solutions; and (3) improve the reliability of water resources infrastructure using a risk-based asset management strategy. Because a new Commander for the Corps was appointed in 2007, the agency is in the process of redrafting its Campaign Plan to reflect the new Commander’s strategic vision and priorities for the next 3 to 4 years. Finally, in 2004 the Corps began a new organization plan, called USACE 2012, intended to streamline the agency’s organizational structure and reduce redundancy among districts. 
USACE 2012 focuses on implementing the following four goals, called key concepts, to achieve organizational and cultural change: (1) establishing regional business centers, which foster divisions and districts working together as a regional unit; (2) creating regional integration teams, focused on the execution of the civil works and military programs mission; (3) establishing communities of practice, consisting of individuals who practice and share an interest in a major functional area or business line, for the purpose of developing and sharing best practices and fostering cross-functional and cross-divisional collaboration; and (4) developing national and regional support models designed to provide support services that effectively separate divisions’ responsibilities from headquarters’. Before 2004, the eight divisions served largely as a conduit between headquarters and the district offices, and the 41 districts, in turn, were each responsible for managing their own workforce to complete their projects. Under the new organizational structure, the eight divisions have greater responsibility for managing the workforce and workload of all of their component districts on a regional basis. According to Corps officials, USACE 2012 is part of a continuous improvement process to better meet its customers’ and national needs. The Corps’ 2002 strategic human capital plan is out of date and not aligned with the agency’s most recent strategic plan, developed in 2005. Because the Corps lacks a current human capital plan, human capital activities are being managed inconsistently by division and district officials across the agency. In 2002, the Corps’ human capital plan was designed to, among other things, improve the agency’s ability to attract and retain a world class workforce and provide more accurate and objective ways to measure success. 
Also consistent with OPM’s guidance on effective human capital planning, the 2002 human capital plan was aligned with the agency’s 2002 strategic plan and its accompanying vision statement. For example, the 2002 strategic plan included “people” as one of its three strategic goals—that is, the Corps wanted to “be recognized for the technical and professional excellence of our world class workforce, functioning as teams delivering projects and services.” The people goal contained three major objectives—attract and retain a world-class workforce, create a learning organization, and develop leaders at all levels—and strategies for each of them. Each objective and strategy, along with an implementation plan, was addressed in the agency’s human capital plan. However, the human capital plan has not been revised since 2002 to reflect the Corps’ new strategic direction as outlined in the agency’s most current strategic plan, developed in June 2005, and other recent events. For example, the 2005 strategic plan does not contain a strategic goal related to people. It does, however, contain three additional strategic goals that are not reflected in the 2002 human capital plan: (1) support stability, reconstruction, and homeland security operations; (2) develop sound water resources solutions; and (3) improve the reliability of water resources infrastructure using a risk-based asset management strategy. Moreover, because the human capital plan has not been revised, it does not reflect events that have taken place since 2002 that have had a significant impact on the agency’s human capital needs, such as the agency’s increased focus on supporting contingency operations and its new responsibilities outlined in the 2004 National Response Plan. For example, since the 1990s, the Corps has been called upon more frequently to take part in contingency operations at home and abroad—such as responding to natural disasters like Hurricane Katrina. 
Similarly, under the National Response Plan, the Corps provides support as both a primary agency and a coordinating agency for emergency and support functions outlined in the plan. We found that the relevance of the Corps’ outdated human capital plan will become further diminished in the near future because the agency is beginning the process of updating its 2005 strategic plan to reflect the new strategic direction of the incoming Commander of the Corps. According to Corps officials, although this has not been communicated agencywide, headquarters has “abandoned” the use of the outdated 2002 human capital plan, replacing it with annual and quarterly updates of human capital activities required by OPM under the President’s 2002 Management Agenda. Officials in the Corps’ Office of Human Resources told us that the Corps does not have the staff and resources to both update its human capital plan and provide the updates to OPM. The President’s Management Agenda established governmentwide initiatives designed to improve the management and performance of the federal government in five areas, including strategic management of human capital. OPM was designated the lead agency for overseeing the human capital initiative, and federal agencies were to identify human capital activities they planned to undertake and to provide quarterly and annual updates on these activities to OPM. 
For example, to fulfill its annual reporting requirements to OPM, the Corps provides a list of completed human capital activities, such as “Community of Practice Conference Workshop held,” and activities to be undertaken, such as “Identify Fiscal Year 2008 Intern Requirements.” However, we found that these updates are not an adequate substitute for the Corps’ human capital plan because they do not represent a coherent framework of the agency’s human capital policies, programs, and practices, and they do not include any of the components of an effective human capital plan, such as goals, strategies, and a system for measuring how successfully the strategies have been implemented. The lack of a current human capital plan has also led to inconsistent approaches in how divisions and districts are managing human capital activities for the agency. For example, some division and district officials told us that they are still using the 2002 human capital plan to guide their activities; others said they relied instead on guidance they receive from headquarters. Still others said that because they receive limited guidance from headquarters on developing human capital goals and objectives, they have to independently develop strategies as best they can. For example, one district told us that it had developed its own informal succession plan in 2004 that it updates continually. The plan assesses all of the district’s ongoing missions as well as the strategies for recruiting, developing, and retaining the technical skills needed to carry out the district’s mission. Finally, some districts said they relied on information they receive from the divisions, and others told us that they rely on information on human capital flexibilities obtained from an OPM handbook to assist with human capital planning.

The Corps does not have comprehensive agencywide data on critical skills to identify and assess current and future workforce needs.
As a result, the Corps cannot effectively identify gaps in its workforce needs and determine how to modify its workforce planning approaches to fill these gaps. Effective workforce planning requires consistent agencywide data on the critical skills needed to achieve current and future programmatic results. However, the Corps does not have a process for collecting comprehensive and consistent agencywide data, and headquarters has not provided guidance to its divisions and districts on how to collect this information. More specifically, according to Corps officials, while the agency collects critical skills data on its current workforce needs through the Army’s Workforce Analysis Support System database, this database does not allow the Corps to capture information on the agency’s future workforce needs. In the absence of such a process, some Corps divisions and districts have independently collected their own data on workforce needs; however, we found that those divisions and districts that have collected data on critical skills have used various methods to do so. For example, some division and district officials told us that they assessed their current workforce at the division level to determine their critical skills. Others stated that they conducted a gap analysis to identify critical skills needs. Because these data on both the agency’s current and future workforce needs have not been systematically collected, a meaningful comparison of the data across divisions to assess the agency’s overall needs is not possible. Consequently, we believe that the lack of this information hampers the Corps’ ability to develop effective approaches to recruiting, developing, and retaining personnel. Obtaining comprehensive and consistent agencywide data on critical skills needs has become even more important since the Corps began to restructure its organization in 2004. 
One of the primary goals of the restructuring is to streamline the organization to more effectively share Corps resources. Under the previous organizational structure, headquarters generally set policy, divisions communicated policy to the districts, and the districts were responsible for managing their workforce and workload. Districts’ workforce management activities included hiring staff and contracting work out. In addition, according to Corps officials, while some districts interacted to share resources, others did not. Under the new structure, which continues to evolve, the workforce and workload management functions have shifted to the divisions. The Corps would like this structure to enable the divisions, with input from their districts, to more efficiently meet the workforce needs across the division by sharing human capital resources, such as biologists and engineers, among the districts. According to the Corps, this approach should also foster information and resource sharing among the eight divisions. For example, officials in one district told us that when their work dries up, under the new organizational concept the district can get work from other districts, or staff can be reassigned or shared with other districts or divisions. However, it is unclear to us how the goals of this new structure can be realized if the Corps’ divisions and districts do not have consistent agencywide data to enable them to identify the units that have the critical skills that other organizational units are seeking.

The Corps has recently recognized the need to establish a process for collecting comprehensive and consistent agencywide information on critical skills. In June 2007, the Corps initiated a National Technical Competency Strategy to, among other things, identify (1) the future roles of the Corps, (2) the critical skills needed to support these roles, and (3) any critical skills gaps.
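The kind of critical-skills gap identification the strategy calls for amounts to comparing current staffing against projected need, skill by skill. A minimal illustrative sketch (all skill names and head counts below are hypothetical, not data from the report):

```python
# Illustrative sketch of a critical-skills gap analysis: compare current
# staffing with projected need for each skill. The skill names and counts
# are hypothetical examples, not figures from the report.

def skills_gap(current, projected_need):
    """Return {skill: shortfall} for every skill where need exceeds staffing."""
    gaps = {}
    for skill, needed in projected_need.items():
        shortfall = needed - current.get(skill, 0)
        if shortfall > 0:
            gaps[skill] = shortfall
    return gaps

current_staff = {"civil engineer": 120, "biologist": 30, "hydrologist": 15}
future_need = {"civil engineer": 140, "biologist": 28, "hydrologist": 25}

print(skills_gap(current_staff, future_need))
# -> {'civil engineer': 20, 'hydrologist': 10}
```

With consistent agencywide data of this form, a division could aggregate results across its districts to see which units hold surplus skills that other units are seeking.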
In October 2007, the Corps established a National Technical Competency Team to implement the strategy through coordination with Corps senior leadership. The team is charged with reviewing prior and current division and district initiatives to collect data on the agency’s technical skill needs and capabilities and identifying ways to unify and integrate these initiatives to minimize redundancy. However, it is too early to evaluate the Corps’ overall progress on this effort.

A number of human capital challenges, including strong competition from other employers to hire the most talented potential employees, are affecting the Corps’ ability to attract and retain a qualified workforce, according to Corps officials. Although various human capital tools to help attract and retain a high-quality workforce are available to the Corps under federal personnel law, the agency’s use of several financial incentives has sharply declined in the last 5 years. Moreover, the Corps does not have a process in place to evaluate the effectiveness of the human capital tools it has used, so while the agency can provide information on the extent to which it has used various tools, it cannot assess their effectiveness in meeting workforce needs.

According to Corps headquarters, division, and district officials, a number of human capital challenges are undermining their efforts to balance the Corps’ workforce with its workload. These challenges include (1) competition from the private sector and other entities, (2) the loss of staff to various contingency operations, and (3) the large number of retirement-eligible employees. First, Corps officials told us that competition from the private sector and other entities, such as state and local governments, greatly affects their ability to recruit and retain a qualified workforce.
For example, in certain locations, such as Los Angeles, it can be difficult to fill engineering positions because the cost of living is high and the Corps has to compete with private firms, the city, and the county, which can pay more than the agency for qualified personnel. Similarly, officials told us that in one of the states where the Corps operates, the state government recently increased the salaries of engineers to a level that is difficult for the Corps to match, thereby making it harder for the Corps to effectively recruit and retain engineers in that labor market. In addition, Corps officials told us that the overall state of the economy also affects the agency’s ability to compete with others for qualified individuals. They told us that when the economy is doing well it is harder for the Corps to compete with other employers. Second, the Corps is also challenged by the vacancies created by employee deployments for contingency operations, such as war and natural disasters, which since the 1990s have increasingly become a focus for the Corps. For example, Corps officials told us that since March 2004 about 4,000 employees have been deployed to support Iraq and Afghanistan operations, and since August 2005 an additional 9,000 have been deployed to help with efforts to address the effects of Hurricanes Katrina and Rita. Corps officials in one division told us that they are running out of volunteers to support the Gulf Regions—with some employees having served up to three tours in these areas. In some cases, the Corps calls upon its remaining employees to perform dual roles, a situation that stresses the workforce and could put the Corps at risk of not being able to perform its mission. In addition, Corps officials told us the agency uses contractors to fill some of the gaps caused by these staff losses. The Corps also relies heavily upon its reemployed annuitant cadre to fill vacancies created by such deployments. 
At the same time, Corps officials stated that while vacancies created by deployments and volunteer assignments are a challenge, they also offer opportunities—that is, the employees who take over the deployed employees’ responsibilities gain experience in new areas. Moreover, deployed employees learn from their experiences, adding value to the Corps. Finally, Corps officials told us that the increasing number of retirement-eligible employees is a challenge to planning for its future workforce. As we have previously reported, the federal government is confronting a retirement wave and with it the loss of leadership and institutional knowledge at all levels. If large numbers of employees retire over a relatively short period and agencies are not effective in replacing them with the appropriate number of employees possessing the needed skills, the resulting loss of institutional knowledge and expertise could adversely affect mission achievement. According to the Corps, in fiscal year 2006, approximately 23 percent of the agency’s workforce was eligible to retire, although on average, Corps employees retire 5.75 years after they are eligible. Corps officials told us that the agency works with retirement-eligible employees to provide them with interesting work to delay their departure. For example, the Corps allows retirement-eligible employees to work on projects in which they have a special interest, or if the employees are willing, the Corps may deploy them to other locations, such as Iraq, for more interesting work in the hope that this will persuade them to stay on with the agency.

The Corps uses various hiring authorities and human capital flexibilities to offset its human capital challenges. Some examples of the hiring authorities used by the agency include the following:

The Federal Career Intern Program—under this hiring authority the Corps hired 621 interns from fiscal year 2002 through fiscal year 2006.
Most interns are hired for 18 to 24 months, typically entering the program at entry-level salaries. At the end of the program, interns are guaranteed a full-time position if they agree to sign a mobility agreement. Corps officials told us that interns are a major component of the Corps’ recruiting efforts because the agency can easily convert interns to full-time employees. They also told us that they primarily concentrate their intern recruitment efforts in the engineering and scientific specialties, which constitute approximately 90 percent of their intern hiring efforts. Further, according to these officials, interns typically realize the benefits of working for the Corps during their internships and tend to stay with the agency.

Reemployed Annuitant Office Cadre Program—under this authority the Corps rehires former federal employees to supplement its workforce, as needed. The Corps established this program in response to its declining workforce, increased responsibilities for various contingency operations, and the high number of retirement-eligible employees. Among other things, the Corps uses these employees to fill positions needing specialized skills or to supplement staff to complete specific projects in a timely manner.

Student Career Experience Program and the Student Temporary Employment Program—under these authorities the Corps can hire applicants currently enrolled in high school, college, a university, or a technical or vocational school. Students hired through the Student Career Experience Program must be enrolled in a specific educational discipline that meets the requirements for the position and are eligible for conversion to permanent employees. Students hired through the Student Temporary Employment Program are not required to be in educational disciplines that match the work the student is performing, and their appointments are limited to 1 year but may be extended until they complete their educational requirements.
The Veterans Employment Opportunities Act of 1998—under this authority the Corps can hire applicants who have preference eligibility or who substantially completed 3 or more years of active service and received an honorable or general military discharge or were released under honorable conditions shortly before completing a 3-year tour of duty.

One district also told us that it has an affirmative employment plan that includes outreach to various colleges and universities to attract qualified applicants from diverse backgrounds. Under the plan, the district participates in various conferences, such as the Hispanic Engineer National Achievement Awards Conference and the Black Engineering Conference. As a result of its affirmative employment plan, according to district officials, the district has increased the quality and diversity of its workforce.

The Corps also uses a variety of human capital flexibilities to maintain its workforce, as shown in table 1. According to Corps officials, some of these flexibilities are helpful to their recruiting efforts in areas where the cost of living is high, such as San Francisco. In such locations, the Corps uses such tools as recruitment and retention bonuses as an incentive for employees to work there. Corps officials also cited other tools they use to attract and retain a qualified staff, including paying for employees to obtain advanced degrees; providing long-term training; and providing a family-friendly workplace that allows flex-time, telecommuting, or alternative work schedules. While the Corps has a number of flexibilities available to help in its recruiting and retention efforts, we found that the use of these flexibilities has sharply declined in recent years.
For example, although the Corps awarded approximately $2.5 million in recruitment, relocation, and retention bonuses during fiscal years 2002 through 2006, the amount it devoted annually to recruitment bonuses decreased almost 97 percent during that time—from about $750,000 in fiscal year 2002 to about $24,000 in fiscal year 2006. Moreover, the total amount the Corps spent annually on recruitment, relocation, and retention decreased 75 percent from fiscal year 2002 to 2006—from about $800,000 to about $198,000. (See table 2.) This trend is inconsistent with the concerns Corps officials have cited about the growing impact of human capital challenges on the Corps’ workforce over the past 6 years.

District officials with whom we spoke generally felt that the Corps should be more aggressive in its use of human capital authorities and flexibilities to address its human capital challenges. More specifically, some officials said that increasing the agency’s use of recruitment, relocation, and retention bonuses would increase the agency’s ability to attract and retain a qualified workforce. For example, according to one district official, although his district tries to provide incentives to recruit qualified staff, the incentives have to first be approved by the district’s Corporate Board. When this approval is not received, he has often had trouble hiring experienced scientists and engineers and has had to hire less experienced staff instead. In addition, some officials told us that increasing the use of the various student intern and career experience programs would also help recruit qualified people in a shrinking labor pool. Further, these officials suggested establishing or increasing early outreach to students and schools, in addition to the Corps’ college recruiting initiatives, as a way to increase students’ interest in careers in science, technology, engineering, and mathematics—as well as a career with the Corps.
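The percentage declines cited above follow directly from the dollar figures; a quick arithmetic check using the report’s approximate amounts:

```python
# Check of the spending declines described above, using the report's
# approximate fiscal year 2002 and fiscal year 2006 dollar amounts.

def pct_decline(earlier, later):
    """Percentage decrease from an earlier amount to a later amount."""
    return (earlier - later) / earlier * 100

recruitment_only = pct_decline(750_000, 24_000)   # recruitment bonuses alone
all_three = pct_decline(800_000, 198_000)         # recruitment + relocation + retention

print(f"{recruitment_only:.1f}%")  # 96.8% -- the report's "almost 97 percent"
print(f"{all_three:.2f}%")         # 75.25% -- the report's "75 percent"
```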
In addition to the use of human capital tools discussed above, the Corps also has the ability to outsource portions of its workload to private sector organizations and other entities. More specifically, the Corps has a goal of contracting out 30 percent of the planning and design aspects of its civil works projects, allowing the agency to meet its workload needs without having to hire additional staff to fill gaps in its workforce. Corps officials told us that they use this option when they do not have the staff or skill sets to assign to a particular project. On the other hand, according to one Corps official, although approximately 40 percent of the Corps’ engineering work is done in-house, that number may be declining. This official said that the practice of “contracting out for the sake of contracting out” makes it difficult to bring people into the Corps because engineers do not want to review the work of contractors—they would rather do the work themselves. The official stated that the Corps needs to find the right balance between in-house and contract work.

Finally, while the Corps tracks the extent to which it uses certain human capital tools, it has not developed a process to systematically evaluate their effectiveness. For example, the Corps tracks and can provide information on its use of recruitment and retention bonuses, but it does not have a process for assessing the extent to which such monetary flexibilities are effective in helping recruit and retain a qualified staff. Consequently, the Corps could not provide us with information on the extent to which its use of various tools and flexibilities, such as retention bonuses, has been effective in meeting its workforce needs. Without a process to evaluate the effectiveness of its human capital tools, it is unclear how the Corps can determine the overall costs and benefits of the various methods it is using to recruit and retain employees and whether certain tools are being under- or overused.
An agency’s human capital plan is the key to its progress toward building a highly effective organization that can recruit, hire, motivate, and reward a top-quality workforce. Although the structure, content, and format of human capital plans may vary by agency, human capital plans should clearly reflect the agency’s strategic direction. However, this is not the case with the Corps because it does not have a current human capital plan that is aligned with its strategic plan. Without such a human capital plan, the agency not only is limited in strategically managing its workforce efforts but also is not providing clear guidance to all of its organizational levels on how they are to effectively and consistently carry out their human capital responsibilities. Further, the Corps’ lack of comprehensive and consistent agencywide data on critical skills undermines its ability to identify and assess current and future workforce needs. It remains to be seen whether the Corps’ recently begun effort to develop a process to collect such information will be successful. Finally, although the Corps uses a number of human capital tools to address the challenges it faces, such as an aging workforce and competition from the private sector for qualified applicants, it lacks a process to assess the effectiveness of these tools. Without such a process, the Corps has no way to determine either the overall costs and benefits of the tools it uses to recruit and retain employees or whether additional approaches are needed to develop and maintain its workforce for the future.

To help the Corps better manage its workforce planning efforts, we are recommending that the Secretary of Defense direct the Commanding General and Chief of Engineers of the U.S.
Army Corps of Engineers to take the following three actions:

Develop a human capital plan that is directly linked to the Corps’ current strategic plan and that contains all the key components of an effective plan as outlined by the Office of Personnel Management.

Distribute the revised plan agencywide and direct the divisions and districts to use it to guide their human capital activities.

Develop and implement a process for determining the effectiveness of the human capital tools the Corps is using so that it can adjust their use, as necessary, to meet workforce needs.

We provided a draft copy of this report to the Department of Defense for review and comment. The Department generally concurred with our recommendations. Specifically, the Department concurred with our recommendation that the Corps develop a human capital plan that is directly linked to the Corps’ current strategic plan and that contains all the key components of an effective plan as outlined by OPM. The Department stated that it will conduct an Enterprise Human Resources Strategy Summit on July 9-11, 2008, with stakeholders to obtain input that will be used to update the Corps’ human capital plan. The Department stated that it expects to finalize the Corps’ human capital plan by January 2009. The Department also agreed with our recommendation to distribute the revised human capital plan to the Corps’ divisions and districts, stating that it would do so with the appropriate guidance within 30 days of the plan being finalized. Finally, the Department concurred with our recommendation that the Corps develop and implement a process for determining the effectiveness of its human capital tools so that it can adjust their use, as necessary, to meet workforce needs. The Department stated that metrics for determining the effectiveness of the human capital tools used by the Corps will be identified and included in the agency’s updated human capital plan.
The Department also provided additional information regarding various human capital actions and initiatives mentioned in our report. The full text of the Department’s comments can be found in appendix II, as well as our response to these comments. Of particular note is the Department’s comment that since 2005, the Corps has been rated “green” in status and “green” in progress on the Human Capital Scorecard by the Office of Management and Budget (OMB). According to the Department, the Corps’ human capital initiatives received such a rating only after rigorous scrutiny by OMB and OPM. We are aware that the Corps has been rated “green” for its human capital initiative updates; however, as we state in the report, these updates do not provide an adequate substitute for the agency’s human capital plan because they do not include any of the components of an effective plan, such as goals, strategies, and a system for measuring how successfully strategies have been implemented. Consequently, they do not represent a comprehensive framework of the agency's human capital policies, programs, and practices needed to assist the Corps in achieving its mission.

Additionally, the Department stated that the report placed undue weight on feedback from a small number of respondents. We disagree with the Department’s characterization. We contacted officials in all eight Corps division offices and a third of all the Corps district offices, and reported on those experiences and opinions with which these officials generally concurred. For example, the report states that district officials with whom we spoke generally felt that the Corps should be more aggressive in its use of human capital authorities and flexibilities. The individual examples cited throughout the report were used to provide more clarification on the specific types of concerns and situations being faced by the district officials.
We are sending copies of this report to the Secretary of Defense, the Commanding General and Chief of Engineers of the U.S. Army Corps of Engineers, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III.

We were asked to examine the (1) extent to which the Corps has aligned its human capital plan with its strategic plan, (2) extent to which the Corps has the information necessary to identify and meet current and future workforce needs, and (3) challenges the Corps faces in meeting its workforce needs. To assess the alignment of the U.S. Army Corps of Engineers’ human capital plan with its strategic plan, we analyzed and reviewed a broad range of Corps policy and planning documents from headquarters and divisions. Specifically, we examined information on the Corps’ operations and strategic planning efforts, such as the Corps’ 2002 Strategic Human Capital Plan, the Integrated Strategic Plan, Campaign Plans, related headquarters and division documents, and the USACE 2012 regionalization plan. We also reviewed information from Corps strategic boards and committees, the Office of Personnel Management’s (OPM) Human Capital Assessment and Accountability Framework, and our relevant reports. We corroborated information provided in these documents through interviews with human resources managers and program managers at Corps headquarters, divisions, and districts.
We also interviewed cognizant community of practice program leaders in real estate, contracting, planning, research and development, operations and regulations, resource management, strategic integrations, human resources, program and project management, logistics, environment, and engineering and construction. To assess the extent to which the Corps is collecting the information necessary to meet current and future workforce needs, we visited and interviewed Corps officials at two divisions (the North Atlantic and South Pacific divisions) and three districts (New York, San Francisco, and Sacramento) to obtain information about their strategic workforce planning strategies and their human capital initiatives related to recruitment, development, and retention of staff. We used the information obtained from the visits to develop a structured interview that we administered to the Corps’ eight divisions and a purposeful sample of 14 of the Corps’ 38 districts that conduct work in the United States. We selected 2 districts from each division to include in our interviews, with the exception of Pacific Ocean Division, where we interviewed only the division staff. Our site selections were based on (1) number of scientists and engineers, (2) overall full-time equivalent employees, (3) budget size, and (4) geographic location. Although the information from our sample of districts is not generalizable to all districts within a division, our interviews cover human capital issues at locations representing nearly half (46 percent) of Corps scientist and engineering staff, and represent issues at locations with diverse staff sizes, budget sizes, and geographic locations. We did not include districts in the Pacific Ocean Division because 2 of the districts are outside the United States, and the human capital challenges at the domestic districts—Alaska and Honolulu—would likely be unique to labor force demographics at these locations. 
Additionally, because there are only 2 districts within the Pacific Ocean Division that perform work in the United States, the division is likely more aware of its districts’ activities than are other divisions that are responsible for more districts. The 14 districts selected were Huntington, Louisville, St. Louis, Vicksburg, New England, New York, Omaha, Walla Walla, Jacksonville, Mobile, Albuquerque, Los Angeles, Fort Worth, and Little Rock. Although the New Orleans District was originally selected based on our criteria, we chose St. Louis as a replacement because of other ongoing audit work at the site, and the Corps’ heavy workload related to Hurricane Katrina reconstruction efforts. We interviewed managers identified by the District Deputy Commander responsible for strategic human capital planning and human resources-related issues. The structured interview covered, among other things, human capital initiatives, performance measures, critical skills, and challenges to meeting workforce needs. To reduce nonsampling errors, we conducted pretests with respondents from two divisions and three districts to ensure that questions were interpreted in a consistent manner, and we revised the questions on the basis of the pretest results. We also reviewed division and district documents on recruitment, training and development, and retention to corroborate information discussed during the interviews.

To determine the challenges the Corps faces in meeting its workforce needs, we included open-ended questions about challenges the Corps faces in meeting its workforce and program needs in our structured interviews and interviewed community of practice program leaders at Corps headquarters. We conducted a content analysis of interview responses for which general themes were developed and then independently coded. Coding discrepancies were reviewed, and if necessary, arbitrated by a third party until agreement statistics reached 100 percent.
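The report does not specify which agreement statistic was used; a common choice in content analysis is simple percent agreement between two coders. A minimal sketch (the theme labels below are illustrative, not the report’s actual coding scheme):

```python
# Sketch of a percent-agreement statistic for reconciling two coders'
# content codes. The theme labels are hypothetical examples.

def percent_agreement(coder_a, coder_b):
    """Share of items the two coders assigned the same code, as a percentage."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same set of items")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a) * 100

a = ["competition", "retirement", "deployment", "retirement"]
b = ["competition", "retirement", "competition", "retirement"]
print(percent_agreement(a, b))  # -> 75.0
```

In a process like the one described, the remaining disagreements would be reviewed and arbitrated until this statistic reached 100 percent.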
The content codes and other interview data were analyzed to develop general statistics on human capital issues across the divisions and districts. In addition, we analyzed data obtained from the Army’s Workforce Analysis Support System for information on the Corps’ workforce and the Corps of Engineers Financial Management System for information on the Corps’ use of recruitment, retention, and relocation allowances as well as expenditures for training and development activities. To assess the reliability of the data needed to answer the engagement objectives, we checked these data for obvious errors in accuracy and completeness, reviewed existing information about these data and the systems that produced them, and interviewed agency officials knowledgeable about the data. We determined that these data were sufficiently reliable for the purposes of this report.

We conducted this performance audit from March 2007 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following are GAO’s comments to the additional information included in the Department of Defense’s letter dated May 1, 2008.

1. We are aware that the Corps’ human capital initiative updates have received a “green” status from OPM and OMB. However, as the report states, these updates to OPM do not contain any of the components of an effective human capital plan and they do not represent a comprehensive framework for the agency’s human capital policies, programs, and practices. We made no modifications to the report in response to this comment.

2. We disagree with the Department’s characterization of our report.
We contacted officials in all of the Corps’ eight division offices and 14 of its district offices, and presented those issues and concerns that were generally agreed on by these officials. The examples cited throughout the report were used to provide more specifics as to the types of concerns expressed by district officials and were not all-inclusive of the comments received. We made no changes to the report in response to this comment; however, we have clarified that the Corporate Board referred to by the district official was not an agencywide Corporate Board. 3. We disagree with the Department’s characterization of the report. Our report does not state that the agency’s 2005 Campaign Plan does not address human capital. Instead, our report states that the 2005 plan does not contain a strategic goal related to “people” similar to the strategic goal that was included in the 2002 Campaign Plan. We have not modified the report in response to this comment. 4. We disagree with the Department’s comment that the draft report does not mention the Corps’ ongoing effort to update its Campaign Plan. Our report clearly states that the agency is in the process of updating its 2005 strategic plan to reflect the new strategic direction of the incoming Commander of the Corps. No changes were made in response to this comment. In addition to the individual named above, Vondalee R. Hunt (Assistant Director), Tania Calhoun, Nancy Crothers, William Doherty, Diana Cheng Goody, Nisha Hazra, Grant Mallie, Jamie A. Roberts, Rebecca Shea, and Katherine Hudson Walker made key contributions to this report. NASA: Progress Made on Strategic Human Capital Management, but Future Program Challenges Remain. GAO-07-1004. Washington, D.C.: August 8, 2007. Human Capital: Federal Workforce Challenges in the 21st Century. GAO-07-556T. Washington, D.C.: March 6, 2007. Human Capital: Retirements and Anticipated New Reactor Applications Will Challenge NRC’s Workforce. GAO-07-105.
Washington, D.C.: January 17, 2007. Human Capital: Increasing Agencies’ Use of New Hiring Flexibilities. GAO-04-959T. Washington, D.C.: July 13, 2004. Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003. A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002. Federal Employee Retirements: Expected Increase Over the Next 5 Years Illustrates Need for Workforce Planning. GAO-01-509. Washington, D.C.: April 27, 2001.

With a workforce of about 35,000, the U.S. Army Corps of Engineers (the Corps) provides engineering services for civil works and military programs in the United States and overseas. Recently, the Corps' focus has shifted to also support contingency operations, such as responding to natural disasters. To meet its mission and emerging priorities, the Corps must have effective human capital planning processes to ensure that it can maintain its workforce. In this context, GAO was asked to examine the (1) extent to which the Corps has aligned its human capital plan with its strategic plan, (2) extent to which the Corps has the information necessary to identify and meet current and future workforce needs, and (3) challenges the Corps faces in meeting its workforce needs. To address these issues, GAO reviewed agency human capital and strategic planning documents, conducted structured interviews with eight Corps divisions and a purposeful sample of 14 of its districts, and interviewed other Corps officials. The Corps' strategic human capital plan is outdated; is not aligned with the agency's most recent strategic plan, which was developed in 2005; and is inconsistently used across the agency. Specifically, the human capital plan has not been revised since it was developed in 2002, and it is therefore not aligned with the Corps' current strategic plan.
Headquarters officials told GAO they "abandoned" the use of the plan and replaced it with the human capital updates required under a presidential initiative. While these updates list the Corps' human capital activities and milestones for completing them, they do not contain key components of an effective human capital plan, such as goals, strategies, and a system for measuring performance. Moreover, the outdated human capital plan is being used inconsistently across the agency. Some divisions and districts are still using the 2002 plan to guide their human capital efforts, while others are relying on guidance from headquarters or the Office of Personnel Management or developing their own guidance. Without a current, consistently implemented human capital plan that is aligned with its strategic plan, the Corps' ability to effectively manage its workforce is limited. The Corps lacks the necessary agencywide information on critical skills to identify and assess current and future workforce needs and therefore cannot effectively perform its workforce planning activities. Effective workforce planning depends on consistent agencywide data on the critical skills needed to achieve the agency's mission. However, the Corps does not have a process for collecting consistent agencywide data, and headquarters has not provided guidance to the divisions and districts on how to gather this information systematically. Without guidance, some divisions and districts have collected this information independently, using varying methods, leaving the Corps with inconsistent and incomplete data with which to assess the agency's overall workforce needs. As a result, the Corps' ability to determine effective approaches to recruiting, developing, and retaining personnel is limited. Realizing the need for consistent information on critical skills, the Corps recently began an effort to systematically collect these data. However, it is too early to assess the Corps' progress on this effort. 
The Corps faces several challenges to its workforce planning efforts, such as competition from the private sector and others to hire qualified staff. To address these challenges, the Corps uses human capital tools such as recruitment and retention incentives. However, the Corps' use of some tools has sharply decreased recently. For example, in fiscal year 2002 the Corps awarded $750,000 in recruitment bonuses, but in 2006 this dropped to $24,000. One official told GAO he has had to hire less qualified staff because he has been unable to offer sufficient incentives. Moreover, the Corps lacks a process for assessing the effectiveness of the tools it uses. Consequently, the Corps can neither determine the overall costs and benefits of using these tools nor decide whether additional methods are needed to recruit, develop, and retain its current and future workforce.
The basic purpose of prepositioning is to allow DOD to field combat-ready forces in days rather than in the weeks it would take if the forces and all necessary equipment and supplies had to be brought from the United States. However, the stocks must be (1) available in sufficient quantities to meet the needs of deploying forces and (2) in good condition. For prepositioning programs, these factors define “readiness.” If on-hand stocks are not what is needed—or are in poor condition—the purpose of prepositioning may be defeated because the unit will lose valuable time obtaining or repairing equipment and supplies. U.S. forces had months to build up for OIF, so speed was not imperative. Prepositioning sites became reception and staging areas during the months leading up to the war, and afforded the military the necessary time and access in Kuwait to build up its forces for the later offensive operations of OIF. Prepositioning programs grew in importance to U.S. military strategy after the end of the Cold War, particularly for the Army. Recognizing that it would have fewer forward-stationed ground forces—and to support the two-war strategy of the day—the Army used equipment made available from its drawdown to field new sets of combat equipment ashore in the Persian Gulf and in Korea. It also began an afloat program in the 1990s, using large ships to keep equipment and supplies available to support operations around the world. The Marine Corps has had a prepositioned capability since the 1980s. Its three Marine Expeditionary Forces are each assigned a squadron of ships packed with equipment and supplies—the Marines view this equipment as their “go-to-war” gear. Both services have also retained some stocks in Europe, although the Army stocks have steadily declined since the end of the Cold War. Today, the Army has sites in the Netherlands, Luxembourg, and Italy, while the Marine Corps retains stocks in Norway.
Figure 1 shows the location of Army and Marine Corps prepositioned equipment prior to OIF. Prepositioning is an important part of DOD’s overall strategic mobility calculus. The U.S. military can deliver equipment and supplies in three ways: by air, by sea, or by prepositioning. Each part of this triad has its own advantages and disadvantages. Airlift is fast, but it is expensive to use and impractical for moving all of the material needed for a large-scale deployment. Although ships can carry large loads, they are relatively slow. Prepositioning lessens the strain on expensive airlift and reduces the reliance on relatively slow sealift deliveries. However, prepositioning requires the military to maintain equipment that essentially duplicates what the unit has at home station. Moreover, if the prepositioned equipment stocks are incomplete, the unit may have to bring along so much additional equipment that using it could still strain lift, especially scarce airlift in the early days of a conflict. The Army and Marine Corps reported that their prepositioned equipment performed well during OIF but that some problems emerged. We reviewed lessons-learned reports and talked to Army and Marine Corps officials who managed or used the equipment. We heard general consensus that major combat equipment was generally in good condition when drawn and that it performed well during the conflict. However, Army officials said that some equipment was out-of-date and some critical items like trucks were in short supply and parts and other supplies were sometimes not available. The officials agreed that, overall, OIF demonstrated that prepositioned stocks could successfully support major combat operations. Most of the issues we heard were with the Army’s program. Marine Corps officials reported few shortfalls in their prepositioned stocks or mismatches with unit equipment. This is likely due to two key differences between the services. 
First, the Marines view prepositioned stocks as their “go-to-war” gear and give the stocks a very high priority for fill and modernization. Second, the units that will use the prepositioned stocks are assigned in advance and the Marine Corps told us that the combat units feel a sense of “ownership” in the equipment. This manifests itself in important ways. For example, the Marines have periodic conferences with all involved parties to work out exactly what their ships will carry and what the units will need to bring with them to the fight. Such an effort to tailor the prepositioned equipment increases familiarity, allows for prewar planning, and thus minimizes surprises or last-minute adjustments. The Marines also train with their gear periodically. By contrast, the Army does not designate the sets for any particular unit and provides little training with the equipment, especially with the afloat stocks. Personnel who used and managed the equipment agreed that the tanks, infantry fighting vehicles, and howitzers were in good condition when they were drawn from the prepositioned stocks; moreover, the equipment generally stayed operational throughout the fight. For example, the Third Infantry Division after-action report said that new systems and older systems proved to be very valuable and the tanks and Bradleys were both lethal and survivable. Additionally, according to Army Materiel Command documents, combat personnel reported that their equipment, in many cases, worked better than what they had at home station. Moreover, operational readiness data we reviewed showed that major combat equipment stayed operational, even in heavy combat across hundreds of miles. In fact, officials from both services agreed that OIF validated the prepositioning concept and showed that it can successfully support major combat operations. Moreover, the U.S.
Central Command, in an internal lessons-learned effort, concluded that prepositioned stocks “proved their worth and were critical in successfully executing OIF.” Some of the Army’s prepositioned equipment was outdated or did not match what the units were used to at home station. At times, this required the units to “train down” to older and less-capable equipment or bring their own equipment from home. Examples include the following:

Bradleys—The prepositioned stocks contained some older Bradley Fighting Vehicles that had not received upgrades installed since Operation Desert Storm. Such improvements included laser range finders, Global Positioning System navigation, thermal viewers, and battlefield identification systems, among others. In addition, division personnel brought their own “Linebacker” Bradleys instead of using the outdated prepositioned stocks that would have required the crew to get out of the vehicle to fire.

M113 Personnel Carriers—The prepositioned stocks contained many older model M113A2 vehicles. This model has difficulty keeping up with Abrams tanks and requires more repairs than the newer model M113A3, which the units had at home station.

Trucks—The prepositioned stocks included 1960s-vintage trucks that had manual transmissions and were more difficult to repair. Most units now use newer models that have automatic transmissions. As a result, soldiers had to learn to drive stick shifts when they could have been performing other tasks needed to prepare for war; in addition, maintenance personnel were unfamiliar with fixing manual transmissions.

Tank Recovery Vehicle—The prepositioned stocks contained M-88A1 recovery vehicles. These vehicles have long been known to lack sufficient power, speed, and reliability. We reported similar issues after Operation Desert Storm.
According to data collected by the Army Materiel Command, these vehicles broke down frequently, generally could not keep up with the fast-paced operations, and did not have the needed capabilities even when they were in operation. None of these problems, however, was insurmountable. The U.S. forces had months to prepare for OIF, and plenty of time to adjust to the equipment they had available. Additionally, the U.S. forces faced an adversary whose military proved much less capable than U.S. forces. Our preliminary work also identified shortfalls in available spare parts and major problems with the theater distribution system, which were influenced by shortages of trucks and materiel handling equipment. Prior to OIF, the Army had significant shortages in its prepositioned stocks, especially in spare parts. This is a long-standing problem. We reported in 2001 that the status of the Army’s prepositioned stocks and war reserves was of strategic concern because of shortages in spare parts. At that time the Army had on hand about 35 percent of its stated requirements of prepositioned spare parts and had about a $1 billion shortfall in required spare parts for war reserves. Table 1 shows the percentage of authorized parts that were available in March 2001 in the prepositioned stocks that were later used in OIF. These stocks represent a 15-day supply of spare and repair parts for brigade units (Prescribed Load List) and for the forward support battalion that backs up the brigade unit stocks (Authorized Stockage List). While the goal for these stocks was to be filled to 100 percent, according to Army officials, the Army has not had sufficient funds to fill out the stocks. In March 2002, the Army staff directed that immediate measures be taken to fix the shortages and provided $25 million to support this effort.
The requirements for needed spare and repair parts were to be filled to the extent possible by taking stocks from the peacetime inventory or, if unavailable there, from new procurement. By the time the war started in March of 2003, the fill rate had been substantially improved, but significant shortages remained. The warfighter still lacked critical, high-value replacement parts like engines and transmissions. These items were not available in the supply system and could not be acquired in time. Shortages in spare and repair parts have been a systemic problem in the Army over the past few years. Our recent reports on Army spares discussed this issue and, as previously noted, our 2001 report highlighted problems specifically with prepositioned spares. According to Army officials, the fill rates for prepositioned spare parts—especially high-value spares—were purposely kept down because of systemwide shortfalls. The Army’s plan to mitigate this known risk was to have the units using the prepositioned sets bring their own high-value spare parts in addition to obtaining spare parts from non-deploying units. Nonetheless, according to the Third Infantry Division OIF after-action report, spare parts shortages were a problem, and there were also other shortfalls. In fact, basic loads of food and water, fuel, construction materials, and ammunition were also insufficient to meet the unit sustainment requirements. The combatant commander had built up the OIF force over a period of months, departing from doctrinal plans to have receiving units in theater to receive the stocks. When it came time to bring in the backup supplies, over 3,000 containers—containing the required classes of supply, such as food, fuel, and spare parts—were downloaded from the sustainment ships. The theater supply-and-distribution system became overwhelmed.
The situation was worsened by the inability to track assets available in theater, which meant that the warfighter did not know what was available. The Third Infantry Division OIF after-action report noted that some items were flown in from Europe or Fort Stewart because they were not available on the local market. Taken together, all these factors contributed to a situation that one Army after-action report bluntly described as “chaos.” Our recent report on logistics activities in OIF described a theater distribution capability that was insufficient and ineffective in managing and transporting the large amount of supplies and equipment during OIF. For example, the distribution of supplies to forward units was delayed because adequate transportation assets, such as cargo trucks and materiel handling equipment, were not available within the theater of operations. The distribution of supplies was also delayed because cargo arriving in shipping containers and pallets had to be separated and repackaged several times for delivery to multiple units in different locations. In addition, DOD’s lack of an effective process for prioritizing cargo for delivery precluded the effective use of scarce theater transportation assets. Finally, one of the major causes of distribution problems during OIF was that most Army and Marine Corps logistics personnel and equipment did not deploy to the theater until after combat troops arrived, and in fact, most Army personnel did not arrive until after major combat operations were underway. Forces are being rotated to relieve personnel in theater. Instead of bringing their own equipment, these troops are continuing to use prepositioned stocks. Thus, it may be several years—depending on how long the Iraqi operations continue—before these stocks can be reconstituted. The Marine Corps used two of its three prepositioned squadrons (11 of 16 ships) to support OIF. 
As the Marines withdrew, they repaired some equipment in theater but sent much of it back to their maintenance facility in Blount Island, Florida. By late 2003, the Marine Corps had one of the two squadrons reconstituted through an abbreviated maintenance cycle, and sent back to sea. However, to support ongoing operations in Iraq, the Marine Corps sent equipment for one squadron back to Iraq, where it is expected to remain for all or most of 2004. The Marine Corps is currently performing maintenance on the second squadron of equipment that was used during OIF, and this work is scheduled to be completed in 2005. Most of the equipment that the Army used for OIF is still in use or is being held in theater in the event it may be needed in the future. The Army used nearly all of its prepositioned ship stocks and its ashore stocks in Kuwait and Qatar, as well as drawing some stocks from Europe. In total, this included more than 10,000 pieces of rolling stock, 670,000 repair parts, 3,000 containers, and thousands of additional pieces of other equipment. According to Army officials, the Army is repairing this equipment in theater and reissuing it piece-by-piece to support ongoing operations. Thus far, the Army has reissued more than 11,000 pieces of equipment, and it envisions that it will have to issue more of its remaining equipment to support future operations. Thus, it may be 2006 or later before this equipment becomes available to be reconstituted to refill the prepositioned stocks. Officials also told us that, after having been in use for years in harsh desert conditions, much of the equipment would likely require substantial maintenance and some will be worn out beyond repair. Figure 2 shows OIF trucks needing repair. Both the Army and the Marine Corps have retained prepositioned stocks in the Pacific to cover a possible contingency in that region. While the Marine Corps used two of its three squadrons in OIF, it left the other squadron afloat near Guam. 
The Army used most of its ship stocks for OIF, but it still has a brigade set available in Korea and one combat ship is on station to support a potential conflict in Korea, although it is only partially filled. Both the Army and the Marine Corps used stocks from Europe to support OIF. The current status of the services’ prepositioned sets is discussed in table 2. Army and Marine Corps maintenance officials told us that it is difficult to reliably estimate the costs of reconstituting the equipment because so much of it is still in use. As a result, the reconstitution timeline is unclear. Based on past experience, it is reasonable to expect that the harsh desert environment in the Persian Gulf region will exact a heavy toll on the equipment. For example, we reported in 1993 that equipment returned from Operation Desert Storm was in much worse shape than expected because of exposure for lengthy periods to harsh desert conditions. The Army has estimated that the cost for reconstituting its prepositioned equipment assets is about $1.7 billion for depot maintenance, unit level maintenance, and procurement of required parts and supplies. A request for about $700 million was included in the fiscal year 2004 Global War on Terrorism supplemental budget, leaving a projected shortfall of about $1 billion. Army Materiel Command officials said they have thus far received only a small part of the amount funded in the 2004 supplemental for reconstitution of the prepositioned equipment, but they noted that not much equipment has been available. Additionally, continuing operations in Iraq have been consuming much of the Army’s supplemental funding intended for reconstitution. Since much of the equipment is still in Southwest Asia, it is unclear how much reconstitution funding for its prepositioned equipment the Army can use in fiscal year 2005. 
But it is clear that there is a significant bill that will have to be paid for reconstitution of Army prepositioned stocks at some point in the future, if the Army intends to reconfigure the afloat and land-based prepositioned sets that have been used in OIF. The Defense Department faces many issues as it rebuilds its prepositioning program and makes plans for how such stocks fit into the transformed military. In the near term, the Army and the Marine Corps must focus on supporting current operations and reconstituting their prepositioning sets. Moreover, we believe that the Army may be able to take some actions to address the shortfalls and other problems it experienced during OIF. In the long term, however, DOD faces fundamental issues as it plans the future of its prepositioning programs. As it reconstitutes its program, the Army would likely benefit from addressing the issues brought to light during OIF, giving priority to actions that would address long-standing problems, mitigate near-term risk, and shore up readiness in key parts of its prepositioning program. These include ensuring that it has adequate equipment, spare parts, and sustainment supplies in its prepositioning programs, giving priority to afloat and Korea stocks; selectively modernizing equipment so that it will match unit equipment and better meet operational needs; and planning and conducting training to practice drawing and using prepositioned stocks, especially afloat stocks. Based on some contrasts between the Army’s and the Marine Corps’ experiences with their prepositioned equipment and supplies in OIF, some officials we spoke to agreed that establishing a closer relationship between operational units and the prepositioned stocks they would be expected to use in a contingency is critical to wartime success. The Marines practice with their stocks, and the Army could benefit from training on how to unload, prepare, and support prepositioned stocks, particularly afloat stocks.
While the Army has had some exercises using its land-based equipment in Kuwait and Korea, it has not recently conducted a training exercise to practice unloading its afloat assets. According to Army officials, such exercises have been scheduled over the past few years but were canceled due to lack of funding. The long-term issues transcend the Army and the Marines and demand a coordinated effort by the department. In our view, three main areas should guide the effort. Determine the role of prepositioning in light of the efforts to transform the military. Perhaps it is time for DOD to go back to the drawing board and ask: What is the military trying to achieve with these stocks, and how do they fit into future operational plans? If, as indicated in Desert Storm and OIF, prepositioning is to continue to play an important part in meeting future military commitments, priority is needed for prepositioning as a part of transformation planning in the future. Establish sound prepositioning requirements that support joint expeditionary forces. If DOD decides that prepositioning is to continue to play an important role in supporting future combat operations, establishing sound requirements that are fully integrated is critical. The department is beginning to rethink what capabilities could be needed. For example, the Army and the Marines are pursuing sea-basing ideas—where prepositioning ships could serve as offshore logistics bases. Such ideas seem to have merit but are still in the conceptual phases, and it is not clear to what extent the concepts are being approached to maximize the potential for joint operations. In our view, options will be needed to find ways to cost-effectively integrate prepositioning requirements into the transforming DOD force structure requirements. For example, RAND recently published a report suggesting that the military consider prepositioning support equipment to help the Stryker brigade meet deployment timelines.
Such support equipment constitutes much of the weight and volume of the brigade, but a relatively small part of the costs compared to the combat systems. Such an option may be needed, since our recent report revealed that the Army would likely be unable to meet its deployment timelines for the Stryker brigade. Ensure that the program is resourced commensurate with its priority, and is affordable even as the force is transformed. In our view, DOD must consider affordability. In the past, the drawdown of Army forces made prepositioning a practical alternative because it made extra equipment available. However, as the services’ equipment is transformed and recapitalized, it may not be practical to buy enough equipment for units at home station and for prepositioning. Prepositioned stocks are intended to reduce response times and enable forces to meet the demands of the full spectrum of military operations. Once the future role of prepositioning is determined, and program requirements are set, it will be important to give the program proper funding priority. Congress will have a key role in reviewing the department’s assessment of the cost effectiveness of options to support DOD’s overall mission, including prepositioning and other alternatives for projecting forces quickly to the far reaches of the globe. Mr. Chairman, I hope this information is useful to Congress as it considers DOD’s plans and funding requests for reconstituting its prepositioned stocks as well as integrating prepositioning into the department’s transformation of its military forces. This concludes my prepared statement. I would be happy to answer any questions that you or the Members of the Subcommittee may have. For questions about this statement, please contact William M. Solis at (202) 512-8365 (e-mail address: [email protected]), Julia Denman at (202) 512-4290 (e-mail address: [email protected]), or John Pendleton at (404) 679-1816 (e-mail address: [email protected]). 
Additional individuals making key contributions included Nancy Benco, Robert Malpass, Tinh Nguyen, and Tanisha Stewart. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Since the Cold War, the Department of Defense (DOD) has increased its reliance on prepositioned stocks of military equipment and supplies, primarily because it can no longer plan on having a large forward troop presence. Prepositioned stocks are stored on ships and on land in the Persian Gulf and other regions around the world. Prepositioning allows the military to respond rapidly to conflicts. Ideally, units need only to bring troops and a small amount of materiel to the conflict area. Once there, troops can draw on prepositioned equipment and supplies, and then move quickly into combat. Today's testimony describes (1) the performance and availability of Army and Marine Corps prepositioned equipment and supplies to support Operation Iraqi Freedom (OIF); (2) current status of the stocks and plans to reconstitute them; and (3) key issues facing the military as it reshapes these programs to support DOD's force transformation efforts. The importance of prepositioned stocks was dramatically illustrated during OIF. While they faced some challenges, the Army and Marine Corps relied heavily on prepositioned combat equipment and supplies to decisively defeat the Iraqi military. They both reported that prepositioned stocks were a key factor in the success of OIF. Prepositioned stocks provided most of the combat equipment used and, for the most part, this equipment was in good condition and maintained high readiness rates.
However, the Army's prepositioned equipment included some older models of equipment and shortfalls in support equipment such as trucks, spare parts, and other supplies. Moreover, the warfighter did not always know what prepositioned stocks were available in theater, apparently worsening an already overwhelmed supply-and-distribution system. The units were able to overcome these challenges; fortunately, the long time available to build up forces allowed units to fill many of the shortages and adjust to unfamiliar equipment. Much of the prepositioned equipment is still being used to support continuing operations in Iraq. It will be several years--depending on how long Iraqi Freedom operations continue--before these stocks will be available to return to prepositioning programs. And, even after they become available, much of the equipment will likely require substantial maintenance, or may be worn out beyond repair. The Army has estimated that it has an unfunded requirement of over $1 billion for reconstituting the prepositioned equipment used in OIF. However, since most prepositioned equipment is still in Southwest Asia and has not been turned back to the Army Materiel Command for reconstitution, most of the funding is not required at this time. When the prepositioned equipment is no longer needed in theater, decisions will have to be made about what equipment can be repaired by combat units, what equipment must go to depot, and what equipment must be replaced with existing or new equipment to enable the Army to reconstitute the prepositioned sets that were downloaded for OIF. DOD faces many issues as it rebuilds its prepositioning program and makes plans for how such stocks fit into its future. In the near term, the Army and Marines must necessarily focus on supporting ongoing OIF operations. While waiting to reconstitute its program, the Army also has an opportunity to address shortfalls and modernize remaining stocks. 
For the longer term, DOD may need to (1) determine the role of prepositioning in light of efforts to transform the military; (2) establish sound prepositioning requirements that support joint expeditionary forces; and (3) ensure that the program is resourced commensurate with its priority and is affordable even as the force is transformed. Congress will play a key role in reviewing DOD's assessment of the cost effectiveness of various options to support its overall mission, including prepositioning and other alternatives for projecting forces quickly. |
The MHSS consists of military medical facilities and private sector health care providers. The primary mission of the MHSS is to maintain the health of military personnel and to support the services during time of war. In addition, the MHSS provides health care to dependents of active duty members, retirees and their dependents, and survivors of service members. Active duty members receive their care almost entirely from military medical facilities. When space and resources are available, other beneficiaries may obtain their care from military medical facilities as well. Overseas, U.S. civilian government employees are also eligible to receive care in military medical facilities on a space-available basis. The collapse of the Warsaw Pact and the end of the Cold War have significantly changed the American military landscape in Europe. Because of the easing of East-West tensions, the United States has chosen to substantially reduce its military forces in Europe. Between July 1990 and April 1993, DOD initiated three major plans to reduce its military forces in Europe, each with successively lower personnel levels. The first plan, developed in July 1990, would have reduced military positions in Europe to 225,000; the second to 150,000; and the latest plan calls for about 100,000 Army, Air Force, and Navy personnel in Europe by the end of fiscal year 1996. The U.S. military medical system in Europe has also been reduced and reorganized. The number of military hospitals and clinics in Europe is being cut from 23 hospitals and 89 clinics in 1989 to 9 hospitals and 48 clinics in 1995. In Germany, for example, the Air Force is reducing its hospitals from three to one and its clinics from six to five. Army hospitals and clinics in Germany are being reduced from 9 to 3 and 55 to 25, respectively. In northern Italy, the Air Force has one clinic and the Army has one hospital and one clinic, the same as in 1989. 
The Army, however, plans to convert the hospital to a clinic in October 1995 because (1) very low utilization makes it difficult to maintain a high-quality hospital and (2) quality medical care is available from host nation providers. Appendix II lists those Air Force and Army medical facilities operating as of April 21, 1995. The number of dental clinics is also being significantly cut back. Prior to the downsizing, the Army had 94 dental clinics in Europe. The Army has completed its reduction and now has 35 dental clinics. The Air Force is reducing its dental clinics from 31 to 11. Beneficiaries have access to primary care at military facilities, including outlying clinics. Most of the outlying clinics are closed in the evenings and on weekends, however, necessitating that after-hours primary care and emergency services be obtained from German and Italian providers. In general, U.S. military specialty care is available to active duty personnel and is most accessible to beneficiaries living near U.S. military hospitals. Dental care is more readily available to active duty personnel than other beneficiaries. Military providers told us that primary care clinics are able to serve most beneficiaries. Since 1989, the ratio of primary care providers (general medical officers, family practice physicians, physician’s assistants, and nurse clinicians) to beneficiaries has improved—from 1:1,222 to 1:868—and plans call for further improvement to 1:661 by November 1995. Generally, clinics are open Monday through Friday, and some have extended hours—one evening during the week or morning hours on the weekend. Two Army clinics in Germany are open 24 hours, 7 days a week. Beneficiaries in all categories expressed general satisfaction with their access to primary care in military facilities. They did, however, express frustration over difficulties in making appointments by telephone and delays in obtaining routine physical exams and well-woman exams. 
They also stated concerns about delays in obtaining test results. Although the overall ratio of primary care providers is improving, staff at many of the outlying clinics we visited mentioned that they need more physicians trained in family practice and pediatrics. Some of the clinics had no family practice, pediatric, or other primary care specialty physician except the clinic commander who also had administrative and supervisory responsibilities. Army clinics rely heavily on general medical officers to provide primary care. Army officials stated that they do not have enough family practice or other specialty-trained primary care physicians to assign to clinics. DOD was unable to provide us with data to compute how the ratios of specialists to beneficiaries have changed since 1989 or to measure how long it takes to get an appointment with a specialist. However, the military medical leadership, military physicians, and beneficiaries all commented that there has been a significant reduction in the amount and location of U.S. military specialty care available in Europe since the downsizing began. As a result, access to specialty care varies by specialty and among categories of beneficiaries. Some specialty areas have substantially fewer physicians than before the downsizing began. For example, the number of Army obstetricians/gynecologists has been reduced from 42 to 17; urologists from 6 to 2; otolaryngologists (ear, nose, and throat) from 8 to 4; general surgeons from 32 to 11; and orthopedic surgeons from 26 to 11. Only one specialty (nephrology), however, is no longer available in Europe. Active duty members are generally able to obtain the specialty care they need, although in some instances they must wait a month or longer. Service members needing inpatient psychiatric services are sometimes sent back to the United States for such care because of limited inpatient mental health resources in Europe. 
Non-active duty beneficiaries have less, and in some cases no, access to specialty care, particularly otolaryngology, orthopedics, and mental health—also because of limited resources. Beneficiaries and military medical officials commented that many people who need these services must either wait a substantial period of time to get the care from military facilities in Europe or return to the United States for it. Access to specialty care is also less convenient because of the reduction in U.S. military hospitals. In 1989 the Army had nine hospitals in Germany. Now U.S. military specialty care is provided almost entirely in the three remaining Army hospitals in Germany: Landstuhl, Wuerzburg, and Heidelberg. Beneficiaries in Augsburg, for example, must travel about 130 miles one way to obtain the specialty care that is available at the U.S. Army hospital in Wuerzburg or about 170 miles one way to Landstuhl to obtain specialty care that is not available in Wuerzburg. Beneficiaries in many communities throughout Germany find themselves in similar circumstances. Obtaining specialty care is also inconvenient for beneficiaries when repeat hospital visits are required. For example, most outlying clinics do not have physical therapists or mental health professionals on staff. Consequently, patients must travel to one of the military hospitals to obtain these recurring services. Each visit frequently requires patients to spend a full day traveling and receiving services. To help beneficiaries living in remote areas, specialists assigned to the three Army hospitals periodically visit clinics to provide care, but these visits are infrequent. Also, military communities provide shuttle bus service to the nearest U.S. military hospital. In most communities, the shuttle bus makes one trip daily between the military community and the hospital, leaving early in the morning and returning in the late afternoon of the same day. 
In some communities, however, the service is limited to only a few days each week. Regardless, making long trips for follow-up appointments created hardships for family members and active duty service members with family and work responsibilities. Also, we were told that soldiers’ full-day absences from their assigned duties can adversely affect their units’ wartime readiness. In northern Italy, the Army plans to convert its hospital in Vicenza to an outpatient clinic in October 1995. The clinic will maintain an after-hours acute care capacity to treat minor injuries and illnesses. Emergency and specialty care, now available at the Vicenza Army hospital, will be provided by the city hospital in Vicenza, by other Italian facilities, or by military facilities in Germany or the United States. (For some time now, life-threatening emergencies have been sent to Vicenza’s city hospital.) For other military communities in northern Italy, such as Aviano and Livorno, specialty care will continue to be provided by host nation facilities, as it has since 1989. Relatively few military retirees and their dependents age 65 and older live overseas. Those who do are especially concerned about their access to specialty health care because Medicare coverage does not extend to beneficiaries living overseas. DOD estimates that fewer than 1,400 such beneficiaries reside in Europe. These beneficiaries, who have chosen to reside overseas, have been largely dependent on the military health care system to provide their medical care and, as a result, many have never purchased supplemental health insurance through U.S. or host nation health companies. Obtaining private insurance may not be an option for some elderly retirees and family members because it is costly. Access to dental care is limited for many beneficiaries living in Europe.
Active duty personnel have better access to dental care than do their family members, who are generally able to obtain only emergency dental care, annual examinations, and cleanings. Many beneficiaries, except for active duty, have limited or no access to specialty dental care. The dental staff in some clinics dedicate most of their orthodontic care to patients whose treatment programs were initiated in the United States. New cases are seldom started. In Vicenza and Livorno, all beneficiaries have access to dental services. Many beneficiaries and U.S. military dentists do not consider host nation dental care a viable option. It is expensive, and beneficiaries do not like the differences in the practice patterns of host nation dentists. Numerous obstacles confront the MHSS in Europe. Some existed prior to the downsizing, including medical staffing shortages, long waits for laboratory results, and equipment problems. Many U.S. military physicians stated that these obstacles hinder their ability to provide quality medical care. Many clinic and hospital officials we met with stated that they have too few military and civilian personnel. Their facilities are staffed at less than 100 percent of authorized military levels in such positions as nurses, medics, X-ray technicians, and pharmacy technicians. In addition, medical staff frequently complained about shortages in civilian personnel, including receptionists, custodians, and patient liaisons. Medical staff are working long hours attempting to meet the demand for care. Two other factors have had a serious impact on the military’s ability to meet the health care needs of all beneficiaries in Europe. First, medical and dental units have been under additional strain to meet the demand for care during the downsizing. The military had intended to keep medical resources in Europe at levels proportionally higher than nonmedical units so that access to health care would be improved during the downsizing. 
To the contrary, many of the health and dental clinics we visited were staffed at their so-called “endstate” levels, while nonmedical units had not yet reached their final levels. Army officials were unable to provide documents showing how a coordinated withdrawal of medical and nonmedical personnel was planned to ensure improved access to health care. However, they did provide data indicating that the ratios of total medical personnel to beneficiaries have changed little since 1989—from 1:31 to 1:38. Over time, as more units withdraw from Europe, this tension should ease somewhat. Second, until recently, Army medical units have not received replacements when their medical personnel are temporarily reassigned to other units. Between October 1993 and December 1994, the Army in Europe sent 715 men and women from medical units to other areas of the world without providing replacement personnel for the affected medical units. These actions often resulted in immediate personnel shortages for the medical units in Europe and further hindered the delivery of health care to beneficiaries there. The Army has implemented a policy that calls for replacing medical personnel (not necessarily on a one-for-one basis) who are temporarily assigned to other units for more than 14 days. Since March 1995, the Army has provided temporary replacements to medical units in Europe. Medical staff experience daily problems with equipment failures and delays in obtaining laboratory test results. Generally, these problems are attributed to old and unreliable equipment. Staff repeatedly told us that X-ray, X-ray processor, and culture machines are frequently broken. They also mentioned that problems exist with the ambulance fleet, defibrillators, CT scanners, and pulse oximeters because they are old, outdated, or in short supply. Medical staff also experience problems in obtaining laboratory test results.
Although data were unavailable on the specific or average times needed to get laboratory results, staff said that all test results take longer than they should to come back. Results of glucose, potassium, cholesterol, liver and thyroid function, and tissue exams are typically delayed, as are X rays. Health care providers at one clinic estimated that it took between 2 and 4 weeks to obtain the results for such tests. They cited delays as long as 2 months for Pap test results. DOD is currently implementing a medical information system that will allow providers to obtain test results via computer rather than mail. The new computer system, officials believe, should enable military providers to get laboratory results in a more timely manner. Beneficiaries under age 65 who either are unable or do not want to receive care from military medical facilities have the option of obtaining care from host nation providers. Although the beneficiaries we spoke with were generally satisfied with the outcome of the host nation health care they received, they expressed a great deal of frustration over their specific experiences in obtaining that care. They also expressed a strong preference to receive their health care from military facilities. Beneficiaries and military medical officials agree, however, that as less and less care is available from military medical facilities in Europe, beneficiaries will have to rely more on host nation providers. Beneficiaries are frustrated with host nation medical care for a variety of reasons. Some host nation providers, for example, require payment or a large deposit in advance of treating U.S. military beneficiaries. These upfront payments, we were told, amount to as much as the equivalent of about $6,000. Also, U.S. military officials provide beneficiaries little information or help in choosing German or Italian providers.
Essentially, beneficiaries are given a list of English-speaking doctors and encouraged to ask other beneficiaries about their experiences with these doctors before selecting one. In addition, beneficiaries feel abandoned by military medical physicians when they use host nation providers. In general, military physicians are not required to actively monitor U.S. patients’ care in host nation facilities. Although they may be aware of their patients’ progress, the lack of direct contact gives beneficiaries the impression that they have been “dumped” on host nation providers and that the military is not concerned about their care. The Aviano community is an exception. Several patient assistance services have been in place for some time there. For example, the Air Force contracts with bilingual Italian physicians to help beneficiaries understand their diagnosis and treatment. Beneficiaries also mentioned that they need help obtaining services from host nation facilities, especially during evenings and weekends. They are concerned about such matters as knowing where to go, having someone available to translate during a medical emergency, and getting assistance with paperwork. In addition, beneficiaries using host nation providers were required to pay deductibles and copayments for their care. Beneficiaries explained that, when admitted, they must contend with language barriers, cultural differences, and quality of care concerns such as differences in treatment. Military physicians told us that some differences in treatment do exist among the U.S., German, and Italian systems. Although the cultural and treatment differences are unsettling to U.S. patients, the military medical staff, for the most part, are confident about the quality of health care delivered in Germany and northern Italy. Once care is completed and patients are released from host nation providers, many patients are left with their medical information in a foreign language.
This problem is most prevalent in Germany where, currently, treatment records are written in German, and often the only information translated is that done by bilingual physicians working for the U.S. military. In several communities, military physicians estimated that less than 10 percent of medical records are ever translated. Consequently, patients may not have an adequate record of their medical conditions and treatments. DOD and beneficiaries recognize that there must be a greater reliance on host nation care: Rebuilding U.S. military medical facilities overseas is not an option. Therefore, DOD has taken and is planning a number of steps to alleviate beneficiary concerns and improve access to host nation care. Although some of DOD’s actions have been slow in coming, most are expected to be in place by October 1995. In our view, these actions are positive steps toward alleviating the concerns voiced by beneficiaries. However, the extent to which beneficiaries will be satisfied remains to be seen. To address beneficiaries’ overall concern, DOD is developing an interservice health care plan for all beneficiaries in Europe that seeks to maximize the use of military medical facilities. This effort is being headed by a tri-service executive steering committee made up of senior medical officials in Europe and assisted by a military treatment facility commander’s council—a group representing military hospital and clinic commanders in Europe. Instead of focusing on tangible outcomes, most efforts to date have focused on planning, coordinating, and determining how the military services can effectively work together to better serve their beneficiaries. These formative sessions represent a significant step because, in the past, the services have essentially operated independently rather than working in a collaborative way. Beginning in the summer of 1994, DOD also initiated efforts to establish a preferred provider network in Europe, Africa, and the Middle East. 
Once completed, this network will enable beneficiaries to choose among various host nation providers who (1) are interested in serving them, (2) are willing to accept payment under CHAMPUS, and (3) will not require advance payments from beneficiaries. At the outset, approximately 20,000 host nation providers were identified as having billed CHAMPUS for services. DOD contacted these providers and asked if they were willing to treat U.S. beneficiaries, outlining the conditions. DOD is also working to ensure the quality of network participants by verifying their qualifications. As of February 1995, over 4,000 of these providers had indicated an interest in joining a CHAMPUS-preferred provider network. In April 1995, the Army established a toll-free telephone number for beneficiaries to obtain after-hours referrals to host nation facilities. The service is currently available at Army hospitals in Heidelberg and Wuerzburg and is planned for Landstuhl as well. To assist beneficiaries who are using host nation providers, DOD established a patient liaison coordinator program. As of June 5, 1995, 59 patient liaisons were assigned to Europe. These liaisons (1) coordinate consultations with host nation facilities and follow-up care, (2) help make appointments at host nation facilities, (3) educate beneficiaries on host nation medical services, (4) interpret information between host nation providers and beneficiaries, (5) assist with paperwork associated with hospitalization at host nation facilities, and (6) visit patients in hospitals. Beneficiaries generally agree that the patient liaisons reduce the anxiety involved in using host nation facilities. However, most communities have only one or two patient liaisons, whose services are generally available only on weekdays until 4 p.m. The patient liaison program is intended to be supplemented with a volunteer system to provide coverage after business hours.
However, none of the communities we visited had yet established a volunteer system that provided evening and weekend coverage. Consequently, beneficiaries using host nation facilities after normal business hours often obtained that care without assistance. In response, DOD has agreed to increase the availability of liaisons to provide 24-hour coverage. Effective October 1, 1994, DOD expanded an existing CHAMPUS initiative to improve access to host nation facilities for active duty family members. DOD estimates this initiative will cost approximately $2.8 million annually. The expanded CHAMPUS initiative waives cost sharing for active duty family members who obtain outpatient and inpatient care at host nation facilities. Beneficiaries are pleased and indicated that the elimination of copayments and deductibles has enhanced their willingness to seek care at host nation facilities. DOD is also planning to use host nation physicians to act as liaisons and assist military doctors in monitoring beneficiaries admitted to host nation facilities for care. The direct involvement of a physician representing the military may ease beneficiaries’ feelings of being “dumped” when they are referred to host nation facilities. To better inform beneficiaries and thereby reduce their anxieties about health care—military and host nation—available in their communities in Europe, DOD is creating an education program. DOD is also planning to have host nation medical records translated into English. This should help ensure that in the future patients will have an adequate record of previous medical conditions and treatments. To improve beneficiaries’ access to dental care, DOD is taking a number of steps. First, DOD is striving to efficiently use its existing dental capabilities, including sharing resources among the three services. Second, DOD is increasing the number of dentists, orthodontists, pedodontists, and other dental support personnel assigned in Europe. 
The Air Force plans to assign an additional 23 general dentists, 2 orthodontists, 2 pedodontists, and 54 dental assistants to Europe during fiscal year 1995. As of May 26, 1995, all but four dentists had arrived overseas. The Army has contracted with civilians to fill 22 general dentist, 5 orthodontist, and 10 dental hygienist positions. Third, at remote locations or areas with small populations where military dental services may not be available, DOD plans to arrange for dental care through host nation providers. Fourth, family members will be allowed to remain enrolled in the Dependents Dental Plan while the service member is assigned overseas. This will permit family members to obtain dental care in the United States, for example, during stateside visits. Finally, over the past year, DOD has made an effort to educate beneficiaries on the forthcoming changes in Vicenza and to develop a plan to ensure the availability of quality medical care. For example, it has (1) prepared a new detailed handbook to inform patients about host nation obstetrical services; (2) developed a questionnaire to obtain beneficiary feedback about host nation medical care; (3) held meetings with beneficiaries to educate them on the changes; (4) hired a host nation physician to perform oversight and liaison services among the host nation facility, the patient, and the military medical providers; and (5) made arrangements for translators to assist when Italian ambulance service is needed. Several other significant steps are described in detail in a plan DOD prepared and sent to the Congress in March 1995. In February 1995, an Italian newspaper reported that the hospital in Vicenza—the primary host nation referral facility—was alleged to have engaged in poor health care practices. These practices included improper disposal of contaminated waste in the emergency room, operating rooms, and the pathologic anatomy and metabolic disease sections. 
Expired or spoiled medicines were also reportedly discovered throughout the hospital. Army medical officials in Vicenza followed up with hospital administrators and were assured that U.S. beneficiaries did not receive expired medicines or suffer adverse medical outcomes as a result. Army officials believe the situation is resolved and that beneficiaries are not at any risk. They believe the hospital provides superb care overall. This incident does, however, provide sufficient reason for military medical providers to remain actively involved in their patients’ care when they are referred to host nation facilities. Army officials recognize this need and have pledged to actively monitor all patient care in host nation facilities. Military health and dental care professionals are working long hours attempting to meet beneficiary demands that are greater than military facilities are staffed to provide. Even though some of the strain placed on medical and dental resources may decrease slightly as the beneficiary population in Europe continues to shrink, the military medical facilities in Europe will not have the capacity to provide all the care that eligible beneficiaries need. Nor does it appear practical to staff and maintain enough military medical facilities to meet the peacetime health care needs of all eligible beneficiaries. Troops are widely dispersed and, in some places, too few in number to provide the workload necessary to justify a full-service medical facility and enable medical staff to maintain their skills. Therefore, beneficiaries’ use of host nation medical care will continue and may increase. Given these circumstances, the U.S. military medical leadership needs to continue to take an active role in attending to and managing the health care needs of beneficiaries—particularly those who must rely on host nation care.
An active military role not only will ensure that beneficiaries receive appropriate care but should also improve the perceptions that beneficiaries have about host nation health care. DOD has been slow to address the problems confronting military beneficiaries. In our view, though, the steps that have been taken are directed toward alleviating the major concerns of most beneficiaries. Because of these actions, we are not making any recommendations. In a letter dated June 20, 1995, the Assistant Secretary of Defense (Health Affairs) generally concurred with this report. (See app. III.) The letter acknowledged that we accurately described the problems and the corrective actions under way and planned. In addition, DOD officials provided updated information on some of the actions they are taking, and this has been added to the report. We are sending copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Armed Services; the Chairman and Ranking Minority Member, Subcommittee on Military Personnel, House Committee on National Security; the Secretary of Defense; and other interested parties. This work was performed under the direction of Stephen Backhus, Assistant Director. Other major contributors were Timothy Hall and Barry DeWeese. Please contact me at (202) 512-7101 if you have any questions about this report. To assess how DOD is meeting the needs of beneficiaries overseas as the numbers of military personnel and facilities are reduced, we visited the following 15 military communities: Augsburg, Darmstadt, Frankfurt, Grafenwoehr, Hanau, Heidelberg, Kaiserslautern, Katterbach, Nuremberg, Spangdahlem, Stuttgart, Wiesbaden, and Wuerzburg, Germany; and Aviano and Vicenza, Italy. During these visits we met with numerous military health officials, including the commanders of the five remaining U.S. military hospitals in Germany and northern Italy (four Army and one Air Force).
In addition, we interviewed 29 physicians representing obstetrics/gynecology, family practice, pediatrics, orthopedics, allergy/immunology, psychiatry, ambulatory patient care, internal medicine, radiology, otolaryngology, and general surgery. We also met with 11 Army and Air Force commanders and staff of outlying health clinics. Because beneficiaries indicated concerns over a lack of access to U.S. dental facilities overseas, we interviewed six Army dental commanders, including three Army dental clinic commanders assigned to outlying military communities. We conducted “round-table” panel discussions to obtain input from beneficiaries as to changes in the availability of health care. We convened 20 panels with a total of 102 beneficiaries in the military communities we visited in Europe. Most of the beneficiaries were active duty members and their dependents. The beneficiaries were not randomly selected but were identified by representatives of the National Military Family Association, Army Community Services, and Air Force Family Support Centers. These meetings with (1) military medical and dental staff and (2) beneficiaries provided the basis for much of the information contained in this report. Both before and after our visit to Europe, we met with officials of the Office of the Assistant Secretary of Defense (Health Affairs) and Offices of the Surgeons General to discuss the status of their actions and plans to meet the health care needs of beneficiaries overseas. In addition, we met with representatives of the National Military Family Association—an advocacy group for military families—to discuss their concerns about military and host nation health care in Europe. We reviewed documents obtained from military medical officials in the Office of the Assistant Secretary of Defense (Health Affairs), Offices of the Surgeons General, and various medical activities in Europe.
These documents included legislation, policy memorandums, medical drawdown information, data on beneficiary access to care, data on military medical staffing in Europe, analyses of beneficiary complaints, and beneficiary handbooks about military and host nation medical care. We did our work between March 1994 and March 1995 in accordance with generally accepted government auditing standards. The following is a list of all U.S. Air Force and U.S. Army medical facilities operating in Germany and northern Italy as of April 21, 1995. Air Force facilities are noted with an asterisk. | Pursuant to a congressional request, GAO reviewed beneficiary access to military health care in Europe, focusing on the: (1) availability of health care in military facilities; (2) obstacles in providing military health care; (3) experiences of beneficiaries that have used host nation providers instead of military health care providers; and (4) Department of Defense's (DOD) handling of service delivery problems and beneficiary concerns. GAO found that: (1) since the downsizing of U.S.
military personnel in Europe, beneficiaries have found it difficult to obtain health services at overseas military facilities; (2) although beneficiaries have access to primary health care services, their access to specialty and dental care services is limited; (3) the reduced military health care system has resulted in DOD relying on the German and Italian medical systems to provide health services to beneficiaries; and (4) beneficiaries must contend with language barriers, cultural differences, unfamiliar doctors, and the general lack of information about how to obtain host nation health care. In addition, GAO found that DOD: (1) is developing an interservice health care plan for all beneficiaries in Europe; (2) has hired liaison personnel to help beneficiaries obtain health care from German and Italian health care providers; and (3) plans to contract for services to monitor the care that beneficiaries receive from host nation providers, an education program that explains beneficiary health care options in Europe, and the translation of host nation medical records.
The United States and its international partners from over 40 nations have been engaged in efforts to secure, stabilize, and rebuild Afghanistan since 2001. U.S. civilians have been a vital part of the U.S. strategy. To implement the U.S. strategy, the U.S. Mission Afghanistan committed in April 2009 to expand its civilian personnel both in Kabul and in the field. U.S. government civilians in Afghanistan generally fall under either the authority of the Chief of Mission (i.e., the U.S. Ambassador) or under DOD’s combatant commander authority. The Chief of Mission has authority over almost every U.S. executive branch employee there, except those under the command of a U.S. military commander or those on the staff of an international organization. Although typically stationed at the U.S. Embassy and consulates, U.S. Chief of Mission personnel in Afghanistan can also be deployed at a variety of military facilities outside of Kabul. These field-deployed civilians rely on the military for security, mobility, food, and lodging but remain under Chief of Mission authority. The Chief of Mission presence in Afghanistan consists of personnel from several agencies performing a variety of activities, some of which are described in table 1. In addition, DOD estimates that, since 2001, over 41,000 civilians have deployed worldwide to support combat operations, contingencies, disaster relief, and stability operations, including ongoing operations in Afghanistan. DOD civilians in Afghanistan serve under the authority of the combatant commander responsible for operations in that area of the world—the U.S. Central Command—and support a wide range of DOD missions.
These missions include combat support missions that have traditionally been performed by military personnel, such as equipment maintenance, logistical support, and intelligence gathering and analysis; noncombat support missions, such as administrative positions within the joint task force headquarters; and capacity-building missions parallel to the Chief of Mission effort to improve Afghan security institutions. To integrate the U.S. civilian expansion into the broader counterinsurgency and stabilization campaign outside of Kabul, the U.S. Mission Afghanistan, U.S. Forces—Afghanistan, and the International Security Assistance Force have established a framework for civilian-military activities. The U.S. and International Security Assistance Force civilian-military effort includes the use of provincial reconstruction teams and district support teams. Provincial reconstruction teams are combined civilian and military groups responsible for integrating the activities of all military and civilian elements in an assigned province. This integration includes harnessing both civilian and military resources to perform security, governance, and development activities to implement the U.S. counterinsurgency and stabilization strategy as well as to monitor and report on progress. District support teams are combined civilian and military groups responsible for integrating the security, governance, and development activities of all civilian and military elements in an assigned district. To enhance civilian-military coordination, the U.S. Mission Afghanistan has established a parallel civilian structure within each relevant military installation (i.e., regional command down to district support teams), with senior civilian representatives and civilian team leads managing and supervising Mission personnel at each level, as well as coordinating with their military and local Afghan government counterparts.
Together, the senior civilians and military commanders at each level coordinate to perform stability, capacity-building, and development operations in their area of responsibility. Mission contingents at the field facilities typically contain State, USAID, and/or USDA personnel. U.S. Drug Enforcement Administration agents also deploy to some military facilities in the field but primarily conduct counternarcotics activities with U.S. military and Afghan counternarcotics forces. U.S. Mission Afghanistan develops requests for Chief of Mission civilian positions in Afghanistan, and State Headquarters approves these requests after consulting with other agencies. In addition, representatives from State, other U.S. agencies under Chief of Mission authority, and U.S. Embassy Kabul participate in periodic interagency staffing reviews. During these staffing reviews, participants use strategic “lines of effort” to classify and prioritize all Chief of Mission positions in Afghanistan according to their priority and feasibility of staffing. Strategic lines of effort for Afghanistan comprise management operations, agriculture, public diplomacy, rule of law, economic growth, counternarcotics, infrastructure, border management, stabilization, governance, threat finance, and bilateral relationship. Approved requirements and their staffing progress are discussed among State, other agencies under Chief of Mission authority such as USAID and USDA, and U.S. Embassy Kabul at biweekly teleconferences. Agencies under Chief of Mission authority rely on both temporary, external hires and permanent employees to staff civilian requirements in Afghanistan. In particular, agencies are relying on special hiring authorities to meet their staffing needs. Figure 1 illustrates how State, USAID, and USDA recruit and identify candidates for positions in Afghanistan. DOD relies on an established process for filling civilian positions in Afghanistan. 
According to DOD officials, the department establishes civilian requirements and fills positions through an integrated military and civilian planning process. Civilian requirements begin at the Joint Task Force level, with commanders identifying military and civilian personnel needed to complete a mission. The commander specifies unit and individual needs in request for forces and joint manning documents, and sends these documents to the corresponding combatant commanders for validation and position designation. When the joint manning document is approved, the Joint Chiefs of Staff record and designate the service responsible for filling positions. At that time, individual positions are designated as military or civilian, or acceptable for either to fill. Once all positions are validated and categorized, the request is sent to the Joint Force Coordinator within the Office of the Joint Chiefs of Staff. A list of individual position requirements is then sent to the services for staffing. Once the staffing source is identified, the requesting commander becomes responsible for tracking which positions have been filled. To enable the department to readily identify civilians to deploy in support of its missions, including those in Afghanistan, DOD established the CEW program in January 2009 within the Office of the Deputy Assistant Secretary of Defense for Civilian Personnel Policy—which is under the purview of the Under Secretary of Defense for Personnel and Readiness. The CEW is dedicated to creating a cadre of DOD civilians that are organized, ready, trained, cleared, and equipped in a manner that enhances their availability to mobilize and respond urgently to expeditionary requirements now and in the future. As we previously reported, DOD’s use of civilian personnel to support military operations has long raised questions about its policies on compensation and medical benefits for such civilians.
The CEW was established by Department of Defense Directive 1404.10, DOD Civilian Expeditionary Workforce (Jan. 23, 2009). Questions related to deployed civilians increased as executive agencies began deploying civilians to support efforts in Iraq and Afghanistan. In 2009, we issued a report that addressed issues related to whether agencies that deployed civilians had (1) comparable policies concerning compensation, (2) comparable policies concerning medical care, and (3) policies and procedures for identifying and tracking deployed civilians. The report contained 18 recommendations made to nine agencies concerning policies related to deployed civilians, including a recommendation to both the Secretary of State and the Secretary of Defense to improve their capability to identify and track deployed civilians. We reported that this capability was critical, so that agencies could notify deployed civilians about emerging health concerns that might affect them. Agencies have also established predeployment training that deploying civilians must complete, although some law enforcement and other personnel with specialized training can be waived from certain courses. For example, some DOD personnel must complete language and culture training beyond the normal requirement. Since January 2009, U.S. agencies under Chief of Mission authority more than tripled their civilian presence and expanded outside Kabul in response to the President’s 2009 announcement. DOD both created new programs to build the security capacity of the Afghan government and reported expanding its overall civilian presence. U.S. agencies during the course of our review acknowledged data reliability problems with staffing data and have efforts under way to improve the reliability of that data. According to State, from January 2009 through December 2011, the Chief of Mission civilian presence more than tripled from 320 to 1,142 civilians, an increase of 257 percent.
Overall Chief of Mission staffing requirements also grew during this period from 531 to 1,261 positions, and, as of December 2011, about 91 percent (1,142 of 1,261) of those positions were filled. As of October 2011, State officials did not foresee further expansion of the U.S. civilian presence and planned to change their focus to reconfiguring staffing resources as needed within the existing presence. Figure 2 illustrates the increased U.S. Chief of Mission presence in Afghanistan since January 2009. Of the nine executive branch agencies under Chief of Mission authority, as of December 2011 State, USAID, Department of Justice, and USDA had filled most of the Chief of Mission position requirements, as illustrated in table 2. Additionally, the Chief of Mission presence expanded outside Kabul—a response to the President’s call for greater U.S. civilian expertise at provincial and district levels. From January 2009 through December 2011, field position requirements grew by approximately 260 percent (from 147 to 529), and over 85 percent of those requirements were filled. These positions are assigned to locations throughout Afghanistan, including at military facilities such as provincial reconstruction and district support teams and at State’s regional consulates. Comparing the Chief of Mission Civilian Staffing Matrix with the position requirements reported by individual agencies, we found that the data in the Chief of Mission Civilian Staffing Matrix were sufficiently reliable for identifying high-level staffing information such as the total number of positions filled by each agency under Chief of Mission authority. According to State officials, the high-level staffing data identified in the Chief of Mission Civilian Staffing Matrix are updated weekly using data from U.S. agencies and are also validated through periodic teleconferences, including staff from State headquarters, other agencies, and the U.S. Embassy in Kabul.
The 2010 Afghanistan and Pakistan Regional Stabilization Strategy emphasizes the need to match civilian personnel’s expertise to specific mission requirements on the ground. Furthermore, according to federal internal control standards, program managers need operational data to determine whether they are meeting the goals of their agencies’ strategic and annual performance plans and accounting for the effective and efficient use of resources. U.S. Embassy Kabul and State’s Office of Orientation and In-Processing (responsible for ensuring that interagency personnel meet all administrative, medical, and training requirements before deploying to Afghanistan) began using a data system called the Afghanistan Civilian Personnel Tracking System (ACPTS) in February 2011 to track Chief of Mission personnel’s locations and movements (e.g., movement from Kabul to a district support team) and to identify position-specific information (e.g., location, position title, appointment type or grade, vacancy status, and the strategic line of effort to which a position belongs). State officials noted that they planned to use this information to optimize the U.S. presence in the next interagency staffing exercise, when they might need to be prepared to reconfigure the existing presence. However, when we examined this data system in March and July 2011, we found discrepancies that called into question the system’s reliability. For example, the ACPTS data we received were insufficiently reliable to determine which strategic line of effort contained the greatest staffing shortfall—crucial information for an interagency staffing exercise. Over 60 percent of the ACPTS records for July 2011 (648 of 1,192) were missing data in at least 1 of 10 data fields. Our analysis revealed, for example, that 36 percent of the appointment grade fields and 30 percent of the line-of-effort fields were missing.
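The completeness analysis described above can be sketched in a few lines of code. The following is an illustrative example only, not State's actual tooling; the record layout and field names are hypothetical stand-ins for ACPTS data fields.

```python
# Illustrative sketch only: flag records that are missing data in any
# required field, and compute per-field missing rates, as in the ACPTS
# reliability analysis described above. Field names are hypothetical.
REQUIRED_FIELDS = ["location", "position_title", "appointment_grade",
                   "vacancy_status", "line_of_effort"]

def incomplete_records(records):
    """Return records missing data in at least one required field."""
    return [r for r in records
            if any(not r.get(field) for field in REQUIRED_FIELDS)]

def field_missing_rate(records, field):
    """Fraction of records with no value in the given field."""
    missing = sum(1 for r in records if not r.get(field))
    return missing / len(records)

# Two hypothetical records; the second is missing two fields.
records = [
    {"location": "Kabul", "position_title": "Rule of Law Advisor",
     "appointment_grade": "FS-02", "vacancy_status": "filled",
     "line_of_effort": "rule of law"},
    {"location": "Herat", "position_title": "Agricultural Advisor",
     "appointment_grade": "", "vacancy_status": "filled",
     "line_of_effort": ""},
]
print(len(incomplete_records(records)))               # 1
print(field_missing_rate(records, "line_of_effort"))  # 0.5
```

A check of this kind also shows why the reported gaps mattered: when the line-of-effort field is blank in a large share of records, any tally of staffing shortfalls by line of effort is unreliable.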
We also found discrepancies between the ACPTS and Chief of Mission Civilian Staffing Matrix with regard to the overall position requirements and the number of positions filled. Table 3 lists the discrepancies we identified in State, USAID, and USDA totals. Our discussions with State, USAID, and USDA officials revealed additional discrepancies in the ACPTS data, including duplicate entries, position titles that did not match official position documentation, and inaccurate arrival dates and appointment grade information. In June 2011, State officials acknowledged that these challenges prevented ACPTS from being used effectively to aggregate detailed, position-specific information regarding the overall U.S. civilian presence in Afghanistan. Although we could not verify the accuracy of the ACPTS system, during the course of our review and after several discussions with us regarding data reliability, in the fall of 2011 State began taking steps to improve the reliability of the ACPTS database. For example, according to State officials, the Office of Orientation and In-Processing recently completed a review of the ACPTS system that included correcting inaccuracies, revising data fields to better reflect actual information being entered, and deleting unnecessary data fields. State has also established standard operating procedures for updating the ACPTS system. For example, according to State officials, the U.S. Embassy’s Arrivals and Departure Unit will be responsible for completing the ACPTS records of newly deployed staff once they arrive in-country, and the Interagency Provincial Affairs Office will be responsible for updating their location information if their duty station changes in the field. Furthermore, in October 2011, U.S. 
Embassy Kabul issued a new policy for Mission staffing and accountability that established a notification and reporting system to conduct accountability checks of Chief of Mission staff and also outlined the responsibilities of supervisors and individuals in ensuring staffing accountability and tracking. According to State officials, Embassy Kabul conducts monthly data calls with all agencies present in Kabul in accordance with this policy, and the collected data is reconciled with ACPTS data. According to the Joint Chiefs of Staff’s Joint Personnel Status Report, DOD increased its overall civilian presence in Afghanistan by approximately 643 percent from January 2009 through December 2011. While officials acknowledged that some inaccuracies existed in the data provided by this report, they believed that the data fairly depict the increase in the overall DOD civilian presence in Afghanistan. As shown in figure 3, DOD reported its civilian presence in Afghanistan grew from 394 civilians in January 2009 to 2,929 in December 2011. These civilians serve in a variety of roles that support both DOD’s combat mission and its capacity-building efforts. However, it is difficult to specify the number of civilians within DOD’s overall civilian presence that supported the capacity-building efforts because these civilians frequently fill positions that support both combat support and capacity-building missions. For example, civilians that deploy with the U.S. Army Corps of Engineers support multiple projects involving both Afghan National Security Forces and U.S. military forces, making it difficult to identify the number of civilians that support capacity-building efforts. In addition, DOD established two programs to respond to the department’s mission to build the capacity of the Afghan government. 
The first program—Ministry of Defense Advisors, created in fiscal year 2010—operates under the authority of the Under Secretary of Defense for Policy and deploys senior DOD civilians for up to 2 years to serve as advisors to officials in the Afghan government’s Ministries of Defense and Interior to exchange knowledge concerning defense-related issues. The Ministry of Defense Advisor program was designed to forge long-term relationships that strengthen Afghanistan’s security institutions. The second program—Afghanistan Pakistan Hands, created in fiscal year 2009—operates under the authority of the Joint Chiefs of Staff and deploys DOD civilians for 5 years to serve as experts on Afghanistan and Pakistan to support the counterinsurgency strategy. Specifically, these civilians engage directly with host country officials to enhance government, interagency, and multinational cooperation and fill related positions outside the region. As of December 2011, these programs had identified requirements for 156 civilian positions, and 106 of these positions were filled. At the time of our review, officials were unclear as to whether the requirements for these two programs would stabilize, increase, or decrease. In table 4, we show the extent to which each of these programs had filled the required positions. Although DOD has aggregate staffing data for deployed civilians within a country or geographical region, its current data system for tracking deployed civilians may not provide sufficiently reliable information to characterize the specific location and identity of deployed civilians within a country. DOD uses the Joint Personnel Status Report to track the number and location of military, civilian, and contractor personnel deployed worldwide. This report is manually created each day by the combatant commands to include the number and location of personnel within their area of responsibility. However, Joint Chiefs of Staff officials told us that the system contains inaccuracies.
For example, the officials noted previous reports have omitted and double counted some personnel, as well as listed some personnel in the wrong locations. The officials stated they could not quantify the magnitude of these inaccuracies due to the system’s reliance on manual updates from the individual combatant commands and limited demographic data. We reported in 2009 that DOD issued guidance and established procedures for identifying and tracking deployed civilians in 2006 but concluded in 2008 that its guidance and procedures were not being consistently implemented across the department. In 2009, we found that these policies were still not being fully implemented and recommended that DOD establish mechanisms to ensure that these policies were implemented. In response to this recommendation, DOD stated that it would work with the Defense Manpower Data Center to develop a tracking system for deployed civilians and hoped to have the system completed by September 2009. At the time of our review, Joint Staff officials stated that in conjunction with the Defense Manpower Data Center, they had completed development and were fielding this automated tracking system that would access information from service specific personnel databases in conjunction with Common Access Card usage in theater to establish and record the specific location of employees. According to DOD officials, this new system will provide DOD with an automated system to track the number, identity, and location of deployed civilians. As we reported in both 2005 and 2009, this type of information is critical for identifying potential exposures or other incidents related to a civilian’s deployment. DOD officials stated that, once operational within a combatant commander’s area of responsibility, this system will automatically create a report that fulfills Joint Personnel Status reporting requirements for identifying the number and location of military, civilian, and contractor personnel deployed globally. 
However, according to Joint Chiefs of Staff officials, this system will not be ready to support these reporting requirements within the Central Command area of responsibility until the middle of fiscal year 2012. The Office of the Secretary of Defense for Personnel and Readiness is responsible for overseeing implementation of the 2009 CEW directive, including developing policy and implementing procedural guidance for the CEW. To implement the policies in this directive, the heads of the DOD components are to identify and designate positions as emergency-essential, non-combat essential, and capability-based volunteers as part of the CEW. Emergency-essential positions are those that support the success of combat operations or the availability of combat-essential systems. Non-combat essential positions support the expeditionary requirements in other than combat or combat support situations. Capability-based volunteers are employees who may be asked to volunteer for deployment, to remain behind after other civilians have evacuated, or to fill the positions of other DOD civilians who have deployed to meet expeditionary requirements in order to ensure that critical expeditionary requirements are fulfilled. Under the directive, the components are also to plan, program, and budget for CEW requirements. We found that DOD had taken preliminary steps to implement the CEW. Specifically, DOD had (1) established a CEW program office, (2) created a database containing resumes submitted by volunteers, (3) advertised expeditionary positions for civilians on a designated website, and (4) established predeployment training requirements for volunteers selected to fill CEW positions. According to CEW officials, approximately 10 percent to 15 percent of the 2,929 filled civilian positions in Afghanistan were filled by CEW volunteers and the remaining positions were primarily filled by civilian personnel in the military services and other DOD components.
However, the CEW program has not been fully developed and implemented. In particular, DOD components have not identified and designated the number and types of positions that should constitute the emergency-essential, non-combat essential, and capability-based volunteer segments of the CEW because guidance for making such determinations has not been provided by the Office of the Secretary of Defense. Office of the Secretary of Defense officials stated that once key assumptions regarding the size and composition of the CEW have been finalized, implementing guidance will be issued that will contain information on how the components are to identify and designate positions as emergency-essential, non-combat essential, and capability-based volunteers. However, Office of the Secretary of Defense officials were not sure as to when this guidance would be issued. By not developing guidance that instructs the components on how to identify and designate the number and types of positions that will constitute the CEW, DOD may not be able to (1) make the CEW a significant portion of the civilian workforce as called for in DOD’s Fiscal Year 2009 Civilian Human Capital Strategic Plan, (2) meet readiness goals for the CEW as required in DOD’s Strategic Management Plan for Fiscal Years 2012-2013, and (3) position itself to respond to future missions. First, in DOD’s fiscal year 2009 civilian human capital strategic plan, DOD identified the CEW as a significant segment of the overall DOD civilian workforce dedicated to supporting DOD operations, contingencies, emergencies, humanitarian missions, stability and reconstruction operations, and combat missions. Further, this plan noted the importance of conducting a gap analysis to identify any differences between the current civilian workforce and the workforce that will be needed in the future for each of the department’s “mission critical occupations”—i.e., occupations that are essential to carrying out the department’s mission.
In July 2011, we testified that identifying skills and capability gaps of the civilian workforce is critical for DOD’s strategic planning efforts and that DOD should conduct gap analyses to identify gaps in both the current and the future workforces. Completing a gap analysis is important for DOD to develop strategies to acquire and retain the needed workforce. Further, once workforce needs and strategies are identified, the DOD components will be better positioned to plan, program, and budget for CEW requirements as called for in the CEW directive. Second, as called for by the Department of Defense Strategic Management Plan for Fiscal Years 2012-2013, DOD’s goal to get the right workforce mix should occur through several initiatives, including one to improve the readiness of the CEW by increasing the percentage of emergency-essential and non-combat essential personnel who are qualified as ready. However, without an understanding of the number and types of positions in the emergency-essential and non-combat essential categories, the current CEW is not positioned to support this DOD priority. Third, DOD officials told us that institutionalizing the CEW is critical to DOD efforts to best utilize its total workforce structure—military, civilian, and contractor personnel—because the difficulties associated with identifying and deploying civilians are not unique to the ongoing operations in Afghanistan. According to DOD officials, similar issues were experienced in Bosnia, but because the organization and processes that supported the deployment of civilians during that operation were not retained, DOD had to reconstitute the capability to identify and deploy civilians when the need arose for civilians to deploy to Iraq and Afghanistan.
State has established predeployment training requirements for all Chief of Mission personnel deploying to Afghanistan, including courses offered by State’s Foreign Service Institute, as well as key security training provided by State’s Diplomatic Security Bureau—the FACT course. The Foreign Service Institute’s Afghanistan-specific training courses address State’s 2009 training requirement for Chief of Mission personnel deploying to Afghanistan and focus on providing Chief of Mission personnel with basic professional skills and knowledge needed to participate in stabilization and reconstruction activities as members of the U.S. Embassy Kabul or its subordinate entities. Additionally, the training recognizes the requirements for effectively operating in a complex environment, including administrative, survival, and day-to-day functioning/life support. Table 5 further describes the Foreign Service Institute’s training for Chief of Mission personnel. All Chief of Mission personnel are required to take the Afghanistan Familiarization course, while all personnel deploying to locations outside of Kabul are also required to take the Afghanistan Field Orientation and the Interagency Civilian-Military Integration Training Exercise courses. According to State officials, the Afghanistan Familiarization course covers subjects that contribute to employees’ success on the job, such as orientation issues and State support at high-threat posts. Additionally, the Afghanistan Field Orientation course covers subjects that State has identified as needed for the success of provincial reconstruction teams and other civilian-military entities at the regional and district levels. 
According to State and contractor officials we interviewed during our observation of the Interagency Civilian-Military Integration Training Exercise at the Muscatatuck Urban Training Center in Butlerville, Indiana, personnel who attend this training are able to practice working in situations they would likely encounter while deployed. The training includes working through an interpreter and heavily interacting with Afghan officials. In addition, because field-deployed civilians live and work alongside military colleagues, the exercises focus on the cultural (e.g., education about military ranks) and practical (e.g., participation in convoy security) aspects of working with the military, as shown in figure 4. During the Interagency Civilian-Military Integration Training Exercise, students get the opportunity to simulate living with the military on a Forward Operating Base, and travel by convoy and helicopter to meetings with their Afghan counterparts, played by domestic role-players. There is also the opportunity to work through interpreters, negotiate sensitive situations, and solve problems with Afghan authorities, officials, religious leaders, and villagers, as shown in figure 5. State implemented internal controls to help ensure that Chief of Mission personnel took the required training before deployment. State’s Office of Orientation and In-Processing acts as a central processing point for all Chief of Mission personnel deploying to Afghanistan and works with the Foreign Service Institute to ensure that all training requirements have been met. Examples of the Center’s training verification activities include accessing Foreign Service Institute online registration to determine the accuracy of enrollment records, tracking completion of personnel’s deployment checklists, and visiting classes to confirm enrolled personnel attended the course. 
According to State officials, in addition to these controls, Embassy Kabul also checks to make sure that the training requirement is met before granting country clearance to individuals about to be deployed. The Office of Orientation and In-Processing also reviews these country clearances before allowing individuals to deploy. To test the reliability of State’s internal controls, we compared State, USAID, and USDA names from a March 2011 run of ACPTS personnel data against Foreign Service Institute training records and State training waiver logs. The analysis yielded 134 names of personnel who could have potentially missed required Foreign Service Institute training. After the names were submitted to the Orientation and In-Processing Center, State stated it was able to account for all of the personnel, either by verifying that they had taken the training or that they possessed a valid reason for not having taken the training. According to State officials, the Office of Orientation and In-Processing and Embassy Kabul also check to verify that personnel have taken the FACT course before being deployed to Afghanistan. In June 2011, we reported that Diplomatic Security had difficulty verifying training taken by non-State personnel and made several recommendations. For example, we recommended that Diplomatic Security develop or improve the process to track its individual training requirements and completion of training more broadly. Diplomatic Security was aware of this problem and, in June 2011, was in the process of implementing the FACT Tracker to address it. This tracker could be checked by regional security officers at high-threat posts to confirm required training was taken before granting personnel country clearance. At the time of our review, Diplomatic Security officials stated that the FACT Tracker was fully operational and could verify FACT training going back to 2005.
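The name-matching test described above amounts to a simple set reconciliation: roster names with neither a training record nor a waiver are flagged for follow-up. A minimal sketch, using hypothetical names rather than the actual personnel records or tooling:

```python
# Illustrative sketch only: reconcile a personnel roster against training
# records and waiver logs, flagging names with no record of required
# training. Names below are hypothetical.
def unaccounted_for(roster, trained, waived):
    """Names on the roster with neither a training record nor a waiver."""
    accounted = set(trained) | set(waived)
    return sorted(set(roster) - accounted)

# Hypothetical stand-ins for ACPTS names, FSI records, and waiver logs.
roster  = ["Adams, J.", "Baker, K.", "Chen, L.", "Diaz, M."]
trained = ["Adams, J.", "Chen, L."]
waived  = ["Diaz, M."]
print(unaccounted_for(roster, trained, waived))  # ['Baker, K.']
```

Names flagged this way are candidates for follow-up rather than confirmed gaps; in the review above, State was ultimately able to account for all 134 flagged personnel.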
See GAO, Diplomatic Security: Expanded Missions and Inadequate Facilities Pose Critical Challenges to Training Efforts, GAO-11-460 (Washington, D.C.: June 1, 2011). As of October 2011, Diplomatic Security was taking steps to improve its tracking of training through collaboration with the Foreign Service Institute. To further test State's controls, we compared a random sample of 65 State, USAID, and USDA names against data in the FACT Tracker. We and Diplomatic Security, through the use of the FACT Tracker, were able to account for all 65 names. Because 100 percent of our sample received FACT or other appropriate training, we believe that State has established an effective system of internal controls over its training. According to DOD guidance, the Office of the Secretary of Defense for Personnel and Readiness is to develop policies, plans, and programs for the training of DOD personnel, including civilians. In November 2010, the Office of the Secretary of Defense established counterinsurgency standards and required training of individuals and units, including DOD civilians deploying to Afghanistan, on such things as language and cultural awareness. DOD guidance also requires U.S. Central Command to coordinate and approve training necessary to carry out missions assigned to the command, and U.S. Central Command has established theater-training requirements that apply to DOD civilians deployed to the command's area of responsibility. These theater-training requirements include general requirements, such as anti-terrorism awareness training; chemical, biological, radiological, and nuclear personnel protective measures and survival skills; and mine and unexploded ordnance awareness; as well as requirements specific to the country of deployment—for Afghanistan, the requirements include, for example, language and cultural awareness training, implementation of the Secretary of Defense-approved counterinsurgency qualification standards, and High Mobility Multipurpose Wheeled Vehicle (HMMWV) and Mine Resistant Ambush Protected (MRAP) egress training.
Finally, DOD's 2010 strategic plan calls for the establishment of a requirements process that includes front-end analysis and synchronizing service training programs with combatant commander requirements. As shown in table 6, several DOD organizations deploying civilians to Afghanistan provide predeployment training to address Office of the Secretary of Defense, U.S. Central Command, and their own specific requirements. To address these requirements, each of DOD's components independently developed its own training courses; however, we identified some gaps and duplication in this training. For example, Air Force civilians deploying to Afghanistan through the CEW were required to attend both Air Force and CEW predeployment training. The CEW predeployment training consists of an 11-day course that covers areas such as personal and family benefits and legal information; survival skills, including first aid; HMMWV rollover training and Counter-Improvised Explosive Device training; and language and cultural awareness skills. As a result, those Air Force civilians deploying through the CEW received training on some of the same material, such as Counter-Improvised Explosive Device training, twice prior to deployment. According to DOD officials, in November 2011, DOD began granting some waivers from the CEW training to Air Force civilians who completed Combat Airman Skills Training. However, not all Air Force civilians deploying to Afghanistan are required to complete this training; therefore, Air Force civilians who do not receive Combat Airman Skills Training would still be required to complete both Air Force and CEW predeployment training. Additionally, some Army civilian training did not meet the requirements established by U.S. Central Command. For example, Army civilian training at the CONUS (continental United States) Replacement Center did not cover the U.S. Central Command-required HMMWV or MRAP vehicle rollover techniques.
Table 6 lists the gaps and duplication we identified. Combat Airman Skills Training is special training provided to personnel who will be going into a hostile and uncertain environment. The U.S. civilian presence in Afghanistan and the deployment of civilians to Afghan provinces and districts remain crucial to U.S. efforts to build the capacity of the Afghan government to provide essential services to its people with limited international support. With the increased focus on deploying more U.S. civilians throughout Afghanistan comes the need for the U.S. Mission to be able to track and monitor the movement and location of its civilian staff, especially given the ongoing drawdown of U.S. troops and plans to transition lead security responsibility to the Afghan government in 2014. We are encouraged by State's and DOD's efforts to improve tracking of deployed civilian personnel. Additionally, as DOD has expanded its involvement in overseas military operations worldwide, it has grown increasingly reliant on its civilian workforce to support these operations. While DOD's efforts to institutionalize the CEW are commendable, until DOD makes decisions regarding the size of the CEW and issues implementation guidance, the CEW may not be capable of supporting future overseas operations as well as departmentwide goals to strengthen and rightsize the DOD total workforce. Furthermore, having policies and procedures in place to help ensure that U.S. civilians receive necessary training before they deploy to a high-threat working environment such as Afghanistan can enhance their safety as well as their ability to accomplish the mission. While agencies present under Chief of Mission authority benefit from a centralized set of training requirements and internal controls, DOD's civilian training process does not have the same level of oversight or centralized control.
Enhancing DOD's civilian training process would provide greater synchronization of training requirements while still allowing the various components to tailor their training to mission-specific needs. To enable DOD to make the CEW a significant portion of the civilian workforce, meet readiness goals for the CEW, and position itself to respond to future missions, we recommend that the Secretary of Defense direct the Acting Under Secretary of Defense for Personnel and Readiness to take the following two actions: Develop key assumptions concerning the size and composition of the emergency-essential, non-combat essential, and capability-based volunteer categories referred to in the 2009 CEW directive. Finalize the implementation guidance to DOD components on how to identify and designate the number and types of positions that constitute the emergency-essential, non-combat essential, and capability-based volunteer categories. To provide a consistent approach for synchronizing predeployment training for DOD civilians, we recommend that the Secretary of Defense direct the Acting Under Secretary of Defense for Personnel and Readiness to take the following two actions: Establish a process to identify and approve predeployment training requirements for all DOD civilians. Establish a process to coordinate with key stakeholders such as the military services and subordinate commands to ensure that requirements are synchronized among and within DOD components and with departmentwide guidance. We provided a draft of this report to DOD, State, USAID, and USDA, as well as the Departments of Homeland Security, Justice, and the Treasury. DOD provided written comments, reprinted in their entirety in appendix II, and concurred with our four recommendations, characterizing them as supporting its current initiative to transform the CEW.
Specifically, DOD concurred with our recommendations to (1) develop key assumptions concerning the size and composition of the emergency-essential, non-combat essential, and capability-based volunteer categories referred to in the 2009 CEW directive and (2) finalize the implementation guidance to DOD components on how to identify and designate the number and types of positions for these categories. DOD did not specify how it would implement these recommendations. DOD concurred with our recommendation to establish a process to identify and approve predeployment training requirements for all DOD civilians. DOD stated that through the process of identifying predeployment training requirements, DOD will establish a core set of training needs that are applicable under all circumstances under which DOD civilians may deploy. DOD also stated that it will develop policy that recognizes the agility necessary to prepare DOD civilians for unique mission requirements and conditions now and in the future. DOD concurred with our recommendation to establish a process to coordinate with key stakeholders such as the military services and subordinate commands to ensure that training requirements are synchronized among and within DOD components and with departmentwide guidance. DOD stated the process it develops for identifying predeployment training requirements will account for the need to make the best use of resources using guiding principles and criteria from the Secretary of Defense and advice from the Chairman of the Joint Chiefs of Staff as needed to ensure an agile and effective contingency workforce. State, the Department of the Treasury, and the Department of Homeland Security provided technical comments, which we have incorporated into the report as appropriate. The Department of the Treasury noted that State's database had not been updated to reflect 13 total approved Treasury positions.
Treasury further noted that two of its positions listed as “open” remained programmatically on hold, resulting in 11 active slots filled. We incorporated this technical comment in our report. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Defense, Homeland Security, and State; the U.S. Attorney General; the Administrator of USAID; and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Brenda S. Farrell at (202) 512-3604 or [email protected] or Charles Michael Johnson Jr. at (202) 512-7331 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To review the U.S. civilian presence in Afghanistan, we obtained information from pertinent strategic planning, recruitment, staffing, and reporting documents and interviewed relevant officials from the Departments of Agriculture (USDA), Defense (DOD), Homeland Security, Justice, State (State), and the Treasury, as well as the U.S. Agency for International Development (USAID). We did not examine costs for the deployment or support of civilian personnel in Afghanistan due to a concurrent review by the Office of the Special Inspector General for Afghanistan Reconstruction on this topic, published in September 2011. To examine the expansion of the U.S. civilian presence in Afghanistan, we obtained and analyzed staffing data from State and DOD regarding staffing requirements and fill rates for all civilian positions under Chief of Mission authority and key positions under combatant commander authority deployed in-country following the President’s March 2009 call to enhance support of Afghan national and subnational government institutions. Our scope was limited to U.S. 
direct hires and did not include locally engaged staff or contractors. Because, according to DOD officials, the majority of DOD civilians directly serve in combat support positions, we focused our request for staffing data on organizations or programs intended to enhance the capacity of the Afghan government, which included the Ministry of Defense Advisors and Afghanistan Pakistan Hands programs. We validated reports on Chief of Mission staffing progress through interviews with officials representing agencies that deployed staff to fill positions in Afghanistan since January 2009, including officials from Homeland Security, Justice, State, the Treasury, USAID, and USDA. We did not meet with officials from several agencies with fewer than five permanent staff deployed to Afghanistan, such as the Departments of Transportation and Health and Human Services. To assess the reliability of the staffing data reported by State and DOD for civilians in Afghanistan, we reviewed available documentation, examined the data for outliers and missing observations, and conducted follow-up interviews to discuss questions that arose in our analysis of the data. Additionally, for Chief of Mission data, we compared complementary datasets from State's Afghanistan Civilian Personnel Tracking System (ACPTS) and the Chief of Mission Civilian Staffing Matrix to identify whether any reporting discrepancies existed. We requested datasets from State from each database over corresponding time periods; our first data run compared February 10, 2011, Chief of Mission Civilian Staffing Matrix data with March 16, 2011, ACPTS data; our second data run compared June 28, 2011, Chief of Mission Civilian Staffing Matrix data with July 7, 2011, ACPTS data. We further met with State officials to identify the cause and effect of the discrepancies we found in order to assess whether the discrepancies limit the ability of U.S. agencies to evaluate their staffing progress.
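The dataset comparison described above amounts to matching the two personnel snapshots on a shared key and flagging records that appear in only one source. A minimal sketch of such a reconciliation follows; the field name and records are hypothetical illustrations, not State's actual schema or tooling:

```python
def reconcile(acpts_rows, staffing_rows, key="name"):
    """Match two personnel snapshots on a shared key and flag records
    present in only one source; one-sided records are candidate
    reporting discrepancies to follow up on with agency officials."""
    acpts_keys = {row[key] for row in acpts_rows}
    staffing_keys = {row[key] for row in staffing_rows}
    return {
        "only_in_acpts": sorted(acpts_keys - staffing_keys),
        "only_in_staffing": sorted(staffing_keys - acpts_keys),
    }

# Hypothetical records for illustration only.
acpts = [{"name": "A. Adams"}, {"name": "B. Brown"}]
matrix = [{"name": "B. Brown"}, {"name": "C. Clark"}]
print(reconcile(acpts, matrix))
# → {'only_in_acpts': ['A. Adams'], 'only_in_staffing': ['C. Clark']}
```

In practice, a name key alone risks false mismatches from spelling variations, which is one reason follow-up interviews with officials remain necessary to establish the cause of each discrepancy.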
For DOD, we requested data from the Ministry of Defense Advisors program, the Afghanistan Pakistan Hands program, and the Joint Personnel Status Report to identify DOD’s civilian presence in Afghanistan. We also met with officials from the Ministry of Defense Advisors program, Afghanistan Pakistan Hands program, and Joint Chiefs of Staff to discuss the data sources, internal controls, and data reliability related to their respective staffing data. We found State civilian staffing data for Afghanistan to be sufficiently reliable to provide an indication of the positions filled at the level of the agency, but State ACPTS data were not sufficiently reliable to report on more-detailed staffing information, such as position type. For the Ministry of Defense Advisors program and the Afghanistan Pakistan Hands program, we found that program documents supported the requirements and the number of filled positions that the program offices provided and that the data from these programs were sufficiently reliable to illustrate the positions filled within these programs. However, the extent to which DOD staffing data in the Joint Personnel Status Report are reliable is unknown because previous reports have omitted or double counted personnel. DOD officials noted that while errors do occur in the daily submission of Joint Personnel Status Report data from the combatant commands, the reports are accurate enough to identify trends in DOD’s civilian presence over time, and we agree. As of late 2011, we could not fully verify the accuracy of the ACPTS system. However, during the course of our review and after several discussions with us regarding data reliability, State began taking steps to improve the reliability of the ACPTS database. To evaluate the implementation of DOD’s Civilian Expeditionary Workforce (CEW) policy, we obtained and reviewed relevant documents. 
Specifically, we reviewed the DOD directive that established the program to understand the structure of the CEW as presented in this document and reviewed the 2009 DOD Civilian Human Capital Strategic Plan to identify the steps DOD had established as a road map for implementing the CEW directive. We also reviewed other documents such as DOD's Strategic Management Plan Fiscal Years 2012-2013 to determine how the CEW related to high-priority departmentwide programs. In addition, we interviewed Office of the Secretary of Defense and CEW program officials to further understand the current status of efforts to fully implement the CEW and the department's plans for the CEW of the future. We also interviewed U.S. Central Command officials to determine how the CEW was being used to satisfy its needs for deployable civilians in Afghanistan and officials from the Air Force, Army, and Navy to determine how these agencies coordinated efforts to identify deployable civilians. To determine the extent to which U.S. agencies had provided required Afghanistan-specific training to their personnel before deployment, we reviewed predeployment training requirements established by the Department of State for all Chief of Mission personnel and the requirements set by various programs and components within DOD. We did not analyze training provided by the Department of Justice or its components due to its specialized law-enforcement nature. For DOD training, we reviewed training programs for the CEW, Ministry of Defense Advisors program, Afghanistan Pakistan Hands program, and U.S. Army Corps of Engineers as well as civilian training for the Air Force, Army, and Navy. We focused on these DOD programs because of their capacity-building focus. On two separate occasions, we observed scenario-based training administered to Chief of Mission personnel and the Ministry of Defense Advisors program, both held at the Muscatatuck Urban Training Center in Indiana.
To assess the extent to which the agencies complied with predeployment training requirements for Chief of Mission personnel, we compared a March 2011 data run of State, USAID, and USDA personnel from State's ACPTS system against Foreign Service Institute training rosters for the three Afghanistan-specific, mandatory training courses as well as against a State training waiver log. We focused on State, USAID, and USDA personnel due to the size of their respective civilian presence, as well as their primacy in deploying civilians to the field. This analysis yielded 134 names that did not appear on the Foreign Service Institute rosters or in the waiver log, which we submitted to State's Office of Orientation and In-Processing for explanation. Additionally, to test Diplomatic Security's FACT Tracker, we selected a random sample of 65 State, USAID, and USDA names from July 2011 ACPTS personnel data and compared these names against data in the FACT Tracker. This sample was designed so that if we found that all sample cases received FACT or other appropriate training, we would be at least 95 percent confident that fewer than about 5 percent of State, USAID, and USDA personnel in Afghanistan during July 2011 did not receive FACT training. Although we note weaknesses in ACPTS's data reliability, we judged the database sufficiently reliable to compare names against training rosters, waiver logs, and the FACT Tracker. For DOD personnel, we compared the training curricula utilized by the military services, defense agencies, and the CEW to U.S. Central Command, U.S. Forces—Afghanistan, and Office of the Secretary of Defense requirements and guidance to see whether the training addressed the requirements. In addition, we compared the various training received by deploying civilians to determine if there was any duplication or repetition in the training provided.
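The 65-name sample design described above reflects standard zero-failure attribute sampling: if every case in a simple random sample of size n passes, the one-sided upper confidence bound on the population failure rate is the smallest p satisfying (1 - p)^n ≤ 1 - confidence. A minimal sketch of that arithmetic, as our own illustration ignoring any finite-population correction and not GAO's actual sampling tool:

```python
def upper_bound_failure_rate(n, confidence=0.95):
    """One-sided upper confidence bound on the failure rate when a
    simple random sample of n cases shows zero failures: the smallest
    p with (1 - p) ** n <= 1 - confidence."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

# With all 65 sampled names accounted for, the bound is about 0.045,
# i.e., fewer than about 5 percent, matching the report's statement.
print(round(upper_bound_failure_rate(65), 3))
# → 0.045
```

Equivalently, 65 is roughly the smallest sample size for which a clean result supports the "fewer than about 5 percent at 95 percent confidence" conclusion.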
Because training record keeping within DOD is decentralized, we did not verify individual training records to establish whether deployed civilians had received the required training. We did, however, review the procedures that the military services and defense agencies have in place to ensure that deploying civilians have taken required training. In addition, we interviewed officials with the Office of the Under Secretary of Defense for Personnel and Readiness, CEW training office, Air Force, Army, Navy, U.S. Army Corps of Engineers, and U.S. Central Command to discuss the predeployment training requirements for deployed civilians. We conducted this performance audit from May 2010 to February 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, Hynek Kalkus, Assistant Director; Kimberly Seay, Assistant Director; David Adams; Adam Bonnifield; Virginia Chanley; David Hancock; Mae Jones; Linda Keefer; Shakira O'Neil; and John Wren made key contributions to this report.

In March 2009, the President called for an expanded U.S. civilian presence under Chief of Mission authority to build the capacity of the Afghan government to provide security, essential services, and economic development. In addition, the Department of Defense (DOD) deploys civilians under combatant commander authority to Afghanistan to support both combat and capacity-building missions. DOD established the Civilian Expeditionary Workforce (CEW) in 2009 to create a cadre of civilians trained, cleared, and equipped to respond urgently to expeditionary requirements. As the military draws down, U.S.
civilians will remain crucial to achieving the goal of transferring lead security responsibility to the Afghan government in 2014. For this report, GAO (1) examined the expansion of the U.S. civilian presence in Afghanistan, (2) evaluated DOD's implementation of its CEW policy, and (3) determined the extent to which U.S. agencies had provided required Afghanistan-specific training to their personnel before deployment. GAO analyzed staffing data and training requirements, and interviewed cognizant officials from the Department of State (State), other U.S. agencies with personnel under Chief of Mission authority in Afghanistan, and DOD. U.S. agencies under Chief of Mission authority and the Department of Defense (DOD) have reported expanding their civilian presence in Afghanistan and took steps to improve their ability to track that presence. Since January 2009, U.S. agencies under Chief of Mission authority more than tripled their civilian presence from 320 to 1,142. However, although State could report total Chief of Mission numbers by agency, in mid-2011 GAO identified discrepancies in State's data system used to capture more-detailed staffing information, such as location and position type. State began taking steps in the fall of 2011 to improve the reliability of its data system. Also, DOD reported expanding its overall civilian presence from 394 civilians in January 2009 to 2,929 in December 2011 to help assist U.S. efforts in Afghanistan. The extent to which DOD's data are reliable is unknown due to omissions and double counting, among other things. In a 2009 report, GAO noted similar data issues and recommended that DOD improve data concerning deployed civilians. DOD concurred with the recommendation and expects the issues will be addressed by a new tracking system to be completed in fiscal year 2012.
DOD has taken preliminary steps to implement its Civilian Expeditionary Workforce (CEW) policy, including establishing a program office; however, nearly 3 years after DOD's directive established the CEW, the program has not been fully developed and implemented. Specifically, DOD components have not identified and designated the number and types of positions that should constitute the CEW because guidance for making such determinations has not been provided by the Office of the Secretary of Defense. Officials stated that once key assumptions regarding the size and composition of the CEW have been finalized, implementing guidance will be issued. Until guidance that instructs the components on how to identify and designate the number and types of positions that will constitute the CEW is developed, DOD may not be able to (1) make the CEW a significant portion of the civilian workforce as called for in DOD's fiscal year 2009 Civilian Human Capital Strategic Plan, (2) meet readiness goals for the CEW as required in DOD's Strategic Management Plan for fiscal years 2012-2013, and (3) position itself to respond to future missions. U.S. agencies under Chief of Mission authority and DOD provided Afghanistan-specific, predeployment training to their civilians, but DOD faced challenges. State offered predeployment training courses to address its requirements for Chief of Mission personnel and designated a centralized point of contact to help ensure that no personnel were deployed without taking required training, including the Foreign Affairs Counter Threat course. While predeployment training requirements were established for Afghanistan by the Office of the Secretary of Defense and the Combatant Commander, DOD relied on its various components to provide the training to its civilians.
In some cases, DOD components offered duplicate training courses and did not address all theater requirements in their training because DOD did not have a process for identifying and synchronizing requirements and coordinating efforts to implement them, as called for in the Strategic Plan for the Next Generation of Training for the Department of Defense. Absent this process, DOD could not ensure that its civilians were fully prepared for deployment to Afghanistan and that training resources were used efficiently. GAO's recommendations to DOD include developing key assumptions and identifying the number and types of positions that should constitute the CEW, and establishing a process to identify and synchronize training requirements. DOD concurred with GAO's recommendations.
ASD(HD&ASA), within the Office of the Under Secretary of Defense for Policy, serves as the principal civilian advisor and the Chairman of the Joint Chiefs of Staff serves as the principal military advisor to the Secretary of Defense on critical infrastructure protection. ASD(HD&ASA) has issued guidance to help assure the availability of critical infrastructure. A component of this guidance outlines the roles and responsibilities of the organizations involved in DCIP. Table 1 summarizes the training and exercise roles and responsibilities of each DCIP organization. TRANSCOM and the installations we visited that have critical transportation assets have incorporated DCIP-like elements into their existing exercises. Although installation personnel we met with often were unaware of DCIP, we found that many conducted routine antiterrorism, emergency management, information assurance, and continuity of operations planning exercises that often include critical transportation assets located on the installation. As part of their regularly scheduled antiterrorism and continuity of operations programs, installation officials at all 19 installations we visited that have critical transportation assets conducted exercises encompassing critical assets located on their installations. However, unlike DCIP, some of these programs do not emphasize an all-threats, all-hazards approach to assuring critical infrastructure. DOD guidance requires the testing of antiterrorism and continuity of operations plans annually through various exercises. DOD’s antiterrorism guidance requires that commanders maintain antiterrorism exercise documentation for no less than 2 years to ensure incorporation of lessons learned. 
These antiterrorism exercises often contain aspects of DCIP, such as (1) developing adaptive plans and procedures to mitigate risk, (2) restoring capability in the event of a loss or degradation of assets, (3) supporting incident management, and (4) protecting critical infrastructure-related sensitive information. For example, even though installation personnel are often unaware of DCIP, we found that exercises testing antiterrorism and continuity of operations plans typically include critical installation infrastructure, and exercises for emergency management plans sometimes include assuring the availability of critical transportation assets in the event of natural disasters. Several installations in Japan that we visited conducted exercises that assure the availability of critical transportation assets located on those installations. Also, several installation officials responsible for critical transportation assets in PACOM’s area of responsibility with whom we met told us that they conduct exercises that examine the impact of natural disasters, such as earthquakes and typhoons, on critical infrastructure. Installation officials responsible for critical transportation assets in CENTCOM’s area of responsibility told us that they incorporate lessons learned into future exercises. For instance, an installation in the Middle East used exercises to prepare for its response to and recovery from major accidents, natural disasters, attacks, or terrorist use of chemical, biological, radiological, nuclear, or high-yield explosives, and has incorporated its findings into planning for future exercises. Although several of the combatant commands and military services we visited have variously developed headquarters-level DCIP training programs, DOD has not developed DCIP training standards departmentwide. 
Further, many of the installation personnel responsible for the assurance of critical infrastructure remain unaware of the DCIP program and the DCIP expertise available at the combatant command and military service levels. DOD’s DCIP instruction requires ASD(HD&ASA) to provide policy and guidance for DCIP and oversee the implementation of DCIP education, training, and awareness of goals and objectives. ASD(HD&ASA) recognized the need for DCIP training in its March 2008 Strategy for Defense Critical Infrastructure. Specifically, the strategy states that ASD(HD&ASA) will establish baseline critical infrastructure education requirements. Given that this strategy is relatively new, DCIP training standards have not yet been established departmentwide nor has DOD established a time frame for implementing the training standards. However, in the absence of DCIP training standards departmentwide, we determined through our work examining the five defense sectors that several combatant commands and military services have independently developed their own training programs or modules. For example, PACOM officials stated that they have conducted internal PACOM training and education on critical infrastructure assurance. U.S. Strategic Command has conducted internal training and continuous education for its staff. Further, TRANSCOM and CENTCOM officials told us that they have developed critical infrastructure training for their headquarters-level personnel. Additionally, CENTCOM officials told us that the development of their internal critical infrastructure training was still in its initial stages. Conversely, U.S. European Command officials told us that they are currently focused almost exclusively on identifying critical infrastructure and threats to those assets. Moreover, the Department of the Navy has developed a DCIP training module that it has incorporated into its information assurance training. 
The module provides an overview of critical infrastructure protection and the vulnerabilities created by increased interdependencies. The U.S. Marine Corps has begun familiarizing its installation antiterrorism officers with DCIP through required training for its Critical Asset Management System, used by the U.S. Marine Corps to track critical infrastructure. Air Force officials told us that they have a mission assurance training module that includes critical infrastructure protection, and like the U.S. Marine Corps, they conduct training for major Air Force commands on their version of the Critical Asset Management System. Further, officials we spoke with at the Air Mobility Command—an Air Force major command and subcomponent command to TRANSCOM—told us that they provide annual DCIP training to their air mobility wings. Army officials we met with did not identify Army-specific DCIP training but stated that training needs to be comprehensive and not defense sector specific. However, because there are no DCIP training standards departmentwide and combatant command- and military service-level training has not reached installation personnel responsible for assuring the availability of defense critical infrastructure, installation personnel rely on other, more established programs, such as the Antiterrorism Program. However, unlike DCIP, some of these programs do not emphasize consideration of the full spectrum of threats and hazards that can compromise the availability of critical infrastructure. For example, the Antiterrorism Program focuses on terrorist threats to assets and personnel. While some DCIP training exists, the combatant commands’ and military services’ development of disparate training programs, without benefit of DCIP training standards departmentwide, may result in programs that contain potentially conflicting information. As a result, training may be less effective, and resources may be used inefficiently. 
With few exceptions, installation personnel we met with who are responsible for assuring the availability of critical transportation infrastructure were not familiar with DCIP and were not aware that the combatant commands or military services possessed DCIP expertise that they could leverage. This unfamiliarity stems from two factors. First, as we previously reported, the military services have not yet developed specific guidance for how installations are to implement DCIP. Second, DCIP efforts to date have focused primarily on the identification and assessment of critical infrastructure. At 13 of the 19 installations we visited that have critical transportation assets, installation personnel we spoke with stated that prior to our visit, they had not heard of DCIP. Furthermore, DOD has not developed an effective way to communicate to installation personnel that DCIP expertise is available at the combatant command and military service levels. Until DOD develops such a means of communication, installation personnel may not be able to fully leverage DCIP knowledge, which will affect their ability to assure the availability of critical infrastructure using an all-hazards approach, an approach they currently may not be taking. Because the network of DOD- and non-DOD-owned critical infrastructure represents an attractive target to adversaries and also is potentially vulnerable to a variety of natural disasters or accidents, it is crucial for DOD to conduct DCIP exercises and develop and implement DCIP training. With few exceptions, at the sites we visited, installation officials responsible for the assurance of critical assets were not aware of DCIP. However, they conducted complementary exercises that, while in some cases not emphasizing the full spectrum of threats and hazards, often involved some aspects of critical infrastructure assurance and provided a measure of protection for critical assets located on the installation. 
In the absence of DCIP training standards departmentwide, the combatant commands and military services are developing and implementing disparate training programs, which may result in duplicative programs or programs that contain inconsistent information. As a result, training may be less effective and resources may be used inefficiently. Furthermore, lacking a process for communicating existing DCIP expertise across the department, installation personnel will be unable to take full advantage of existing knowledge in assuring the availability of critical infrastructure. We are making two recommendations to help assure the availability of critical infrastructure by improving training and awareness. We recommend that the Secretary of Defense direct ASD(HD&ASA) to:

Develop departmentwide DCIP training standards and an implementation time frame to enable the combatant commands and military services to develop consistent and cost-effective training programs.

Coordinate with the combatant commands and military services to develop an effective means to communicate to installation personnel the existence and availability of DCIP expertise at the combatant command and military service levels.

In written comments on a draft of this report, DOD concurred with both of our recommendations. Also, TRANSCOM provided us with technical comments, which we incorporated in the report where appropriate. DOD’s comments are reprinted in appendix II. DOD concurred with our recommendation to develop departmentwide DCIP training standards and an implementation time frame to enable the combatant commands and military services to develop consistent and cost-effective training programs. In its comments, DOD stated that ASD(HD&ASA) intends to designate U.S. 
Joint Forces Command as the executive agent for the development of critical infrastructure protection education and training standards, and upon completion of the development of training standards, ASD(HD&ASA) will set a 180-day time frame for full implementation by the combatant commands and military services to enable consistent and cost-effective training. DOD also concurred with our recommendation to coordinate with the combatant commands and military services to develop an effective means to communicate to installation personnel the existence and availability of DCIP expertise at the combatant command and military service levels. DOD noted that ASD(HD&ASA) intends to take steps to make critical infrastructure protection materials available to installation personnel and will continue to work with the Joint Staff, U.S. Joint Forces Command, and the Defense Threat Reduction Agency to develop an effective means to improve communication regarding the availability of critical infrastructure protection expertise. We are sending copies of this report to the Chairmen and Ranking Members of the Senate and House Committees on Appropriations, Senate and House Committees on Armed Services, and other interested congressional parties. We also are sending copies of this report to the Secretary of Defense; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the U.S. Marine Corps; the combatant commanders of the functional and geographic combatant commands; the Commander, U.S. Army Corps of Engineers; the Director, Defense Intelligence Agency; the Director, Defense Information Systems Agency; and the Director, Office of Management and Budget. We will also make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have questions concerning this report, please contact me at (202) 512-5431 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To determine the extent to which the Department of Defense (DOD) has (1) incorporated aspects of the Defense Critical Infrastructure Program (DCIP) into its exercises in the Transportation Defense Sector and (2) developed DCIP training standards departmentwide and made installation personnel aware of existing DCIP expertise, we obtained relevant documentation and interviewed officials from the following DOD organizations:

Office of the Secretary of Defense (OSD)
Office of the Assistant Secretary of Defense for Homeland Defense and Americas’ Security Affairs
Joint Staff, Directorate for Operations, Antiterrorism and Homeland Defense
Defense Threat Reduction Agency, Combat Support Assessments Division

Military services
Department of the Army, Asymmetric Warfare Office, Critical
Office of the Chief Information Officer
Mission Assurance Division, Naval Surface Warfare Center, Dahlgren Division, Dahlgren, Virginia
Department of the Air Force, Air, Space and Information Operations, Plans, and Requirements, Homeland Defense Division
Headquarters, U.S. Marine Corps, Security Division, Critical

Headquarters, U.S. Central Command, Critical Infrastructure Program Office, MacDill Air Force Base (AFB), Florida
Headquarters, U.S. European Command, Critical Infrastructure Protection Program Office, Patch Barracks, Vaihingen, Germany
Headquarters, U.S. Pacific Command, Antiterrorism and Critical Infrastructure Division, Camp H.M. Smith, Hawaii
U.S. Forces Japan
Headquarters, U.S. Transportation Command (TRANSCOM), Critical Infrastructure Program, Scott AFB, Illinois
Headquarters, Air Mobility Command, Homeland Defense Branch,
Headquarters, U.S. Strategic Command, Mission Assurance Division, Offutt AFB, Nebraska

Defense infrastructure sector lead agents
Headquarters, Defense Intelligence Agency, Critical Infrastructure
Headquarters, Defense Information Systems Agency, Office for Critical Infrastructure Protection and Homeland Security/Defense
Headquarters, TRANSCOM, Critical Infrastructure Program, Scott AFB,
Headquarters, U.S. Strategic Command, Mission Assurance Division,
Headquarters, U.S. Army Corps of Engineers, Directorate of Military

Selected critical assets in the continental United States, Hawaii, the U.S. Territory of Guam, Germany, Greece, Kuwait and another country in U.S. Central Command’s area of responsibility, and Japan

We drew a nonprobability sample of critical assets in the United States and abroad, using draft critical asset lists developed by the Joint Staff, each of the four military services, TRANSCOM, the Defense Intelligence Agency, and the Defense Information Systems Agency. We selected assets for our review based on the following criteria: (1) overlap among the various critical asset lists; (2) geographic dispersion among geographic combatant commands’ areas of responsibility; (3) representation from each military service; and (4) with respect to transportation assets, representation in TRANSCOM’s three asset categories: air bases, seaports, and commercial airports. Using this methodology, we selected 46 total critical assets for review—22 transportation assets and 24 Tri-Sector assets—in the United States and in Europe, the Middle East, and the Pacific region. Further, we reviewed relevant DOD guidance pertaining to DCIP training and exercise requirements and interviewed officials from OSD, the Joint Staff, defense agencies, the military services, the combatant commands, and the defense infrastructure sector lead agents responsible for DCIP. 
(Throughout this unclassified report, we do not identify the 46 specific critical assets, their locations or installations, or combatant command or others’ missions that the assets support because that information is classified.) This report’s first objective, examining the extent to which DOD has incorporated aspects of DCIP into its exercises in the Transportation Defense Sector, focused on DCIP-related exercises conducted by TRANSCOM and on exercises conducted at individual installations we visited that have critical transportation assets. To address this objective, we reviewed and analyzed policies, assurance plans, strategies, handbooks, directives, and instructions. Further, we spoke with installation personnel about their efforts to incorporate aspects of DCIP into installation exercises and reviewed and analyzed installation emergency management plans, information assurance plans, and continuity of operations plans to determine how, if at all, critical assets were incorporated into exercises. In addition, to determine how critical assets are included and how lessons learned are incorporated into future exercises, we interviewed combatant command, subcomponent, and installation personnel responsible for planning and conducting exercises involving critical assets. For our second objective, the scope of our work on the extent to which DOD has developed DCIP training standards departmentwide and made installation personnel aware of existing DCIP expertise focused on efforts at OSD; at the four military services; within five combatant commands— U.S. Central Command, U.S. European Command, U.S. Pacific Command, U.S. Strategic Command, and TRANSCOM; and at installations that have critical assets representing each of the five defense sectors that we visited. Regarding DCIP awareness, the scope of our work focused exclusively on installation personnel who are responsible for critical transportation assets. 
To address this objective, we reviewed existing combatant command and military service DCIP training programs and interviewed program officials at the OSD, combatant command, and military service headquarters levels. Further, we interviewed installation personnel responsible for assuring the critical infrastructure we selected as part of our nonprobability sample to determine their awareness of DCIP and the existence of DCIP expertise and their ability to leverage these resources. We conducted this performance audit from May 2007 through September 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Mark A. Pross, Assistant Director; Gina M. Flacco; James P. Krustapentus; Kate S. Lenane; Terry L. Richardson; Marc J. Schwartz; John S. Townes; Cheryl A. Weissman; and Alex M. Winograd made key contributions to this report. Defense Critical Infrastructure: DOD’s Evolving Assurance Program Has Made Progress but Leaves Critical Space, Intelligence, and Global Communications Assets at Risk. GAO-08-828NI. Washington, D.C.: August 22, 2008 (For Official Use Only). Defense Critical Infrastructure: Adherence to Guidance Would Improve DOD’s Approach to Identifying and Assuring the Availability of Critical Transportation Assets. GAO-08-851. Washington, D.C.: August 15, 2008. Defense Critical Infrastructure: Additional Air Force Actions Needed at Creech Air Force Base to Ensure Protection and Continuity of UAS Operations. GAO-08-469RNI. Washington, D.C.: April 23, 2008 (For Official Use Only). 
Defense Critical Infrastructure: DOD’s Risk Analysis of Its Critical Infrastructure Omits Highly Sensitive Assets. GAO-08-373R. Washington, D.C.: April 2, 2008. Defense Infrastructure: Management Actions Needed to Ensure Effectiveness of DOD’s Risk Management Approach for the Defense Industrial Base. GAO-07-1077. Washington, D.C.: August 31, 2007. Defense Infrastructure: Actions Needed to Guide DOD’s Efforts to Identify, Prioritize, and Assess Its Critical Infrastructure. GAO-07-461. Washington, D.C.: May 24, 2007.
The agricultural sector is a major part of the U.S. economy and has been and will continue to be affected by climate change, according to the Third National Climate Assessment. The assessment states that climate change will likely cause an increase in temperature, rainfall intensity, and extreme events in some areas, and extreme climate conditions, such as sustained droughts and heat waves. USDA plays an important role in addressing these potential impacts by using its resources to develop and implement both mitigation and adaptation measures. The U.S. agricultural sector accounted for $395 billion in sales in 2012, up 33 percent from 2007. According to USDA’s 2012 Census of Agriculture, of this $395 billion, about half is from sales of crops, and half is from livestock sales. Between 2007 and 2012, crop sales increased nearly 48 percent, while livestock sales increased about 19 percent. Thirteen states—California, Iowa, Texas, Nebraska, Minnesota, Kansas, Illinois, North Carolina, Wisconsin, Indiana, North Dakota, South Dakota, and Ohio—each had more than $10 billion in agricultural sales and together accounted for about 62 percent of all agriculture sales in 2012. California led the United States in agricultural sales in 2012, with about $43 billion, or 11 percent, of the total U.S. agricultural sales. In recent years, there has been a trend toward larger-scale farming operations. For example, according to a 2013 USDA Economic Research Service (ERS) report, between 1982 and 2007, for farms, the midpoint acreage for U.S. cropland nearly doubled, from about 590 acres to about 1,100 acres. Three important crops in terms of acreage planted and sales in the United States are corn, soybeans, and wheat. In 2012, corn was harvested on more than 94 million acres, and total sales for corn were about $67 billion. 
Soybeans were harvested on more than 76 million acres, with sales of about $39 billion, while there were about 49 million acres of all wheat varieties, with sales of about $16 billion. Livestock sales represented nearly half of agricultural sales in 2012, with poultry, cattle, milk, and pigs the most sold commodities in this category. A breakdown of the agricultural sector’s 2012 sales is shown in figure 1. The United States is also a global supplier of food. In 2012, the United States exported nearly $136 billion in agricultural products, or around 10 percent of total U.S. exports. The United States exports about 20 percent of the corn grown domestically, is the leading exporter of soybeans, and also is a leading exporter of wheat. In addition, the United States is the largest producer of beef and poultry in the world. According to the Third National Climate Assessment, climate change is expected to increase disruptions to agriculture production in the future. Increases in temperature, rainfall intensity, and extreme events in some areas, and extreme climate conditions, such as sustained droughts and heat waves, will likely have negative impacts on crop and livestock yields. By the end of the century, the Third National Climate Assessment states that average U.S. temperatures will increase between 3°F and 10°F, and precipitation events will be more extreme, meaning that more rain will fall during these events. In addition, changing climate conditions also will likely affect the geographic distribution and severity of invasive pests, diseases, and weeds. Table 1 shows the projected impacts of climate change and how they could affect agricultural production in the United States. Both climate mitigation and adaptation options exist in the agricultural sector. 
According to the Third National Climate Assessment, both of these efforts are required to minimize the damage inflicted by climate change in the United States and to adapt to the changes that already have occurred or that will occur. The agricultural sector emits about 6 percent of total U.S. greenhouse gas emissions, but U.S. lands (mostly forestlands) sequester enough carbon to offset 12 percent of total greenhouse gas emissions. Sources of these emissions include fuel consumption, fertilizer that can emit nitrous oxide, and methane emissions from livestock. Farmers can take certain mitigation actions to reduce greenhouse gas emissions and sequester carbon. For example, farmers can use energy-efficient buildings, vehicles, or farm equipment that runs on renewable energy, rather than fossil fuels. In addition, farmers can implement mitigation measures, such as no-till farming and precision agriculture. According to a 2011 ERS report, adoption of precision agriculture can improve the efficiency of input use and reduce environmental harm from the overapplication of inputs such as fertilizers and pesticides, which can reduce nitrous oxide emissions. Through the digestive process, livestock emit a considerable amount of methane, a greenhouse gas; reducing these emissions is another mitigation strategy. According to USDA officials, work is being done to alter the diet of cattle and to improve manure management practices in an effort to reduce methane emissions. Farmers also can improve the resiliency of their operations, as shown in table 2. For example, farmers can change the type of crop they plant to fit the changing climate or change the timing of their planting in response to a longer or shorter growing season. They can also shift to drought-, pest-, or weed-resistant crop varieties to reduce climate change impacts. There are also actions that address both mitigation and adaptation. 
For example, farmers can choose no-till farming, which allows carbon to remain in the soil and reduces fuel consumption (mitigation) while also improving the capacity of soil to retain moisture to reduce stress on crops during drought (adaptation). Several USDA agencies and offices are involved in climate change work. USDA’s Climate Change Program Office, which coordinates all of the department’s responses to climate change, leads a Global Change Task Force. The task force includes representatives from 20 USDA agencies and offices and works to coordinate climate activities by holding monthly meetings to discuss climate-related opportunities and efforts across the department. Table 3 provides information about the eight USDA agencies and offices that are most heavily involved in the agency’s climate change work. In June 2011, the Secretary of Agriculture issued Departmental Regulation 1070-001, which required USDA agencies to take climate change into account when making long-term planning decisions and to prepare climate adaptation plans by June 2012. The resulting USDA adaptation plan contains individual adaptation plans for 12 of USDA’s agencies, lays out the risks and vulnerabilities facing agency missions as a result of climate change, and details the strategies for overcoming these vulnerabilities. As required by GPRA as amended by the GPRA Modernization Act of 2010, executive agencies are to complete strategic plans in which they define their missions, establish results-oriented goals, and identify the strategies that will be needed to achieve those goals. According to USDA officials, beginning with its 2010-2015 Strategic Plan, USDA included climate change as a part of one of its four strategic goals. 
Specifically, the Strategic Plan states that USDA will ensure that farms are “conserved, restored, and made more resilient to climate change.” An objective of this strategic goal is to “lead efforts to mitigate and adapt to climate change.” In April 2014, USDA released its 2014-2018 Strategic Plan that includes the same four strategic goals established in the 2010-2015 strategic plan plus one additional strategic goal. USDA partners with the cooperative extension system to, among other things, help deliver information to farmers on farm management practices. The cooperative extension system is a partnership between land-grant universities and USDA. Established by the Morrill Act of 1862, the land-grant university system comprises more than 100 colleges and universities around the country. These institutions receive federal support and are required to provide relevant information to the public through the extension system. Faculty members at land-grant universities may have dual appointments, meaning that in addition to teaching, they spend a certain portion of their time on research and on extension work. Established by the Smith-Lever Act of 1914, the cooperative extension system is a nationwide system used to disseminate information and research developed at land-grant universities. The system is a network of state and local offices that provide information to the public on a variety of topics, including agriculture. USDA’s National Institute of Food and Agriculture (NIFA) distributes federal funding for the extension service. The extension service also receives state and county funding. 
According to a June 2013 joint USDA and National Oceanic and Atmospheric Administration (NOAA) report on the role of extension in climate adaptation, climate-related research and extension efforts have existed for decades. These efforts provide decision makers with information and tools, disseminated through meetings, written publications, and internet websites, to increase climate literacy. USDA’s climate change priorities include providing better information on future climate conditions to help farmers in their decision making, conducting research, and delivering decision support tools and technical assistance to farmers. USDA’s adaptation efforts focus on research and technical assistance, while its mitigation efforts also focus on reducing greenhouse gas emissions and sequestering carbon. USDA’s climate adaptation and mitigation priorities generally align with national climate priorities. USDA’s climate change priorities include providing better information to farmers on current and potential impacts of climate change, which is information that farmers need to make decisions. According to a USDA official, there is a need to develop better forecasts for upcoming growing seasons. These forecasts are generally 90-day forecasts, but farmers also need accurate forecasts for 6 to 8 months out, because they order their seeds for spring planting in the fall. Another USDA official we spoke with told us that there is a need to provide farmers with longer-term projections for as much as 20 to 60 years into the future that farmers can use to help make large capital investment decisions, such as whether to install an irrigation system or install a cooling system in a barn. 
A USDA official told us that USDA itself does not engage in climate modeling, but instead relies on some of the federal agencies with strong climate science capabilities, such as NOAA, the National Science Foundation, and the National Aeronautics and Space Administration, to help USDA officials better understand the climate projections that are available and the associated uncertainties for these projections. USDA officials told us that through the USGCRP, USDA has been encouraging these agencies to downscale modeling results so that they provide more localized and, hence, more helpful information to farmers. According to USDA’s 2014 Budget Explanatory Notes for Congress, “Access to consistent and detailed projections of climate change is a major area of uncertainty for our programs and agencies.” According to USDA’s 2010 Climate Change Science Plan, the agency’s adaptation efforts aim to improve the understanding of climate change impacts on agriculture, develop adaptation practices, and deliver science-based information and tools to stakeholders. USDA officials told us that about 80 percent of the agency’s climate research dollars are spent on climate change adaptation. A 2013 Congressional Research Service (CRS) report shows USDA’s funding for climate change research under USGCRP has almost doubled from $63 million in fiscal year 2008 to $121 million in fiscal year 2010, but funding has remained relatively flat since. In fiscal year 2013, total funding for USDA’s climate change research programs was about $82 million, as shown in table 4. According to OMB data on federal climate expenditures, research represents approximately 20 percent of USDA’s total climate change funding. According to USDA officials, USDA does not have a line item in its budget that covers climate change because many of the agency’s climate efforts involve programs across several USDA agencies. 
However, USDA does compile a “climate change crosscut,” which summarizes the money that various USDA agencies spent on climate change activities, and provides this information to OMB. For more information on USDA’s climate change funding, see appendix II. NIFA and the Agricultural Research Service (ARS) are the two USDA agencies with the largest amount of climate change research funding. NIFA. NIFA oversees a competitive grant program called the Agriculture and Food Research Initiative (AFRI) that awards grants for research, extension, and/or education activities. Grant money is also awarded for “integrated” projects that incorporate two or more of these activities. Some integrated projects, known as coordinated agricultural projects (CAP), support large-scale projects that promote collaboration, open communication and the exchange of information, and reduce duplication of effort. In addition, NIFA funds “standard” grants on climate change adaptation and mitigation in agriculture that consist of targeted research, education, extension, or integrated projects. To help communicate information to farmers, NIFA distributes funding to the cooperative extension system. Extension disseminates science-based information and decision support tools through meetings, written publications, and the internet. For example, the cooperative extension system supports the eXtension website, which can be used to access information and education resources from land-grant university staff and experts on a wide range of topics, such as entrepreneurship and growing certain crops. In fiscal year 2013, NIFA provided approximately $296 million for agricultural extension at land-grant universities, or slightly less than one quarter of NIFA’s total budget, to supplement state and county funds for the extension system. ARS. 
ARS, USDA’s principal in-house research agency, performs both basic and applied research and presents the results through various sources, including academic papers, fact sheets, and conference presentations. Climate change research conducted by ARS takes place under the Natural Resources and Sustainable Agricultural Systems program, one of its four national research programs. In fiscal year 2013, ARS spent about $38 million on climate change research, about 65 percent of which went to adaptation research projects, according to an internal USDA document. For example, ARS researchers are experimenting with different crop varieties to show which of these can withstand drought or higher temperatures. ERS. ERS also devotes a smaller amount of research money to developing information for decision makers on climate policy, such as examining the costs and benefits of adapting to climate change. In 2012, ERS released a report that examined how farmers might adapt to changing climate conditions to reduce the impact of changes in local weather, resource conditions, and price signals. The study found that, while changing climate conditions are uncertain, farmers have the opportunity to adapt to weather, resource, and price changes by altering crops and adjusting their production practices. In the area of climate change mitigation, USDA has set a goal for 2015 of a 40 million metric ton reduction in greenhouse gas emissions from the agricultural sector compared with 2005 levels and an 80 million metric ton increase in carbon sequestration compared with 2005. According to USDA’s 2010-2015 Strategic Plan, USDA will work to achieve these goals through its existing conservation and energy programs. Many of USDA’s conservation programs are administered by the Natural Resources Conservation Service (NRCS) and provide both technical and financial assistance to farmers who voluntarily enroll in them.
Under the Conservation Technical Assistance program, for example, NRCS staff work with farmers to develop and implement conservation plans. In addition, NRCS administers programs that provide financial assistance to encourage farmers to adopt conservation practices, such as the Conservation Stewardship Program. USDA’s Farm Service Agency (FSA) also administers conservation programs, such as the Conservation Reserve Program, which provides financial assistance to farmers who remove land from agricultural production and plant native vegetation on the land. USDA’s conservation programs account for about 4 percent of USDA’s budget—approximately $6.2 billion in fiscal year 2013—but the agency does not report this funding to OMB as spending directly related to climate change. USDA officials said they do not report the spending to OMB as climate-related because the environmental benefits of these programs are wide-ranging and not solely related to climate change. For example, these conservation programs can help reduce erosion from fields and provide habitat for wildlife, in addition to sequestering carbon and improving soil health. USDA officials told us that the agency’s research efforts also play a role in promoting climate change mitigation, but these officials acknowledge that research alone does not directly result in emissions reductions. Several research efforts aim to quantify the reduction in greenhouse gas emissions that occurs if farmers take certain actions. For example, ARS has helped develop GRACEnet, a research program that estimates greenhouse gas emissions and carbon sequestration based on crops planted and land management practices. According to a USDA official, mitigation research represents about 15 percent of USDA’s total climate change research spending. Another mitigation goal involves increasing renewable energy generation in rural communities. 
USDA has set a goal of more than doubling the amount of renewable energy generation in rural communities, from 1.5 billion kilowatt hours in 2009 to 3.1 billion kilowatt hours in 2015. For example, Rural Development’s Rural Energy for America Program (REAP) supports small energy generation projects by providing financial assistance to farmers and rural business owners who install renewable energy systems, conduct energy audits, and make energy efficiency improvements. According to OMB, REAP received about $3 million for these efforts in fiscal year 2013. In total, Rural Development received about $13 million in fiscal year 2013 to support agency mitigation efforts, according to OMB. National climate change priorities, as articulated by the Administration and the USGCRP, are to promote mitigation actions, advance climate science, develop tools and translate information, better predict future climate conditions, and ensure that federal agencies are incorporating climate change into agency programs and operations. The Administration has set a goal of reducing greenhouse gas emissions by 17 percent by 2020 from 2005 levels. National climate change priorities aimed at meeting economy-wide emissions targets and ensuring climate resiliency are addressed in documents including the President’s Climate Action Plan, the 2011 Progress Report of the Interagency Climate Change Adaptation Task Force, and Executive Orders 13514 and 13653. National climate change priorities are also identified in the 10-year USGCRP Research Plan. Table 5 shows climate-related priorities discussed in these documents. USDA’s climate change priorities for agriculture generally align with the national priorities outlined in table 5. USDA officials said they rely on NOAA and other USGCRP science agencies to develop better and more localized climate projections to help farmers make management decisions to adapt to weather variability and a changing climate.
USDA also has various adaptation and mitigation research efforts under way that are intended to help understand the current and potential impacts of climate change and develop information and tools for farmers. To help deliver information to farmers, USDA will rely on the cooperative extension system to translate climate information and on NRCS staff to provide technical assistance through conservation programs, such as the Conservation Technical Assistance program. According to the USGCRP 2012-2021 Research Plan, USDA’s research and extension efforts, conservation programs, and efforts to provide farmers with decision-making tools support USGCRP priorities on “multiple fronts.” USDA has also developed a climate change adaptation plan. In accordance with Executive Order 13514, USDA issued Departmental Regulation 1070-001 in June 2011, which directed its agencies to develop climate change adaptation plans. The June 2012 climate change adaptation plan detailed how climate change is anticipated to affect USDA operations and how USDA agencies will prepare and adapt to the projected impacts. For example, FSA stated that threats associated with climate change could make farmers more reliant on financial and disaster assistance programs administered by FSA. To address this vulnerability, FSA’s strategy is to review existing policies and programs and determine if they could be modified to encourage farmers to undertake adaptation measures such as changing crop varieties, diversifying crops, and increasing water-use efficiency. USDA’s climate efforts consist of both mature programs and newer initiatives. USDA has been conducting climate change research and implementing conservation programs for several years. Many of these conservation programs were established prior to USDA’s more recent focus on climate change. In more recent years, USDA has emphasized the need to turn climate research into technical assistance for farmers.
In addition, USDA is working with other agencies to improve climate projections. However, USDA is not using its performance planning or reporting process to provide information on how it intends to accomplish its strategic goal on climate change or to track its progress toward meeting this goal. USDA’s climate efforts have consisted of research, conservation, and energy programs. Research: USDA has conducted climate change research since the early 1990s. These research efforts grew out of the Global Change Research Act of 1990, which required the development of a research plan that provides recommendations for collaboration to combine and interpret data to, among other things, “produce information readily usable by policymakers attempting to formulate effective strategies for preventing, mitigating, and adapting to the effects of global change.” Both ARS and NIFA (along with NIFA’s predecessor organization at USDA, the Cooperative State Research, Education, and Extension Service) have been involved in climate change research since the 1990s. ARS and NIFA still play major roles in USDA’s climate research efforts. In fiscal year 2013, they accounted for $78 million (95 percent) of the $82 million that USDA spent on climate change research, according to OMB data. ARS accounted for $38 million in research spending, with most of this research conducted under its Climate Change, Soils, and Emissions Program. This program conducts research on four components of climate change: (1) improving air quality; (2) reducing greenhouse gas emissions and sequestering carbon; (3) enabling agriculture to adapt to climate change; and (4) enhancing soil health. According to an internal ARS document, there were 32 ongoing research projects in 2013 focused on climate mitigation at a cost of $13.7 million, and 53 research projects focused on climate adaptation at a cost of $24.6 million.
NIFA’s work on climate change is done through its Institute of Bioenergy, Climate, and the Environment and, in fiscal year 2013, NIFA accounted for about $40 million of USDA’s climate research spending. According to NIFA officials, most of NIFA’s climate research is funded through AFRI, the largest competitive grants program that NIFA administers. One of the five challenge areas that AFRI is working on is to “mitigate and adapt to climate change.” Conservation: USDA also has several established conservation programs administered by FSA and NRCS, many of which were established prior to USDA’s more recent focus on climate change. Now these conservation programs are being presented as not only helping to conserve land, prevent erosion, or provide wildlife habitat, but also as having the climate benefit of sequestering carbon and promoting soil health. For example, FSA, and its predecessor agency, have administered the Conservation Reserve Program since 1986 and, in fiscal year 2013, about 27 million acres of land were conserved through this program. FSA estimates that this effort results in a net reduction of 45 million metric tons of carbon dioxide annually through sequestration of carbon dioxide and reduced fuel and fertilizer use by farmers. Similarly, NRCS oversees a number of conservation programs, and an estimated 52.9 million acres were enrolled in these programs in fiscal year 2012. NRCS’s focus on conservation goes back to the Dust Bowl of the 1930s. The largest of NRCS’s conservation programs is the Environmental Quality Incentives Program, which provides financial and technical assistance to farmers who implement conservation practices and undertake conservation planning. Energy: Another area where USDA has ongoing climate change mitigation efforts involves energy. FSA, NIFA, NRCS, and Rural Development have programs focusing on different aspects of energy, including energy efficiency, renewable energy, and the production of biofuels.
Among these programs is Rural Development’s REAP program. According to a March 2012 report on REAP, this program had funded 5,733 renewable energy and energy-efficiency improvement projects since 2009. In 2009, research program leaders at NIFA developed a new strategic direction, which called for NIFA to support the creation of “innovative tools for communication and education to provide information that people and communities can use in their daily lives.” Subsequently, USDA’s Climate Change Science Plan, released in 2010, emphasized the need for the agency to develop tools to help farmers with both climate adaptation and mitigation. Also, in 2010, NIFA issued a funding announcement under AFRI’s climate change challenge area for projects focused on climate research, education, or extension. The announcement also provided funding for integrated projects, which combine research with education and/or extension activities. Among the projects funded under this announcement were the four largest grants the agency had ever awarded, according to NIFA officials. NIFA provided $85 million in total funding over a period of 5 years for these four integrated projects, known as CAP grants. Each of these grants, now in its fourth year, involves researchers across several universities. Table 6 provides more information on these projects. The four CAP grant projects are at various stages of turning research into technical assistance for farmers, based on our review of materials and conversations with leaders of these projects. Most of the grantees had conducted outreach to farmers by conducting surveys or holding webinars, but the grants varied in terms of their use of extension and development of web-based tools to aid farmers in decision making. For example, the Sustainable Corn CAP grant has cooperative extension agents in nine Midwestern states in the Corn Belt who are responsible for disseminating project information to farmers.
This CAP grant recipient also has partnered with another USDA grantee, Useful to Usable, to communicate research to farmers; this joint effort has resulted in the development of two web-based tools for farmers. One of these tools, the Corn-Growing Degree Day tool, compares current weather conditions with 30-year averages and helps farmers decide when to plant their seeds. In contrast, university officials leading the PINEMAP and REACCH CAP grants say their projects have been more focused on research so far, but they are planning to produce tools for farmers in the final years of funding. These university officials said that developing tools for farmers takes time. In 2013, NIFA awarded two CAP grants focused on livestock, one on dairy cattle and one on beef cattle. These also were 5-year grants of about $10 million each. These livestock grants focus on developing information for farmers on adaptation and mitigation options. For example, the CAP grant project on beef cattle is examining ways to reduce methane emissions by altering the animals’ diets. NIFA officials told us that when funding runs out for these CAP grants and the four awarded in 2011, future CAP grant funding would likely be smaller. A senior ARS official told us that ARS’s research on climate mitigation is more mature than its climate adaptation efforts. This official said ARS was more comfortable with making recommendations to farmers about how to reduce greenhouse gas emissions or sequester carbon in soil than with providing information on how to adapt to climate change. He said that ARS’s adaptation research had not progressed to the point where a “decision tree” tool could be developed to guide farmers on adaptation options.
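The growing degree day comparison that the Corn-Growing Degree Day tool performs can be illustrated with a short sketch. The capped-average method and the 50 °F base / 86 °F ceiling below are common conventions for corn, and the daily temperatures and the 30-year normal are hypothetical placeholders, not values from the actual tool.

```python
# Illustrative growing degree day (GDD) comparison, in the spirit of the
# Corn-Growing Degree Day tool described above. All numbers are assumptions.

def daily_gdd(t_max_f, t_min_f, base=50.0, ceiling=86.0):
    """One day's GDD using the capped-average method (temperatures in °F)."""
    t_max = min(max(t_max_f, base), ceiling)  # cap highs at the ceiling
    t_min = min(max(t_min_f, base), ceiling)  # lows below base contribute 0
    return (t_max + t_min) / 2.0 - base

def season_gdd(daily_temps):
    """Accumulate GDD over a sequence of (t_max, t_min) pairs."""
    return sum(daily_gdd(hi, lo) for hi, lo in daily_temps)

# Hypothetical current-season highs/lows vs. an assumed 30-year normal.
current = [(78, 55), (82, 60), (90, 64)]
accumulated = season_gdd(current)
normal_to_date = 55.0  # placeholder for the 30-year average accumulation
print(f"Accumulated GDD: {accumulated:.1f} (normal: {normal_to_date:.1f})")
# Prints: Accumulated GDD: 62.5 (normal: 55.0)
```

Comparing accumulated GDD against the 30-year normal tells a farmer whether the season is running warm or cool relative to history, which informs planting timing and hybrid-maturity choices.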
During our site visits to two ARS laboratories, we observed growth chambers where ARS scientists were examining the impacts that future climate conditions could have on crops to help identify crop types that best withstand these conditions. One area where ARS has made progress involves soil health techniques, such as no-till farming. According to ARS and NRCS officials, ARS research has been used by NRCS to provide information to farmers on the benefits of healthy soil. Like ARS, NRCS has also developed some climate change tools for farmers in recent years. For example, NRCS has worked with Colorado State University to develop COMET-FARM, a web-based tool used by farmers to estimate the carbon footprint of their operations and to determine the likely impacts of certain actions in reducing their greenhouse gas emissions or increasing sequestration. In 2012, NRCS launched a campaign called Unlock the Secrets in the Soil to share information with farmers on the benefits of healthy and productive soil. As part of this effort, NRCS maintains a website with fact sheets, videos, and other information on soil health. In February 2014, USDA announced the establishment of seven regional climate hubs and three subsidiary hubs to “deliver science-based, practical information to farmers, ranchers” and “to support decision making related to mitigation of, and adaptation to climate change.” Figure 2 is an interactive map showing the regional climate hub locations. (See app. IV for a printable, noninteractive version of fig. 2.) During the first year of this effort, USDA officials expect the hubs to engage with stakeholders, establish a website, conduct a climate risk assessment for the hub’s region, and develop training for USDA staff. Several USDA agencies will actively contribute to this effort, including ARS, NRCS, Rural Development, and the Forest Service.
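At its core, a farm carbon-footprint estimate of the kind COMET-FARM produces multiplies activity levels by emission or sequestration factors and nets the two. The sketch below illustrates the idea only; the factor names and values are hypothetical assumptions, not data from the actual tool.

```python
# Illustrative net carbon-footprint estimate, in the spirit of the
# COMET-FARM tool described above. All factors are hypothetical.

EMISSION_FACTORS = {            # kg CO2-equivalent emitted per unit (assumed)
    "diesel_gallons": 10.2,     # per gallon of diesel burned
    "n_fertilizer_kg": 5.6,     # per kg of nitrogen fertilizer applied
}
SEQUESTRATION_FACTORS = {       # kg CO2e removed per acre per year (assumed)
    "no_till_acres": 150.0,
    "cover_crop_acres": 200.0,
}

def net_footprint(activities):
    """Net kg CO2e: emissions from inputs minus sequestration from practices."""
    emitted = sum(EMISSION_FACTORS.get(k, 0) * v for k, v in activities.items())
    removed = sum(SEQUESTRATION_FACTORS.get(k, 0) * v
                  for k, v in activities.items())
    return emitted - removed

farm = {"diesel_gallons": 500, "n_fertilizer_kg": 1000, "no_till_acres": 100}
print(f"Net footprint: {net_footprint(farm):,.0f} kg CO2e")
# Prints: Net footprint: -4,300 kg CO2e (negative means net sequestration)
```

A farmer can rerun such an estimate with different practice mixes (say, adding cover-crop acres) to see the likely change in net emissions, which is the decision-support role the report describes for the tool.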
USDA officials said they expect NRCS staff and the cooperative extension service to help distribute technical assistance from the hubs to farmers and other stakeholders in the region. The climate hubs will also collaborate with other federal agencies that have existing climate offices, such as NOAA, which has regional climate partnerships that include Regional Integrated Sciences and Assessments teams, Regional Climate Centers, and Regional Climate Services Directors. These hubs will be located at existing ARS or Forest Service facilities. Figure 2: Location of USDA’s Regional Climate Hubs and Information on These Regions (interactive graphic; see appendix IV for a printable, noninteractive version). Some USDA officials said that there was no substitute for the one-on-one attention that farmers receive from extension or NRCS staff. However, representatives from farm groups said that the cooperative extension system is “not what it used to be” because both funding and staffing levels have fallen in recent years. During our site visit to Iowa, a leading agricultural state in crop and livestock production, a university official told us that, until 2009, there had been an agricultural extension agent in each of the 99 county extension offices, but there now are only about 30 such individuals, each of whom covers multiple counties. According to USDA officials leading the climate hubs effort, they have reached out to the extension system by hosting a meeting with all of the land-grant universities and sending a letter to states, urging them to take part in the climate hub effort. USDA is also developing a memorandum of understanding on this topic with the Association of Public and Land-Grant Universities.
However, very few extension specialists focus on climate change; we spoke with two climate change extension specialists during our work, and one of them told us that he knew of only four such specialists in the United States. NRCS officials said that they viewed the staff’s work in the hubs as an extension of NRCS’s traditional work on conservation. These officials did, however, acknowledge that there has been a decline in staffing levels in recent years, which we discussed in a recent report. With the reduced presence of the extension system, some stakeholders told us that farmers are turning more to the private sector for information on managing their farms. Certified crop advisers, who provide information to farmers on, among other things, seed selection, irrigation, and fertilizer decisions, are one such source of information. The Useful to Usable grant project is focused on delivering information to crop advisers that can be shared with farmers. Some of these crop advisers are independent, while others are employed by large agribusiness companies. One of the key pieces of information farmers need to make planting and other decisions is reliable information on future climate projections. USDA officials told us that farmers need projections that cover longer periods of time so they can make longer-range and seasonal decisions, such as what type of seed to buy for the upcoming planting season or whether to purchase a certain piece of equipment. One of the ways that USDA and NOAA are addressing the need for longer-term climate projections is through participation in the Agricultural Model Intercomparison and Improvement Project (AgMIP) consortium, an international effort established in 2010 to examine and improve globally integrated climate, economic, and agricultural production models. A USDA official involved in this effort told us that the AgMIP effort is at an early stage of development.
Currently, most climate models provide projections over large geographical regions. For example, the Third National Climate Assessment provided information on possible future climate conditions across nine regions in the United States under different greenhouse gas emissions scenarios. However, NOAA officials said that these were scenarios and not forecasts because they do not include probabilities of these occurring. Officials at USDA told us that they recognize farmers need both localized information to make decisions and more information on the likelihood that certain climate conditions will occur. The process of refining larger-scale model results and arriving at a more local geographic scale is known as downscaling. According to the National Research Council, downscaling can be challenging, and additional evaluation of the various downscaling methodologies is needed. Currently, farmers can obtain more localized projections on weather and drought conditions with shorter time frames from various sources. NOAA’s Climate Prediction Center produces seasonal weather outlooks that cover 90-day periods for both temperature and precipitation and provide probabilities on whether conditions will be average, below average, or above average. NOAA also leads the National Integrated Drought Information System (NIDIS), which provides information on current drought conditions and projections for future drought conditions and communicates these on a regular basis. NIDIS is made up of representatives from USDA, along with the Departments of Energy, the Interior, and Transportation, and various other federal agencies. Helping to make farms more resilient to climate change is part of one of USDA’s four strategic goals, but the agency is not using its performance planning and reporting process to provide information on how it intends to accomplish this goal or the status of its efforts. 
Developing a strategic plan and establishing performance measures is the first step in an agency’s performance management process. According to GPRA, as amended by the GPRA Modernization Act of 2010, an agency’s performance plan is supposed to explain how the agency will accomplish its performance goals, and its performance reports are supposed to review the extent to which performance goals have been met and, if the performance goals are not met, explain why. We found shortcomings in both of these documents that USDA had been preparing on an annual basis: Performance plans. In USDA’s performance plans for the years 2011, 2012, and 2013, there was only general information about its climate change efforts; there was no specific linkage between these efforts and its performance goals, or explanation of how these efforts would be used to accomplish those goals. For example, in the 2013 USDA performance plan, there is no explicit discussion on how ARS’s or NIFA’s research efforts relate to climate change. When asked, USDA officials were uncertain as to why this linkage was not included in the department’s performance plan. GPRA requires that agencies use their performance plans to describe how an agency’s performance goals contribute to the goals laid out in its strategic plan. Without this information in its performance plans, USDA is not providing a plan to the public on how it intends to accomplish its goals. Performance reports. In its performance reports for the years 2011, 2012, and 2013, USDA did not provide any information on whether it was meeting its performance goals under its strategic objective to lead efforts to mitigate and adapt to climate change. USDA provided information on its other measures for its strategic objectives. USDA officials stated that since climate efforts were largely excluded from the performance plan, these efforts were excluded from the performance report as well.
GPRA requires that agencies use their performance reports to provide information on whether they have achieved their performance goals. Without this information, USDA cannot demonstrate whether its efforts have been successful and whether changes need to be made to its programs to address any unmet goals. Performance measures. USDA does not have performance measures in its 2014-2018 strategic plan for some of the adaptation practices, such as no-till farming and the planting of cover crops, that the agency is encouraging farmers to adopt. USDA officials in both ARS and NIFA told us that it is difficult for them to track farm management practices. However, USDA conducts several surveys of farmers, and its Agricultural Resource Management Survey collects information on farm management practices, including the use of no-till farming. Agency officials also said that it has been difficult to develop performance measures for climate change since it was first included as a strategic goal in the 2010-2015 strategic plan. Nonetheless, USDA has developed performance measures for some of its other strategic objectives that track the acreage where certain practices have been implemented on public or private lands. USDA has a wide-ranging set of climate efforts under way, but the performance measures that are part of its strategic plan do not capture the breadth of its efforts. Without measures to track progress on more of its climate efforts, USDA will not be able to fully assess its progress in meeting its climate change strategic goal and provide information to Congress and the public on its progress. See appendix III for more information on the strategic goal, objectives, and associated performance measures that are part of USDA’s strategic plans for fiscal years 2010-2015 and fiscal years 2014-2018. USDA faces challenges in encouraging U.S. farmers to take measures to mitigate and adapt to climate change.
To address some of these challenges, USDA is, among other things, developing tools that summarize climate information and communicate research findings to farmers. However, USDA does not provide farmers with information on the costs and returns of taking climate change actions. As mentioned earlier, USDA is developing and delivering technical assistance on climate change to farmers, and we found that the agency faces challenges in these efforts. USDA officials we spoke with, as well as researchers and representatives of environmental groups, said that climate change is a very complex topic, and it is difficult to turn the large amount of often technical research into readily understandable information. Our 2009 report on climate change found that turning climate data into information useful for making climate adaptation decisions was a challenge facing federal, state, and local decision makers. USDA officials acknowledge a need to develop climate projections at geographic scales and time frames relevant to farmers. To accomplish this goal, USDA officials said they must rely on other agencies and researchers, such as NOAA, because USDA does not have the technical capability to do this work. NOAA officials told us that providing climate predictions covering a 5- to 10-year period is one of the largest unmet needs in climate forecasting. Currently, it is difficult for climate modelers to predict when certain key climate features that can have major influences on the climate on these time frames, such as El Niño/Southern Oscillation events, will occur. However, to provide additional help to farmers, NOAA is seeking to extend its climate forecasts beyond one year. NOAA officials also told us that climate models will always involve a degree of uncertainty and that local cooperative extension staff need to be trained on how to present information about this uncertainty in the models.
Another challenge USDA faces is the incentive structure that farmers consider when making decisions. Officials at USDA, researchers, and farmers we spoke with told us that farmers need incentives to take climate adaptation or mitigation actions. For example, planting cover crops has a financial cost for the farmer in the short term in the form of seed and planting costs, but the benefits of healthier soil may not be realized for a few years. Similarly, an acre of land that is maintained in perennial vegetative cover for conservation purposes is not available to a farmer for revenue-generating crops, such as corn or soybeans. USDA’s existing conservation programs provide payments to farmers who take such conservation actions, but these payments are generally less than the revenue the farmer would receive from growing and selling crops on the land. Total acreage placed in the Conservation Reserve Program declined between 2007 and 2013 from about 37 million acres to about 27 million acres, and one USDA official told us this was due in part to higher market prices for corn. Also, farmers who have marginal farmland have been able to convert it from grassland—which provides conservation benefits and sequesters carbon—to revenue-generating crops and qualify for crop insurance coverage offered by USDA, thus lowering their financial risk. As some Iowa farmers told us, farmers generally make decisions based on short-term economic incentives because the farming industry is focused on producing commodities at the lowest price. For example, if climate change measurably decreases yields for farmers in the future, farmers may have an incentive to change their practices. There was also general agreement among university researchers that climate change can be a polarizing topic in the agriculture community.
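The short-term cost versus delayed benefit trade-off described above (cover-crop seed and planting costs now, soil-health gains only after a few years) can be made concrete with a simple payback calculation. All per-acre dollar figures below are hypothetical placeholders, not USDA estimates.

```python
# Minimal sketch of the cover-crop incentive arithmetic described above.
# Every dollar amount here is an assumed, illustrative value.

def payback_year(annual_cost, annual_benefit, benefit_start_year, horizon=10):
    """First year in which cumulative benefits cover cumulative costs, or None."""
    cumulative = 0.0
    for year in range(1, horizon + 1):
        cumulative -= annual_cost           # seed and planting cost every year
        if year >= benefit_start_year:      # soil-health gains lag by a few years
            cumulative += annual_benefit
        if cumulative >= 0:
            return year
    return None

# Hypothetical per-acre numbers: $35/acre annual cost, $60/acre yield benefit
# beginning in year 3.
print(payback_year(35.0, 60.0, benefit_start_year=3))
# Prints: 5
```

With these assumed numbers the practice does not pay for itself until year 5, which illustrates why farmers focused on short-term returns may forgo such practices without program payments that bridge the gap.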
According to a USDA-funded survey of almost 4,800 farmers in the Corn Belt, 66 percent believed that climate change was occurring and, of these respondents, 41 percent believed that humans were at least partly responsible. The authors of this research found that if farmers do not believe climate change is happening or poses a threat, they may be less likely to take adaptation or mitigation actions. USDA has taken several steps to address the challenges it faces. For example, it is working to improve the information that can be provided to farmers from climate models. USDA is also taking steps to deliver information to farmers that is more accessible and easier to understand and apply to their operations. According to USDA officials, the regional climate hubs are intended to provide an avenue to deliver region-specific information to farmers on climate change. In addition, in three of the large CAP grant projects that USDA funded in 2011, farmers have been surveyed or interviewed to learn more about their needs and to better tailor the information provided to them. An official with the Useful to Usable CAP grant project told us that the project has used focus groups to gather information on the types of information that the farm community needs on climate change. Another USDA project maintains a website called AgroClimate, with weather forecast links and other tools, including fact sheets on different farm management practices, such as planting cover crops. University researchers we spoke with said that an understanding of the farm community is critical in communicating with farmers about climate change. USDA reports have highlighted the importance of providing information to farmers on the costs and returns of taking certain actions in response to climate change.
For example, USDA Technical Bulletin 1935 states that there is a need for “risk-weighted” costs and benefits of taking adaptation actions, but “few efforts have been made to develop such comprehensive quantification efforts in the context of climate change.” According to officials at USDA, researchers, and farmers we spoke with, this information is important because farmers weigh the financial costs and returns when making decisions about their farm operations. When we asked USDA officials about the agency’s efforts to develop this information, they highlighted a 2013 report that examined the financial incentives necessary for farmers to adopt certain mitigation practices. For example, the report provides estimated changes in costs, yield, and revenue for farmers changing from conventional tillage to reduced tillage for certain crops, including corn and wheat. However, we found that the report does not provide information on instances where crop yields increased as a result of changes in practices, and we did not find evidence of USDA efforts to make information in this 270-page technical report more accessible to farmers. NRCS officials we spoke with did not seem aware of the report, and the website for NRCS’s soil health campaign provides general information on the benefits of healthy soil but not on the farm-level costs to farmers or the demonstrated impact on crop yields from having healthier soil. According to federal standards for internal control, federal agencies are to record and communicate information to management and others who need it, in a form and within a time frame that enables them to carry out their responsibilities. Also under these standards, in addition to internal communications, management should ensure that there are adequate means of communicating with, and obtaining information from, external stakeholders that may have a significant impact on the agency achieving its goals. 
NRCS officials said they did not have information on the costs and returns of taking certain actions in response to climate change and suggested that farmers might get it from other farmers. They noted that this information is challenging to develop because several variables can affect the costs and returns for a particular farmer. In keeping with Technical Bulletin 1935, USDA has taken some steps to develop estimates of costs and benefits of taking actions in response to climate change, but if this information is not distributed in an accessible format, its usefulness to farmers may be limited. Without information that is readily accessible to farmers on the farm-level economic costs and returns of taking certain actions in response to climate change, farmers may be reluctant to take climate adaptation or mitigation actions. USDA is taking several promising steps to begin to help farmers mitigate and adapt to climate change. In recent years, USDA has increased funding for its climate efforts, particularly in the area of research, and has been working to help develop tools and useful information for farmers such as the recent establishment of regional climate hubs. The importance of climate change at USDA is reflected in the fact that one of the agency’s four strategic goals focuses on climate change. However, the agency does not have associated performance measures that reflect the breadth of USDA efforts in the climate area. In addition, its performance plans and performance reports do not provide adequate information on how the agency planned to accomplish its goals or the status of its efforts. By not using its performance plans to explain how it will accomplish its goals in the area of climate change, USDA has not provided Congress and the public with important information on its efforts. 
Without a more robust performance measurement system, USDA will have difficulty assessing its progress in meeting its strategic goal on climate change and providing information to Congress and the public on the status of its efforts. Farmers carefully weigh the financial costs and returns of taking certain actions. However, USDA has made few efforts to quantify the costs and returns of taking certain actions that could help farmers make both short- and long-term decisions in the face of a changing climate. USDA has taken some steps in this area, but without communicating more accessible information on the economic costs and returns to farmers of taking certain adaptation or mitigation actions on their farms, consistent with federal internal control standards, farmers may be reluctant to take certain actions. To better promote agency accountability, we recommend that the Secretary of Agriculture direct the Climate Change Program Office and the Office of Budget and Program Analysis to take the following three actions:

- Work with relevant USDA agencies to develop performance measures that better reflect the breadth of USDA’s climate change efforts.
- Ensure that the department’s annual performance plans explain how agency actions will lead to the accomplishment of performance goals in the area of climate change.
- Use annual performance reports to provide information on the status of agency efforts toward meeting its performance measures in the area of climate change.

In addition, to provide relevant information to farmers, we recommend that the Secretary of Agriculture direct the Climate Change Program Office to work with relevant USDA agencies to develop and provide readily accessible information to farmers on the farm-level economic costs and returns of taking certain actions in response to climate change. We provided a draft of this report to the Departments of Agriculture and Commerce for review and comment. We also provided a copy to the U.S. 
Global Change Research Program for a technical review. In its written comments, reproduced in appendix V, the Department of Agriculture agreed with our recommendations and said that the report reflects the wide range of actions that the department is taking to address climate change in the agriculture sector. USDA also noted it has begun to address some of these recommendations. Specifically, the department is conducting additional work to provide tools to farmers that assess the costs and impacts of adopting technologies that help to mitigate climate change. The Departments of Agriculture and Commerce and the U.S. Global Change Research Program also provided technical comments, which we have incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Our objectives were to examine (1) U.S. Department of Agriculture’s (USDA) priorities related to climate change and agricultural production and how these align with national priorities; (2) the status of USDA’s climate change efforts; and (3) the challenges, if any, USDA faces in implementing its climate efforts and the steps it has taken to overcome these challenges. 
To describe USDA’s priorities related to climate change and agricultural production and how these align with national priorities, we analyzed USDA documents that describe the agency’s priorities in this area. These documents include USDA’s fiscal year 2010-2015 and fiscal year 2014-2018 Strategic Plans, the agency’s Climate Change Adaptation Plan, its Climate Change Science Plan, and Departmental Regulation 1070-001. To identify national priorities regarding climate change and agricultural production, we analyzed the U.S. Global Change Research Program 10-year strategic research plan, the Interagency Climate Change Task Force 2010 and 2011 progress reports, Executive Orders 13514 and 13653, and the President’s Climate Action Plan. We also analyzed budget data that USDA reports to the Office of Management and Budget (OMB) for its annual report to Congress on climate change expenditures. We also interviewed officials responsible for climate change policy from USDA’s Climate Change Program Office and other USDA agencies. To determine the status of USDA’s climate change efforts, we analyzed documents that included annual budget data, progress reports, and annual performance reports. These included the budget data that USDA reports to OMB. We focused on USDA’s key climate change efforts, which we identified through discussions with USDA officials and by examining budget information, which enabled us to determine where USDA was devoting large amounts of funding. We also reviewed USDA’s strategic plans for fiscal years 2010-2015 and fiscal years 2014-2018 and the performance plans and performance reports for the years 2010, 2011, 2012, and 2013. We also examined the requirements under the Government Performance and Results Act (GPRA) of 1993 and the GPRA Modernization Act of 2010 for strategic plans, performance plans, and performance reports. In addition, we reviewed progress reports that had been prepared by various USDA agencies. 
For the six large Coordinated Agriculture Project grants that had been funded by USDA’s National Institute of Food and Agriculture (NIFA), we interviewed officials that were leading these grants and USDA officials that were responsible for overseeing five of these grants. We also interviewed officials that were leading two standard grants that had been funded by NIFA. To determine the challenges that USDA faces in implementing its climate efforts and steps it has taken to overcome some of these challenges, we reviewed key documents on USDA’s efforts and reviewed our past work on climate change. These USDA documents included USDA’s Technical Bulletin 1935 on climate change and various reports prepared by USDA agencies. Our past work on climate change has identified several potential challenges in implementing climate change efforts at the federal and local level. For all three objectives, we conducted interviews with a range of officials, including USDA officials implementing climate programs, and stakeholders who were knowledgeable about USDA’s efforts and the challenges the agency faces. Specifically, we spoke with officials from 10 USDA agencies and had multiple conversations with officials from the following USDA agencies because of their extensive involvement in climate change work: Agricultural Research Service (ARS), Economic Research Service, Farm Service Agency (FSA), NIFA, Natural Resources Conservation Service (NRCS), and Rural Development. We also spoke multiple times with officials from USDA’s Climate Change Program Office, which is responsible for coordinating USDA’s climate efforts, and USDA’s Office of Budget and Policy Analysis, which is responsible for USDA’s preparation of budget estimates, legislative reports, and regulations, as well as USDA’s strategic planning and reporting efforts. The stakeholders we spoke with included officials from farm groups, environmental groups, and an agribusiness company. 
We also spoke with farmers from Iowa and Kentucky. Finally, we conducted two site visits on this engagement. We visited ARS’s Beltsville Agricultural Research Center in Maryland, where we met with ARS researchers who were examining the impacts that different climate conditions could have on crops. We also toured the facilities where this research is taking place. We conducted a site visit in Iowa, where we spoke with a variety of officials, including USDA officials from ARS, NRCS, FSA, and Rural Development. We also spoke with a variety of officials at Iowa State University, including professors and extension staff. In addition, we met with extension staff in a county office, visited a corn and pig farm, and spoke with farmers who operated this farm. We selected Iowa because it is a large agricultural producer; according to the 2012 National Census of Agriculture, Iowa was the largest producer of corn and soybeans in the United States. We conducted this performance audit from September 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. This appendix presents information on the U.S. Department of Agriculture’s (USDA) budget and programs that are related to climate change. According to USDA officials, USDA does not have a line item in its budget that covers climate change because many of the agency’s climate efforts involve programs across several USDA agencies. However, USDA does compile a “climate change crosscut,” which provides information on the money that various USDA agencies spent on climate change activities. 
For the purposes of this report, we used budget data that USDA reports to the Office of Management and Budget (OMB) for its annual report to Congress on climate expenditures. Table 7 below is a summary of the amount these agencies spent on climate change efforts, according to OMB, along with a summary of USDA programs associated with this spending. In this appendix, table 8 provides information on the objectives and performance measures related to climate change that are part of the U.S. Department of Agriculture’s (USDA) fiscal years (FY) 2010-2015 and 2014-2018 strategic plans. These underlie USDA’s second strategic goal to “ensure our national forests and private working lands are conserved, restored, and made more resilient to climate change, while enhancing our water resources.” Working lands include both farms and livestock operations. Appendix IV: Information on USDA’s Regional Climate Hubs (Corresponds to Fig. 2) In this appendix, table 9 provides additional details on the U.S. Department of Agriculture’s (USDA) regional climate hubs that are part of the rollover information contained in interactive figure 2. In addition to the individual named above, Anne K. Johnson (Assistant Director), Cheryl Arvidson, Thomas Beall, Carol Bray, Kevin Bray, Christine Broderick, Andrew Burton, Frederick K. Childers, Elizabeth Curda, Scott Heacock, Richard P. Johnson, Leah Marshall, Susan Offutt, Dan Royer, and Sarah Veale made key contributions to this report. | In 2012, the United States produced about $395 billion in agricultural commodities, with about half of this revenue from crop sales and half from livestock. According to the Third National Climate Assessment, climate change has the potential to negatively affect agricultural productivity in the United States through warmer temperatures and an increase in weather extremes. In recent years, USDA has taken actions to help U.S. farmers adapt to climate change and reduce greenhouse gas emissions. 
GAO was asked to review USDA's climate change efforts. This review examines (1) USDA's climate change priorities and how these align with national priorities, (2) the status of USDA's climate change efforts, and (3) the challenges USDA faces in implementing its climate efforts and the steps it has taken to overcome these challenges. To conduct this work, GAO analyzed USDA documents and data and interviewed USDA officials and other knowledgeable stakeholders, such as farmers and environmental groups. The U.S. Department of Agriculture's (USDA) climate change priorities for agriculture include, among other things, providing better information to farmers on future climate conditions. These priorities generally align with national priorities set by the Administration, which include promoting actions that reduce greenhouse gas emissions, advancing climate science, developing tools for decision makers, and developing better projections of future climate conditions. USDA is engaged in research efforts aimed at better understanding climate change's impacts on agriculture and providing technical assistance to farmers. Through the use of existing conservation and energy programs, USDA aims to reduce greenhouse gas emissions and sequester (store) carbon so it is not released, or is actively withdrawn, from the atmosphere. Helping to make farmers more resilient to climate change is one of USDA's four strategic goals, but the agency is not using its performance planning and reporting process to provide information on how it intends to accomplish this goal or to assess the status of its efforts in this area. According to the Government Performance and Results Act of 1993, as amended, an agency's performance plan is supposed to explain how the agency will accomplish its performance goals, and its performance reports are supposed to review the extent to which those goals have been met. 
However, USDA performance plans for recent years have not provided a link between the agency's climate efforts and performance goals, and its recent performance reports have not provided information on whether the agency was meeting its performance measures related to climate change. In addition, USDA performance measures do not capture the breadth of the agency's climate efforts. Agency officials told GAO that developing measures for the strategic goal on climate change was difficult. However, USDA has developed measures for other areas, such as conservation, where similar challenges existed. Without developing performance plans and reports that better reflect USDA's climate change efforts, USDA will have difficulty fully assessing its progress in meeting its climate change strategic goal and providing information on its progress to Congress and the public. USDA faces challenges in encouraging farmers to take measures to adapt to climate change and reduce emissions. For example, USDA faces the challenge of turning the large amount of often technical climate research into readily understandable information. To address this challenge USDA is, among other things, developing tools that summarize climate information and communicate research findings to farmers in a more accessible format. USDA also faces a challenge related to the incentive structure that farmers consider when making decisions for their farms. Farmers weigh the financial costs and returns of taking certain actions, but USDA has not provided much information to farmers on the economic costs and returns of taking certain adaptation or emissions reduction actions, such as changing the extent to which they plow their fields. Under federal internal control standards, agencies are to ensure there are adequate means of communicating with external stakeholders when it may have a significant impact on the agency achieving its goals. 
Without information that is readily accessible to farmers on the farm-level economic costs and returns of taking certain actions in response to climate change, farmers may be reluctant to take these measures. GAO recommends that USDA develop performance measures that better reflect the breadth of USDA climate change efforts and use its performance plans and reports to provide information on how the agency plans to achieve its goals and the status of its efforts. GAO also recommends that USDA develop and provide information to farmers on the economic costs and returns of taking certain actions in response to climate change. USDA concurred with these recommendations. |
In 1994, by congressional direction, DOD developed a space launch modernization plan (known as the Moorman study) that led to the EELV program. In 1995, the Air Force entered a low-cost concept validation phase with four competing contractors. In 1996, the Air Force proceeded into the current pre-engineering and manufacturing development phase with two competing contractors—McDonnell Douglas Aerospace, which later became part of The Boeing Company, and Lockheed Martin Astronautics. In June 1998, the Air Force plans to proceed into the final development phase with the primary purpose of fabricating launch vehicles and activating the launch sites. DOD’s initial acquisition strategy was to select one contractor for final development and production. For development, the plan was to issue a cost-plus-award-fee contract, whereby the government would have paid all of the approximate $1.5 billion in development costs. However, in November 1997, DOD approved a revised acquisition approach designed to maintain the ongoing competition between the two contractors for final development and production. The revised approach was based on forecasts that growth in the commercial space launch services market would support more than one U.S. contractor. Also, the approach anticipates that DOD and the contractors would share in the cost of developing the EELV system, which the Air Force defines as the launch vehicles, infrastructure, support systems, and interfaces. DOD’s cost share is planned to be fixed at an amount not to exceed $1 billion—$500 million for each contractor. The contractors are expected to contribute their own funds, as necessary, to complete EELV development. To provide the contractors sufficient flexibility in financing their share of development costs, the Air Force is proposing to use an acquisition instrument that is referred to as an “other transaction.” Such instruments, which are authorized under 10 U.S.C. 
2371, are agreements other than contracts, cooperative agreements, or grants. Consequently, other transaction instruments are not subject to federal procurement laws or the regulations that specifically govern contracts, cooperative agreements, or grants. They (1) permit a deregulation of the government research and development system and allow rules and regulations to be applied by agreement on a selective basis if deemed to add value and (2) allow significant flexibility in negotiating terms and conditions with recipients. They are, however, subject to certain laws that have general applicability, such as civil rights and trade secret statutes. With the signing of two other transaction instruments (one for each development contractor), the Air Force intends to concurrently (1) award one or two firm-fixed-price initial launch service contracts for 30 or more satellite launches that are to occur during fiscal years 2002 through 2005 and (2) execute leasing, licensing, and base support agreements for launch site and facility use. According to the Air Force, this approach is intended to establish an interdependency among the instruments, contracts, and agreements to better ensure that a full family of vehicles—medium-lift, intermediate-lift, and heavy-lift—is developed. The Air Force believes that the contractors would not develop this family of vehicles if the contractors were not concurrently obligated to provide a full range of launch services. DOD’s goal of reducing the cost of launching satellites into space is measured in terms of recurring production and launch costs. However, fluctuations in the contents of the EELV mission model make the results of analyses, based on the model, uncertain. More importantly, the methodology itself is inadequate for measuring potential program savings because it does not include the investment costs that DOD plans to incur in EELV system development to achieve cost savings. 
A net present value (NPV) analysis, which would use total program costs, is preferred. The Air Force’s methodology for measuring recurring cost reduction is described in the following way: EELV recurring costs, meaning production and launch costs, should be a minimum of 25 percent less, with an objective of 50 percent less, than the recurring costs of using existing expendable launch vehicles—the Delta, Atlas, and Titan class systems. To measure this goal, estimated recurring costs for the EELV system, which are provided by the competing contractors, are subtracted from the equivalent recurring costs for existing vehicles, which are known as the launch cost baseline. These costs are based on projected government launch requirements for fiscal years 2002 through 2020. The launches from 2011 through 2020 are extrapolations, and therefore less certain; they are projected solely for EELV program purposes. To illustrate this methodology, we estimated the launch cost baseline for existing launch vehicles to be about $15.4 billion (in fiscal year 1995 dollars) by using a total of 164 launches through fiscal year 2020. If the minimum 25-percent cost reduction goal were achieved, the estimated savings would be about $3.9 billion through fiscal year 2020; if the objective 50-percent cost reduction goal were achieved, the estimated savings would be about $7.7 billion for the same period. Since program inception in 1995, the total number, type, and timing of launches contained in the Air Force’s EELV mission model have fluctuated considerably, making a cost reduction estimate, based on the model, uncertain. The major reasons for the fluctuations were (1) assignment of satellites to the wrong type of launch vehicle, (2) inclusion of unverified launch requirements, and (3) reductions in the number of heavy-lift launches because of satellite downsizing. The total number of launches has varied from 169 to 204, with the current Air Force estimate at 183. 
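The savings arithmetic above can be sketched in a few lines of code. The $15.4 billion launch cost baseline and the 25- and 50-percent goals come from the text; the function and variable names are illustrative only.

```python
# Sketch of the recurring-cost-reduction arithmetic (fiscal year 1995 dollars).
# The $15.4 billion baseline for 164 launches through FY 2020 is from the text;
# everything else is illustrative.

BASELINE_BILLIONS = 15.4  # estimated cost of 164 launches on existing vehicles

def projected_savings(baseline, reduction_goal):
    """Savings if EELV recurring costs fall `reduction_goal` below the baseline."""
    return baseline * reduction_goal

minimum_goal = projected_savings(BASELINE_BILLIONS, 0.25)    # about $3.9 billion
objective_goal = projected_savings(BASELINE_BILLIONS, 0.50)  # $7.7 billion

print(f"25% goal: about ${minimum_goal:.1f} billion through FY 2020")
print(f"50% goal: about ${objective_goal:.1f} billion through FY 2020")
```

Note that this simple subtraction is exactly why the methodology is sensitive to the mission model: any change in the number or mix of launches shifts the baseline and, with it, the apparent savings.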
The most significant fluctuations occurred for fiscal years 2011 through 2020. A credible EELV mission model is fundamental to assessing the program’s principal stated purpose—reducing recurring production and launch costs. Because the mission model is also provided to the development contractors to estimate EELV costs, its accuracy is essential for an assessment of initial launch service costs. In commenting on a draft of this report, DOD stated that the Air Force is in the process of developing a new launch cost baseline, built around the most current EELV mission model, in preparation for the milestone II review. On the basis of 164 launches, we estimated that the reduction in recurring costs through 2020 would be about $5.7 billion (in fiscal year 1995 dollars), or 37 percent. Although our estimate exceeds the minimum EELV program goal of 25 percent, there is still uncertainty regarding this estimate because of persistently questionable launch requirements. Fluctuations in the number of launches can also have a significant effect on the launch cost baseline of existing vehicles. Heavy-lift vehicle costs are particularly sensitive to quantity changes because the cost to launch a Titan IV can decrease substantially as the number of launches decreases, depending on when the launches occur. Although such a cost decrease initially appears counter-intuitive, it is because of the high cost associated with operating and maintaining Titan IV launch capabilities. For example, an Air Force analysis shows that the nine Titan IV launches currently in the mission model would cost about $473 million each, or $4.3 billion, but seven launches would cost about $395 million each, or $2.8 billion. Thus, two Titan IV launches could change the launch cost baseline by $1.5 billion. The overall effect would be to lower the savings from 37 percent to 32 percent. Given this degree of cost sensitivity, a credible mission model is essential. 
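The Titan IV sensitivity described above can be verified with a short calculation. The per-launch figures are those quoted from the Air Force analysis in the text; the function itself is an illustrative sketch.

```python
# Sensitivity of the launch cost baseline to Titan IV quantities, using the
# per-launch figures from the Air Force analysis cited in the text.

def fleet_cost(launches, unit_cost_millions):
    """Total cost in billions of dollars for a given number of launches."""
    return launches * unit_cost_millions / 1000.0

nine_launches = fleet_cost(9, 473)   # about $4.3 billion
seven_launches = fleet_cost(7, 395)  # about $2.8 billion

# Dropping two launches changes the baseline by roughly $1.5 billion, because
# fixed operating and maintenance costs are spread over fewer launches.
print(f"9 launches:  ${nine_launches:.1f} billion")
print(f"7 launches:  ${seven_launches:.1f} billion")
print(f"difference:  ${nine_launches - seven_launches:.1f} billion")
```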
A detailed listing of the composition and fluctuations in the EELV mission model is shown in appendix I. Although measuring a reduction in recurring costs is one method of assessing potential program savings, this method is inadequate because it does not include nonrecurring investment costs that DOD plans to incur to achieve cost savings. The standard criterion for deciding whether a government program can be justified on economic principles is NPV, which would include both recurring and nonrecurring costs, as well as the time value of money. Programs with positive NPVs are generally preferred whereas programs with negative NPVs should generally be avoided. Our initial NPV analysis showed that DOD would achieve a positive return on its investment in the EELV program. However, our analysis does not include all government costs because the total development costs are unknown. DOD does not know the total costs because the effect of reimbursing the competing contractors for their independent research and development (IR&D) costs, as a result of using an other transaction instrument, has not been determined. Considering that each contractor could invest between $800 million and $1.3 billion in an EELV system, a portion of which could be reimbursed by the government, the potential program savings could be substantially lower. We performed an NPV analysis based on 164 launches. We then determined the program’s net savings through 2020 using DOD’s total planned development costs of $1.4 billion, which includes $1 billion in incremental costs starting in June 1998. We also determined, separately, the net savings of DOD’s planned $1 billion incremental investment—$500 million per contractor—to determine whether it was economically prudent to continue with the program. We repeated these two approaches, based on launch projections through 2010, to eliminate the period of greater launch uncertainty that extends from 2011 through 2020. 
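The NPV and investment-payback computations described above can be sketched as follows. The discount rate and the yearly cash-flow stream below are illustrative assumptions, not the report's actual cost and benefit figures.

```python
# Minimal sketch of a net-present-value (NPV) and payback-year computation.
# The 5 percent discount rate and the cash flows are placeholder assumptions.

def npv(rate, cash_flows):
    """Discount a stream of yearly net cash flows (year 0 first) to present value."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

def payback_year(start_year, cash_flows):
    """First calendar year in which cumulative (undiscounted) benefits cover
    cumulative costs; None if the investment is never paid back."""
    running = 0.0
    for offset, cf in enumerate(cash_flows):
        running += cf
        if running >= 0.0:
            return start_year + offset
    return None

# Illustrative stream, in millions: development outlays up front, launch
# savings in later years.
flows = [-500.0, -500.0, -400.0, 300.0, 450.0, 500.0, 550.0, 600.0]
print(f"NPV at a 5% discount rate: {npv(0.05, flows):.0f} million")
print(f"Payback year: {payback_year(1998, flows)}")
```

A positive NPV is the criterion the report applies: the program is economically justified only if discounted benefits exceed discounted costs, which is why omitting nonrecurring investment costs overstates savings.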
Both the Air Force Space Command’s national mission model and the Department of Transportation’s Commercial Space Transportation Advisory Committee’s commercial mission model only make launch projections through 2010 because of uncertainties in making longer range forecasts. In addition, a shorter time period would be consistent with what an Air Force official stated was the contractors’ expectations for recouping their investments. Table 1.1 shows that the NPV, using total planned development costs, would be $1.8 billion through 2020. Based only on the planned incremental costs starting in June 1998, the NPV would be $2.3 billion through 2020. The analysis of incremental costs results in a larger NPV because, by definition, prior year costs are not included in the cost calculation, but the benefits remain the same. The year in which costs equal benefits (referred to as investment payback) is 2006 and 2004 for total and incremental development costs, respectively. Also, table 1.1 shows the NPV based on a shorter time period. If total planned development costs were considered, the NPV would be $693 million through 2010. If incremental costs only were considered, the NPV would be $984 million through 2010. The investment payback for both calculations also would be 2006 and 2004, respectively. Regarding DOD’s $1 billion incremental investment cost, Air Force officials informed us that they determined this amount in two ways. First, they estimated that government launches will represent about one-third of the U.S. commercial launch market and that the investment amount should be proportionate to this market. Therefore, about $500 million per contractor, or one-third of the approximately $1.5 billion estimated per contractor to develop its version of the EELV system, was considered reasonable. Second, the officials stated that the contractors advised the Air Force that about $500 million each was needed to ensure a competitive corporate rate of return on investment. 
The officials stated that without the DOD investment, the contractors would not develop an EELV system to meet the full range of DOD’s launch requirements or within the planned time period to transition from existing vehicles to an EELV. Using NPV analysis, the net program benefits are positive when these planned incremental costs are considered. Such an analysis for an EELV system should be positive, given that DOD’s primary program objective is to actively reduce costs and not simply break even on its investment. However, DOD does not know what its total costs will be because the effect of reimbursing the competing contractors for their IR&D costs, as discussed in the following section, has not been determined. Until the total costs are determined, the net program savings will be unknown. As a matter of policy, DOD recognizes contractor costs incurred for IR&D projects as a necessary cost of doing business and considers the projects as a valuable contributor to DOD’s overall research and development effort. Generally, when a contractor charges an allowable cost to IR&D, the cost is accumulated as overhead and later applied as an overhead rate to government contracts. According to an Air Force document, IR&D costs could include, under Federal Acquisition Regulation 31.205-18(e), the costs contributed by the contractors for work under the EELV other transactions instrument. “. . . the committee intended that the sunk cost of prior research efforts not count as cost-share on the part of the private sector firms. Only the additional resources provided by the private sector needed to carry out the specific project should be counted.” The amount of IR&D costs associated with the EELV program has yet to be resolved within DOD. According to a DOD representative, the amount could be quite high, considering that each contractor could invest between $800 million and $1.3 billion. 
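The overhead mechanism described above, where IR&D costs accumulate in overhead and flow back to the government through the overhead rate applied to its contracts, can be illustrated with a simple sketch. The dollar amount and the one-third government share of the allocation base are assumptions for illustration only, not figures from the report.

```python
# Illustrative sketch of how IR&D costs charged to overhead are partially
# recovered from the government. The $800 million figure and the one-third
# government share of the contractor's overhead allocation base are assumed.

def government_reimbursement(ird_cost_millions, gov_share_of_base):
    """IR&D placed in overhead is recovered in proportion to the government's
    share of the contractor's overhead allocation base."""
    return ird_cost_millions * gov_share_of_base

# If a contractor charged $800 million of EELV work to IR&D overhead and
# government contracts carried one-third of its allocation base:
reimbursed = government_reimbursement(800.0, 1.0 / 3.0)
print(f"Potential government reimbursement: about ${reimbursed:.0f} million")
```

Under such an arrangement, every dollar the government reimburses through the overhead rate reduces the contractor's net investment and the government's net savings by the same dollar, which is why the unresolved IR&D amount matters to the NPV estimate.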
To the extent that IR&D costs would be reimbursed by the government, the result would be to decrease the EELV contractors’ investment and reduce the government’s savings. An Air Force document indicates that it is important to determine the IR&D amount in order to reduce the risk of a dispute regarding the allowance of such costs. The usual means of doing this under a contract is with an advance agreement. Determining the amount also would assist DOD in performing an NPV analysis to estimate EELV program savings. The use of a relatively new acquisition method, called other transactions, will challenge DOD in determining how best to protect the government’s interests. Also, risks are inherent in the program because of (1) DOD’s plan to limit its investment and the contractors’ resulting unwillingness to guarantee a system to meet the government’s launch requirements and (2) a chance that certain launch facilities may not be available as currently scheduled. However, to the extent that the risks can be mitigated, the primary program benefit is expected to be reduced costs to the government. Initially, under DOD’s revised acquisition approach, the Air Force planned to award firm-fixed-price contracts to both EELV contractors for the development effort. However, after the Air Force released a draft request for proposal in late November 1997, EELV program officials stated that both contractors were unwilling to accept firm-fixed-price contracts. According to these officials, the contractors were unwilling because of the resulting risk to corporate financing: a long-term contractual liability would require them to commit their share of EELV development costs in advance. As a result, the Air Force is proposing to use other transaction instruments, instead of standard government contracts, to develop the EELV system.
The specific other transactions authority cited by the Air Force is section 845 of the National Defense Authorization Act for Fiscal Year 1994 (P.L. 103-160, Nov. 30, 1993), as modified by section 804 of the National Defense Authorization Act for Fiscal Year 1997 (P.L. 104-201, Sept. 23, 1996). These sections provide DOD with authority, under 10 U.S.C. 2371, to carry out prototype projects that are directly relevant to weapons or weapon systems proposed to be acquired or developed by DOD. The authority, however, is very broad because it includes not only prototype systems but also lesser projects such as subsystems, components, and technologies. Also, the authority is temporary, expiring on September 30, 1999. In December 1996, the Under Secretary of Defense for Acquisition and Technology notified the secretaries of the military departments and the directors of defense agencies about the use of other transaction instruments for prototype projects. He mentioned the flexibility associated with using such instruments as alternatives to contracts, listing 19 statutes that apply to contracts, but which are not necessarily applicable to other transactions. He emphasized that the use of such instruments should incorporate good business sense and appropriate safeguards to protect the government’s interest, including assurances that the cost to the government is reasonable, the schedule and other requirements are enforceable, and the payment arrangements promote on-time performance. He also emphasized that DOD officials who are delegated the authority to use such instruments should have the level of responsibility, business acumen, and judgment to enable them to operate in this relatively unstructured environment. In a March 1997 report, the DOD Inspector General’s office identified problem areas in awarding and administering other transactions. 
The office reviewed 28 randomly selected other transactions valued at $1.2 billion that were issued by the Defense Advanced Research Projects Agency—4 were section 845/804 prototype projects and 24 were for research. In general, the report stated that no guidance existed for (1) evaluating proposed contributions, (2) monitoring actual research costs, or (3) including an interest provision in other transaction instruments. In March 1998, the Inspector General testified about a continuing concern regarding the lack of controls over the other transaction process because normal rules and procedures generally do not apply. The Inspector General emphasized that although 10 U.S.C. 2371 requires the Secretary of Defense to issue regulations on other transactions, none have been published. On the basis of the 1997 report, the Inspector General stated that there is a need to (1) ensure that cost-sharing arrangements are honored, (2) monitor the actual cost of work against the funds paid, (3) place funds advanced to recipients into an interest-bearing account until used, and (4) standardize the audit clause. She also testified that a more recent review of 78 other transactions had found problems similar to those in the 1997 report. With regard to an audit clause, the Under Secretary of Defense for Acquisition and Technology identified 10 U.S.C. 2313 in his December 1996 memorandum as a statute inapplicable to other transactions. This statute provides audit authority to a defense agency awarding certain types of contracts and to the Comptroller General for defense contracts awarded other than through sealed bid procedures. Safeguards, such as government audit authority, that are common to government contracting would not be available under other transaction instruments unless such authority was negotiated as part of the instrument.
An official of the Inspector General’s Office of General Counsel emphasized the importance of the government being able to verify and audit certain aspects of other transactions. He stated that a prudent business practice would provide for audits to verify contribution valuation, cost share, performance milestones, and final costs. In commenting on our draft report, DOD stated that because (1) the government is providing funding to private contractors to develop a commercial item and (2) the government’s funding is significantly less than the contractors’ funding, the contractors do not intend to provide, and the government does not expect to get, visibility into corporate investment and financing. DOD stated that this unique situation is not reasonably subject to audit requirements that generally apply to contracts. DOD, instead, emphasized the importance of government insight into the contractors’ development efforts, stating that a methodology will be established to audit the accomplishment of milestones prior to disbursing funds. The amount of government funds planned to be used to develop the EELV system through other transaction instruments raises a question of materiality. There are indications that most of DOD’s other transactions for prototype projects, historically, have been relatively small in dollar value. For example, the Inspector General testified that for fiscal years 1990 through 1997, she believed that 59 other transaction agreements for prototype projects were valued at $837 million. Although no cost-sharing breakout between the government and the recipient was provided, the average value per agreement was about $14 million. In a DOD report on cooperative agreements and other transactions entered into during fiscal year 1997 that was submitted to the congressional defense authorizing committees, we noted that of 50 other transactions for prototype projects, the government’s contribution on the largest one was $60 million.
These data contrast sharply with DOD’s intentions to negotiate two EELV other transaction instruments with a government contribution of $500 million each. The significance of these proposed amounts and the lack of DOD regulations for other transactions not only increase the fiduciary responsibility of DOD officials who are authorized to negotiate such instruments but also may necessitate that some degree of government audit authority be established. According to Air Force documents, the two contractors are not willing to guarantee system performance under a firm-fixed-price contract or an other transaction instrument for EELV development. This unwillingness is because DOD’s financial risk is to be capped at $500 million per contractor, while the contractors’ financial risk would be an open-ended commitment. As a result, the contractors would only agree to provide a “best effort” in developing the EELV system, meaning that they would not guarantee a launch vehicle capability to meet the government’s requirements. One DOD representative indicated a possible inconsistency between such a system development agreement and the expectation that the contractors would subsequently deliver fully functional launch services. Such an inconsistency could create a risk to the government of not satisfying its launch requirements. However, the Air Force is relying on the contractors being motivated by a compelling financial interest in an expected lucrative international commercial launch services market. Also, the Air Force intends to negotiate performance-based milestones that represent significant activities under the development effort and to pay the contractors based on completing each milestone. In the case of nonperformance, the Air Force should withhold payment because no payment would be earned. 
In our June 1997 report on the EELV program, we identified three factors that could create a risk in achieving a smooth launch facility transition at the Cape Canaveral and Vandenberg launch ranges in Florida and California, respectively. They were (1) conflicts associated with existing facilities that the contractors expected to use or that would be affected by an EELV system, (2) completion of environmental regulatory requirements before funds can be committed to engineering and manufacturing development, and (3) the amount of time needed for facility modification and new construction. We did not reassess these factors for this report; however, current Air Force planning documentation identifies meeting launch site facility preparation schedules as the primary program risk. The reason is that construction must begin shortly after the milestone II decision in June 1998 to support the first EELV launch in fiscal year 2002. Other Air Force planning documents show the continued use of certain launch facilities for several months after they are scheduled to undergo site preparations for EELV. In commenting on our draft report, DOD cited a Titan IV launch complex as an example. We also reported on vehicle propulsion, systems integration, and software as technical risk factors that could adversely affect program cost and schedule goals. Current Air Force documentation also identifies these three factors as risks common to both contractors and indicates that mitigation efforts are underway. The primary benefits to the EELV program are expected to culminate in lower costs, whether they are measured in terms of recurring production and launch costs or NPV. Before revising its acquisition approach, DOD was planning on a natural synergy between the federal government and the commercial space industry because of a common requirement for space launch. 
In our June 1997 EELV report, we discussed DOD’s interest in seeing the EELV used for commercial purposes in order to expand the customer base and help lower costs. At that time, DOD was planning to pay for all development costs—about $1.5 billion—but the contractors indicated a willingness to invest in EELV development. We recommended that the Secretary of Defense devise a cost-sharing mechanism for EELV development to help reduce the government’s investment, particularly in view of the expected compensating benefits to the winning contractor to enhance its competitive position in the international commercial launch services market. In July 1997, the House Committee on Appropriations noted that while partners share benefits, they also share costs, and it suggested that the Air Force aggressively pursue commercial cost sharing. In August 1997, DOD responded to our report by agreeing with the recommendation and stating that the cost-sharing issue would be reviewed as the acquisition strategy was developed over the next 12 months. In September 1997, the Conference Committee on the fiscal year 1998 DOD appropriations bill suggested that the Air Force require a successful bidder to share in the EELV development cost. In November 1997, when DOD approved the Air Force’s proposal to revise the acquisition strategy, contractor cost sharing was one of the requirements. With the Air Force’s proposal for the government’s share to not exceed $1 billion for the two contractors, about $500 million in DOD development costs were expected to be avoided, based on the original $1.5 billion estimate. However, this cost avoidance will be reduced by the need to acquire two additional launches with procurement funds under the initial launch services contracts. The Air Force had originally planned to acquire these two launches for test purposes using development funds. 
Thus, the net cost avoidance is expected to be about $295 million, with the remaining $205 million to be shifted to a procurement account. In our March 1996 report on DOD research by nontraditional means, we discussed the importance of leveraging the private sector’s financial investment by using other transactions and cooperative agreements. In doing so, DOD can first stretch its research and development funds by having commercial firms contribute to the cost of developing technologies with both military and commercial applications. Second, cost sharing is appropriate and a matter of fairness when commercial firms expect to benefit financially from sales of the technology. Third, a cost-sharing arrangement demonstrates a commitment to the project, enabling less rigid government oversight requirements. These three elements appear to exist in the case of the revised EELV acquisition approach. Air Force officials emphasized that more recent information regarding the projected growth in the commercial launch services market, primarily based on the expected growth in commercial communication satellites, was a key factor in revising the EELV acquisition approach. The recent projection contrasts sharply with DOD’s 1994 space launch modernization plan, in which the commercial market was not considered nearly as promising. As a result, the Air Force concluded that this growing market was sufficient to support two EELV contractors, instead of one. Two contractors would ensure more effective competition for future government launch requirements and would result in a change from cost-based contracting to price-based contracting, using the commercial market for launch services.
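The development cost-avoidance figures discussed above reduce to simple arithmetic and can be checked directly using the amounts given in the text:

```python
# All figures in millions of dollars, as given in the report.
original_estimate = 1500        # DOD's original plan to fund all EELV development
revised_government_cap = 1000   # $500 million for each of two contractors
gross_avoidance = original_estimate - revised_government_cap

shifted_to_procurement = 205    # two test launches rebought with procurement funds
net_avoidance = gross_avoidance - shifted_to_procurement
```

The result confirms the report’s figures: $500 million in gross avoidance against the original $1.5 billion estimate, reduced to a net of about $295 million once the two test launches are repurchased under the launch services contracts.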
In its November 1994 implementation plan for national space transportation policy, DOD envisioned that the EELV system would (1) maximize common systems and components to reduce procurement costs and enhance production rates and (2) decrease the number of launch complexes, launch crews, and support requirements to reduce operations costs. Although the gains envisioned may not be as large because two contractors are to be supported, the Air Force is still expecting standardization—launch pads configured for all EELV sizes (medium-lift, intermediate-lift, and heavy-lift) and standard payload-to-vehicle interfaces—that should help reduce overall costs and achieve more efficient launch operations than with existing vehicles. In addition, the availability of two launch vehicle manufacturers that use standard payload interfaces would better ensure that government satellites are launched if one contractor’s fleet of vehicles were grounded. DOD’s revised EELV acquisition approach represents a significant departure from the standard government procurement approach. The revision was brought about primarily because commercial interests are expected to dominate the worldwide space launch service market. When making its investment decision in the EELV system, DOD should apply a market-oriented approach, using NPV analysis, to ensure that expected savings are suitable, including consideration for unforeseen future costs. This approach would help protect the government’s interests and be consistent with the EELV program goal of reducing the cost of launching satellites into space. The means by which DOD intends to negotiate an agreement with the competing contractors—other transaction instruments—calls for specific guidance to govern the EELV development effort. Such guidance is particularly important considering the general lack of DOD regulations on the use of such instruments. 
It is also important considering the high-dollar EELV development program that is to be executed in what is characterized as a relatively unstructured environment. Assuming that the challenge in using other transaction instruments can be met and program risks can be overcome, the primary benefits associated with the EELV system should be reduced costs to the government. Reduced costs would include lower (1) short-term nonrecurring costs by forming a cost-sharing partnership with space industry contractors to develop a product that has mutual benefits for the government and commercial space launch sectors and (2) long-term recurring costs by designing a family of common launch vehicles, standardizing launch facilities and payload interfaces, and establishing price-based competition between two contractors for future launch services. To protect the government’s interest, and to be consistent with entering a business partnership with launch industry contractors for EELV development, we recommend that the Secretary of Defense take steps to ensure that an NPV analysis of the program is performed before making a milestone II decision. The analysis should include (1) DOD’s total planned incremental investment costs for development, (2) the most current EELV costs from the contractors’ proposals and DOD’s estimate for launch services, and (3) a time period for which launch requirements can be verified and reasonably forecasted. The Secretary should (1) establish criteria for judging the results of the analysis that would provide a suitable margin for discounted savings and unforeseen future costs and (2) determine the amount of IR&D costs that need to be factored into the analysis. If the results of the NPV analysis do not meet the criteria, we recommend that the Secretary review the program to either (1) reduce the amount of the government’s planned incremental investment or (2) rejustify the program on a basis other than cost reduction. 
Because DOD has not prescribed regulations for other transactions, as required under 10 U.S.C. 2371(g), we recommend that the Secretary review the Air Force’s planned use of other transaction instruments for EELV development to ensure that the government’s interest is protected. Consideration should be given to (1) the criteria expressed by the former Under Secretary of Defense for Acquisition and Technology and (2) the DOD Inspector General’s concerns regarding the other transactions process, including some degree of government audit authority. DOD agreed with our recommendation to perform an NPV analysis. DOD stated that such an analysis (1) was a more appropriate affordability measure for determining EELV program viability than the financial analysis performed to date and (2) would be presented during the milestone II decision process. DOD did not specify how the analysis would be used to support the decision. Our intent was to emphasize the importance of using such an analysis as a rigorous means of measuring economic benefits to the government, considering the unique business arrangement DOD is planning with launch industry contractors. DOD also agreed with our recommendation concerning protection of the government’s interest in the use of other transaction instruments for EELV development. DOD stated that adequate visibility into the contractors’ progress would be obtained by a clause in the development agreements to provide insight into technical and schedule performance—for example, to verify the accomplishment of milestones prior to payment.
Regarding the issue of government audit authority, or oversight, DOD differentiated between (1) other transactions for research projects, which have a statutory requirement for cost sharing by the recipients to the extent the Secretary of Defense determines practicable, and (2) other transactions for prototype projects, which have no such statutory requirement, leaving the determination of a fair and reasonable amount of government development funding for the EELV program up to the contracting officer. Collectively, these statements imply that some degree of government audit authority may not be needed for the EELV program. Given that such matters are negotiable, our intent was to stress the importance of the Secretary of Defense giving due consideration to some degree of government audit authority because of the (1) significant amount of government development funds planned to be used for EELV and (2) lack of DOD regulations on the use of other transactions for either prototype projects or research. DOD’s comments on a draft of this report are reprinted in their entirety in appendix II. DOD also provided clarifying comments, which we have incorporated as appropriate. To evaluate the Air Force’s plans and progress in developing the EELV system, we examined acquisition planning documents, budget information, cost assessment methodologies, launch requirements, and information related to other transaction authority and guidelines. We performed our work primarily at the Air Force Space and Missile Systems Center in El Segundo, California. We held discussions with representatives of the Office of the Secretary of Defense; the Department of the Air Force; the National Aeronautics and Space Administration (NASA); the Federal Aviation Administration, Washington, D.C.; and the Air Force Space Command, Colorado Springs, Colorado. We acquired limited launch requirement information from the National Reconnaissance Office, Chantilly, Virginia.
In addition, we held discussions with private industry representatives from Lockheed Martin Telecommunications, Sunnyvale, California, and Space Systems/Loral, Palo Alto, California; The Boeing Company, Huntington Beach, California; Hughes Space and Communications International, Inc., Los Angeles, California; and TRW Space and Electronic Group, Redondo Beach, California. Because we noted considerable fluctuations in the contents of the Air Force’s EELV mission model during the past 2 years, we adjusted the latest mission model data based on discussions with Air Force satellite program office representatives and NASA representatives and a review of satellite program documentation. Specifically, we excluded 19 NASA and classified launches because they were not fully justified. We used the adjusted mission model data to analyze recurring costs and to perform an NPV analysis. In performing our recurring cost analysis, we obtained current production and launch costs for Delta, Atlas, and Titan launch vehicles from the respective launch program offices. We obtained EELV production and launch costs from the EELV program office, which were based on contractors’ proposals and the Air Force’s evaluation during selection of the two contractors in 1996. (The Air Force is currently revising EELV cost estimates in preparation for the milestone II decision in June 1998.) In performing our NPV analysis, we used our adjusted mission model data and the data we obtained for our recurring cost analysis. In addition, we obtained DOD’s planned investment costs based on a combination of congressional appropriations and funds programmed by the Air Force for EELV development. We used the real discount rate of 3.7 percent, adjusted for forecasted inflation, based on marketable Treasury debt with maturity comparable to that of the EELV program. We performed our review between August 1997 and April 1998 in accordance with generally accepted government auditing standards. 
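The methodology states only the resulting 3.7 percent real discount rate. The relationship between a nominal Treasury yield, forecasted inflation, and the real rate can be sketched with the Fisher equation; the nominal yield and inflation figures below are illustrative assumptions, not values taken from the report.

```python
def real_rate(nominal, inflation):
    """Fisher relationship: real discount rate implied by a nominal yield
    and a forecasted inflation rate."""
    return (1 + nominal) / (1 + inflation) - 1

def nominal_rate(real, inflation):
    """Inverse relationship: nominal yield implied by a real rate and inflation."""
    return (1 + real) * (1 + inflation) - 1

# Illustrative figures (assumed): a 6.0 percent nominal Treasury yield with
# 2.2 percent forecasted inflation implies a real rate of roughly 3.7 percent.
implied_real = real_rate(0.060, 0.022)
```

Discounting at a real rate in this way requires that the cash flows themselves be stated in constant (inflation-adjusted) dollars, which is why the report describes the rate as adjusted for forecasted inflation.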
We are sending copies of this report to the Ranking Minority Member, Subcommittee on National Security, House Committee on Appropriations; the Chairmen and Ranking Minority Members of the House Committee on National Security; and the Chairmen of the Senate Committee on Armed Services and the Subcommittee on Defense, Senate Committee on Appropriations. We are also sending copies to the Secretary of Defense and the Director, Office of Management and Budget. We will make copies available to others upon request. If you or your staff have any questions concerning this report, please call me on (202) 512-4841. Major contributors to this report are listed in appendix III. Since program inception in 1995, the total number of launches contained in the Air Force’s Evolved Expendable Launch Vehicle (EELV) mission model has fluctuated from 171 to 194 to 204 to 169 to 183. The types of launch vehicles—medium-lift, intermediate-lift, and heavy-lift—and the timing of launches have also varied. The composition of, and fluctuations within, the model, including our adjusted model, are shown in table I.1. The number of medium-lift vehicles has fluctuated from 90 to 116 to 80 to 71 to 86. The major reasons were (1) incorrect assignment of 29 Space-Based Infrared System (SBIRS)-Low satellites for launch on intermediate-lift vehicles in the July 1997 model, rather than medium-lift vehicles; (2) a decision that after 2010, SBIRS-Low satellites would be launched on an existing commercial launch vehicle system, called Athena, which is smaller than a medium-lift EELV; and (3) the omission of 16 Global Positioning System satellites from the March 9, 1998, model. The number of intermediate-lift vehicles has also fluctuated, from 63 to 115 to 89.
The major reasons were (1) the incorrect assignment of 29 SBIRS-Low satellites for launch on intermediate-lift vehicles rather than medium-lift vehicles and (2) the addition of 31 classified satellites, of which 12 were not included in a launch summary document and were considered unverified requirements, according to Air Force Space Command representatives. The number of heavy-lift vehicles has decreased almost 50 percent, from 17 to 9. The major reason was the downsizing of satellites, which was stimulated by the high cost of launching heavy payloads on the Titan IV launch vehicle. On the basis of our analysis, we identified 164 satellite launches from 2002 through 2020. We determined these launches through discussions with Air Force satellite and launch vehicle program office representatives and National Aeronautics and Space Administration (NASA) representatives and from satellite program documentation. Compared with the Air Force’s March 24, 1998, EELV mission model, our adjusted model excluded seven NASA launches because NASA plans to downsize the satellites associated with these launches and use vehicles that are smaller than the EELV system. Our adjusted model also excluded 12 classified launches because they were considered to be optional; were not listed as launch requirements in a February 1998 launch summary; and, according to Air Force Space Command representatives, were not based on validated requirements.

Major contributors to this report (appendix III): Larry J. Bridges, Steve Martinez, James D. Moses, Allan Roberts, and Allen D. Westheimer.

Pursuant to a congressional request, GAO reviewed the Evolved Expendable Launch Vehicle (EELV) program, with emphasis on the Department of Defense's (DOD) revised acquisition approach, focusing on whether: (1) DOD's goal of reducing recurring space launch costs could be achieved; (2) DOD's planned investment would result in commensurate benefits; and (3) there are risks that could affect the program. GAO noted that: (1) DOD's goal in acquiring the EELV system is to reduce recurring production and launch costs by at least 25 percent for fiscal years 2002 through 2020 from the costs that would be incurred if the existing Delta, Atlas, and Titan launch vehicles were used; (2) using DOD's methodology, GAO estimated that the program would exceed the 25-percent goal; (3) however, the number, type, and timing of launches specified in the vehicle's mission model have continued to fluctuate, making a cost reduction estimate, based on the model, uncertain; (4) the major reasons for the fluctuations were that: (a) satellites were assigned to the wrong type of launch vehicle; (b) launch requirements were unverified; and (c) satellite downsizing has changed launch requirements; (5) the Air Force is in the process of developing a new launch cost baseline and cost reduction estimate, based on the most current EELV mission model, in preparation for the DOD milestone II review in June 1998; (6) more importantly, the Air Force's recurring cost methodology does not
adequately measure the economic benefits of the program; (7) the reason is that nonrecurring investment costs, which DOD plans to incur to develop the system in order to achieve a cost savings, are not included; (8) the standard criterion for deciding whether a government program can be justified on economic principles--the primary purpose of this program--is net present value (NPV), which would include both recurring and nonrecurring costs and the time value of money; (9) DOD has not yet officially performed an NPV analysis and has not identified all government costs to do so; (10) the use of other transaction instruments for EELV development will challenge DOD in determining how best to protect the government's interests; (11) under DOD's revised acquisition approach, the contractors are not willing to guarantee system performance because their financial risk would be open ended and DOD's investment would be limited; (12) despite this position, the Air Force is counting on the contractors to provide launch services to satisfy the government's requirements, based on their financial interest in a growing commercial market for launch services; (13) in addition, the Air Force planning documentation states that the primary program risk is in meeting launch site facility preparation schedules; and (14) other Air Force planning documentation shows the continued use of certain launch facilities for several months after the facilities are scheduled to undergo site preparation for the vehicle.
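The adjusted mission model described in appendix I reduces to a simple reconciliation, which can be checked directly using the figures given there:

```python
# Figures from the report's appendix I.
latest_model_total = 183   # launches in the March 24, 1998, Air Force EELV mission model
excluded_nasa = 7          # NASA plans smaller vehicles for these downsized satellites
excluded_classified = 12   # optional launches not based on validated requirements

adjusted_total = latest_model_total - excluded_nasa - excluded_classified
```

The 19 excluded launches (7 NASA plus 12 classified) yield the 164 satellite launches, from 2002 through 2020, that GAO used for its recurring cost and NPV analyses.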
The airline industry has experienced considerable merger and acquisition activity since its early years, especially immediately following deregulation in 1978. Figure 1 provides a timeline of mergers and acquisitions for the four largest surviving airlines, assuming an American–US Airways merger, based on passengers served. A flurry of mergers and acquisitions occurred during the 1980s, when Delta and Western Airlines merged, United acquired Pan Am’s Pacific routes, Northwest acquired Republic Airlines, and American and Air California merged. In 1988, merger and acquisition review authority was transferred from DOT to DOJ. Since 2000, American acquired the bankrupt airline TWA in 2001; America West acquired US Airways in 2005, while the latter was in bankruptcy; Delta acquired Northwest in 2008; United acquired Continental in 2010; and Southwest acquired AirTran in 2011. Certain other attempts at merging since 2000 failed because of opposition from DOJ or from employees and creditors. For example, in 2000, an agreement was reached that allowed Northwest to acquire a 50 percent stake in Continental (with limited voting power) to resolve the antitrust suit brought by DOJ against Northwest’s proposed acquisition of a controlling interest in Continental. A proposed merger of United and US Airways in 2000 also drew opposition from DOJ, which found that, in its view, the merger would violate antitrust laws by reducing competition, increasing air fares, and harming consumers on airline routes throughout the United States. Although DOJ expressed its intent to sue to block the transaction, the parties abandoned it before a suit was filed. In 2006, the proposed merger of US Airways and Delta fell apart because of opposition from Delta’s pilots and some of its creditors, as well as its senior management.
Since deregulation in 1978, the financial stability of the airline industry has been a considerable concern for the federal government, owing in part to the level of financial assistance the government has provided to the industry through assuming terminated pension plans and other forms of assistance. From 1979 through 2012, there were at least 194 airline bankruptcies, according to Airlines for America (A4A), an airline trade group. While most of these bankruptcies affected small airlines that were eventually liquidated, 4 of the more recent bankruptcies prior to American’s (Delta, Northwest, United, and US Airways) are among the largest corporate bankruptcies ever, excluding financial services firms. During these bankruptcies, United and US Airways terminated the defined benefit pension plans for their labor groups, and $9.7 billion in claims were shifted to the Pension Benefit Guaranty Corporation (PBGC). Further, to respond to the financial shock to the industry from the September 11, 2001, terrorist attacks, the federal government provided airlines with $7.4 billion in direct assistance and authorized $1.6 billion (of $10 billion available) in loan guarantees to six airlines. Although the airline industry has experienced numerous mergers and bankruptcies since deregulation, growth of existing airlines and the entry of new airlines have contributed to a steady increase in capacity, as measured by available seat miles. Previously, we reported that although one airline may reduce capacity or leave a market, capacity returns relatively quickly through new airline entry and expansion of the remaining airlines. However, in recent years this dynamic may be changing. Domestic capacity growth stalled in 2008 owing to the recession and high fuel prices and has not rebounded despite a strengthening economy and demand for air travel (see fig. 2). In recent years, a key factor limiting capacity growth has been high fuel prices, according to industry analysts.
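Capacity in figure 2 is measured in available seat miles (ASMs), computed by multiplying the seats offered on each flight by the miles flown and summing across all flights. A minimal sketch of that calculation, using invented flight data rather than actual schedule figures:

```python
# Available seat miles (ASM): seats offered x miles flown, summed over
# flights. These three flights are invented for illustration only.
flights = [
    {"seats": 160, "miles": 733},   # hypothetical short-haul segment
    {"seats": 180, "miles": 2475},  # hypothetical transcontinental segment
    {"seats": 128, "miles": 331},   # hypothetical regional segment
]

asm = sum(f["seats"] * f["miles"] for f in flights)
print(asm)  # 605148 available seat miles
```

Because ASMs count seats offered rather than seats sold, the measure tracks the capacity airlines put into the market regardless of how full the flights are.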
In the early part of the last decade, while network airlines were restructuring their costs through bankruptcy, low cost airlines like Southwest and JetBlue expanded owing to lower costs, especially for labor (see fig. 3). As a result, while in 2002 network airlines offered 67 percent of domestic seat capacity versus 23 percent for low cost airlines, by October 2012 network airlines’ share of domestic seats had fallen to 52 percent and low cost airlines’ share had risen to 33 percent. However, the expansion of low cost airlines in recent years may have slowed owing to higher fuel costs that diminished their relative cost advantage over network airlines. With fuel costs consuming a greater proportion of operating costs for all airlines, any cost advantage that low cost airlines had with respect to labor costs over network airlines is diluted. Finally, DOJ’s and DOT’s analyses of merger impacts have relied on an expectation that entry by low cost airlines, especially Southwest, would check airline fare increases following a merger. However, that dynamic might erode as Southwest’s expansion has slowed and it recently merged with a key low cost rival, reducing the number of low cost airlines that might challenge post-merger fare increases. In 1993, DOT published a report entitled The Southwest Effect that concluded that low cost airlines like Southwest lowered fares in markets they entered and that DOT policy should be to encourage the growth of Southwest and airlines like it. Congressional action and DOT policy in subsequent years, especially in the award of operating rights called “slots” at congested airports like Washington Reagan and New York LaGuardia, favored new entrant airlines like Southwest. Similarly, DOJ cited the relinquishment of 36 slots by Continental to Southwest at Newark Liberty International Airport as alleviating its principal concerns in determining not to object to the United–Continental merger in 2010.
A November 2008 paper by Goolsbee and Syverson found that even the threat of entry by Southwest in a market helped to lower fares in that market, but only if Southwest already operated at one of the market endpoints. More recently, though, a 2013 study suggests that the Southwest Effect may not be as prominent following a merger. This study found that Southwest raised fares in markets following the Delta–Northwest and US Airways–America West mergers more than average fare increases overall, unless another low cost airline was already in that market. The merger of Southwest with a key rival in 2011 could further lessen the potential that Southwest would deter or counteract higher fares in markets following a merger. DOJ’s review of airline mergers and acquisitions is a key step for airlines hoping to consummate a merger. For airlines, as with other industries, DOJ uses an analytical framework set forth in the Horizontal Merger Guidelines (the Guidelines) to evaluate merger proposals. In addition, DOT plays an advisory role for DOJ and, if the combination is consummated, may conduct financial and safety reviews of the combined entity under its regulatory authority. Finally, because American has been under Chapter 11 bankruptcy protection since 2011, the merger also required federal bankruptcy court approval. Most proposed airline mergers or acquisitions must be reviewed by DOJ as required by the Hart-Scott-Rodino Antitrust Improvements Act (the Act). In particular, under the Act, an acquisition of voting securities or assets above a set monetary amount must be reported to DOJ (or the FTC for certain industries) so the department can determine whether the merger or acquisition poses any antitrust concerns. To analyze whether a proposed merger or acquisition raises antitrust concerns—whether the proposal will likely create, enhance, or entrench “market power” or facilitate its exercise—DOJ follows an analytical process set forth in the Guidelines.
The commentary to the Guidelines identifies five factors that the department considers in reviewing a merger but notes that their importance varies according to the nature of the industry and the scope of the merger. The five factors considered by DOJ are:
- the relevant product and geographic markets in which the companies operate and whether the merger is likely to significantly increase concentration in those markets, which in the case of airlines principally applies to city-pair markets;
- the extent of potential adverse competitive effects of the merger, such as whether the merged entity will be able to charge higher prices or restrict output for the product or service it sells;
- whether other competitors are likely to enter the affected markets and whether they would counteract any potential anticompetitive effects that the merger might pose;
- the verified “merger specific” efficiencies or other competitive benefits that may be generated by the merger and that cannot be obtained through any other means; and
- whether, absent the merger or acquisition, one of the firms is likely to fail, causing its assets to exit the market.
In deciding whether the proposed merger is likely anticompetitive, DOJ considers the particular circumstances of the merger as they relate to the Guidelines’ five-part analysis. The greater the potential anticompetitive effects, the greater the offsetting verifiable efficiencies must be for DOJ to clear a merger. However, according to the Guidelines, efficiencies almost never justify a merger if it would create a monopoly or near monopoly. If DOJ concludes that a merged airline threatens to deprive consumers of the benefits of competitive air service, then it will seek injunctive relief in a court proceeding to block the merger from being consummated.
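The concentration screen in the first factor is commonly applied using the Herfindahl-Hirschman Index (HHI), the sum of squared market shares; under the 2010 Guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points is presumed likely to enhance market power. A sketch of the computation for a hypothetical city-pair market (the shares below are invented, not drawn from this analysis):

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares)

# Hypothetical city-pair market shares (percent) before a merger of A and B.
pre_shares = {"A": 40, "B": 25, "C": 20, "D": 15}
post_shares = {"A+B": 65, "C": 20, "D": 15}

pre_hhi = hhi(pre_shares.values())    # 40^2 + 25^2 + 20^2 + 15^2 = 2850
post_hhi = hhi(post_shares.values())  # 65^2 + 20^2 + 15^2 = 4850
delta = post_hhi - pre_hhi            # equals 2 * 40 * 25 = 2000

print(pre_hhi, post_hhi, delta)
```

In this hypothetical market the post-merger HHI (4,850) and the increase (2,000 points) both far exceed the Guidelines' presumption thresholds, so the merger would draw close scrutiny.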
For example, a proposed merger of United Airlines and US Airways was opposed by DOJ, which found that, in its view, the merger would violate antitrust laws by reducing competition, increasing air fares, and harming consumers on airline routes throughout the United States. In some cases, the parties may agree to modify the proposal to address anticompetitive concerns identified by DOJ—for example, selling airport assets or giving up slots at congested airports—in which case DOJ ordinarily files a complaint with the court along with a consent decree that embodies the agreed-upon changes. DOT conducts its own analyses of airline mergers and acquisitions. While DOJ is responsible for upholding antitrust laws, DOT reviews the merits of any airline merger or acquisition and submits its views and relevant information in its possession to DOJ. DOT also provides some essential data—for example, the airlines’ routes and passenger traffic—that DOJ uses in its review. In addition, presuming the merger moves forward after DOJ’s review, DOT can undertake several other reviews if the situation warrants. Before commencing operations, any new, acquired, or merged airlines must obtain separate authorizations from DOT—“economic” authority from the Office of the Secretary and “safety” authority from the Federal Aviation Administration (FAA). The Office of the Secretary is responsible for deciding whether applicants are fit, willing, and able to perform the service or provide transportation. To make this decision, the Secretary assesses whether the applicants have the managerial competence, disposition to comply with regulations, and financial resources necessary to operate a new airline. FAA is responsible for certifying that the aircraft and operations conform to the safety standards prescribed by the Administrator, for instance, that the applicants’ manuals, aircraft, facilities, and personnel meet federal safety standards. 
Also, if a merger or other corporate transaction involves the transfer of international route authority, DOT is responsible for assessing and approving all transfers to ensure that they are consistent with the public interest. In addition, American has been under federal bankruptcy protection since November 2011. In May 2013, the federal judge overseeing the bankruptcy approved American’s merger with US Airways as part of the reorganization. Shareholders of US Airways must also approve the merger for it to be consummated. On February 13, 2013, American and US Airways announced an agreement to merge the two airlines. The airlines have also notified DOJ of their intent to merge. The new airline would retain the American name and headquarters in Dallas-Fort Worth, while the current US Airways Chief Executive Officer would keep that title with the new airline and the current American CEO would become Chairman of the new American. The proposed merger will be financed exclusively through an all-stock transaction with a combined equity value of $11 billion, split roughly 72 percent ownership to American shareholders and 28 percent to US Airways shareholders. The airlines have not announced specific plans for changes in their networks or operations that would occur if the combination is consummated, but the airlines conservatively estimate that the merger will result in $1.4 billion in annual benefits to shareholders of the new airline, as outlined in table 1. A key financial benefit that airlines consider in a merger is the potential for increased revenues through additional demand (generated by more seamless travel to more destinations), increased market share, and higher fares on some routes. As we reported in May 2010, mergers may generate additional demand by providing consumers more domestic and international city-pair destinations.
Airlines with expansive domestic and international networks and frequent flier benefits particularly appeal to business traffic, especially corporate accounts. The American–US Airways merger is estimated by airline executives to generate $1.12 billion in revenue synergies from improved network connectivity, increased corporate and frequent flier loyalty, and optimization in the use of their aircraft. At the same time, capacity reductions in certain markets from a merger or acquisition could also serve to generate additional revenue through increased fares on some routes. Some studies of airline mergers and acquisitions during the 1980s showed that prices were higher on some routes from the airline’s hubs soon after the combination was completed. Several studies have also shown that increased airline dominance at an airport results in increased fare premiums, in part, because that dominance creates competitive barriers to entry. At the same time, though, even if the combined airline is able to increase prices in some markets, the increase may be transitory if other airlines enter the markets with sufficient presence to counteract the price increase. In an empirical study of airline mergers and acquisitions up to 1992, Winston and Morrison suggest that being able to raise prices or stifle competition does not play a large role in airlines’ merger and acquisition decisions. The other key financial benefit that airlines consider when merging with or acquiring another airline is the cost reduction that may result from combining complementary assets, eliminating duplicative activities, and reducing capacity. As we reported in May 2010, a merger or acquisition could enable the combined airline to reduce or eliminate duplicative operating costs, such as duplicative service, labor, and operations costs—including inefficient (or redundant) hubs or routes—or to achieve operational efficiencies by integrating computer systems and similar airline fleets. 
By increasing the fleet size, airlines can increase their ability to match the size of aircraft with demand and adjust to seasonal shifts in demand. Other cost savings may stem from facility consolidation, procurement savings, and working capital and balance sheet restructuring, such as renegotiating aircraft leases. Airlines may also pursue mergers or acquisitions to more efficiently manage capacity—both to reduce operating costs and to generate revenue—in their networks. Given recent economic pressures, particularly increased fuel costs, the opportunity to lower costs by reducing redundant capacity may be especially appealing to airlines seeking to merge. In the case of the American–US Airways merger, airline executives estimate that the merger will allow $640 million in cost savings from reducing overlapping facilities at airports and from combining purchasing, technology, and corporate activities. Despite these benefits, there are several potential barriers to successfully consummating a merger, potentially reducing the benefits and increasing the costs. As we reported in July 2008, the most significant operational challenges involve the integration of workforces, organizational cultures, aircraft fleets, and information technology systems and processes—challenges that can be difficult, disruptive, and costly as the airlines integrate. For example, in the case of the American–US Airways merger, with unions supporting the merger, pilots’ and others’ pay will increase by $360 million annually if the merger is completed. However, merging workforces can take time; for example, US Airways’ pilot seniority lists have not been resolved following its merger with America West in 2005. Integrating technology, especially reservation systems, can also be difficult and costly. For example, United has struggled to integrate computer and reservation systems following its merger with Continental in 2010.
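The $1.4 billion annual benefit estimate appears to net the figures quoted above: revenue synergies plus cost savings less the announced labor cost increases. A quick arithmetic check of that reconciliation (the exact composition of table 1 is assumed here, not reproduced):

```python
# Reconciling the airlines' estimated annual merger benefits. Figures are
# in millions of dollars, as quoted in this statement; treating the net
# as revenue synergies + cost savings - labor increases is an assumption.
revenue_synergies = 1_120    # network connectivity, loyalty, fleet optimization
cost_savings = 640           # overlapping facilities, purchasing, technology
labor_cost_increase = 360    # announced annual pay increases for pilots and others

net_annual_benefit = revenue_synergies + cost_savings - labor_cost_increase
print(net_annual_benefit)  # 1400, i.e., the $1.4 billion figure cited
```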
If approved by DOJ, the merged American–US Airways would surpass United as the largest U.S. passenger airline. Table 2 shows that combining American and US Airways would create the largest U.S. airline based on data for the four quarters ending October 2012, as measured by capacity (available seat miles) and operating revenues. The combined airline would also have the largest workforce among U.S. airlines based on February 2013 employment statistics, with a combined 101,197 full-time equivalent employees (table 3). The airlines’ workforces are represented by different unions, except the dispatchers (table 4). Some of American’s unions have already signed memorandums of understanding for future contracts if the airlines are merged. The combined airline would need to integrate 1,215 aircraft (table 5). American has a predominantly Boeing fleet, while US Airways has a largely Airbus fleet. In addition, in July 2011, American placed a $40 billion order for 200 Boeing 737 series and 260 Airbus A320 series aircraft. Despite American’s bankruptcy, the bankruptcy court allowed the order to proceed. American has also been trying to sell its regional airline, American Eagle, and its fleet of almost 280 aircraft. If approved by DOJ, the airlines would combine two distinct networks supported by different hubs, where the airlines connect traffic feeding from smaller airports. American’s major hubs are in Chicago O’Hare (ORD), Dallas (DFW), New York (JFK), Los Angeles (LAX), and Miami (MIA), and US Airways has hubs in Charlotte (CLT), Philadelphia (PHL), Phoenix (PHX), and Washington, D.C. (DCA), as shown in figures 4 and 5. A key concern for DOJ in reviewing an airline merger is the loss of a competitor on nonstop routes.
The loss of a competitor that serves a market on a nonstop basis is significant from a competitive perspective because nonstop service is typically preferred by most passengers, and routes that have only nonstop service do not benefit from the availability of alternative, albeit lower-valued, connecting service. Based on October 2012 traffic data, the two airlines overlap on 12 nonstop airport-pair routes, which are listed in figure 6. For 7 of these 12 nonstop overlapping airport-pairs (generally between an American hub and a US Airways hub), there are currently no other nonstop competitors, and in only one instance is a low cost airline (Southwest) present. Unlike the United–Continental merger, where most of the endpoint cities had other airports in the region, fewer of these airport pairs have significant other airports in the region. This is especially true for the Charlotte (CLT)–Dallas (DFW) and Phoenix (PHX)–DFW pairs, where few alternate options are available at either endpoint. The amount of overlap in airport-pair combinations is far greater when all connecting traffic is considered; however, in most of the overlapping airport-pair markets, there is at least one other competitor. Based on 2011 and 2012 ticket sample data, for 13,963 airport-pairs with a minimum level of passenger traffic per year, merging these airlines would result in the loss of one effective competitor in 1,665 airport-pair markets affecting more than 53 million passengers (see fig. 7). As the figure shows, compared with the last major airline merger, between United and Continental in 2010, there would be 530 more airport pairs losing an effective competitor, affecting 18 million more passengers. In addition, any effect on fares may be dampened by the presence of a low cost airline in 473 of the 1,665 airport pairs losing a competitor.
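The ticket-sample analysis counts, for each airport pair, the carriers whose share of traffic makes them an effective competitor (defined as carrying at least 5 percent of total airport-pair traffic) before and after combining the merging airlines' traffic. A minimal sketch of that counting logic, using invented passenger counts rather than actual ticket data:

```python
# Count effective competitors (>= 5% of airport-pair traffic) before and
# after a hypothetical merger of carriers "AA" and "US". The passenger
# counts below are invented for illustration, not GAO's ticket-sample data.
THRESHOLD = 0.05

def effective_competitors(passengers_by_carrier):
    total = sum(passengers_by_carrier.values())
    return {c for c, p in passengers_by_carrier.items() if p / total >= THRESHOLD}

market = {"AA": 4_000, "US": 2_000, "DL": 2_500, "WN": 1_500}
pre = effective_competitors(market)   # all four carriers clear the threshold

# Merge the two carriers' traffic and recount.
merged = dict(market)
merged["AA+US"] = merged.pop("AA") + merged.pop("US")
post = effective_competitors(merged)

print(len(pre), len(post))  # 4 effective competitors before, 3 after
```

Run over every qualifying airport pair in the ticket sample, this kind of tally yields the 1,665 markets losing an effective competitor cited above; the reverse case, where two sub-threshold carriers combine to clear 5 percent, produces the markets gaining one.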
The combination of the two airlines would also create a new effective competitor, with at least a combined 5 percent market share, in 210 airport-pairs affecting 17.5 million passengers. If approved by DOJ, the combined airline could be expected to rationalize its network over time, including where it maintains hubs. The two airlines do not share any airport hubs; therefore, the amount of airport market share overlap that currently exists at these hubs is relatively small but could grow at some hubs while contracting at others under a merger (see table 6). For example, New York could serve as a better hub and international gateway than Philadelphia in the Northeast, while Miami could be a better hub than Charlotte in the Southeast. In addition, 59 of the 116 domestic airports served by US Airways from Charlotte are also served by American from Miami (MIA). Closing hubs is not unprecedented: following American’s acquisition of TWA in 2001, St. Louis ceased to be an American hub, and following the Delta–Northwest merger, service at Delta’s hub in Cincinnati and Northwest’s hub in Memphis has been greatly reduced. Three of the airports noted in table 6 are slot-controlled airports with restricted access for new entrants or expanded service. As we reported last year, slot-controlled airports have more limited competition and tend to have higher fares compared with other hub airports. Based on February 2012 slot holdings, a combined American and US Airways would control one-third of the slots at LaGuardia and two-thirds of the slots at Washington Reagan, as noted in table 7. Both American and US Airways have worldwide networks and serve many international destinations. Between the two airlines, they serve 107 international cities from airports in the United States, 37 of them in common, according to published February 2013 schedules.
However, the two airlines do not directly compete in any of the same international city-pair markets, though both serve slot-controlled London Heathrow airport, with more than 830,000 passengers over the last year. For international routes, U.S. airlines aggregate traffic from many domestic locations at a hub airport where passengers transfer onto international flights. For example, at Philadelphia, where US Airways has a large hub, passengers traveling from many locations across the United States transfer onto US Airways’ international flights. Likewise, American aggregates domestic traffic at New York’s JFK for many of its international flights to some of the same destinations. As such, a passenger traveling from, for example, Nashville may view these alternative routes to a location in Europe as substitutable. Whether service to international destinations from different domestic hubs will be viewed as a competitive concern will likely depend on a host of factors, such as the two airlines’ market share of traffic to that destination and whether there are any barriers to new airlines entering or existing airlines expanding service at the international destination airports. US Airways is part of the larger Star Alliance, and American is a member of the smaller oneworld alliance. US Airways has announced it will leave the Star Alliance and join American in oneworld as part of the merger. DOT has authority to approve antitrust immunity applications, but DOJ may also comment if it has antitrust concerns.
According to a 2011 paper prepared by DOJ economists, “Over the past 17 years, DOT granted immunity to over 20 international alliance agreements, permitting participants in these alliances to collude on prices, schedules, and marketing.” They found that in granting immunity to larger groups of airlines in the three major international alliances, the number of independent competitors over the North Atlantic was significantly reduced, adversely affecting consumers through higher fares. Because both airlines are already part of immunized alliances, it is unclear what effect, if any, this merger might have on competition in international service. According to DOT officials responsible for reviewing and approving the immunity requests, the agency has analyzed and documented the impact of immunized alliances in its many public orders and has concluded that, in its experience, integrated airline alliances enable a number of valuable consumer benefits, including lower prices for many travelers.

Chairman Cantwell, Ranking Member Ayotte, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D. at (202) 512-2834 or by email at [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions include Paul Aussendorf (Assistant Director); Amy Abramowitz; Susan Fleming; Dave Hooper; Delwen Jones; Brooke Leary; Dominic Nadarski; Josh Ormond; Gretchen Snoey; and Carrie Wilks.

Airline Mergers: Issues Raised by the Proposed Merger of United and Continental Airlines. GAO-10-778T. Washington, D.C.: May 27, 2010.
Commercial Aviation: Airline Industry Contraction Due to Volatile Fuel Prices and Falling Demand Affects Airports, Passengers, and Federal Government Revenues. GAO-09-393. Washington, D.C.: April 21, 2009.
Airline Industry: Potential Mergers and Acquisitions Driven by Financial and Competitive Pressures. GAO-08-845. Washington, D.C.: July 31, 2008.
Airline Deregulation: Reregulating the Airline Industry Would Likely Reverse Consumer Benefits and Not Save Airline Pensions. GAO-06-630. Washington, D.C.: June 9, 2006.
Commercial Aviation: Bankruptcy and Pension Problems Are Symptoms of Underlying Structural Issues. GAO-05-945. Washington, D.C.: September 30, 2005.
Commercial Aviation: Preliminary Observations on Legacy Airlines’ Financial Condition, Bankruptcy, and Pension Issues. GAO-05-835T. Washington, D.C.: June 22, 2005.
Private Pensions: Airline Plans’ Underfunding Illustrates Broader Problems with the Defined Benefit Pension System. GAO-05-108T. Washington, D.C.: October 7, 2004.
Transatlantic Aviation: Effects of Easing Restrictions on U.S.-European Markets. GAO-04-835. Washington, D.C.: July 21, 2004.
Commercial Aviation: Despite Industry Turmoil, Low-Cost Airlines Are Growing and Profitable. GAO-04-837T. Washington, D.C.: June 3, 2004.
Commercial Aviation: Legacy Airlines Must Further Reduce Costs to Restore Profitability. GAO-04-836. Washington, D.C.: August 11, 2004.
Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: October 2, 2002.
Commercial Aviation: Air Service Trends at Small Communities since October 2000. GAO-02-432. Washington, D.C.: March 29, 2002.
Proposed Alliance Between American Airlines and British Airways Raises Competition Concerns and Public Interest Issues. GAO-02-293R. Washington, D.C.: December 21, 2001.
Aviation Competition: Issues Related to the Proposed United Airlines-US Airways Merger. GAO-01-212. Washington, D.C.: December 15, 2000.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In February 2013, American and US Airways announced plans to merge the two airlines and entered into a merger agreement. Valued at $11 billion, the merged airline would retain the American name and be headquartered in Dallas-Fort Worth. This follows the mergers of United Airlines and Continental Airlines in 2010 and the acquisition of Northwest Airlines by Delta Air Lines (Delta) in 2008. This latest merger, if not challenged by DOJ, would surpass these prior mergers in scope to create the largest passenger airline in the United States. The passenger airline industry has struggled financially over the last decade and these two airlines believe a merger will strengthen them. However, as with any merger of this magnitude, this proposal will be examined by DOJ to determine if its potential benefits for consumers outweigh the potential negative effects. This testimony focuses on (1) the role of federal authorities in reviewing merger proposals, (2) key factors motivating airline mergers in recent years, and (3) the implications of merging American and US Airways. To address these objectives, GAO drew from its previous reports on the potential effects of prior airline mergers and the financial condition of the airline industry issued from July 2008 through May 2010. GAO also analyzed DOT's airline operating and financial data, airline financial documents, and airline schedule information since 2002. The Department of Justice's (DOJ) antitrust review will be a critical step in the proposed merger between American Airlines (American) and US Airways. DOJ uses an integrated analytical framework set forth in the Horizontal Merger Guidelines to determine whether the merger poses any antitrust concerns.
Under that process, DOJ assesses, among other things, the extent of likely anticompetitive effects of the proposed merger in the relevant markets, in this case, airline city-pair markets, and the likelihood that other airlines may enter these markets and counteract any anticompetitive effects, such as higher fares. DOJ also considers efficiencies that a merger or acquisition could bring--for example, consumer benefits from an expanded route network. The Department of Transportation (DOT) aids DOJ's analysis. Airlines seek mergers to reduce costs and improve revenues. GAO has previously reported that mergers can result in increased revenues by offering improved network connections and schedules, but also through higher fares on some routes. Cost savings can be generated by eliminating redundancies and achieving operational efficiencies, including reducing service, but can be muted by problems in combining different aircraft, technologies, and labor forces. In the case of US Airways and American, the airlines estimate that a merger would yield $1.4 billion in annual benefits from increased revenues and reduced costs. If not challenged by DOJ, the merged American would surpass United to become the largest U.S. passenger airline by several measures. While US Airways and American overlap on only 12 nonstop routes, no other nonstop competitors exist on 7 of those 12. GAO's analysis of 2011 and 2012 ticket data also showed that combining these airlines would result in a loss of one effective competitor (defined as having at least 5 percent of total airport-pair traffic) in 1,665 airport-pair markets affecting more than 53 million passengers, while creating a new effective competitor in 210 airport-pairs affecting 17.5 million passengers. However, the great majority of these markets also have other effective competitors.
DOD relies on its science and technology community—DOD research laboratories, test facilities, industry, and academia—to identify, pursue, and develop new technologies that address military needs. The DOD SBIR program is one mechanism through which DOD attempts to accomplish its science and technology goals and develop technologies that contribute to weapon systems or transition directly to warfighters for use in the field. Within DOD, the Office of Small Business Programs oversees the department’s SBIR program activities, develops policy, and manages program reporting. This office generally relies on the agencies, such as the Army, Air Force, and Navy, to oversee and execute their own SBIR program activities. Each agency has flexibility to tailor its SBIR program to meet its needs, including determining what type of research to pursue, which projects to fund, and how to monitor ongoing projects. To initiate the project award process, SBIR programs work with the science and technology and acquisition communities to generate and prioritize research and development topics. These topics describe technical areas of interest and capability needs, which the programs use in their solicitations for proposals from small businesses. DOD conducts three solicitations each year in which small businesses compete for Phase I contract awards that are expected to respond to the needs identified in each topic. Once awarded, SBIR projects are managed through a three-phase program structure, which is outlined in table 1. The number of Phase I and Phase II projects varies from year to year based on technology needs and funding availability. Table 2 shows the budgets and project awards reported for the military department SBIR programs in fiscal year 2012.
We and others have previously found that DOD and its technology development programs, such as the SBIR program, have encountered challenges in transitioning their technologies to acquisition programs or directly to the warfighter for use in the field. For instance, in our past work we found several reasons why technologies may not transition, including insufficient maturity, inadequate demonstration or recognition by users of a technology’s potential, and unwillingness or inability of acquisition programs to fund final stages of development. To address SBIR technology transition challenges, DOD, the Small Business Administration, and Congress have established additional program provisions, incentives, and reporting requirements. For example, the Commercialization Readiness Program was initiated to accelerate the transition of SBIR-funded technologies to Phase III, especially those that lead to acquisition programs and high-priority military requirements, such as fielded systems. The military departments are also permitted to use up to 1 percent of SBIR funding for administrative activities that facilitate transition. This funding is used to support program staff and contractors who provide assistance to SBIR awardees, including efforts to enhance networking and build relationships among small businesses, prime contractors, and DOD science and technology and acquisition communities. The National Defense Authorization Act for Fiscal Year 2006 authorized the Commercialization Pilot Program under the Secretary of Defense and the Secretary of each military department. Pub. L. No. 109-163, § 252. The National Defense Authorization Act for Fiscal Year 2012 continued the program and renamed it the Commercialization Readiness Program.
Although the program may support any Phase III awards, such as technology transition to commercial products, DOD is required to provide goals to increase the number of Phase II SBIR contracts that lead to technology transition into programs of record or fielded systems and to use incentives to meet those goals. Pub. L. No. 112-81, § 5122(a). The fiscal year 2012 SBIR reauthorization also contains the following transition-related provisions:

- Requires DOD to set a goal to increase the number of Phase II contracts awarded that lead to technology transition into acquisition programs or fielded systems, and to use incentives to encourage program managers and prime contractors to meet the goal.

- Requires that DOD report specific transition-related information to the Administrator of the Small Business Administration for inclusion in an annual report to designated congressional committees. This includes reporting the number and percentage of Phase II contracts that led to technology transition into acquisition programs or fielded systems, information on the status of each project that received funding through the Commercialization Readiness Program and efforts to transition those projects, and a description of each incentive used to meet the department’s transition goal.

- Authorizes DOD to establish goals for the transition of Phase III technologies in subcontracting plans for contracts of $100 million or more, and to require prime contractors on such contracts to report the number and dollar amount of contracts entered into for Phase III projects.

- Sets the ceiling for discretionary technical assistance that can be provided annually for all Phase I and Phase II projects at $5,000 per project. Programs can use this funding to assist awardees in making technical decisions on projects, solving technical problems, minimizing technical risks, and commercializing projects.

- Establishes a pilot effort to allow DOD SBIR programs to use not more than 3 percent of their SBIR budgets for, among other things, program administration, technical assistance, and the implementation of commercialization and outreach initiatives.

The military department SBIR programs use several management practices and tools to support technology transition efforts. We identified some common transition elements across the programs, but also found some differences in how each program approaches its technology transition efforts. The programs’ technology transition efforts are supported through administrative funds coming from their SBIR budgets and other funds provided by their respective military department. The transition facilitation practices, tools, and funds used to promote the transition of SBIR technologies include the following: Early focus on transition through topic generation and project selection: Technology transition efforts begin with topic generation and project selection processes that emphasize the pursuit of projects for which there is a demonstrated military need and potential transition opportunities. To do this, the military department programs formally engage stakeholders from the science and technology and acquisition communities in generating and endorsing topics for SBIR solicitations. In proposing topics and selecting projects, programs have to balance their desire for technological innovation with meeting pressing warfighter needs. SBIR officials stated that projects that pursue incremental improvements generally are more likely to deliver the technical capability expected for technology transition to occur. In contrast, they noted that projects that focus more on “leap-forward” technology innovations that can support future warfighting needs tend to require more long-term development and have greater technical and transition risks.
DOD policy requires that at least 50 percent of military department topics be endorsed by the acquisition community, such as program executive offices. This helps ensure that the acquisition community is engaged with the SBIR programs and that a significant portion of projects are dedicated to addressing specific needs identified by military users. Phase II transition initiatives: Transition-focused activities increase as Phase II projects progress, commensurate with increasing technology maturity and understanding of a project’s potential opportunities for use. In particular, the military department SBIR programs target transition opportunities through their Commercialization Readiness Programs and other initiatives that provide additional support to select Phase II projects. In some cases, the SBIR programs require formal technology transition agreements or matching funding as a condition of receiving additional Phase II funding. Technology transition agreements, which Air Force and Navy officials reported using, help manage project expectations and formalize stakeholder commitments by outlining cost, schedule, and performance expectations for transition to occur. Matching funds from intended users, which are required by the Navy for some projects, can help create greater buy-in for transition because the intended users have a monetary stake in the project. Transition facilitators: Each military department SBIR program has a network of transition facilitators who manage the Commercialization Readiness Program and other enhancement efforts, as well as broader SBIR activities that support technology transition. The facilitators are located at military labs, acquisition centers, and program executive offices to work directly with government stakeholders and help ensure projects are responsive to warfighter needs. They also help small businesses identify and position themselves for opportunities to transition their SBIR technologies.
Although the roles and responsibilities vary somewhat across the programs, in general, transition facilitators assist with topic generation and prioritization; foster communication among small businesses, research laboratories, and the acquisition community in support of transition opportunities; and monitor project progress, including outcomes. Navy Transition Assistance Program: The Navy established an additional program over a decade ago to prepare its SBIR participants for technology transition opportunities. The Transition Assistance Program is a voluntary 11-month program with, on average, about two-thirds of Phase II recipients participating each year. It provides consulting services focused on improving the small businesses’ abilities to transition their SBIR products, including assistance in transition planning and developing marketing tools. Under the program, profiles are used to describe the expected capability, level of technology maturity, and potential technology transition opportunities for each project. These profiles are available in electronic form through a web-based portal called the Navy Virtual Acquisition Showcase, and support the annual Navy Opportunity Forum conference. The conference provides Transition Assistance Program participants with direct exposure and one-on-one opportunities to interact with prospective transition partners in the government and industry. Other transition facilitation tools: SBIR programs also use technology roadmaps and formal relationship-building activities, such as conferences and workshops, to support transition efforts. Technology roadmaps are schedule-based planning documents used to identify opportunities for SBIR technology insertion into acquisition programs or direct use by the warfighter. 
Conferences and workshops, such as the annual Beyond Phase II conference hosted by the Office of Small Business Programs, are used by the programs to provide opportunities for SBIR Phase II companies to interact directly with prospective government and industry users and showcase their projects. Administrative funds: The technology transition practices and tools used by the programs are supported by administrative funds provided through their SBIR budgets as well as non-SBIR sources from their respective agencies. The Commercialization Readiness Program and discretionary technical assistance provisions enable programs to use portions of their SBIR budgets to fund administrative activities, including transition support. For fiscal year 2012, this funding totaled about $12 million across the three military departments. The fiscal year 2012 SBIR reauthorization included a new provision—which DOD officials advocated—that allows the programs to use up to 3 percent of their funds to support administrative activities, which is in addition to funds available through the Commercialization Readiness Program and discretionary technical assistance. SBIR officials stated that although this additional funding allowance is in the initial stages of being used, they believe these funds will help enhance transition facilitation measures for their programs going forward. Additional agency funding outside of the SBIR budget is also used to manage programs and support transition activities, but the amount of such funding is not readily identifiable because the military departments do not all require that the amount of funding used to support associated administrative efforts be documented. We were unable to assess the extent of technology transition associated with the military department SBIR programs because comprehensive and reliable technology transition data are not collected. 
Tracking mechanisms used by DOD—Company Commercialization Reports (CCR) and the Federal Procurement Data System-Next Generation (FPDS-NG)—provide some information on SBIR Phase III activities, but these mechanisms have significant gaps in coverage and data reliability concerns that limit their transition tracking capabilities. The military departments have additional measures through which they have identified a number of successful SBIR transitions to DOD acquisition programs and directly to fielded systems, but these efforts capture a limited amount of transition information. DOD is assessing how to comply with the new transition reporting requirements directed by Congress, but has yet to develop a plan that will support identification and annual reporting of the extent to which SBIR technologies transition to DOD acquisition programs or to fielded systems. The military department SBIR programs rely, to varying degrees, on two data systems—CCR and FPDS-NG—as well as their own agency-specific data collection activities to identify transition results. Table 3 more fully describes the data sources used and their limitations. Although the CCR and FPDS-NG data systems do not capture complete data on the transition of SBIR technologies, they do provide high-level commercialization information that the SBIR programs use to track progress in achieving program goals. Because the data help support program management efforts, the Office of Small Business Programs and the military departments, to varying degrees, take steps to verify the quality of CCR and FPDS-NG data. For example, the Army assesses and validates CCR data for its projects on an ongoing basis. This process involves comparing recent updates to the database with FPDS-NG contract data and internal Army tracking data to confirm the accuracy of commercialization funding reported by the small businesses.
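The kind of cross-validation the Army performs can be illustrated with a minimal sketch. The record layouts, field names, and matching rule below are hypothetical assumptions made for illustration; they are not the actual CCR or FPDS-NG schemas.

```python
# Hypothetical reconciliation of self-reported commercialization data (a CCR-like
# source) against contract records (an FPDS-NG-like source). All field names,
# topic numbers, and firms here are illustrative, not real program data.

ccr_reports = [
    {"topic": "AF-2012-001", "firm": "Acme Sensors", "reported_phase3": 1_200_000},
    {"topic": "AF-2012-002", "firm": "Beta Optics", "reported_phase3": 500_000},
]

fpds_contracts = [
    {"topic": "AF-2012-001", "firm": "Acme Sensors", "obligated": 1_200_000},
    # No matching contract record exists for Beta Optics' reported amount.
]

def reconcile(reports, contracts):
    """Flag self-reported entries whose reported Phase III funding has no
    matching contract record with the same obligated amount."""
    index = {(c["topic"], c["firm"]): c["obligated"] for c in contracts}
    discrepancies = []
    for r in reports:
        obligated = index.get((r["topic"], r["firm"]))
        if obligated != r["reported_phase3"]:
            discrepancies.append(r)
    return discrepancies

flagged = reconcile(ccr_reports, fpds_contracts)
print([r["firm"] for r in flagged])  # → ['Beta Optics']
```

In practice, the validation the Army describes also draws on internal tracking data and manual follow-up; the sketch shows only the basic idea of flagging self-reported amounts that lack a matching contract record.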
The Navy SBIR program uses FPDS-NG as its primary source of commercialization data and employs similar validation techniques to improve the accuracy of commercialization data tracked through this system. By comparing contracts in FPDS-NG flagged as SBIR-related to DOD contract management systems, the Navy is able to verify the accuracy of Phase III awards data tied to government contracts. Both Army and Navy officials acknowledged, however, that even with their data validation efforts, problems persist because of the limitations of the Company Commercialization Reports and FPDS-NG. The military department programs have developed some internal capabilities to track certain projects and provide insight into the types of capabilities those projects enable. Like the Company Commercialization Reports and FPDS-NG, these internal efforts do not provide comprehensive transition information, but they may help the departments gain more insight into transition outcomes for some technologies developed within SBIR programs and respond to DOD and congressional inquiries about program results. In particular, the programs identify transition success stories for a limited number of projects, ranging from Phase III awards for additional research and development to transition to major acquisition programs or fielded systems. Information on these success stories can come from SBIR program officials, acquisition program officials, prime contractors, or directly from the small businesses. The Air Force’s database of identified transition successes includes 95 transition stories dating back to 2004. The Army’s program produces an annual report describing transition outcomes for 20 to 30 successful projects. The Navy’s program maintains a searchable database of SBIR projects that includes profiles on select transitioned projects as well. Table 4 provides examples of transition outcomes for projects identified through our review of these reporting mechanisms.
SBIR program officials within the military departments emphasized that, in addition to their broader program efforts to identify transition outcomes, some acquisition organizations have implemented their own practices to track transition. For example, the Navy Program Executive Office for Submarines tracks the transition of SBIR technologies to its acquisition programs by managing a list of companies, the value of contract awards, the specific program office associated with each contract award, and the SBIR technology associated with the award. The office indicated that 20 active Phase III awards associated with its acquisition program efforts are being tracked. The National Defense Authorization Act for Fiscal Year 2012 mandated that DOD report new transition-related information to the Administrator of the Small Business Administration who will report this information annually to designated congressional committees. This reporting will include information on the number and percentage of Phase II projects that transition into acquisition programs or to fielded systems, the efficacy of steps taken by DOD to increase the number of transitioned projects, and additional information specific to the transition of projects funded through Commercialization Readiness Programs. In order to provide more complete and accurate transition data to support the new reporting requirements, DOD recognizes it may need to modify its existing data systems or develop new tools to better capture the transition results for SBIR projects. According to the Office of Small Business Programs, DOD’s response to the new reporting requirements is still being evaluated, in part because there are several challenges to compiling complete and accurate technology transition data. One such challenge we found was variation across the military departments in their definitions of technology transition. 
Specifically, transition definitions ranged from any commercialization dollars applied to a project to only those cases in which a technology is actually incorporated into a weapon system or in direct use by the warfighter. The Office of Small Business Programs acknowledged that a standard DOD SBIR definition of technology transition must be established before the congressionally required reporting begins. Standards for internal control state that management should establish procedures to ensure that it is able to achieve its objectives, such as being able to compile and report consistent, complete, and accurate data. Additionally, according to SBIR officials, tracking transition outcomes can be challenging because the sometimes lengthy period between SBIR project completion and transition to a DOD user can obscure a project’s SBIR linkages. Time lags can occur because of delays in transition funding availability, additional development or testing needs before transition, or schedule delays encountered by intended users. During the time between project completion and transition, personnel associated with projects may change and technologies may evolve. This increases the likelihood that transitions associated with SBIR technologies go unacknowledged. SBIR officials within the military departments also stated that limited resources for administrative activities constrain their ability to effectively follow up on the transition outcomes for completed projects. Although the Office of Small Business Programs acknowledges the limitations of CCR data, the initial plan is to use this data source—viewed by DOD as the best available—as the primary means for beginning to address the new transition reporting requirements. Additionally, in an effort to improve DOD’s future technology transition reporting and its understanding of transition results in general, the Office of Small Business Programs has initiated an assessment of different options for enhancing transition data.
For example, as part of this assessment, DOD is examining whether CCR could be modified to improve reporting. Additionally, existing DOD reporting mechanisms, such as Selected Acquisition Reports—annually required for major defense acquisition programs—are being considered as potential vehicles for supporting SBIR technology transition reporting. Opportunities to build more SBIR awareness directly into acquisition activities are being considered as well, such as including provisions in acquisition strategy documents or formal program reviews. According to the Office of Small Business Programs, DOD intends to issue a policy directive in fiscal year 2014 that will provide guidance for implementing overall SBIR program requirements. However, SBIR officials indicated that addressing technology transition reporting requirements is viewed as a longer-term effort because of the challenges we have discussed, and no specific plan including a time line has been established for when DOD will be able to support those requirements. Without a plan that establishes a time line, it is unclear how and when DOD will begin to provide the technology transition information expected by Congress. Although Congress did not specify when reporting was to begin, it expects DOD to report new transition-related information to the Administrator of the Small Business Administration to meet the National Defense Authorization Act for Fiscal Year 2012 requirement. However, as stated above, DOD expects this to be a longer-term effort and designated congressional committees may not be aware of when DOD will likely have developed the capability to provide comprehensive and accurate data. Further, unless DOD communicates its plan and accompanying time line, these committees may be unaware that the transition-related information DOD plans to provide in the near-term to address the National Defense Authorization Act for Fiscal Year 2012 requirements has data quality issues. 
Standards for internal control emphasize the need for federal agencies to establish plans to help ensure goals and objectives can be met, including compliance with applicable laws and regulations. Further, communicating internal control efforts on a timely basis to external stakeholders, such as congressional committees, helps ensure that effective oversight can take place. The SBIR program efforts within DOD provide opportunities for small businesses to develop new technologies that may improve current U.S. military capabilities and provide innovative solutions to address future needs of the warfighter. However, information on technology transition outcomes for SBIR projects is limited. Consequently, DOD cannot identify the extent to which the program is supporting military users. The Office of Small Business Programs is taking steps to respond to new technology transition reporting requirements, but has not yet determined how and when it will more completely and reliably track and report on the extent of transition for SBIR technologies. While initial reporting efforts are expected to use existing data systems, such as CCR, DOD will need to overcome the inherent limitations of data collected through those systems if it expects to provide a comprehensive picture of transition outcomes.

To improve tracking and reporting of technology transition outcomes for SBIR projects, we recommend that the Secretary of Defense direct the Office of Small Business Programs to take the following three actions:

1. Establish a common definition of technology transition for all SBIR projects to support annual reporting requirements;

2. Develop a plan to meet new technology transition reporting requirements that will improve the completeness, quality, and reliability of SBIR transition data; and

3. Report to Congress on the department’s plan for meeting the new SBIR reporting requirements set forth in the program’s fiscal year 2012 reauthorization, including the specific steps for improving the technology transition data.

We provided a copy of a draft of this report to DOD for review and comment. Written comments from the department are included in appendix II of this report. DOD partially concurred with our recommendations. In its response, DOD stated that it has established a working group that is coordinating with all stakeholders to develop a common definition of technology transition for all SBIR projects. DOD also agreed that it is important to improve the completeness, quality, and reliability of SBIR transition data, but noted that it has significant concerns related to the difficulty in actually capturing the data. The department indicated that the full scope of data collection challenges and associated resource needs is unknown at this time. While we recognize there are challenges to improving transition data, we believe there are avenues already available that DOD could pursue without extensive resource commitments. For example, DOD’s SBIR program could work more closely with its acquisition community to track transition outcomes. As outlined in this report, some acquisition organizations have developed their own practices to track transition outcomes, which the program may be able to leverage for use on a broader scale. In addition, DOD could consider greater use of contracting provisions to require contractors to report on SBIR project activities, or use existing program reporting mechanisms, such as Selected Acquisition Reports, to capture additional transition information.
We believe that collection of better data is not only needed to support the congressional reporting requirements, but also to help DOD assess the efficacy of existing transition efforts and the benefits the program yields for the warfighter. DOD stated it will continue with initiatives that seek to improve the collection of SBIR technology transition data. However, it did not specify if or when it intends to develop a plan for meeting the transition reporting requirements. We continue to believe a plan that includes a time line for when DOD will begin to support reporting requirements should be provided to the designated congressional committees in the near term to make clear the limitations of reported transition data and the department’s approach to improving the data over time. We are sending copies of this report to appropriate congressional committees and the Secretary of Defense. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To identify what processes are used by the Department of Defense (DOD) to facilitate transition for Small Business Innovation Research (SBIR) technologies, we reviewed prior reports by GAO, DOD, and other organizations, such as the National Research Council and the RAND Corporation, as well as DOD policies, procedures, and funding information. Using this information, we scoped our work to focus on the SBIR activities conducted by the Air Force, Army, and Navy. These three organizations typically receive about three-fourths of the annual SBIR funding that supports the 13 participating DOD organizations.
With the military department SBIR programs as our focus, we interviewed DOD officials from the Office of Small Business Programs and the SBIR program offices at the Air Force, Army, and Navy on practices and tools used to facilitate technology transition. In addition, we interviewed and collected documentation from DOD officials within the acquisition community concerning their use of and interactions with the SBIR program. Specifically, we interacted with officials at the Air Force Life Cycle Management Center and Air Force Research Laboratory; the Army Aviation and Missile Research, Development, and Engineering Center; the Naval Sea Systems Command; and the F-35 Joint Program Office. This included interviewing SBIR program management and transition facilitation personnel at each location, as applicable. Similarly, to assess the extent to which SBIR technologies are transitioning to DOD users, we met with officials in the Office of Small Business Programs, military department SBIR program offices, and the aforementioned military acquisition organizations to discuss what data are available to measure transition of SBIR technologies to acquisition programs, or directly to warfighters in the field. We determined that DOD uses two primary data systems—Company Commercialization Reports and the Federal Procurement Data System-Next Generation. We discussed with DOD officials what data are collected by these systems, how the data are validated and used, and whether there are limitations to the data collected. We also reviewed available documentation on the systems. In assessing data limitations, we discussed with SBIR officials whether the systems provide accurate, reliable, and comprehensive data on SBIR projects that transition to military users. In addition, we interviewed military department officials about other data collection practices they may have implemented to track SBIR projects and results.
Any limitations that were identified for the data collection practices and data systems used to identify technology transition outcomes for SBIR projects are discussed in this report. We conducted this performance audit from April 2013 to December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, John Oppenheim, Assistant Director; Danielle Greene; Victoria Klepacz; Sean Merrill; Scott Purdy; and Sylvia Schatz also made key contributions to the report.

To compete in the global economy, the United States relies heavily on innovation through research and development. The Small Business Innovation Development Act of 1982 initiated SBIR programs across federal agencies in an effort to stimulate innovation through small businesses. DOD spends over $1 billion annually to support SBIR awards. The Conference Report accompanying the National Defense Authorization Act for Fiscal Year 2013 mandated that GAO assess the transition of technologies developed through the DOD SBIR program. This report examines (1) practices the military department SBIR programs use to facilitate the transition of SBIR technologies, and (2) the extent to which SBIR technologies are transitioning to DOD users, including major weapon system acquisition programs. GAO reviewed SBIR program documentation and data. GAO also interviewed officials from DOD's Office of Small Business Programs and the military departments to determine the practices used to facilitate technology transition and assess SBIR transition outcome data.
The Small Business Innovation Research (SBIR) programs within the military departments use a variety of practices and tools to facilitate technology transition--the act of passing technologies developed in the science and technology environment on to users such as weapon system acquisition programs or warfighters in the field. GAO identified some common transition practices and tools across SBIR programs. For example, specific initiatives, such as the Commercialization Readiness Program, are used by each SBIR program and focus resources on enhancing technology transition opportunities. Transition facilitators are also used by each program to provide a network of personnel who manage SBIR activities that support technology transition. GAO also found some different practices and tools used to support technology transition efforts, such as the Navy Transition Assistance Program, which provides consulting services and helps showcase SBIR projects in an effort to improve small businesses' abilities to transition their projects. Transition facilitation efforts are supported by administrative funds provided through each program's SBIR budget and from other funds received from their respective military department. A recent increase in the amount of administrative funding that can come from SBIR budgets is expected to help the programs enhance their transition facilitation efforts. GAO was unable to assess the extent of technology transition associated with the military department SBIR programs because comprehensive and reliable technology transition data for SBIR projects are not collected. Transition data systems used by DOD provide some transition information but have significant gaps in coverage and data reliability concerns. The military departments have additional measures through which they have identified a number of successful technology transitions, but these efforts capture a limited amount of transition results. 
SBIR transition reporting requirements recently established by Congress have led DOD to evaluate its options for providing transition data. GAO identified several challenges to attaining complete and accurate technology transition data. For instance, the lack of a common definition for technology transition across SBIR programs could cause reporting inconsistencies. Additionally, tracking transition can be challenging because of the sometimes lengthy period between SBIR project completion and transition to a DOD user. DOD initially plans to use transition data from Company Commercialization Reports--viewed by DOD as the best available source--to meet the new transition reporting requirements. However, SBIR officials indicated that addressing transition reporting requirements is a longer-term effort, and there is no specific plan including a time line for when DOD will be able to support those requirements. Without a plan that establishes a time line, it is unclear how and when DOD will begin to provide the technology transition information expected by Congress. Although Congress did not specify when reporting was to begin, it expects DOD to report new transition-related information to the Administrator of the Small Business Administration to meet the new reporting requirements. However, unless DOD communicates its plan and accompanying time line, the congressional committees to whom the Small Business Administration reports may be unaware of the data quality issues with the transition-related information DOD plans to use to support reporting in the near term. GAO recommends that DOD establish a common definition of technology transition for SBIR projects, develop a plan to track transition that will improve the completeness, quality, and reliability of transition data, and report to Congress its plan for meeting new SBIR technology transition reporting requirements. 
DOD partially concurred with these recommendations, but cited challenges to improving transition data. GAO believes options are available to address the challenges.
As the organization charged with responsibility for overseeing U.S. securities markets at the federal level, SEC’s mission is to protect investors and ensure fair and orderly markets. Within SEC, the Division of Enforcement is responsible for investigating possible violations of the securities laws, litigating against violators in federal civil courts and administrative proceedings, and negotiating settlements. When an investigation reveals a possible violation, SEC can seek a range of sanctions and remedies, including disgorgement. When seeking disgorgement, SEC staff attempt to recover the amount of illegal profits or misappropriated funds as a way of ensuring that securities law violators do not profit from their illegal activities. When possible, SEC also attempts to return these funds to any investors harmed as a result of the violation. When it is not economically practical or efficient to locate and notify investors, the collected amounts are transferred into the general fund of the U.S. Treasury. Disgorgement sanctions are imposed against violators involved in activities such as insider trading, investment adviser fraud, market manipulation, and fraudulent financial reporting. Until 1990, SEC could obtain a disgorgement sanction only by obtaining a court order from a civil suit filed in federal district court. However, in 1990, Congress gave SEC the authority to impose disgorgement sanctions in its administrative proceedings through the Securities Enforcement Remedies and Penny Stock Reform Act of 1990. The majority of disgorgement orders result from suits filed in federal court. The amount to be disgorged in civil and administrative proceedings is based on the amount of the illegal gain, but SEC has discretion to waive all or part of a disgorgement claim. Waivers are granted based on a violator’s inability to pay and are typically granted in settled matters. 
If SEC believes that a violator is able to pay but refuses to make payments, it can take actions to compel the violator to pay, such as requesting that the court hold the violator in contempt for failure to pay. In addition, SEC may request that the court appoint a receiver, generally a private sector lawyer, to perform certain tasks, such as obtaining and managing a violator’s assets and overseeing the distribution of funds to harmed investors. Receivers are paid out of the funds collected to pay the disgorgement order. After SEC exhausts all practical collection actions, the agency is required to transfer its uncollected debt to the Treasury Department’s Financial Management Service for final collection efforts. As we have stated in two recent reports, SEC faces several challenges in fulfilling its mission. U.S. securities markets have grown tremendously and become more complex and international, increasing the volume and complexity of SEC’s workload. SEC’s staff resources have not increased at a similar rate. For example, between 1991 and 2000, Division of Enforcement staff devoted to investigations increased 16 percent, from 414 to 482 staff years, while the number of cases opened increased 65 percent, from 338 to 558. In addition, the number of cases pending at the end of the year increased 77 percent, from 1,264 in 1991 to 2,240 in 2000. As a result, SEC has been forced to become selective in its enforcement activities and has experienced an increase in the time required to complete certain enforcement investigations. In addition, SEC has been experiencing a staffing crisis that has left it with a large number of less experienced staff. For example, from 1998 to 2000 over 1,000 employees, or about one-third of all staff, left SEC, and in 2000 its overall turnover rate averaged 15 percent—more than twice the rate for comparable positions governmentwide. 
SEC’s Division of Enforcement has, likewise, had substantial turnover, with 89 professional staff leaving the division during 2000 and 2001, which was about 16 percent of the 553 staff in the division in 2001. SEC’s current collection rate is limited as a measure of the effectiveness of SEC’s collection program for several reasons. First, although its collection rate appeared to decline from prior periods, we found that SEC’s varying success in collecting large individual disgorgement orders caused the rate to differ significantly over time. Second, we found that weaknesses in the processes SEC staff used to enter and update the Disgorgement Payment Tracking System (DPTS), which tracks SEC’s disgorgement collections, have created errors that prevented us from determining the actual disgorgement collection rate. Finally, the collection rate is less useful to measure SEC’s program because factors beyond SEC’s control reduce the likelihood that the agency will be able to collect all disgorgement ordered. As recommended in our 1994 report, SEC now collects aggregate data on the amount of disgorgement ordered, waived, and collected. Our analysis of SEC data found that, as of November 2001, SEC apparently had collected approximately $424 million, or 14 percent, of the $3.1 billion in disgorgement that was ordered from 1995 through 2001 (fig. 1). However, our analysis also found that SEC’s collection rate varied over time and was heavily influenced by large individual disgorgement orders. According to SEC data, the disgorgement collection rate varied greatly from year to year. SEC data show that between 1990 and 1999 SEC was able to collect between 2 and 84 percent of the disgorgement amounts owed, not including amounts waived (fig. 2). An analysis of these collection rates shows that SEC’s success in collecting large individual disgorgement orders can greatly influence the collection rate. 
For example, figure 2 shows that in 1990 SEC collected approximately 75 percent of the disgorgement ordered (not including waived amounts) in the 2 years after the orders were issued but only 17 percent of the disgorgement ordered in 1991. However, our analysis found that approximately $400 million of the $427 million collected on disgorgement ordered in 1990 came from a single payment made by one violator. Excluding this case, the reported collection rate for 1990 would have been approximately 15 percent. Similarly, SEC’s reported collection rate of 84 percent for 1994 included a disgorgement order of $939 million for a single violator, the majority of which was collected in the 2 years following the order. Excluding this single case, the collection rate for 1994 would have been 23 percent. Similarly, comparing the overall 14 percent collection rate for 1995 to 2001 to the rate we reported in 1994 also is not meaningful as a result of the impact of these large cases. In our 1994 report, we calculated that SEC had collected 50 percent of the $2 billion of disgorgement ordered from 1987 to April 1994. However, if the $400 million 1990 case cited above is excluded from the collection rate for 1987 to 1994, the rate for that period would have been about 38 percent. As a result of the impact that just a few cases can have on SEC’s collection rate, using this rate as a measure of changes in the overall effectiveness of SEC’s collection efforts can be misleading. Another reason that we were unable to use SEC’s reported collection rate as a measure of SEC’s collection efforts is that the data used to calculate that rate are unreliable. According to standards issued by GAO, appropriate internal controls are necessary to ensure that data are accurate and complete. In addition, data about events should be promptly recorded so that they maintain their relevance and value to management. 
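The sensitivity described above can be checked with simple arithmetic. A minimal sketch using the approximate 1990 figures cited in this report (the roughly $569 million in total orders is inferred from $427 million collected being about 75 percent of the total; these are illustrative round numbers, not SEC's methodology):

```python
# Illustrative check of how one large order skews a yearly collection rate,
# using the approximate 1990 figures cited in this report (in $ millions).
def collection_rate(collected, ordered):
    """Percent of disgorgement ordered (net of waivers) that was collected."""
    return 100 * collected / ordered

ordered_1990 = 427 / 0.75   # ~$569M total ordered, inferred from the 75% rate
collected_1990 = 427        # total collected on 1990 orders
big_case = 400              # single payment by one violator

with_case = collection_rate(collected_1990, ordered_1990)
without_case = collection_rate(collected_1990 - big_case,
                               ordered_1990 - big_case)

# Roughly 75% with the case and 16% without it; the report's ~15% reflects
# unrounded underlying figures.
print(f"including the large case: {with_case:.0f}%")
print(f"excluding it:            {without_case:.0f}%")
```

The same exercise applied to the $939 million 1994 order (84 percent falling to 23 percent) shows why a rate dominated by one or two judgments is a weak year-over-year measure.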
However, weaknesses in SEC’s procedures have resulted in unreliable data in its disgorgement database. As part of our review, we selected a sample of 57 cases and compared information from SEC case files and other documents to entries in DPTS. We found that 18 cases, or approximately 32 percent, contained at least one error in the amount ordered, waived, or collected, or in the status of the case or of the individual violators. Overall, for the 57 cases that we reviewed, DPTS data showed that SEC had collected around $25 million, or approximately 4 percent of the $597 million in disgorgement ordered, not including amounts waived. However, after correcting for the inaccuracies we identified in our review, we found that SEC had actually collected around $55 million, or approximately 11 percent of the disgorgement ordered, not including the amounts waived. Because we judgmentally selected the cases we reviewed, this error rate cannot be projected beyond our sample. SEC’s process for entering data on disgorgement orders into DPTS did not ensure the accuracy or completeness of that data. We found that the sources used as a basis for entering data into DPTS did not always provide the most accurate information. SEC staff in the Office of the Secretary who entered the data into DPTS relied heavily on SEC litigation releases that, according to the staff, may not contain all the details of a disgorgement order. The staff also told us that they do not independently verify the information in the litigation releases. Further, the staff told us that the payment dates recorded in DPTS might not be accurate, because staff used the day entry was made as the payment date if no other date was specified. Finally, we found that it was awkward for staff to accurately record information for individual violators when disgorgement orders were issued to multiple violators. In these instances, payments made by one violator may subsequently reduce the amount all the violators owe. 
However, the DPTS system does not provide a way to easily enter and track the amounts owed under joint and several liability cases. Instead, staff input the total amount of the disgorgement judgment under one violator and enter a $0 balance for the others, with a notation indicating that each violator is jointly and severally liable with other violators. Payments are recorded under the name of the selected violator, not necessarily the violator making the payment, and a note is made in the system as to which violator had paid. SEC staff said that they entered the data in this way to avoid overstating the amount of disgorgement ordered and paid. SEC officials told us that they will revise their procedures for entering information in cases with joint and several liability in order to more clearly present information related to individual violators. Of the cases we reviewed, five contained errors that appeared to result from the use of incomplete or inaccurate information as a source of data for DPTS. For instance, in one case with a disgorgement order of over $300,000, the entire disgorgement amount was waived at the same time the disgorgement was ordered in October 1997. But as of November 2001, DPTS did not show that any amount had been waived. In another case involving 10 violators, the attorney responsible for the case told us that 9 of the violators were jointly and severally liable for a disgorgement order of around $800,000. However, several litigation releases contained information on some, but not all, of the violators. As a result, a disgorgement order amount was recorded for one violator from each litigation release, resulting in an overstatement of the total amount of disgorgement ordered by approximately $1.6 million, or about 200 percent. We also found that SEC’s process for updating the information in DPTS may result in the information not being current. 
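One way to record such orders without either double-counting the judgment or resorting to the $0-balance workaround is to store the order amount once and link every co-liable violator to it. A minimal illustrative data model (hypothetical, not a description of DPTS or of SEC's planned revision):

```python
# Minimal sketch of tracking a joint-and-several disgorgement order without
# double-counting: the judgment is stored once, each payment is attributed to
# the violator who actually paid, and all co-liable violators share the balance.
from dataclasses import dataclass, field

@dataclass
class JointOrder:
    amount_ordered: float                      # total judgment, stored once
    violators: list[str]                       # all jointly and severally liable
    payments: list[tuple[str, float]] = field(default_factory=list)

    def record_payment(self, violator: str, amount: float) -> None:
        assert violator in self.violators, "payer must be on the order"
        self.payments.append((violator, amount))

    def balance(self) -> float:
        """Amount still owed -- shared by every violator on the order."""
        return self.amount_ordered - sum(amt for _, amt in self.payments)

order = JointOrder(800_000, ["A", "B", "C"])
order.record_payment("B", 300_000)
print(order.balance())  # → 500000
```

Because the judgment appears once, summing `amount_ordered` across orders cannot inflate the total ordered, and each payment retains the identity of the paying violator.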
SEC’s Office of the Secretary sends out a report with the details of each case three times a year and asks that responsible SEC staff correct any inaccuracies and update the information. However, the staff who send out the report said that they have no assurance that each office has carefully reviewed the report and noted that some offices have not been timely in returning their reports. Thus, the time lag in entering information into DPTS can be about 4 or 5 months, and the information in DPTS may not be current. Of the cases we reviewed, 14 contained errors that appeared to be caused by information not being updated in a timely manner. For example, in one case with a disgorgement order of around $18 million, court documents showed that as of late 1999, over $3 million had been collected and distributed to investors. However, as of November 2001, DPTS did not show that any money had been collected. In another case, a disgorgement order of over $5 million was discharged as part of bankruptcy proceedings in 1998, but this fact was not recorded in DPTS until at least October 2001. Without reliable data that are accurate and up to date, SEC management is limited in its ability to assess its collection program—for instance, in its efforts to determine the reasonableness of the amount of disgorgement waived or collected in individual cases or the overall effectiveness of its collection program. In addition, SEC cannot provide Congress with accurate statistics related to its disgorgement collection activities. Another limitation in the adequacy of the collection rate as a measure of the effectiveness of SEC’s disgorgement collection efforts is that factors beyond SEC’s control limit its ability to collect the full amount of disgorgement ordered in some cases. Disgorgement orders are based on all the funds obtained through violations and do not take into account the violators’ ability to pay. 
That is, the amount of disgorgement ordered represents the amount of illegal profits or misappropriated funds rather than the amount the violator might be able to pay. For example, in one case we reviewed, SEC obtained a disgorgement order for around $670,000, even though at the time of the order SEC knew the violator did not have any assets. SEC did not collect any money from the violator. According to SEC officials, although SEC may not collect the entire amount of disgorgement ordered in such cases, disgorgement can be a deterrent to future violations and limit the violator’s ability to raise funds to engage in new frauds. This contrasts with the way SEC seeks fines against violators of securities laws. When seeking fines, SEC can take into account a violator’s ability to pay or other factors such as the severity of the violation and the degree to which the violator cooperates with SEC. For example, the court can state that a fine is merited but not levy any amount based on the violator’s lack of ability to pay. According to SEC officials, the fact that fines are assessed this way is one reason why SEC’s collection rate is significantly higher for fines than for disgorgement; in a recent report, GAO calculated the collection rate for fines at approximately 91 percent. SEC officials also said that they are more successful in collecting fines than disgorgements for at least two other reasons. First, disgorgement orders are often much higher than fines, and the larger amounts are more difficult to collect. Second, many violators fined by SEC are current members of the securities industry and are motivated to pay their fines in order to maintain their reputation within the industry. But many of the violators who are ordered to pay large disgorgement orders are either not members of the securities industry or have no desire to remain so. Securities law violators can lack the ability to pay for a variety of reasons. 
In many cases, for instance, violators have few or no assets left and may have used the proceeds of their illegal activity on nonrecoverable expenses. For example, in 21 of 37 cases we reviewed in which violators did not pay all the disgorgement ordered, SEC staff said that disgorgement was not collected because the violators had already spent the money on personal or business expenses that SEC could not recover. In one case we reviewed, the violator had spent $175,000 on custom-made furniture, which the case’s court-appointed receiver was able to sell for only about 10 percent of its original cost. In addition, disgorgement orders may be obtained against defunct companies. In two of the cases we reviewed, disgorgement orders were obtained against shell companies, one for $1.6 million and one for $1.5 million. In each case, SEC staff knew that the company was defunct and most likely did not have any assets but obtained the disgorgement order to prevent the company from becoming involved in future fraudulent activities. Further, a violator’s assets may already have been used to pay other judgments, leaving little for SEC to collect. In one case involving a disgorgement judgment of $147 million, all the violator’s assets—around $40 million—were used to pay investors through private class action claims in a Securities Investor Protection Act case and a Chapter 11 bankruptcy case. Another reason that violators can lack the ability to pay is that they have little earning capacity. In some cases, violators may be unable to satisfy their disgorgement debt because they declare bankruptcy or are incarcerated. For example, for the period 1995 through 2000 at least 5 of the 10 violators with the largest disgorgement orders were incarcerated because of their fraudulent activities. Also, violators may be defunct companies with no prospects for future income. 
For example, the two shell companies in the example noted above were defunct at the time of the disgorgement judgment and had no prospects for future operations or income. In other cases, violators may have been banned from further participation in the securities industry, depriving them of their source of income. According to SEC staff, in many cases in which a violator has been ordered to pay disgorgement, SEC also bars the violator from working in the securities industry. Although disgorgement is intended to help deter fraud by forcing violators of the securities laws to return illegal profits, we found weaknesses in SEC’s disgorgement collection program. First, SEC lacks clearly defined strategic objectives and measurable goals for its collection program. SEC’s strategic and annual performance plans, prepared under the Government Performance and Results Act (GPRA), do not address the importance of disgorgement collections or provide measures that would help SEC management monitor its staff’s collection efforts. Without such guidance and measures, competing priorities and increasing workload could prevent SEC staff from pursuing collection activities to the degree desired by the agency. Second, SEC lacks the specific policies and procedures that would help maximize collection by ensuring that all appropriate actions are taken to collect disgorgement—for example, the types of collection actions staff should take and the timing of specific actions. Third, SEC does not have systems with accurate or complete information for monitoring whether staff are taking appropriate, prompt collection and distribution actions. Although its staff consider disgorgement collection to be an important means of deterring fraud, SEC had not clearly defined the priority that should be placed on disgorgement collection or established performance measures to monitor collection efforts. 
Currently, SEC Division of Enforcement staff must balance their disgorgement collection efforts with various other priorities and a workload that in recent years has been increasing faster than their resources. SEC has begun some efforts to assess alternative means of reducing the conflicting demands on its staff, such as by contracting out collections or taking other actions, but these assessments have not been completed. Under GPRA, federal agencies are held accountable for achieving program results and are required to clarify their mission, set program goals, and measure their performance in achieving those goals. According to the Office of Management and Budget and GAO guidance related to GPRA, effectively achieving program results requires each agency to create a strategic plan that articulates the agency’s mission and includes long-term goals. To supplement the overall strategic plan, agencies are also required to prepare annual performance plans that specify goals and measures and that describe strategies to achieve results. Such goals and measures help managers determine whether the agency’s programs are achieving desired results. According to SEC’s strategic and annual performance plans, deterring fraud is an important part of protecting investors. SEC officials told us that disgorgement is an effective deterrent because it deprives violators of their illegal profits. However, SEC’s strategic and annual plans do not clarify the priority disgorgement collection should have in relation to SEC’s other goals. In addition, the plans do not establish performance measures for disgorgement collection. According to GPRA, agencies also are to establish performance indicators that can be used to measure or assess the relevant outputs, service levels, and outcomes of each program activity. SEC has not created the measures needed to assess the effectiveness of its disgorgement collection program or its deterrent effect. 
Such measures could include the percentage of disgorgement funds returned to investors, the timeliness of collection actions, or the number of violators ordered to pay disgorgement who go on to commit other violations. Without a well-defined strategy that clearly communicates the role and relevance of disgorgement in relation to SEC’s other goals—and without performance measures that assess the effectiveness of collection activities—the competing priorities and increasing workload faced by SEC staff create the risk that those staff will not be able to pursue collection activities to the level desired by the agency. The staff in SEC’s Division of Enforcement responsible for collecting disgorgement amounts have multiple additional responsibilities. Depending on the office to which they are assigned, they might also investigate potential violations of the securities laws, recommend SEC action when violations are found, prosecute SEC’s civil suits, negotiate settlements, and conduct collection activities for fines SEC levies. SEC staff told us that the agency’s limited resources force them to choose between the competing priorities of collecting disgorgement and taking direct action to stop ongoing fraud, and that they choose to devote more effort to stopping fraud than to collections. Similarly, SEC officials said that if a large, complex case requires SEC’s immediate attention, the agency shifts its resources to focus on that case. In such situations, collection actions on other cases are a secondary priority. As a result, a risk exists that SEC staff will not be able to pursue collection activities to the degree desired by the agency. SEC officials and staff also told us that, in most cases, investors are best served if the agency concentrates more of its resources on stopping ongoing fraud than on collecting disgorgement, because stopping ongoing fraud keeps investors from losing more money. 
Similarly, a former director of the Division of Enforcement stated that SEC’s primary responsibility is investor protection, not collecting all the money from fines and disgorgement. SEC is considering some actions to help address the challenges it faces in ensuring that staff have enough time to collect disgorgement but has yet to finalize any plans. For example, SEC is exploring contracting out a portion of its collection work to private collection agencies. Officials from the National Association of Securities Dealers Regulation, Inc., which began contracting out its collection activities in June 2001, told us that they saw contracting out as a way to help ensure that effective collection actions are taken. Contracting out allows the National Association of Securities Dealers Regulation, Inc. to use its resources to hire litigators and investigators rather than collection attorneys. Using external organizations to conduct collection activities would help alleviate the problem of competing priorities facing SEC staff and allow them to focus primarily on stopping ongoing fraud. As of the time of this report, SEC officials told us that they had spoken with several private collection agencies and were in the process of examining the legal issues involved with delegating some collection responsibilities to these agencies. Another step SEC has considered is increasing the number of staff dedicated to collection activities. In 1999, SEC created a position for an attorney dedicated to collections. This attorney and one paralegal are the only Division of Enforcement staff devoted solely to collection activities. SEC officials told us that they would like to expand the number of staff devoted exclusively to collections but added that they did not feel they could do so because they could not afford to take resources away from other areas. A recent initiative by SEC’s Chairman and commissioners may also affect how staff balance their priorities. 
In November 2001, SEC announced an initiative called real-time enforcement, which is intended to provide quicker and more effective protection for investors and better oversight for the markets with SEC’s limited enforcement resources. To achieve this, SEC intends to take action sooner than it has in the past. For example, the agency plans to obtain emergency relief in federal court to stop illegal conduct more quickly; file enforcement actions more quickly, thereby compelling disclosure of questionable conduct so that the public can make informed investment decisions; and impose swifter and more serious sanctions on those who commit egregious frauds, repeatedly abuse investor trust, or attempt to impede SEC’s investigatory processes. Such prompt enforcement action may help SEC collect a greater amount of disgorgement by preventing violators from spending or hiding their assets. However, SEC officials also told us that such actions require significant staff resources, and may reduce the amount of resources that can be devoted to collection actions in other cases or later on in the same case. SEC’s overall disgorgement collection program lacks clear policies and procedures that specify the actions that staff could take to collect disgorgement. According to federal internal control standards, policies and procedures should be designed to help ensure that management’s directives are carried out. During the period covered by our review, SEC did not have in place such policies and procedures for disgorgement collections. Instead, the lead attorneys on the individual cases determined what actions should be taken, with supervisors reviewing the decisions. Supervisors told us that they met periodically with the lead attorneys to review the collection activities already taken and to determine whether further actions were needed. 
However, SEC management cannot readily determine whether staff take appropriate collection actions in all cases without clear collection procedures outlining which actions should be taken and when. SEC staff can take a wide range of collection actions, depending on the facts and circumstances of the case. For example, they can file a contempt action, seek to obtain liens on a violator’s property, or seek to have a violator’s wages garnished. Our review of the actions taken in individual cases reflected such a range of actions. In some cases, we could not determine what actions had been taken, because staff had left the agency or actions were not documented in the files we reviewed. In these cases, we relied on current staff to tell us what actions had been taken. Although collection actions must be tailored to individual cases, having clear guidance on the actions suited to different developments in a case would assist SEC management in ensuring that sufficient and appropriate efforts are made. This type of consistency is particularly important given SEC’s relatively high staff turnover rate. Collection policies that specify the timing and frequency of actions would also assist SEC management in establishing clear expectations on how the program should be managed. For example, we identified two cases in which certain collection and distribution actions appeared to have been delayed. In one case, the violator made the final payment in April 2000, but as of February 2002 a plan to distribute the assets had not been finalized. SEC staff on the case cited internal disagreement and staff turnover as reasons for the delay. In another case, little action was taken for about 14 months, during which time a new attorney was assigned to the case. The new attorney then unsuccessfully filed for contempt for nonpayment, but another 16 months elapsed with little activity. 
SEC ultimately transferred the case to the Treasury Department’s Financial Management Service without collecting any money. The lack of guidance that specifies when to pursue certain collection actions, and how often, affects staff as well as management, since staff are not held accountable to any clear standards. And SEC management cannot determine whether staff take all collection actions promptly, which increases the risk that staff could miss opportunities to maximize collections. SEC officials agreed that such guidance is needed, and in June 2002 provided us with draft collection guidelines that they plan to implement by the end of July 2002. The draft guidelines detail the types of actions that should be considered and give specific timeframes for their completion. If implemented, the guidelines would address the concerns noted above. At the time of our review, SEC had not finalized a means for ensuring that staff comply with the guidelines, such as a checklist that could be placed in each case file indicating the actions taken, how frequently, and why. At the time of our review, SEC did not have in place a system that would allow management to monitor activities to ensure that all appropriate actions are promptly taken. According to federal internal control standards, internal controls should assure not only that ongoing monitoring is a part of normal operations but also that it assesses the quality of performance over time. In our 1994 report, we recommended that SEC enhance DPTS to include aggregate and individual information on disgorgement cases. SEC’s current system for tracking disgorgement case information does not provide the accurate data SEC managers need to monitor collection efforts and identify cases that require their intervention. We also found that SEC was not using a monitoring system to oversee the distribution of disgorgement collected. 
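A compliance checklist of the kind described above could be little more than a dated log of collection actions checked against guideline deadlines. A hypothetical sketch (the step names and timeframes below are invented for illustration; SEC's draft guidelines were not final at the time of this report):

```python
# Hypothetical case-file checklist: log each collection action with a date and
# reason, then flag any guideline step whose deadline has passed without the
# step being logged. Steps and timeframes are illustrative, not SEC's.
from datetime import date, timedelta

GUIDELINE_STEPS = {            # step -> days after disgorgement order
    "demand letter": 30,
    "asset search": 90,
    "contempt motion": 180,
}

def overdue_steps(order_date, actions_taken, today):
    """Return guideline steps not yet logged whose deadline has passed."""
    done = {action for action, _, _ in actions_taken}
    return [step for step, days in GUIDELINE_STEPS.items()
            if step not in done and today > order_date + timedelta(days=days)]

log = [("demand letter", date(2001, 8, 1), "sent to last known address")]
print(overdue_steps(date(2001, 7, 1), log, date(2002, 2, 1)))
# → ['asset search', 'contempt motion']
```

A printed version of such a log, placed in each case file, would give supervisors a standard record of which actions were taken, how frequently, and why.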
In our 1994 report, we recommended that DPTS include the amounts of disgorgement distributed and the recipients. Currently, information on disgorgement funds available for distribution to investors is maintained in case files that are manually maintained and, therefore, cannot be easily analyzed or aggregated. SEC officials told us that aggregating this information would not help them collect or distribute funds. But because SEC cannot easily aggregate information on the distribution of funds, SEC staff could not tell us how much of the disgorgement collected was paid to investors or to the Treasury. As a result, neither SEC nor we could tell to what extent the disgorgement program was returning funds to harmed investors. Relying on individual SEC staff or their supervisors to monitor distribution efforts is not always adequate. Of the 18 cases we reviewed in which disgorgement had been collected in full, we found two cases in which the disgorgement collected had not been promptly distributed. In one case, the violator’s final disgorgement order payment occurred in July 2001, but as of March 2002, the funds had not been distributed, and SEC staff were still in the process of obtaining bids from potential receiver candidates. The attorney in charge attributed this 9-month delay to his heavy workload and trial responsibilities. In another case, approximately $100,000 collected through criminal restitution was transferred to SEC’s Office of the Comptroller and was to be distributed by the court-appointed receiver. However, SEC staff responsible for the case did not realize that the Office of the Comptroller had received the funds from the criminal restitution action until the case was examined in preparation for our review. As a result, this amount was not included in the final distribution made by the receiver. SEC staff responsible for the case stated that this was an oversight on the part of both the receiver and SEC. 
However, they also noted that the case was unusual in that the judge had required SEC to oversee not only disgorgement funds from SEC’s case but also restitution funds recovered as a result of the criminal case. Without reliable, accessible data, SEC is limited in its ability to monitor whether collection activity is taking place and whether collected funds are promptly distributed. More importantly, without using a system to manage the program, SEC management is unable to assess the extent to which its staff are returning funds to defrauded investors. SEC has improved its process for selecting individuals to recommend as court-appointed receivers. In addition, although SEC is not responsible for supervising receivers, its staff are taking actions to monitor the cases that have receivers. However, SEC still lacks a mechanism for tracking information such as the fees receivers charge and the amounts they collect, limiting management’s ability to ensure that as much money as is reasonably possible is returned to harmed investors. Receivers are used on SEC’s cases to perform tasks such as gathering and liquidating violators’ assets and distributing funds to harmed investors. SEC usually selects a candidate and then recommends the individual for receivership to the court for final approval. According to an SEC official, some courts accept SEC’s recommended receiver, but other courts prefer to appoint a receiver on their own. Because the court appoints the receivers and ultimately defines their duties, receivers are answerable to the judge of the court rather than to SEC. As court-appointed fiduciaries, receivers are subject to the same standards of trust and confidence as other fiduciaries, and need to be selected as impartially as possible. In 1994, we examined whether SEC had procedures and management controls for selecting receivers in response to concerns that former SEC employees were favored in the receiver selection process. 
We reported that SEC had no formal policies or qualifying standards in place to ensure that receivers were selected impartially, and we were unable to determine how many receivers were former SEC employees. As we recommended, SEC implemented guidelines in July 2001 for selecting candidates for receiverships that appear to address the concerns raised in our 1994 report. The guidelines have shifted responsibility for choosing receivers from the SEC attorneys themselves to a committee of higher-level managers. When receivers are needed, SEC must now obtain written proposals from at least three candidates detailing the applicants’ experience, fees, and staffing and operational plans. The candidates’ proposals are then submitted to a three-person committee for final evaluation and selection. The committee is composed of the chief or deputy chief litigation counsel; the investigating or litigating attorney on the case; and an associate director, regional director, or district administrator. SEC has also formalized criteria to use when evaluating candidates’ proposals. These criteria include costs and the candidate’s reputation, experience in securities regulation, and past service as a receiver on another SEC matter. The guidelines state that the committee should avoid selecting the same person repeatedly for receiverships, so as to prevent the appearance of favoritism. The committee must also justify its selection in writing. The names of receivers selected are entered into a database that can be used to identify receiver candidates on short notice. We reviewed 10 recent receiver recommendations and found that SEC was generally following the guidelines for selecting receivers. In every case we reviewed, the three-person committee had evaluated at least three candidates and documented the reasons for its selection.
In addition, all the cases contained summary information on the candidates’ backgrounds, and nine cases contained fee information from at least two candidates. We found that most of the individuals selected as receivers—7 of the 10 selected—were not former SEC employees. In cases in which SEC had recommended former employees, documentation was provided justifying the nominations. In two such cases, SEC recommended former employees because they had the most experience relevant to the job. In one case, we could not tell whether the candidate was a former SEC employee, but SEC documented the candidate’s extensive relevant experience. SEC assists the court in monitoring receivers, helping to ensure that they adhere to their responsibilities as court-appointed fiduciaries tasked with protecting recovered funds and complying with court orders. In our 1994 report, we found that SEC did not have adequate oversight over receivers, and we could not tell whether SEC staff were adequately reviewing receiver fee applications. We recommended that SEC establish guidelines for monitoring court-appointed receivers. Although SEC still has not established such guidelines, we found that it has taken steps to monitor receivers’ actions. Staff in SEC’s Division of Enforcement told us that they monitor court-appointed receivers by working closely with them and by asking them to consult with SEC before taking any major actions, such as seizing or selling assets. In one case we reviewed, we saw documentation of phone conversations between SEC and the receiver concerning the receiver’s distribution plan and case status. In another case, we saw correspondence from the receiver regarding the progress made and the results of asset dispositions. We also spoke with three receivers who work on SEC cases, and it appeared that SEC was working closely with them to monitor their actions.
One receiver we spoke with said that he regularly interacts with SEC while working on a case in order to avoid disputes about his handling of the case and fees. He added that disputes over how he handles a case could cost his firm time and money that are often not reimbursable under the receivership. Another receiver we spoke with said that while working on an SEC case, he is in frequent communication with the SEC attorney on the case, whom he found to be available, responsive, aggressive, and concerned about the progress of cases. We also found that SEC staff had reviewed the receiver fee applications and obtained additional information needed to assess them. Reviewing the applications serves as an important control for ensuring that as much money as possible is returned to investors, as receivers are compensated for their services from the amounts collected in the case. Although the courts approve receivers’ fee applications, SEC attorneys review the applications beforehand and comment on the reasonableness of the fees. In the absence of guidelines, SEC attorneys use their knowledge of the facts and circumstances of a case to determine whether fees are reasonable. One senior SEC staff member told us that he reviews fee applications by considering the exact tasks the receivers and their staff have performed, assessing the need for specialized staff, and comparing the fees to fees for similar services in the same geographic area. During our review, we saw documentation showing that the attorney in one case had examined the number of staff the receiver hired to complete necessary tasks, assessed the necessity of the tasks, and examined the appropriateness of the receiver’s expenses. In the same case, we also saw documentation showing that the attorney had requested and examined information such as records of hours worked in order to assess the reasonableness of the fees.
In another case, SEC had noted in a motion filed in support of the receiver’s fee application that the receiver apparently was not billing for all the work performed. In a third case, one attorney told us that after monitoring the rising costs of the fee applications, he had taken over some of the receiver’s duties, such as preparing a distribution plan, to minimize the receiver’s expenses and fees. We found that SEC was not using a centralized system to monitor receiver fees. Receivers are compensated for their services from the amounts collected, so when receivers’ fees are high, less money is available for distribution to investors. In our 1994 report, we found that SEC did not track information on receivers, limiting its ability to assess the effectiveness of receivers and to monitor trends in costs. We recommended that SEC collect such information in a centralized management information system. However, SEC staff told us that they do not track this information in DPTS or any other system because it would not help with their collection efforts. Currently, receiver data on the amount recovered, costs and expenditures, and the amount disbursed to investors are accessible only through manually maintained case files. Nevertheless, tracking receiver data through a centralized management information system could improve SEC’s oversight of all cases. Managers would be able to identify specific instances in which receivers’ fees are high or are absorbing a large share of the funds available for distribution and, if appropriate, take prompt action to minimize these costs. While we found no evidence in the cases we reviewed that receiver fees were excessive, we did find that receivers’ fees have sometimes amounted to half or more of the disgorgement funds collected in cases. For example, in one case we reviewed, a receiver appointed to find and liquidate assets received over $285,000 in fees and expenses—approximately half of the total amount collected.
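A centralized tracking system would make this kind of monitoring routine. The sketch below illustrates the idea; the case names and figures are hypothetical, and the flagging threshold would be a management policy choice rather than anything SEC has adopted.

```python
# Sketch: flag cases where receiver fees absorb a large share of the
# disgorgement collected. All data here are illustrative, not SEC's.
FEE_SHARE_THRESHOLD = 0.5  # hypothetical: flag when fees reach half of collections

cases = [
    {"case": "A", "collected": 570_000, "receiver_fees": 285_000},
    {"case": "B", "collected": 1_200_000, "receiver_fees": 90_000},
]

def flag_high_fee_cases(cases, threshold=FEE_SHARE_THRESHOLD):
    """Return the names of cases whose receiver-fee share of the amount
    collected meets or exceeds the threshold."""
    flagged = []
    for c in cases:
        # Guard against cases with no collections to avoid dividing by zero.
        if c["collected"] and c["receiver_fees"] / c["collected"] >= threshold:
            flagged.append(c["case"])
    return flagged
```

With such a report in hand, managers could review the flagged cases individually, since the reasonableness of fees ultimately depends on each case's facts and circumstances.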
In another case, the fees paid to the receiver exceeded the amount returned to harmed investors. This receiver, who negotiated the sale of oil and gas interests, was paid approximately $11.6 million for his services; the investors received around $10 million. Furthermore, if managers had access to such a centralized system, they would not have to rely solely on the case attorneys for information—a factor that is particularly important given SEC’s relatively high turnover rate and resulting loss of experienced staff with knowledge of cases. A report by SEC’s Inspector General found that SEC staff were not making sufficient efforts to verify the financial condition of violators seeking waivers of a disgorgement amount. In response, SEC issued new guidelines on the waiver process, and our review of a sample of recent waiver recommendations found that SEC staff were following these guidelines. According to SEC officials, waivers are a tool SEC can use to more easily reach settlements with violators and thus avoid spending resources on litigation. When violators request a waiver based on an inability to pay, SEC staff are to gather the necessary information to validate this claim and provide the Commission, which must approve any such waivers, with a recommendation to approve or deny the request. The Commission usually approves waivers at the same time it approves the settlement of the enforcement action, prior to the court’s final approval of the disgorgement order. Waivers also must be approved by the court, and in recommending that courts grant waiver requests, SEC must be able to show that it cannot collect the total amount of the court-ordered disgorgement. SEC guidelines also require that waiver recommendations be supported with sworn financial statements and stipulate that depositions and information from third parties can be used for further support.
SEC staff must analyze the financial statements to determine whether the information is accurate and complete. SEC generally does not consider waivers when the violator is a recidivist, when SEC believes the violator has withheld information, or when SEC has spent significant resources obtaining a judgment. A June 2000 audit by SEC’s Inspector General staff found that SEC could improve its process for verifying violators’ claimed inability to pay the entire disgorgement order before recommending that the Commission approve waivers. Specifically, the audit identified two problems. First, staff did not verify that the information violators submitted was complete and accurate. Second, the procedures staff used to ensure that they had identified all of the violators’ assets were inadequate. For example, SEC staff did not make sufficient use of online databases to verify the information contained in financial statements or to identify hidden assets. As a result, SEC staff could not offer sufficient assurance that violators had disclosed all of their assets to SEC. To improve SEC’s ability to provide such assurance, the Inspector General identified best practices that enforcement staff could use in verifying violators’ financial information and recommended that SEC adopt these procedures. In October 2000, SEC implemented guidelines designed to improve the waiver recommendation process. The guidelines require SEC staff to analyze violators’ financial statements by reviewing supporting documentation such as bank statements, tax returns, credit reports, and loan statements. In addition, SEC has contracted with a database provider that performs searches for information such as real property and motor vehicle records.
SEC officials told us that under the guidelines, supervisors now review waiver recommendations made by their staff and that the Chief Counsel’s Office also reviews every waiver recommendation before it is submitted to the commissioners. The officials also told us that, when funds become available, they plan to hire an outside contractor to audit a sample of waiver recommendations in order to ensure that the guidelines are being followed and that the problems identified by SEC’s Inspector General have been addressed. We reviewed a sample of 10 recent waiver recommendations and found that SEC staff were following the revised guidelines. For example, the guidelines describe certain types of situations in which SEC staff should investigate further or request additional information, and we found that SEC staff were taking these actions. In one case we reviewed, the violator owned stock in a company, and enforcement staff on the case requested information on this stock in order to determine its value. In another case, the violator did not initially submit complete information on his financial condition. Enforcement staff questioned him about his sources of income, obtained all relevant loan statements, and verified the value of his personal property, real estate, and business interests. Enforcement staff also were using database searches to obtain information on violators’ assets and financial condition, as the guidelines require. However, not enough time has elapsed since the revised guidelines were put in place to determine their effect on the number or size of waivers recommended by SEC. Depriving securities law violators of their illegally obtained funds can help SEC achieve its mission of protecting investors and maintaining confidence in the fairness and integrity of the U.S. securities markets. 
Although we acknowledge that the collection rate is not likely the best measure for assessing the effectiveness of SEC’s disgorgement collection activities, improving the process for entering and updating the information in DPTS would provide accurate and current information for SEC to use to monitor progress on individual cases. Having such information would also allow SEC’s management to analyze potential trends in the aggregate data to ensure that any changes in the collection rate can be explained. Although SEC officials considered disgorgement to be an important tool for sanctioning securities law violators and deterring additional fraud, we identified weaknesses in various elements of SEC’s disgorgement collection program. Under GPRA, federal agencies are expected to become more performance oriented by setting goals for program performance and measuring progress toward those goals. However, we found that the strategic and annual performance plans that SEC has prepared under GPRA did not specifically address disgorgement collection or establish performance measures to assess the effectiveness of the agency’s disgorgement collection efforts. Because SEC’s Division of Enforcement staff already juggle competing priorities and an expanding workload, the lack of strategic guidance and measures against which to assess performance could result in less collection activity being undertaken than SEC management desires. To reconcile the competing demands on its staff, SEC will have to weigh the importance of other enforcement activities relative to disgorgement collection against the concern that disgorgement may lose its effectiveness as a sanction and deterrent to further fraud if collection activities are not attempted. The agency has begun this process as part of considering various alternative means of collecting disgorgement amounts but has yet to complete its assessment and take action to implement any resulting program changes. 
Similarly, SEC did not have in place specific policies and procedures that would provide staff with guidance on the type, timing, and frequency of collection actions they should consider and help them understand what is expected of them. SEC provided us with draft collection guidelines to be implemented by the end of July 2002 that would address these concerns, but has not yet finalized controls to help management ensure that staff follow the guidelines. Without such guidance and controls, SEC management cannot ensure that sufficient and appropriate collection efforts are being made consistently across all cases. Given SEC’s relatively high staff turnover rate, a tool to quickly determine what actions have been taken and when could help staff who assume responsibility for unfamiliar cases. Finally, SEC management did not have reliable, accessible information it could use to ensure that collection activity is taking place and that collected funds are being distributed promptly. With an accurate and current disgorgement tracking system, SEC managers could identify cases that may require attention, such as cases in which considerable time has passed without any collection activity. Furthermore, without an ability to centrally monitor subsequent distribution activities, SEC cannot assess the extent to which it is returning disgorgement funds to harmed investors. Since our last report, SEC has improved its process for selecting individuals to recommend as receivers, and in the cases we reviewed staff have been taking actions to oversee receivers’ efforts. However, SEC still does not track individual case information on receivers’ fees and expenses in a central management information system, as we recommended in our 1994 report.
Without such a system, SEC managers cannot readily identify cases in which receiver fees have risen to a significant portion of the amount collected and thus could miss the opportunity to take additional actions to ensure that such charges are appropriate and that the maximum amount is returned to harmed investors. SEC has also taken steps to improve its ability to ensure that disgorgement waivers are recommended only when SEC has verified the violator’s inability to pay. Specifically, the agency implemented guidelines designed to provide better assurance that the financial information violators provide is accurate and that all assets have been identified. Based on our review of a sample of recent waiver recommendations, we found that SEC staff were following these guidelines. However, it was too early to determine what effect, if any, these guidelines were having on the number or amount of waivers granted. To improve SEC’s ability to ensure that the disgorgement collection program meets its goal of effectively deterring securities law violations and returning funds to harmed investors, we recommend that the Chairman, SEC, take the following actions:

- Develop appropriate procedures to ensure that information maintained in DPTS is accurate and current.
- Ensure that disgorgement and the collection of disgorgement are addressed in SEC’s strategic and annual performance plans, including the development of appropriate performance measures.
- Expeditiously complete the evaluation of options for addressing the competing priorities and increasing workload faced by SEC’s Division of Enforcement staff, including assessing the feasibility of contracting certain collection functions and increasing the number of staff devoted exclusively to collections, and take steps to implement any recommended actions.
- Ensure the prompt implementation of collection guidelines that specify the various collection actions available, explain when such activities should be considered, and stipulate how frequently they should be performed. In addition, SEC should develop controls to ensure that staff follow these guidelines.
- Ensure that management uses information on the distribution of disgorgement, including the amounts due to and received by investors and the fees paid to receivers, to monitor distributions and the reasonableness of receiver fees.

SEC officials provided written comments on a draft of this report that are reprinted in appendix II. In general, SEC agreed with most of the report’s findings, conclusions, and recommendations. As detailed in the written comments, SEC is taking or planning to take action to implement most of our recommendations. SEC officials also provided technical comments, which we have incorporated as appropriate. In response to our recommendation to monitor the distribution of disgorgement, including fees paid to receivers, SEC officials said that the agency plans to implement a system to monitor when courts enter distribution plans and when receivers distribute funds. However, as stated in its letter, SEC does not believe that aggregating information on distributions of disgorgement and receiver fees would help the agency assess how well it is meeting its goal of deterring fraud and depriving wrongdoers of their ill-gotten gains. SEC noted that the amount distributed to investors is a function of numerous factors that vary from case to case, including the size of the disgorgement award, how much the agency could collect, and the costs of administering the receivership. We agree that aggregate statistics on the amount of disgorgement distributed to investors and the fees paid to receivers may have limitations as measures of SEC’s performance in these areas.
However, in addition to depriving violators of their illegally obtained funds, returning money to harmed investors is an important element of the disgorgement program. Knowing the total amount of funds returned to investors every year would provide SEC with an important means of documenting the impact of its efforts in this area. Reviewing such information over time would also help SEC focus on ensuring that harmed investors receive the maximum reasonable amount of funds. Another focus of our recommendation was to ensure that SEC management had an effective means for monitoring the fees paid to receivers in order to determine whether they are reasonable. While we recognize that receivership fees are within the purview of the court, SEC does have the opportunity to object to those fees if they appear unreasonable. In addition, while we also recognize that the facts and circumstances of each individual case must be considered when making such determinations, a system that allows management to monitor cases across the Division of Enforcement can be a useful tool for identifying cases for further review. We believe that the system SEC plans to implement for monitoring the distribution of disgorgement can also be used for this purpose and would likely require only minimal additional resources. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairman and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs and its Subcommittee on Securities and Investment; the Chairman, House Committee on Energy and Commerce; the Chairman of the House Committee on Financial Services and its Subcommittee on Capital Markets, Insurance, and Government Sponsored Enterprises; and other interested congressional committees.
We also will send copies to the Chairman of SEC and will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me or Cody J. Goebel at (202) 512-8678. Additional GAO contacts and acknowledgements are listed in appendix III. To determine SEC’s collection rate, we obtained and analyzed a copy of SEC’s database, as of November 16, 2001, containing all disgorgement orders ever entered into the database. Using this information, we calculated a single collection rate for all orders issued in fiscal years 1995 through 2001. We also calculated a collection rate for disgorgement ordered in each fiscal year from 1989 to 1999. To ensure that the amount of time for collection was comparable in each of these years, we totaled the amount of disgorgement collected within a 2-year period following the date of each individual order. We also assessed the reliability of the database by comparing data on the amounts and dates of the disgorgement orders, waived amounts, and payment amounts in the database to information in the case files for 57 judgmentally selected disgorgement orders. These 57 cases included a judgmentally selected sample of 35 cases with full, partial, and no payments; 10 cases with waivers; and 2 cases used to pre-test our data collection instrument. We selected these cases based on a printout from DPTS to ensure that we reviewed cases with a variety of characteristics, such as whether collections had been successful and whether a receiver had been appointed. We confined our sample to civil cases with disgorgement ordered from fiscal years 1998 through 2000 from the SEC Chicago, Los Angeles, and Washington, D.C., offices, and we visited these offices to review the files maintained there and to discuss the cases with attorneys who had worked on them, whenever possible.
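The two-year-window calculation described above can be illustrated with a short sketch. The record layout and figures below are hypothetical and do not reflect SEC's actual DPTS schema; the sketch simply shows the mechanics of counting only payments made within 2 years of each order's date.

```python
from datetime import date, timedelta

# Hypothetical order records: fiscal year ordered, amount ordered, order
# date, and a list of (payment date, payment amount). Illustrative only.
orders = [
    {"fy": 1998, "ordered": 500_000, "order_date": date(1998, 3, 1),
     "payments": [(date(1998, 9, 1), 100_000), (date(2001, 1, 5), 50_000)]},
    {"fy": 1998, "ordered": 250_000, "order_date": date(1998, 6, 15),
     "payments": [(date(1999, 2, 1), 250_000)]},
]

TWO_YEARS = timedelta(days=730)

def collection_rate_by_fy(orders):
    """For each fiscal year, divide the disgorgement collected within
    2 years of each order's date by the total amount ordered that year."""
    totals = {}
    for o in orders:
        ordered, collected = totals.setdefault(o["fy"], [0, 0])
        cutoff = o["order_date"] + TWO_YEARS
        within = sum(amt for d, amt in o["payments"] if d <= cutoff)
        totals[o["fy"]] = [ordered + o["ordered"], collected + within]
    return {fy: collected / ordered
            for fy, (ordered, collected) in totals.items()}
```

In this example, the $50,000 payment made in 2001 falls outside the 2-year window for a 1998 order and is excluded, which is what makes rates across different order years comparable.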
Finally, we also reviewed the 10 case files with the largest disgorgement amounts ordered from fiscal years 1995 through 2000 because these represented about 24 percent of the total dollar amount of disgorgement ordered during that period. We compared data on the amounts and dates of the disgorgement orders, waived amounts, and payment amounts. We also interviewed SEC officials knowledgeable about DPTS regarding the purpose of the system, security, data quality controls, and the data entry process. We were unable to determine the extent of the errors in the database because our sample was not representative of all SEC cases. To determine factors that affect SEC’s ability to collect disgorgement, we spoke with officials from SEC, two private collection agencies, and three receivers who had worked on SEC disgorgement cases. In addition, we spoke with officials from other organizations and federal agencies that also conduct collections to learn how the characteristics of SEC’s disgorgement debts may have varied from other types of debts. These organizations and agencies included the Commodity Futures Trading Commission, the Department of Education, the Securities Investor Protection Corporation, and the National Association of Securities Dealers Regulation, Inc. We also used the case files we selected to identify any characteristics that appeared to affect collections and to corroborate the factors described by the officials with whom we spoke. To assess SEC’s disgorgement collection program, we reviewed SEC’s strategic and annual plans, its administrative rules of practice regarding disgorgement payments, rules relating to debt collection, and guidelines on distribution. We also discussed the collection and distribution processes with SEC officials from Washington D.C., the Chicago Midwest Regional Office, and the Los Angeles Pacific Regional Office.
We also reviewed the judgmentally selected case files to examine collection actions taken after the disgorgement order date. In cases in which collections had occurred, we also used the files to determine what distribution activities had taken place. In instances in which we could not determine what collection actions had been taken or the reasons disgorgement went uncollected, we spoke with SEC attorneys familiar with these cases to learn what collection efforts had been made and what had contributed to any inability to collect the owed amounts. The results of our case file review are not representative of all SEC cases. We also spoke to officials at the National Association of Securities Dealers Regulation, Inc. about their experience with contracting out collection activities. To evaluate the changes in SEC’s process for recommending receivers and monitoring their activities, we reviewed documentation on SEC’s policies and procedures for selecting receivers. In addition, we discussed the selection and monitoring activities with officials from the Division of Enforcement, SEC’s Office of the General Counsel, and the Chicago and Los Angeles regional offices. We also spoke with three receivers appointed to SEC disgorgement cases to obtain their views on their role, responsibilities, and relationship with SEC officials. Finally, we reviewed a printout from the agency’s receiver database and examined 10 recent cases in which a receiver was recommended to assess SEC’s compliance with its selection procedures. We also reviewed seven cases in which a receiver had been appointed to determine how SEC monitors receiver activities. We also spoke with SEC attorneys to learn what actions had been taken to monitor receivers and to review receiver fee applications. The results of our case file review are not representative of all SEC disgorgement cases. 
To evaluate the improvements in SEC’s process for recommending the waiving of disgorgement amounts, we reviewed the SEC Inspector General’s January 2001 report and applicable guidelines related to recommending waivers. We discussed the waiver process with officials from SEC’s Division of Enforcement and the SEC Inspector General’s Office. In addition, we conducted a case file review of 10 cases with partial and full waivers and a final judgment ordered in fiscal year 2001. We judgmentally selected from two to four disgorgement cases from each of the SEC Chicago, Los Angeles, and Washington, D.C., offices. The results of our case file review are not representative of all SEC cases. In addition, our office of investigation conducted an asset search on one waiver case to confirm that the defendant had no means to pay and that the waiver was justified. We conducted our work at the SEC Washington, D.C., headquarters, Chicago Midwest Regional Office, and Los Angeles Pacific Regional Office from August 2001 through July 2002 in accordance with generally accepted government auditing standards. In addition to those individuals named above, Patrick Ward, Michele Tong, Anita Zagraniczny, Carl Ramirez, Jerry Sandau, Sindy Udell, and Emily Chalmers made key contributions to this report. The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet.
GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.

Every year investors lose money to individuals and corporations that violate federal securities laws. One mission of the Securities and Exchange Commission (SEC) is to deter such violations and return lost funds to investors. SEC's primary tool is the disgorgement order, which requires violators to give up money obtained through securities law violations. In order for disgorgement to succeed, SEC must have an effective disgorgement collection program. Although the courts have ordered billions of dollars in disgorgement in the last decade, concerns exist about SEC's success in collecting these funds. For several reasons, SEC's disgorgement collection rate is not an adequate measure of the effectiveness of SEC's disgorgement program. First, while SEC data showed a collection rate of 14 percent for the $3.1 billion in disgorgement ordered in 1995-2001--compared with the 50 percent collection rate GAO reported in its 1994 report--GAO found that the rate varied widely from year to year and was influenced by large individual disgorgement orders. Second, the data used to calculate the collection rate were not reliable because of weaknesses in entering and updating information in SEC's disgorgement tracking database.
Third, factors beyond SEC's control, including violators' inability to pay, reduce the likelihood that SEC will be able to collect the full amount of disgorgement ordered. To deprive securities law violators of illegally obtained funds, SEC needs an effective collection program with clearly defined objectives and measurable goals, specific policies and procedures for its staff, and systems to allow management to monitor performance. However, SEC's strategic and annual performance plans do not address disgorgement collection or clarify its priority relative to other activities. SEC has improved its process for recommending receivers and has taken steps to monitor receivers' actions, but lacks a mechanism for tracking receiver fees. In response to concerns noted in a recent internal report, SEC also has improved its waiver recommendation process for disgorgement orders.
For the purposes of this report, we use the term deadline suit to mean a lawsuit in which an individual or entity sues because EPA has allegedly failed to perform any nondiscretionary act or duty by a deadline established in law. A nondiscretionary act or duty is an act or duty required by law. This report examines deadline suits that seek to compel EPA to either (1) issue a statutorily required rule when that rule has a deadline in law or (2) issue a statutorily required rule or make a determination that issuing such a rule is not appropriate or necessary pursuant to the relevant statutory provision, when issuing that rule or making that determination has a deadline in law. For example, a deadline suit may involve a person suing EPA because EPA failed to issue a rule by a date established in statute. Similarly, a person may sue EPA because it missed a recurring deadline to review and revise, as necessary or appropriate, an existing rule. In August 2011, we reported that the number of new environmental litigation cases—of all types—filed against EPA each year from fiscal year 1995 through fiscal year 2010 averaged 155 cases per year. Before filing a deadline suit, a person generally must file a Notice of Intent to Sue (NOI) with EPA. Among other things, a NOI generally must identify the provision(s) of the law that requires EPA to perform an act or duty and include a description of the action taken or not taken by EPA that is claimed to constitute a failure to comply with the provision. Sixty days after filing the NOI, the filer may initiate a deadline suit seeking a court order requiring EPA to complete the statutorily required action. A settlement takes the form of either a settlement agreement or a consent decree. For purposes of this report, the term settlement refers to both settlement agreements and consent decrees. Both are negotiated agreements between EPA and the plaintiff.
A settlement agreement is not subject to court approval but can result in a stay of the lawsuit. If EPA fails to meet the terms of the settlement agreement, then the plaintiff can ask the court to lift the stay in order to proceed with the lawsuit. A consent decree is entered as a court order. If EPA fails to meet the terms of a consent decree, the court can enforce or modify the consent decree, including citing EPA for contempt of court. Unless a more specific statute governs, when EPA or any other federal agency promulgates a rule, whether or not in conjunction with a deadline suit, it generally follows procedures prescribed in the Administrative Procedure Act (APA). Among other things, the APA governs the process by which federal agencies develop and issue regulations. It includes requirements for publishing notices of proposed and final rules in the Federal Register and for providing opportunities for the public to comment on notices of proposed rulemaking. Many rules promulgated under the authority of the Clean Air Act do not follow the procedures prescribed in the APA, but rather follow similar but more specific procedures set forth in the act. GAO identified seven key environmental laws that allow individuals to file a deadline suit to compel EPA to issue a statutorily required rule, or perform a statutorily required review of a rule to determine whether to revise the rule. EPA works with DOJ to consider several factors in determining whether or not to settle the deadline suit and the terms of any settlement. GAO identified seven key environmental laws for which EPA has primary regulatory authority that allow citizens to file a deadline suit. Table 1 lists the seven laws. With the exception of the Emergency Planning and Community Right-to-Know Act (EPCRA), the key environmental laws allow citizens to file deadline suits to compel EPA to perform any act or duty required by the respective law, including issuing any required rules.
For example, the provision in the Clean Air Act states: “[A]ny person may commence a civil action on his own behalf - … against the Administrator where there is alleged a failure of the Administrator to perform any act or duty under this chapter which is not discretionary with the Administrator.” The provision in EPCRA that allows citizens to file deadline suits is different from the other key environmental laws because citizen suits may only be filed to compel certain actions listed in the law. Within EPA, the Office of General Counsel is responsible for handling deadline suits. It works with the appropriate program offices in EPA, such as the Office of Air and Radiation (OAR) or the Office of Water, when negotiating settlements for deadline suits. EPA’s Office of General Counsel also coordinates with DOJ’s Environment and Natural Resources Division. According to EPA and DOJ officials, when a deadline suit is filed, the agencies work together to determine how to respond to the lawsuit, including whether or not to negotiate a settlement with the plaintiff or allow the lawsuit to proceed. In making this decision, EPA and DOJ consider several factors to determine which course of action is in the best interest of the government. According to EPA and DOJ officials, these factors include: (1) the cost of litigation, (2) the likelihood that EPA will win the case if it goes to trial, and (3) whether EPA and DOJ believe they can negotiate a settlement that will provide EPA with sufficient time to complete a final rule if required to do so. EPA and DOJ officials told us that they often choose to settle deadline suits when EPA has failed to fulfill a mandatory duty because it is very unlikely that the government will win the lawsuit. In many such cases, the only dispute is over the appropriate remedy, i.e., fixing a new date by which EPA should act.
Additionally, in such cases, officials may believe that negotiating a settlement is the course of action most likely to create sufficient time for EPA to complete the rulemaking if it is required to issue a rule. EPA and DOJ have an agreement under which both must concur in the settlement of any case in which DOJ represents EPA. See 28 C.F.R. §§ 0.160-0.163. Under a 1986 DOJ policy memorandum (the Meese memo), DOJ generally may not enter into a settlement that converts an otherwise discretionary agency action into a mandatory duty. Thus, in general, this policy restricts DOJ from entering into a settlement if it commits EPA to take an otherwise discretionary action, such as including specific substantive content in the final rule, unless an exception to this restriction is granted by the Deputy Attorney General or Associate Attorney General of the United States. According to EPA and DOJ officials, to their knowledge, EPA has been granted only one exception to the general restriction on creating mandatory duties through settlements—a 2008 settlement in a suit related to water quality criteria for pathogens and pathogen indicators. The Meese memo also provides that DOJ should not enter into a settlement agreement that interferes with the agency’s authority to revise, amend, or promulgate regulations through the procedures set forth in the APA. EPA and DOJ officials stated that they have not agreed, and would not agree, to settlements in a deadline suit that finalize the substantive outcome of the rulemaking or declare the substance of the final rule. The terms of settlements in deadline suits that resulted in EPA issuing major rules in the last 5 years established a schedule to either promulgate a statutorily required rule or to promulgate a statutorily required rule or make a determination that doing so is not appropriate or necessary pursuant to the relevant statutory provision. EPA received public comments on all but one of the draft settlements in these suits. EPA issued 32 major rules from May 31, 2008 through June 1, 2013 (see app. II). According to EPA officials, the agency issued 9 of these rules following settlements in deadline suits.
They were all Clean Air Act rules. The 9 rules stem from seven settlements. Two of the settlements established a schedule to complete 1 or more rules, and five established a schedule to complete 1 or more rules or make a determination that such a rule was not appropriate or necessary in accordance with the relevant statute. Some of the schedules included interim deadlines for conducting rulemaking tasks, such as publishing a notice of proposed rulemaking in the Federal Register. Appendix III provides information on the schedules contained in each settlement. In addition to schedules, the seven settlements also included, among other things, provisions that allowed deadlines to be modified (including the deadline to issue the final rule) and specified that nothing in the settlement can be construed to limit or modify any discretion accorded EPA by the Clean Air Act or by general principles of administrative law. Consistent with DOJ’s 1986 Meese memorandum, none of the settlements we reviewed included terms that required EPA to take an otherwise discretionary action or prescribed a specific substantive outcome of the final rule. The seven settlements, committing EPA to issue the 9 statutorily required rules, were finalized between about 10 months and more than 23 years after the applicable statutory deadlines. For each of the 9 rules, figure 1 shows the date the regulation was due, the date the settlement was filed with the court, and the date the final rule was published in the Federal Register. The Clean Air Act requires EPA, at least 30 days before a settlement under the act is final or filed with the court, to publish a notice in the Federal Register intended to afford persons not named as parties or intervenors to the matter or action a reasonable opportunity to comment in writing.
EPA or DOJ, as appropriate, must then review the comments and may withdraw or withhold consent to the proposed settlement if the comments disclose facts or considerations that indicate consent to the settlement is inappropriate, improper, inadequate, or inconsistent with Clean Air Act requirements. The other six key environmental laws with provisions that allow citizens to file deadline suits do not have a notice and comment requirement for proposed settlements. According to an EPA official, with the exception of the agency’s pesticide program, EPA generally does not ask for public comments on defensive settlements if the agency is not required to do so by statute. The 9 major rules EPA issued from May 31, 2008 to June 1, 2013 following seven settlements in deadline suits were Clean Air Act rules. For each settlement, EPA published a notice in the Federal Register providing the public the opportunity to comment on a draft of the settlement. EPA received between one and 19 public comments on six of the draft settlements. No comments were received on one of the draft settlements. Based on EPA summaries of the comments, the comments concerned the reasonableness of the deadlines contained in the settlements or supported or objected to the settlements. For example, some comments supported the deadlines or asserted that the deadlines should be accelerated, while other comments stated that EPA would have difficulty meeting the deadlines. EPA determined that none of the comments on any of the draft settlements disclosed facts or considerations that indicated that consent to the settlement in question would be inappropriate, improper, inadequate, or inconsistent with the act. Table 2 shows the number of public comments EPA received on each draft settlement.
According to EPA officials, settlements in deadline suits primarily affect a single office within EPA—the Office of Air Quality Planning and Standards (OAQPS)—because most deadline suits are based on provisions of the Clean Air Act for which that office is responsible. According to EPA’s Office of General Counsel, provisions in the Clean Air Act that authorize the National Ambient Air Quality Standards (NAAQS) program and Air Toxics program account for most deadline suits. These provisions have recurring deadlines requiring EPA to set standards and to periodically review—and revise as appropriate or necessary—those standards. OAQPS sets these standards through the rulemaking process. For example, the Clean Air Act requires EPA to review and revise as appropriate NAAQS standards every 5 years and to review and revise as necessary technology standards for numerous air toxics generally every 8 years. The effect of settlements in deadline suits on EPA’s rulemaking priorities is limited. OAQPS officials said that deadline suits impact the timing and order in which rules are issued by the NAAQS program and the Air Toxics program, but not which rules are issued. The officials also noted that the impact of deadline suits on the two programs differs because of the different characteristics of the programs. Regarding the NAAQS program, the Clean Air Act requires EPA to review and revise as appropriate the NAAQS standards for six pollutants—called criteria pollutants—at 5-year intervals. NAAQS standards limit the allowable concentrations of criteria pollutants in the ambient air. There is more than one standard for each criteria pollutant. EPA establishes the required standards through the rulemaking process and recently conducted seven NAAQS reviews to review the standards and revise as appropriate. According to an OAQPS official, prior to 2003, EPA did not review NAAQS on a regular cycle. 
Beginning in 2003, EPA faced four deadline suits for failure to complete NAAQS reviews for the six criteria pollutants. EPA settled two of these suits and was subject to a court order regarding the other two suits after it failed to successfully negotiate settlements with the plaintiffs. The settlements and court orders led EPA to perform the statutorily required reviews of the NAAQS standards for the six criteria pollutants and to promulgate seven rules—one for each NAAQS review. The last of these seven rules was promulgated in April 2012. According to officials, the deadline suits addressing the NAAQS standards did not affect which NAAQS standards were reviewed since EPA reviewed all of the standards. According to officials, the deadline suits did affect the timing and order in which EPA conducted the reviews to accommodate the time frames in the settlements and court orders. Additionally, according to officials, as a result of the experience in responding to the deadline suits, the agency is striving to maintain the 5-year statutory review cycle for criteria pollutants going forward. However, officials noted that it is difficult for EPA to complete its NAAQS reviews every 5 years. From April 2012 through September 2014, EPA has promulgated one rule following a NAAQS review after it settled a deadline suit and has missed the statutory deadline for reviewing the standards of two other criteria pollutants, one of which EPA is under court order to complete by October 2015 following a deadline suit. Regarding the Air Toxics program, OAQPS officials said that the impact of deadline suits on the Air Toxics program is different from the impact on the NAAQS program because of the large number of rules that the Air Toxics program promulgates. For example, the Clean Air Act establishes a schedule under which EPA established 120 standards to reduce the emissions of 187 hazardous air pollutants.
These National Emission Standards for Hazardous Air Pollutants (NESHAP) apply to certain categories of sources of these pollutants, such as cement manufacturing, municipal solid waste landfills, and semiconductor manufacturing. Generally, the act requires EPA, no less often than every 8 years, to review each standard and revise it as necessary. It makes any necessary revisions through the rulemaking process. The review must take into account developments in practices, processes, and control technologies. For sources subject to Maximum Achievable Control Technology (MACT) standards promulgated pursuant to section 112(d)(2) of the Clean Air Act, EPA must also conduct a residual risk assessment within 8 years after the initial promulgation of the standard. EPA refers to these two reviews together as the risk and technology review (RTR). As of October 2014, EPA has completed 28 RTRs (27 of these reviews following deadline suits) and has not completed 57 RTRs for which the statutory deadline has passed and 36 RTRs for which the statutory deadline has not passed. Additionally, officials report that, currently, most of the resources available to complete RTRs are focused on a 2011 settlement. This settlement listed 27 NESHAPs for which RTRs were overdue. OAQPS officials said that they have been unable to meet all of the time frames contained in the 2011 settlement and, as a result, have negotiated amendments to the settlement extending the time frames. Officials said that they intend to complete all of the overdue RTRs but are focused on fulfilling the terms of the 2011 settlement and several other settlements that EPA has entered into that address a smaller number of reviews. Additionally, in September 2013, EPA received a NOI concerning 43 additional NESHAPs for which an RTR is overdue. EPA officials said that they are engaged in settlement discussions over one of these reviews for which EPA has been sued.
Additionally, we discussed with EPA budget officials the potential impact of budget allocation decisions associated with deadline suits on EPA offices that are not subject to deadline suits. According to the budget officials, EPA accounts for anticipated workload arising out of litigation in its budgeting cycle for affected programs but does not make changes in existing budget allocations specifically to address settlements in deadline suits. Thus, according to these officials, the resources available to EPA offices not subject to these settlements are not directly impacted by the settlements. We provided a draft of this report to EPA and DOJ for review and comment. In written comments from EPA, reproduced in appendix IV, the agency generally concurs with our analysis and states that the report accurately describes EPA’s approach to deadline suit litigation brought against it. EPA also provided technical comments, which we incorporated as appropriate. In addition, in an e-mail received November 24, 2014, the DOJ Audit Liaison stated that the DOJ concurs with our report and has no additional comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Attorney General, the Administrator of the EPA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix V.
The objectives of this report are to examine (1) key environmental laws that allow citizens to file deadline suits that may compel the Environmental Protection Agency (EPA) to conduct a rulemaking and the factors EPA and the Department of Justice (DOJ) consider in determining whether or not to settle these lawsuits, (2) the terms of settlements in deadline suits that led EPA to promulgate major rules in the last 5 years and the extent to which the public commented on the terms of the settlements, and (3) the extent to which settlements in deadline suits have affected EPA’s rulemaking priorities. To examine the key environmental laws that allow citizens to file deadline suits that may compel EPA to conduct a rulemaking, we identified through legal research nine key environmental laws for which EPA has primary regulatory authority. Through additional legal research, we determined that two of these laws do not include provisions that permit citizens to file deadline suits. These laws are the Federal Insecticide, Fungicide, and Rodenticide Act and related provisions of the Federal Food, Drug, and Cosmetic Act. To understand the factors that EPA considers in determining whether or not to settle deadline suits, we held discussions with officials from EPA’s Office of General Counsel and DOJ because both agencies are involved in making these determinations. We also discussed the processes and procedures that EPA follows when settling citizen deadline suits. To examine the terms of settlements in deadline suits that led EPA to promulgate major rules in the last 5 years, we developed a list of major rules EPA issued from May 31, 2008 through June 1, 2013 by searching a database that GAO maintains to help implement the Congressional Review Act. We determined that the data were sufficiently reliable for the purpose of identifying major rules issued by EPA. EPA officials then identified which major rules EPA issued following a settlement in a deadline suit. 
We relied on EPA because neither EPA nor DOJ maintains a database that links settlements to rules, and there is no comprehensive public source of such information. For the purposes of this report, we use the term deadline suit to mean a lawsuit in which an individual or entity sues because EPA has allegedly failed to perform any nondiscretionary act or duty by a deadline established in law. A nondiscretionary act or duty is an act or duty required by law. This report only examines deadline suits that seek to compel EPA to either (1) issue a statutorily required rule when that rule has a deadline in law or (2) issue a statutorily required rule or make a determination that issuing such a rule is not appropriate or necessary pursuant to the relevant statutory provision, when issuing that rule or making that determination has a deadline in law. We did not review other types of suits against EPA. We obtained the settlements by accessing court records through the Public Access to Court Electronic Records (PACER) system. We then analyzed the content of each settlement and summarized the results. To examine the extent to which the public commented on the terms of the settlements, we obtained from EPA legal memoranda summarizing the number and content of public comments EPA received on drafts of the settlements. Because each of the major rules issued following settlements in deadline suits was a Clean Air Act rule, EPA solicited public comments on drafts of the settlements as required by the Clean Air Act. The act also requires EPA or DOJ to consider any public comments provided on settlements and authorizes them to withdraw or withhold consent to the proposed settlement if the comments disclose facts or considerations that indicate consent to the settlement is inappropriate, improper, inadequate, or inconsistent with the Clean Air Act requirements. EPA made these determinations and documented its decisions in legal memoranda that it provided to us.
We analyzed the contents of these memoranda to determine the extent and nature of the public comments EPA received on draft settlements. To examine the extent to which settlements in deadline suits have affected EPA’s rulemaking priorities, we obtained from EPA’s Office of General Counsel data on deadline suits it had settled from January 2001 through July 2014 and the EPA office(s) responsible for implementing the terms of the settlements. We assessed the reliability of the data by interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. The data showed that one office was responsible for implementing the terms of most of the settlements. We spoke with officials from this office to understand the extent to which settlements in deadline suits had affected the timing and order of the rules they promulgated, as well as which rules they promulgated. We also spoke with EPA budget officials to understand the extent to which settlements in deadline suits affected budget allocation decisions for EPA offices not subject to settlements in deadline suits. We also interviewed individuals from academia, an environmental group, and industry, as well as a state official from Oklahoma, to obtain their perspectives on deadline suits. We chose these individuals because they had experience or knowledge related to deadline suits and could provide the perspective of different stakeholder groups. For example, one interviewee provided legal representation for an environmental group that filed a deadline suit, and another interviewee authored a report critical of how EPA responds when faced with a deadline suit. The views of these individuals cannot be generalized to those with whom we did not speak. We conducted this performance audit from September 2013 to December 2014 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Environmental Protection Agency (EPA) issued 32 major rules from May 31, 2008 to June 1, 2013. According to EPA officials, the agency issued 9 of these rules following settlements in deadline suits and issued 5 of the 32 rules to comply with court orders following deadline suits in which plaintiffs and EPA were unable to reach a settlement. The remaining 18 rules, according to agency officials, were not associated with a deadline suit. Table 3 lists the 32 major rules EPA issued from May 31, 2008 to June 1, 2013. The Environmental Protection Agency (EPA) issued 9 major rules from May 31, 2008 to June 1, 2013 following seven settlements in deadline suits. Each of the seven settlements established a schedule to either issue a statutorily required rule or make a determination that such a rule is not appropriate or necessary pursuant to the relevant statutory provision. EPA negotiated extensions to the deadlines to issue the final rules in five of the seven settlements. Table 4 summarizes the contents of the seven settlements. In addition to the individual named above, Vincent P. Price, Assistant Director; Rodney Bacigalupo; Elizabeth Beardsley; John Delicath; Charles Egan; Cindy Gilbert; Tracey King; and Kathryn Smith made key contributions to this report.

Laws, such as the Clean Air Act, require EPA to issue rules by specific deadlines. Citizens can sue EPA for not issuing rules on time. These lawsuits are sometimes known as deadline suits. EPA sometimes negotiates a settlement to issue a rule by an agreed upon deadline.
Some have expressed concern that the public is not involved in the negotiations and that settlements affect EPA rulemaking priorities. GAO was asked to review EPA settlements in deadline suits. This report examines (1) key environmental laws that allow deadline suits and the factors EPA and DOJ consider in determining whether to settle these suits, (2) the terms of settlements that led EPA to issue major rules in the last 5 years and the extent to which the public commented on the settlements, and (3) the extent to which settlements in deadline suits have affected EPA's rulemaking priorities. GAO identified key laws allowing deadline suits through legal research and interviewed agency officials to understand the factors considered in determining whether to settle these suits. EPA identified the major rules it issued following settlements, and GAO examined the text of those settlements. GAO examined EPA documentation to determine the extent to which the public commented on the settlements. Through data from EPA's Office of General Counsel and discussions with officials, GAO determined the extent to which settlements affected EPA's rulemaking priorities. GAO identified seven key environmental laws that allow citizens to file a deadline suit against the Environmental Protection Agency (EPA) (see table), and EPA and the Department of Justice (DOJ) consider several factors in determining whether or not to settle these suits. The seven key environmental laws include, among others, the Clean Air Act and the Clean Water Act. EPA works with DOJ—which represents EPA in litigation—to decide whether to settle a deadline suit. EPA and DOJ officials stated that the factors they consider include (1) the cost of litigation, (2) the likelihood that EPA will win the case if it goes to trial, and (3) whether EPA and DOJ believe they can negotiate a settlement that will provide EPA with sufficient time to complete a final rule if required to do so.
Of the total number of major rules EPA promulgated from May 31, 2008 to June 1, 2013, nine were issued following seven settlements in deadline lawsuits, all under the Clean Air Act. The terms of the settlements in these deadline suits established a schedule to issue a statutorily required rule(s) or to issue a rule(s) unless EPA determined that doing so was not appropriate or necessary pursuant to the relevant statutory provision. None of the seven settlements included terms that finalized the substantive outcome of a rule. The Clean Air Act requires EPA to solicit public comments on drafts of settlements. The nine major rules were Clean Air Act rules, and EPA solicited public comments on all of the drafts. EPA received between 1 and 19 comments on six of the settlements and no comments on one settlement. EPA determined that none of the comments disclosed facts or other considerations compelling it to withdraw or withhold consent for the settlement. The effect of settlements in deadline suits on EPA's rulemaking priorities is limited. According to EPA officials, settlements in deadline suits primarily affect a single office within EPA—the Office of Air Quality Planning and Standards (OAQPS)—because most deadline suits are based on provisions of the Clean Air Act for which that office is responsible. These provisions have recurring deadlines requiring EPA to set standards and to periodically review—and revise as necessary—those standards. OAQPS sets these standards through the rulemaking process. OAQPS officials said that deadline suits affect the timing and order in which rules are issued but not which rules are issued.

Source: GAO. | GAO-15-34

GAO is not making any recommendations in this report. DOJ and EPA concur with GAO's findings.
Madam Chairman and Members of the Subcommittee: We are pleased to be here today to participate in the Subcommittee’s inquiry into the administration’s fiscal year 1999 budget request for the Internal Revenue Service (IRS) and the status of the 1998 tax return filing season. This statement is based on (1) our review of the administration’s fiscal year 1999 budget request for IRS and supporting documentation, including IRS’ February 2, 1998, budget estimates, which provide details behind the administration’s request; (2) interim results of our review of the 1998 tax return filing season; (3) our past work on IRS information systems and performance measures; and (4) our ongoing reviews of the Taxpayer Advocate’s Office, IRS’ efforts to reduce noncompliance associated with the Earned Income Credit (EIC), and IRS’ efforts to make its information systems Year 2000 compliant. Our statement makes the following points: • The most critical issue facing IRS this year and next is the need to make its computer systems century-date compliant. IRS received $376.7 million for that effort in fiscal year 1998 and is seeking another $234 million for fiscal year 1999. However, IRS’ latest estimates indicate that additional funds will be needed for fiscal year 1998. IRS officials are also refining their budget estimates for fiscal year 1999 in light of more current information. • As shown in appendix I, the administration’s fiscal year 1999 budget request for IRS totals $8.339 billion and 102,013 full-time equivalent (FTE) staff years, which are increases of $534 million (6.8 percent) and 1,462 FTEs (1.5 percent) over IRS’ proposed operating level for fiscal year 1998. Included in the fiscal year 1999 request is $323 million for the information technology investments account.
Because $246.5 million of that request has not been justified on the basis of analytical data or derived using a verifiable estimating method, we believe that Congress should consider reducing the administration’s request by that amount. We also believe that Congress should consider precluding IRS from obligating funds from the investments account to develop or acquire modernized systems until IRS has defined and implemented mature systems life cycle processes. • Also included in the fiscal year 1999 budget request is $103 million and 1,024 FTEs to enhance customer service. Most of the $103 million is to go toward providing better telephone service and improving customer service training; smaller amounts are for such things as improving walk-in service, strengthening the Taxpayer Advocate’s Office, and clarifying forms and notices. Each of these areas is important to good customer service and is in need of improvement. • Each year, IRS submits detailed budget estimates to support the administration’s budget request. The utility of this information for oversight purposes is limited because (1) the intermingling of enforcement and assistance resources within various budget activities precludes an assessment of the balance between those two areas; (2) periodic restructuring of IRS’ appropriations and the budget activities within those appropriations hinders long-term trend analyses; and (3) the estimates provide inadequate information on the resources being devoted to critical areas, such as the Year 2000 effort and the Taxpayer Advocate’s Office. • One aspect of IRS’ budget estimates that has improved over the years involves the use of performance measures. However, there is still much work to be done and many challenges to overcome.
These challenges include (1) developing a reliable measure of taxpayer burden, including the portion that IRS can influence; (2) developing measures that can be used to compare the effectiveness of IRS’ various customer service programs; and (3) refining or developing new measures that gauge the quality of the services provided. • Data on the first 2 1/2 months of the 1998 filing season indicate that IRS is continuing to make progress in two important areas—the use of electronic filing and the ability of taxpayers to reach IRS by telephone. This is also the first year of a planned 5-year initiative to reduce EIC noncompliance. Although it is too soon to assess the results of this initiative, we do have some observations on two aspects of the initiative—special assistance being provided to EIC claimants and IRS efforts to develop a baseline measure of EIC compliance. IRS, like other federal agencies, has to make its computer systems “century-date compliant.” Because IRS’ systems, like many others in government and the private sector, use two-digit date fields, they cannot distinguish, for example, between 1900 and 2000 (both years would be shown as “00”). IRS estimates that failure to correct this situation before 2000 could result in millions of erroneous tax notices, refunds, and bills. Accordingly, the Commissioner of Internal Revenue has designated this effort a top priority. To make its systems Year 2000 compliant, IRS plans to (1) convert existing systems by modifying application software and data and upgrading hardware and system software where needed; (2) replace systems if correcting them is not cost-beneficial or technically feasible; and (3) retire other systems if they will not be needed after the year 2000. 
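The two-digit date ambiguity described above can be illustrated in a few lines. This is a sketch, not IRS code; the windowing function and its pivot value of 50 are assumptions chosen for illustration, and windowing was only one of several remediation techniques used at the time.

```python
def expand_two_digit_year(yy, pivot=50):
    """Windowing heuristic (illustrative): read two-digit years below
    the pivot as 20xx and the rest as 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

# A two-digit field stores both 1900 and 2000 as 0 -- the core ambiguity.
assert expand_two_digit_year(0) == 2000
assert expand_two_digit_year(99) == 1999

# Date arithmetic on raw two-digit years breaks at the century boundary:
# a record aging from 1999 into 2000 appears to be -99 years old,
# which is how erroneous notices, refunds, and bills could arise.
naive_elapsed = 0 - 99
windowed_elapsed = expand_two_digit_year(0) - expand_two_digit_year(99)
assert naive_elapsed == -99 and windowed_elapsed == 1
```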
IRS’ Year 2000 effort includes the following two major system replacement efforts: IRS is replacing its primary tax return and remittance input processing systems (i.e., the Distributed Input Processing System and the Remittance Processing System) with a single system, the Integrated Submission and Remittance Processing System (ISRP). This new system is being piloted at the Austin Service Center. If the pilot is successful, IRS expects to begin rolling the system out to other service centers later this year. IRS is consolidating its mainframe computer processing operations from 10 service centers to 2 computing centers. This consolidation is to replace the computer hardware, systems software, and telecommunications infrastructure for most of IRS’ primary tax processing systems. IRS’ goal is to implement all Year 2000 efforts by January 1999. IRS established this goal so that (1) Year 2000 changes would be implemented before the start of the 1999 filing season and (2) IRS could conduct an extensive systemic test of tax data transactions through IRS’ mission critical systems in a Year 2000 environment to simulate how systems are likely to function and interact on or after January 1, 2000. As of March 1998, IRS estimated that the cost of its Year 2000 effort for fiscal years 1997 through 2001 would be about $925 million. IRS received $376.7 million for this effort in fiscal year 1998 and is seeking another $234 million for fiscal year 1999. IRS’ latest estimates indicate that additional funds will be needed for fiscal year 1998. IRS officials are also refining their estimates for fiscal year 1999 in light of more current information. Table 1 shows how the $376.7 million IRS received for Year 2000 efforts in fiscal year 1998 was allocated among various spending categories. As table 1 shows, most of the $376.7 million is to convert existing systems and consolidate mainframes. 
As discussed below, IRS officials have identified additional funding needs for fiscal year 1998 for the conversion of existing systems and are pursuing options for meeting those needs. Funding needs for mainframe consolidation will be more definite when IRS completes contract negotiations for this project. The emphasis on converting existing systems reflects the approach IRS used to assess the scope of its Year 2000 conversion work. IRS has three tiers of computing operations—mainframe computers, minicomputers and file servers, and personal computers. IRS focused its initial Year 2000 efforts on assessing and converting its mainframe computer operations that are largely controlled by IRS’ Chief Information Officer and encompass most of IRS’ key tax processing systems. Assessments for the two other tiers and telecommunications systems, not all of which are under the control of the Chief Information Officer, started late and were delayed, in part, because IRS did not have a complete inventory for these areas. Since receiving its fiscal year 1998 appropriation, IRS has been trying to complete its inventory and refine its cost estimates for these information systems areas as well as for non-information systems, such as building facilities and security. Thus far in fiscal year 1998, IRS has (1) reallocated funds among the spending categories identified in the fiscal year 1998 appropriation, (2) identified specific needs for the $42 million initially set aside for contingencies, and (3) identified additional needs of about $60 to $70 million that are not yet funded. IRS notified the Appropriations Committees of these additional needs in its Year 2000 status report for the first quarter of fiscal year 1998. According to IRS budget officials, IRS anticipates that it can meet most of the $60 to $70 million shortfall from two sources.
First, the Department of the Treasury plans to submit a reprogramming letter to Congress, which will include a transfer request for IRS, in accordance with the President’s February 20, 1998, supplemental budget request for fiscal year 1998. According to IRS budget officials, IRS’ request will call for transferring up to $50 million from unobligated balances from prior fiscal years’ expired accounts. Second, according to IRS and Treasury officials, Treasury plans to fund up to $29 million in Treasury-wide telecommunications costs that IRS had previously factored into its base funding of $170 million. As a result, part of the base funding that was allocated to telecommunications costs will be available for other Year 2000 conversion work. Mainframe consolidation funding also depends on resolving open issues, including those involving IRS employees who might be affected by the consolidation. According to officials from IRS’ mainframe consolidation project office, the contractor’s latest cost proposal for fiscal year 1998 is $195.2 million—$37.5 million more than the amount appropriated. However, project office officials said that they do not consider the $37.5 million a funding shortfall because some of the work that is included in the contractor’s fiscal year 1998 proposal was started in 1997 and funded with fiscal year 1997 funds. According to documents prepared for the Commissioner’s Executive Committee on Century Date Change and the 1999 Filing Season, the fiscal year 1998 budget for mainframe consolidation will remain uncertain until the completion of (1) contract negotiations and (2) the project office’s validation of fiscal year 1998 budget requirements. The budget request for fiscal year 1999 includes $1.42 billion for operational information systems. According to IRS, $234 million of that request is for Year 2000 efforts—about $143 million less than the 1998 appropriation. Most of the $234 million is for Year 2000 work on existing systems ($140 million) and mainframe consolidation ($76 million). The rest ($18 million) is for ISRP.
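The funding figures in this passage are internally consistent; a quick arithmetic check (in millions of dollars), using only the amounts cited in the testimony:

```python
# Fiscal year 1998 shortfall and the two sources identified to meet it.
transfer_request = 50      # up to $50M from prior years' expired accounts
treasury_absorbed = 29     # up to $29M in telecom costs Treasury would fund
available = transfer_request + treasury_absorbed
assert available == 79     # both figures are ceilings ("up to"), which is
                           # why IRS expected to cover "most" of $60-$70M

# Fiscal year 1999 Year 2000 request and its breakdown.
fy1998_appropriation = 376.7
fy1999_request = 234.0
# existing systems + mainframe consolidation + ISRP
assert 140.0 + 76.0 + 18.0 == fy1999_request
# "about $143 million less" than the 1998 appropriation
assert round(fy1998_appropriation - fy1999_request) == 143
```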
On the basis of information we obtained in mid-March 1998, IRS is refining its allocations of the $140 million for the conversion of existing systems. The funding requirements for mainframe consolidation could increase in light of expanded business requirements and schedule changes. At the time we prepared this statement, Year 2000 project office officials were refining their allocations of the $140 million included in the fiscal year 1999 budget request for the conversion of existing systems. According to information we obtained in mid-March, the largest spending categories for fiscal year 1999 are testing ($58 million); contractor support to the Year 2000 project office ($20 million); and IRS salary costs ($24 million). Although we cannot comment on the adequacy of these amounts, IRS has allocated a large portion of its request to testing, which is what we would have expected based on IRS’ conversion plans and schedule. However, we are concerned that IRS has not fully assessed the impact of not including all mission critical systems in a major test it is to conduct in fiscal year 1999. That test is intended to simulate how tax data transactions will move through mission critical systems in a Year 2000 environment. At the time we prepared this statement, IRS officials said that they had received a contractor’s cost proposal of about $30 million for a systemic test and that the contractor’s proposal is reflected in IRS’ budget request for fiscal year 1999. Under this proposal, the test is to include 39 of the 126 mission critical systems IRS has identified. Officials responsible for overseeing this test said that they believe these 39 systems affect the vast majority of taxpayers. IRS officials said that although they are still negotiating with the contractor to increase the number of mission critical systems that will be included in the systemic test, not all 126 will be included.
The century date change project office Director said that those systems that are not included in the systemic test will undergo testing individually in a Year 2000 environment. We did not assess whether in fact the 39 systems that are included in the contractor’s proposal affect the vast majority of taxpayers and thus may be more important to include in the test than other mission critical systems. We are concerned, however, that IRS has not fully assessed the impact of not including the other mission critical systems and the associated risks. We are also concerned that IRS has not identified the total resources needed for testing mission critical systems that are not included in the systemic test. The century date change project office Director said total resource requirements for such testing may not be known for another 6 months. The fiscal year 1999 budget request also includes $76 million for mainframe consolidation—about $89 million less than in fiscal year 1998. According to mainframe consolidation project office officials, the $76 million represents IRS’ estimate of contractor costs at the time the budget request was prepared. According to the officials, several factors (final contract negotiations, an expanded set of business requirements, ergonomic furniture requirements, and a slippage in the original completion schedule) could increase the fiscal year 1999 funding requirements for mainframe consolidation. Cost estimates for the expanded business requirements were being developed within IRS’ Information Systems organization. Those estimates were not available to us when we prepared this statement. Project office officials also said that additional funds will be needed for ergonomic furniture as a result of IRS’ February 19, 1998, agreement with the National Treasury Employees Union. The officials estimated that this furniture will cost about $8 million in fiscal year 1999.
In addition to expanded business requirements, additional contractor costs may arise if IRS does not meet its original completion schedule for mainframe consolidation. According to IRS’ plans, all 10 service centers were to be consolidated by December 1998. The Memphis Service Center was consolidated in December 1997. However, because of field office concerns about the ambitious consolidation schedule and pending expanded business requirements, IRS is reassessing its schedule for the other nine centers. IRS is considering the following three consolidation options: (1) three centers in 1998 and six in 1999, (2) four centers in 1998 and five in 1999, or (3) five centers in 1998 and four in 1999. Because IRS has decided not to consolidate any service center during the filing season, consolidations would not start until June. Under this scenario, it is likely that IRS would incur additional costs by having to retain the contractor through most of calendar year 1999. Thus, the budget for mainframe consolidation will remain uncertain until IRS (1) makes final decisions on which expanded business requirements will be implemented, (2) identifies the number of service centers that will be consolidated in 1998 and 1999, and (3) completes contract negotiations. IRS’ goal is to complete negotiations by May 1, 1998. The administration’s fiscal year 1999 budget request includes $1.54 billion and 7,493 FTEs for IRS’ Information Systems appropriation. Of this $1.54 billion, $1.42 billion is to fund “Operational Systems” (i.e., the operation and maintenance of existing systems), and $125 million is to fund “Developmental Systems” (i.e., new systems that are intended to sustain IRS’ operations until modernization plans are implemented). IRS’ proposed categories of spending under this appropriation request are consistent with our recent recommendations and related congressional actions. 
We also question IRS’ readiness to obligate funds in this investment account for the purpose of building or acquiring modernized systems because IRS has yet to complete and implement mature systems life cycle processes. In June 1996, we reported that although IRS had initiated a number of actions to respond to our recommendations for correcting pervasive management and technical weaknesses in its Tax Systems Modernization (TSM) program, many of these actions were incomplete, and none, either individually or collectively, responded fully to any of our recommendations. Accordingly, we suggested that Congress consider limiting TSM spending to cost-effective efforts that (1) support ongoing operations and maintenance (e.g., Year 2000 efforts); (2) correct pervasive management and technical weaknesses, such as a lack of requisite systems life cycle discipline; (3) are small, represent low technical risk, and can be delivered in a relatively short time frame; or (4) involve deploying already developed systems that have been fully tested, are not premature given the lack of a complete systems architecture, and produce a proven, verifiable business value. The act providing IRS’ fiscal year 1997 appropriations and the related conference report limited IRS’ information technology spending to efforts consistent with these categories. In September 1997, we briefed IRS’ appropriations and authorizing committees on the results of our assessment of IRS’ modernization blueprint. In those briefings and in a subsequent report, we concluded that the blueprint represented a good start but was not sufficiently complete to use as the basis for building or acquiring systems. As a result, the conference report accompanying IRS’ fiscal year 1998 appropriations act limited IRS’ 1998 spending to efforts that were consistent with the aforementioned spending categories.
Of the $1.54 billion requested for the Information Systems appropriation, $1.42 billion is for (1) ongoing operations and maintenance (e.g., Year 2000 conversion efforts, service center mainframe consolidation, and implementation of recent tax law changes); (2) institutionalization of systems life cycle rigor and discipline; (3) establishment of an organization to manage the modernization contractor; and (4) establishment of an organization to independently ensure system quality. The remainder ($125 million) is for new systems that are either generally small, low risk, near-term projects (e.g., $33.3 million for replacement of 7-year-old laptop computers used by revenue agents) or projects that involve deployment of already developed systems, such as $60.7 million for the Integrated Collection System, for which IRS has analyzed the system’s actual performance at pilot locations to validate its expected cost effectiveness. Key provisions of the Clinger-Cohen Act, the Government Performance and Results Act (Results Act), and OMB Circular No. A-11 and supporting memoranda require that, before requesting multiyear funding for capital asset acquisitions, agencies develop accurate, complete cost data and perform thorough analyses to justify the business need for the investment. For example, agencies must show that needed investments (1) support a critical agency mission; (2) are justified by a life cycle cost/benefit analysis; and (3) have cost, schedule, and performance goals. In its fiscal year 1998 budget request for IRS, the administration had proposed an “Information Technology Investments Account” and requested $1 billion to fund it—$500 million in fiscal year 1998 and $500 million in fiscal year 1999. In our testimony last year before this Subcommittee, we questioned the need for this funding because the amounts requested were not based on analytical data or derived using formal cost estimating techniques, as required by OMB. Subsequently, in IRS’ fiscal year 1998 appropriations act, Congress provided only $325 million for the investments account and made these funds available through fiscal year 2000.
Additionally, Congress conditioned obligation of these funds on completion of the modernization blueprint and prohibited IRS from obligating any of the $325 million until September 1998. Combined with the $325 million appropriated for fiscal year 1998, the $323 million requested for fiscal year 1999 would give IRS a total of $648 million to develop, acquire, and deploy systems under phase 1/release 1 of its modernization blueprint. However, IRS’ validated and approved business case justification and associated documentation for phase 1/release 1 specify development costs (derived using a formal cost estimating technique) of $401.5 million. IRS has not justified the remaining $246.5 million of this $648 million on the basis of analytical data or derived the $246.5 million using a verifiable estimating method. IRS’ budget estimates indicate that the $246.5 million will be used to develop business cases for subreleases 1.3 and 1.5 of phase 1/release 1 and to develop plans for releases 2 through 5 of phase 1. IRS officials could not explain how the additional $246.5 million was derived or what it was based on, other than to state that the funds will be used to develop IRS’ systems life cycle methodology and future modernization business cases. Additionally, IRS budget documents state that $20 million of this amount would be earmarked for development and integration of the systems life cycle methodology. However, this request for funding lacks analytical support and is contradicted by other information. For example, the phase 1/release 1 business case used to justify the $401.5 million in this account already covers all phase 1/release 1 subreleases. Moreover, the “Information Systems” appropriation request already includes $15 million for systems life cycle development. For these reasons, we suggest that Congress consider reducing the fiscal year 1999 request for the “Information Technology Investments Account” by $246.5 million.
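The arithmetic behind the suggested reduction can be reconciled directly from the figures cited above (in millions of dollars):

```python
fy1998_appropriated = 325.0   # investments account funds provided for FY 1998
fy1999_requested = 323.0      # investments account request for FY 1999
total_account = fy1998_appropriated + fy1999_requested
assert total_account == 648.0

justified = 401.5             # costs in the validated phase 1/release 1 business case
unjustified = total_account - justified
assert unjustified == 246.5   # the amount Congress could consider cutting
```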
In our recent report on IRS’ modernization blueprint, we recommended that IRS limit future requests for information technology appropriations to the four categories we mentioned earlier until IRS has implemented mature systems life cycle processes for developing and acquiring systems across the agency. IRS has not yet implemented such processes. The fiscal year 1999 budget request includes funding for accomplishing just this, which we strongly support. However, until this implementation is accomplished, we suggest that Congress consider precluding IRS from obligating “Information Technology Investments Account” funds for the purpose of developing or acquiring systems under its modernization blueprint. The fiscal year 1999 budget request includes a new initiative that, if approved, will provide $103 million to enhance IRS’ customer service. This initiative is the result of findings and recommendations by a Customer Service Task Force formed in May 1997. Although the task force did not issue its report until March 1998, its findings and recommendations were available to IRS several months earlier. In that regard, IRS’ operating functions were told to develop cost estimates for implementing numerous changes proposed by the task force. The original estimate of $212.5 million was eventually reduced during the budget review and approval process to the $103 million in the administration’s budget request. According to IRS, some of the $109.5 million reduction represented more accurate costing of parts of the proposed initiative, such as the plan to provide better telephone services, while the rest of the reduction was accommodated by either deleting parts of the proposed initiative, such as plans to enhance the appeals process, or revising the scope of other parts, such as plans to strengthen support for small businesses (see app. II). 
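The reduction in the customer service initiative described above can be checked with simple arithmetic (in millions of dollars):

```python
original_task_force_estimate = 212.5   # operating functions' initial costing
approved_request = 103.0               # amount in the administration's request
reduction = original_task_force_estimate - approved_request
assert reduction == 109.5              # the reduction cited in the testimony
```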
Under the revised proposal, the greatest shares of the $103 million are to go toward providing better telephone service and improving customer service training ($50.4 million and $22.5 million, respectively). Smaller amounts are to be used to, among other things, strengthen the Taxpayer Advocate’s Office; create citizen advocacy panels; make it easier for taxpayers to get answers in person; and improve the clarity of notices, forms, and publications. The need for improvement in many of these areas has been apparent for some time, and certain of IRS’ proposed actions (such as providing better telephone service, creating citizen advocacy panels, and strengthening the Taxpayer Advocate’s Office) are attempts to address some of the problems recently highlighted by Congress and the Commission on Restructuring IRS. Whether the $103 million is a reasonable estimate of the funds needed in fiscal year 1999 to implement this initiative will not be known until more details are available on the various parts of the initiative. Another unknown is how, if at all, the revised organizational concept proposed by the Commissioner earlier this year will affect IRS’ plans for improving customer service in fiscal year 1999 or beyond. Each year, IRS submits detailed budget estimates to support the administration’s budget request. We have found recent years’ budget estimates to be more useful for oversight purposes, primarily because of the inclusion of better performance measures and more narrative information on actual and planned performance. 
Nevertheless, the utility of IRS’ budget estimates for oversight purposes is limited because (1) the intermingling of enforcement and assistance resources within various budget activities precludes an assessment of the balance between those two areas; (2) periodic restructuring of IRS’ appropriations and budget activities hinders long-term trend analyses; and (3) the budget estimates provide inadequate information on the resources being devoted to such critical areas as the Year 2000 effort and the Taxpayer Advocate’s Office. Achieving IRS’ strategic objectives of improving customer service and increasing compliance requires a mix of assistance and enforcement. Finding the appropriate mix is not easy, and we do not claim to have the answer. However, we do think that it is important for effective oversight that Congress know what mix IRS is achieving and what mix it plans to achieve. That information cannot be derived from IRS’ budget estimates. For example, IRS is requesting $891.6 million and 21,147 FTEs for the “Telephone and Correspondence” budget activity within the Processing, Assistance, and Management appropriation. That activity covers all non face-to-face contacts between IRS and taxpayers. Such contacts include typical forms of assistance, such as answering telephone calls and correspondence, as well as several enforcement activities, such as correspondence audits and attempts to collect overdue taxes via the telephone. Last year, IRS was able to provide a breakdown of the FTEs included in the fiscal year 1998 budget request for Telephone and Correspondence. As table 2 shows, 11,619 of those FTEs (56 percent) were for assistance, and 9,156 (44 percent) were for enforcement-related operations. This year, because of a change in its accounting structure, IRS could not give us a breakdown of the Telephone and Correspondence budget activity for fiscal year 1999.
Thus, we do not know how much of this request IRS expects to devote to assistance as opposed to enforcement. Similarly, despite its name, the Tax Law Enforcement appropriation is not exclusively for enforcement. The $3.2 billion and 46,130 FTEs being requested for that appropriation include an unspecified amount of money and FTEs for various forms of assistance, including walk-in service, taxpayer education efforts, and problem resolution. The $143 million and 2,184 FTEs being requested for the EIC compliance initiative, which we discuss in more detail later, also involve a mix of assistance and enforcement, but, again, that mix is not apparent in IRS’ budget estimates. It is often useful, in assessing agency operations, to analyze trends over several years. IRS’ annual budget estimates are not conducive to such analyses because IRS periodically restructures its appropriations and the budget activities within those appropriations. According to IRS, the restructuring for fiscal year 1998 was intended to, among other things, improve its financial statements by simplifying account reconciliation and providing an easier audit trail, distinguish capital investments from operations, and provide maximum resource flexibility. Another restructuring seems likely if and when the Commissioner’s proposed reorganization becomes reality. We are not taking issue with the changes IRS made for fiscal year 1998 or with the need to restructure in general. Our intent is to point out how restructuring can hinder the ability to conduct long-term trend analyses. For example, IRS established a new budget activity in fiscal year 1998 called Telephone and Correspondence, which was formed by merging pieces from the Taxpayer Services budget activity, which was discontinued, and the Examination and Collection budget activities, which were retained in reconfigured forms. When IRS restructured its budget activities for fiscal year 1998, it recalculated its fiscal year 1997 accounts to be compatible with the new structure.
However, years before 1997 are not compatible with the new structure, making long-term analyses difficult. For example, it would be of little value to compare IRS’ request for the Examination budget activity in fiscal year 1999 with the actual figures for that activity in fiscal year 1996 because the 1999 version of that activity includes certain programs (such as Taxpayer Education) that were not part of the 1996 version and excludes programs (such as Service Center Correspondence) that were part of the 1996 version. Even with restructuring, long-term analysis could still be possible if there were adequate detail behind the various budget activities. However, some key details are no longer available. As discussed earlier, IRS no longer has the level of detail behind the Telephone and Correspondence activity that it had when it first restructured that budget activity in 1998. Two IRS activities that are of considerable interest to Congress in the current environment are the Year 2000 effort and IRS’ efforts to identify and resolve taxpayer problems. IRS’ budget estimates for fiscal year 1999 provide inadequate information on both of those activities. For example, the estimates do not identify IRS’ additional Year 2000 funding needs for fiscal year 1998 or specify how much of the $1.5 billion being requested for information systems in fiscal year 1999 is for Year 2000 activities. During the past year, Congress questioned the independence of IRS’ Taxpayer Advocate and the adequacy of resources devoted to the resolution of taxpayers’ problems through the Problem Resolution Program (PRP). IRS’ budget estimates do not accurately reflect the level of resources being devoted to problem resolution. In addition, concerns about independence may be exacerbated by the way IRS funds the work of the Taxpayer Advocate’s Office. According to IRS, the fiscal year 1999 budget request includes about $38 million and 628 FTEs for the Taxpayer Advocate’s Office, an increase of about $14 million and 191 FTEs over the proposed operating level in fiscal year 1998.
Those resources are not separately identified in IRS’ budget estimates but are included within the Telephone and Correspondence budget activity. Even if those resources were separately identified, they would significantly understate the level of resources IRS has been allocating and plans to allocate to activities of the Taxpayer Advocate’s Office. That is because many of the staff who work PRP cases and who participate in Problem Solving Days are funded by other functions, such as Examination and Collection. In that regard, according to a January 1998 report by the Taxpayer Advocate, his resources for fiscal year 1998 are being supplemented by more than 1,000 other field employees, on either a full or part-time basis. We believe that oversight of the operations of the Taxpayer Advocate’s Office would be enhanced if (1) the Office were given more visibility in IRS’ budget structure and (2) IRS’ budget estimates provided complete information on the amount of resources being devoted to those operations. A more fundamental question, however, is whether the Taxpayer Advocate’s independence is compromised in any way by the need to rely on other functions for needed staff. While working PRP cases, these employees receive program direction and guidance from the Taxpayer Advocate’s Office but are administratively responsible to their functional organizations—oftentimes the same organizations responsible for the problems that led taxpayers to seek the Advocate’s help. We are pursuing this and other issues in an ongoing study of the Taxpayer Advocate’s Office for this Subcommittee. As mentioned earlier, one aspect of IRS’ budget estimates that has improved over the years involves the use of performance measures. The performance measures shown in IRS’ budget have become more useful as IRS strives to develop and implement a results-oriented performance measurement system that will meet the requirements of the Results Act. 
As IRS acknowledges, there is still much work to be done in that area. IRS’ budget estimates for fiscal year 1999 include numerous performance measures, some of which have yet to be developed. The budget estimates include a brief description of each measure and, for those that have been developed, provide such information as the source and reliability of data used to compile the measure. Tracking performance measures over time is not always possible because some are added or dropped each year and others are revised. These kinds of changes are to be expected as IRS gets input from Congress and other stakeholders and learns more about how to measure its performance. In its fiscal year 1999 budget estimates, for example, IRS lists 16 discontinued performance measures, some of which were dropped in response to congressional concern about an undue emphasis on enforcement results. IRS has a three-tiered system of performance measures. At the highest (mission) level, IRS has a mission effectiveness indicator intended to measure the agency’s overall performance in collecting the proper amount of tax revenue at the least cost or burden to the government and the taxpayer. The second (strategic) level of indicators is intended to gauge IRS’ progress in meeting its strategic objectives to improve customer service, increase taxpayer compliance, and increase productivity. According to IRS’ fiscal year 1999 budget estimates, for example, IRS has four indicators and plans to develop two others to gauge its progress in improving customer service. The four existing indicators are (1) taxpayer burden cost for IRS to collect $100, (2) initial contact resolution rate for taxpayer inquiries, (3) toll-free telephone level of access, and (4) tax law accuracy rate for taxpayer inquiries. The two indicators IRS plans to develop are (1) customer satisfaction rates and (2) employee satisfaction rate. 
The third (program) level of indicators is intended to measure the accomplishments of specific IRS programs or operations. For example, IRS’ fiscal year 1999 budget estimates include 18 program-level customer service measures, covering such things as refund timeliness, number of telephone calls answered, the quality of PRP cases, and the number of walk-in service contacts. (See app. III for a list of all of the performance measures in IRS’ fiscal year 1999 budget estimates and a comparison of those measures for fiscal years 1997, 1998, and 1999.) IRS faces some difficult challenges as it strives to improve its performance measurement system. We discussed some of those challenges in a recent report to the Subcommittee on measuring customer service. As noted in that report, key challenges facing IRS include (1) developing a reliable measure of taxpayer burden, including the portion that IRS can influence; (2) developing measures that can be used to compare the effectiveness of the various customer service programs; and (3) refining or developing new measures that gauge the quality of the services provided. Measuring burden is especially difficult. IRS currently measures burden by using a model that estimates the time taxpayers spend on each tax form. As such, the measure excludes the burden taxpayers face after they file their tax returns, such as the time and costs incurred in responding to IRS notices and audits. Flaws in the burden measure also limit the usefulness of IRS’ mission effectiveness indicator, because burden is a key component of that indicator. IRS recognizes the limitations of its burden measure and is looking for alternatives. Devising ways to measure the burden that IRS influences and overcoming the other challenges our report identified will not be easy. IRS is faced with devising reliable measures that are useful in improving agency and program performance, improving accountability, and supporting policy decisionmaking. 
At the same time, IRS is faced with making decisions on how to minimize the costs of collecting data and measuring results over time. Although it is too early to assess the results of this year’s efforts, we do have some preliminary observations on two parts of the initiative. As shown in table 3, as of March 13, 1998, IRS had received 23.4 percent more electronic returns than at the same time last year. This increase is even more significant considering that the total number of individual income tax returns filed as of March 13, 1998, was up less than 1 percent from the same time last year. This year, IRS began using the National Change of Address File and is now able to accept TeleFile returns from some persons who moved after they filed last year. According to IRS, this new procedure allowed it to mail TeleFile tax packages to about 1.6 million potentially eligible TeleFilers who would not have been given the opportunity to file via TeleFile under the old procedure. The use of traditional electronic filing had also increased as of March 13—by about 23 percent over the same period last year. There have been a few changes in the program this year that may have contributed to this increase. For example, two more states (Alabama and Arizona) joined the Fed/State electronic filing program, and IRS added two more forms to the list of forms that can be filed electronically. We have insufficient information at this time to determine how much of the increase might be due to those changes rather than to a general growth in the willingness of taxpayers and tax return preparers to use this alternative way of filing. Another continuing positive trend this filing season is an increase in the ability of taxpayers who need assistance to reach IRS by telephone. In our report on the 1997 filing season, we noted that the accessibility of IRS’ telephone assistance had increased from 20 percent during the 1996 filing season to 51 percent during the 1997 filing season.
As shown in table 4, IRS data for the first 2 1/2 months of the 1998 filing season indicate that the level of access to IRS’ toll-free telephone assistance has continued to increase. One clear indicator of that increased access is the significant drop in the number of calls receiving busy signals. IRS took some steps this year to improve accessibility. For example, it (1) increased the hours assistors are available to answer telephone calls from 10 hours a day, 5 days a week in 1997, to 16 hours a day, 6 days a week in 1998, and (2) increased the number of complex tax topics that are to be handled through a voice messaging system. However, despite these changes, the data in table 4 indicate that the number of calls answered by IRS has remained constant compared to the number for 1997 and that the increase in level of access is due to a decrease in call attempts. To independently check whether the level of access to IRS’ toll-free assistance had increased, we conducted a test from February 9 through 26, 1998. Our results, which are not projectable, showed that the level of access we achieved during our test was close to the 91-percent level of access reported by IRS for the first 2 1/2 months of this filing season. We made 384 total calls to IRS and gained access to the telephone system 333 times, a level of access of 86.7 percent. On the other 51 calls, we received busy signals. Of the 333 times we gained access to the telephone system, we were routed to lines that were to be answered by IRS’ assistors 263 times and to lines that were to be answered by a voice messaging system 70 times. Of the 263 times we were routed to an assistor, we made contact with an assistor 239 times (90.9 percent). We abandoned the other 24 calls (9.1 percent) without making contact with an assistor after remaining on hold for 7 minutes. For each of the 70 calls that were routed to the voice messaging system, we left a message. 
In 57 of those cases (81.4 percent), we received a call back from IRS within 3 business days. IRS’ fiscal year 1998 appropriation included $138 million for the first year of what is to be a 5-year EIC compliance initiative. IRS’ budget request for fiscal year 1999 includes $143 million for the second year of that initiative. IRS has developed a plan for using these appropriated funds that calls for various efforts directed at reducing EIC noncompliance, including expanded assistance, increased enforcement, and enhanced research. We are gathering data on IRS’ efforts as part of two reviews for the Subcommittee: a review of EIC noncompliance and a review of the 1998 filing season. We are unable to comment at this time on the impact of any efforts undertaken this filing season because not enough time has elapsed for us to assess results. Of the about 19.5 million returns filed last year with EIC claims, about 11.9 million (61 percent) were received by IRS before the end of March. We also have questions about IRS’ baseline measure of EIC compliance. IRS did a study in 1995 involving a sample of taxpayers who claimed an EIC on their tax year 1994 returns. The study showed that EIC claimants were not entitled to about 26 percent of the EIC dollars they were claiming—a noncompliance rate that generated considerable congressional concern, eventually leading to the EIC compliance initiative. However, in response to our questions about the current EIC initiative, IRS officials told us that the results of the 1995 study could not be used as a baseline measure of EIC compliance, although they were unable to satisfactorily explain why. IRS’ assertion that the 1995 study cannot be used as a baseline measure of compliance raises the question whether decisions to develop and fund the 5-year EIC initiative were founded on reliable compliance data. If IRS does a new baseline study, we question whether the results will be available soon enough to be of any value to Congress.
Our concern stems from IRS’ history in conducting past EIC compliance studies. For example, IRS did not release the results of its 1995 study until April 1997. If data from a new baseline study are not available until 2000, IRS will already be in the third year of the initiative and will have finalized its funding request for the fourth year. That concludes my statement. We welcome any questions that you may have.

Pursuant to a congressional request, GAO discussed the administration's fiscal year (FY) 1999 budget request for the Internal Revenue Service (IRS) and the status of the 1998 tax return filing season. GAO noted that: (1) the administration is requesting about $8.3 billion and 102,000 full-time equivalent (FTE) staff years for IRS in FY 1999; (2) this is an increase of about $500 million and 1,500 FTEs over IRS' proposed operating level for FY 1998; (3) the most critical issue IRS faces this year and next is the need to make its computer systems century date compliant; (4) the goal is to implement all year 2000 efforts by January 1999 to allow time for testing; (5) IRS' latest estimates indicate that additional funds will be needed for FY 1998 beyond the amount already available; (6) IRS is also refining its budget estimates for FY 1999 in light of more current information; (7) for FY 1999, the administration is requesting $323 million for IRS' Information Technology Investments Account; (8) when combined with the $325 million appropriated for this account last year, the request would increase the account's total to $648 million; (9) because $246.5 million of the request has not been justified on the basis of analytical data or derived using a verifiable estimating method, GAO believes that Congress should consider reducing the administration's request by that amount; (10) the administration's request also includes $103 million to enhance customer service; (11) IRS plans, among other things, to provide better telephone service, improve customer service
training, strengthen the Taxpayer Advocate's Office, make it easier to get answers in person, and improve the clarity of forms and notices--all areas that are critical to good customer service and that need improvement; (12) each year, IRS submits detailed budget estimates to support the administration's budget request; (13) in GAO's opinion, several factors limit the utility of those budget estimates for oversight purposes; (14) interim data on the 1998 filing season indicate that IRS is continuing to make progress in two important areas--the use of electronic filing and the ability of taxpayers to reach IRS by telephone; and (15) although it is too soon to assess the results of IRS' new initiative to reduce Earned Income Credit noncompliance, GAO does have some observations on two aspects of that initiative.
Inventory shipped for repair or in support of repairs typically involves the following types of material: Manager-directed material, which item managers direct to be shipped to a contractor for repair, alteration, or modification. Government-furnished material, which contractors requisition in support of repairs, alterations, or modifications. Generally, this material is incorporated into or attached onto deliverable end items (final products such as aircraft) or consumed or expended in performing the contract. For fiscal year 2000, Air Force logistics records for all inventory control points showed the following number, value, and type of material had been shipped to contractors (see table 1). Table 2 shows a breakdown of all shipments to contractors in fiscal year 2000 by the security type, number of items, and dollar value of shipments. Department of Defense (DOD) policy contains specific internal control procedures to help ensure that shipped inventory is accounted for. When an item is shipped, a shipping notification should be sent to the receiving contractors. The intended recipient of the material is responsible for notifying the inventory control point once the item has been received or if a discrepancy exists (e.g., the item was not received or the quantity received was less than expected). The notification of receipt and discrepancy reporting processes are internal controls designed to account for all shipped assets. If within 45 days of shipment the inventory control point has not been notified that a shipment has arrived, it is required to follow up with the intended recipient. The rationale behind this requirement is that until receipt is confirmed, the exact status of the shipment is uncertain and therefore vulnerable to fraud, waste, and abuse. As a result of departures from required procedures or ineffective procedures, the Air Force’s shipped inventory is vulnerable to loss or theft. 
First, the Air Force has allowed repair contractors access to government-furnished material not needed to fulfill the repair contract. Second, inventory control points have not provided property administrators with the required government-furnished material status reports to use in verifying contractor records of government-furnished material received. Third, contractors have not adequately recorded receipt of items and reported receipt to inventory control points. Fourth, contractors have not routinely reported discrepant shipments to the designated shipping activity. Fifth, Air Force procedures for following up on shipments that contractors have not confirmed as received are ineffective. Sixth, the Air Force has not provided adequate oversight of shipments to contractors. DOD requires inventory control points to establish one or more internal control systems (i.e., management control activities) to restrict contractor access to government-furnished material. Among other things, the control systems are intended to screen all repair contractor requisitions for validation and approval and to restrict contractor access to government-furnished material to the specific items and quantities listed in the repair contract. However, the inventory control points’ systems generally screen and restrict access to government-furnished material by a federal stock class or stock group rather than by stock number and quantity. Also, the contracts we reviewed generally did not specify, as required, both the items and the quantities of material that the inventory control points had agreed to furnish to contractors. As long as contractors requisition items within an authorized federal stock class or stock group, government-furnished material is automatically provided whether or not it is needed to fulfill the repair contract.
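The difference between class-level and item-level screening can be illustrated with a short sketch. The stock numbers, quantities, and function names below are hypothetical and invented for illustration; this is a minimal model of the two screening rules described above, not a description of any actual Air Force control system.

```python
# Hypothetical sketch of class-level vs. item-level requisition screening.
# Stock numbers, quantities, and names are invented for illustration only.

# Items and quantities the repair contract authorizes (stock number -> quantity).
authorized_items = {"5998-01-123-4567": 4}
# Federal stock classes implied by those items (first four digits of the number).
authorized_classes = {nsn[:4] for nsn in authorized_items}

def screen_by_class(nsn):
    """Class-level screening: approve any item in an authorized stock class."""
    return nsn[:4] in authorized_classes

def screen_by_item(nsn, qty, already_issued):
    """Item-level screening: approve only listed stock numbers, within the
    authorized quantity remaining after earlier issues."""
    remaining = authorized_items.get(nsn, 0) - already_issued.get(nsn, 0)
    return nsn in authorized_items and qty <= remaining

# An unrelated part in the same stock class passes class-level screening...
print(screen_by_class("5998-01-999-0000"))        # True
# ...but item-level screening rejects it and enforces the authorized quantity.
print(screen_by_item("5998-01-999-0000", 1, {}))  # False
print(screen_by_item("5998-01-123-4567", 2, {}))  # True
print(screen_by_item("5998-01-123-4567", 5, {}))  # False
```

Under class-level screening, any of the hundreds of thousands of parts sharing an authorized stock class would be approved; item-level screening limits approval to the items and quantities the contract actually lists.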
In a July 1997 memorandum, the Air Force Materiel Command reiterated the requirement that the inventory control points screen all repair contractor requisitions by stock number and quantity for validation and approval, and it developed procedures for an automated method of loading stock numbers and quantities into the control systems. Air Force officials indicate that the major obstacle now is that the procedures for the automated method of loading stock numbers do not work as designed. For this reason, the Air Force Materiel Command waived the requirement for screening by stock number and quantity and allowed inventory control points to continue to screen contractor requisitions for government-furnished material at the federal stock class or stock group level. The following example illustrates the weakness in the current screening process. A contract we reviewed listed 14 specific, stock-numbered parts—from seven different stock classes—that were required to repair the end item (an electronic countermeasures system for the B-52H aircraft). However, because the inventory control point’s system screened and restricted the contractor’s access to government-furnished material by federal stock class, the contractor could requisition any item from the seven different stock classes in which the 14 parts are grouped. The seven stock classes contain over 502,900 other stock-numbered parts that are not needed to repair the end item. The contractor could requisition any of these parts, in any quantity, and the improper requisition could pass through the inventory control point’s screening system and be approved. We did not determine whether contractors had obtained unauthorized material as a result of their access to material by federal stock class or group. However, these control weaknesses are the same as those identified in earlier reports as having allowed contractors to obtain unneeded and unauthorized material.
For example, in a 1998 report on the adequacy of government oversight of government-furnished material provided to a contractor, the Air Force Audit Agency reported that 2,978 of the 5,569 validated requisitions were not needed to accomplish the contract. The unneeded requisitions included 1,090 stock numbers valued at $17.4 million. Similarly, a 1995 DOD inspector general report on management access to the DOD supply system concluded that granting contractors access to government-furnished material in the DOD supply system by federal stock class continued to be a material internal control weakness that placed DOD material at undue risk. To independently verify that contractors have accounted for all government-furnished material received, DOD policy requires inventory control points to provide to property administrators at the Defense Contract Management Agency quarterly status reports showing all shipments of Air Force material to contractors. Inventory control point officials responsible for distributing the reports to property administrators told us that the reports have not been sent. We found that existing Air Force procedures governing distribution of the quarterly status reports do not assign responsibility for distributing these reports to officials at inventory control points and are outdated (e.g., the systems for generating the reports no longer exist). Air Force officials acknowledge that the procedures are not current and stated that they are in the process of updating them. Proper distribution of government-furnished material status reports has been a long-standing issue. For example, a 1995 Department of Defense inspector general audit report on management access to the DOD supply system stated that the Air Force should take the distribution of its status report more seriously, ensuring that the report is issued each quarter.
The audit report asserts that property administrators are the last line of defense in protecting material resources and, as such, they need an independent record of the government-furnished material shipped to contractors. The Air Force’s quarterly status report provides such a record; without it, property administrators must rely entirely on contractors’ records. Department of Defense and Air Force policies contain specific procedures governing the notifications that contractors should send to their inventory control points when they receive shipped inventory. The policies state that, upon receipt of an item, a receiving contractor must enter the shipment into its inventory records and notify the inventory control point of material receipt. To accomplish notification of receipt, the Air Force requires contractors to enter receipts into a reporting system at the appropriate inventory control point. The notification of receipt is an internal control designed to account for all shipped assets. During fiscal year 2000, the Air Force shipped thousands of items with a reported value of about $2.6 billion to contractors. As part of our review, we sought to determine whether items reportedly shipped to repair contractors had in fact been received and entered into both the contractors’ records and the inventory control points’ reporting systems. Our review indicated that contractors are not following policies governing receipt notification. Of the $2.6 billion of inventory shipped to contractors in fiscal year 2000, we judgmentally selected and reviewed 9,003 items valued at $814.2 million. We found that contractors had not always properly posted material receipts for these items into their records or into the inventory control points’ reporting systems.
Specifically, 48 percent of these items had been received and properly posted by contractors to the inventory control points’ reporting systems; 19 percent of the shipped items had been received but were either improperly posted or not posted by contractors to their records and/or the inventory control points’ reporting systems; and 33 percent of items reportedly had not been received by contractors or lacked sufficient documentation to prove that they had been accounted for by the contractors. The items unaccounted for included those that warrant a high degree of protection and control because of their high value and/or their security classification, such as circuit card assemblies and navigation set control units. This lack of documentation is in itself an internal control weakness. For example, the Federal Acquisition Regulation requires that contractors’ property control records provide a complete, current, and auditable record of all transactions involving government property. Table 3 presents more detailed information on the items in our review. No dominant cause for these failures to properly account for shipment receipts emerged in our discussions with contractor officials. However, in our interviews, contractor personnel identified a number of factors: inadequate training/instruction on how to use and enter information into the reporting system, lack of awareness of reporting procedures, data transmission problems (e.g., transactions entered by the contractors did not show up in the reporting system), data input errors made while attempting to enter information into the reporting system, and data deleted from the reporting system because of data storage constraints. In addition, two contracts in our review did not contain a reporting requirement.
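The rounded percentages above imply approximate item counts for each category of the 9,003 items reviewed. The short calculation below is only an arithmetic check; the derived counts are approximations implied by the rounded shares, not figures taken from table 3.

```python
# Arithmetic check on the review percentages; counts are approximations
# implied by the rounded shares reported above, not separately reported data.
total_items = 9_003  # items judgmentally selected, valued at $814.2 million

shares = {
    "received and properly posted": 0.48,
    "received but improperly posted or not posted": 0.19,
    "not received or insufficiently documented": 0.33,
}

for category, share in shares.items():
    print(f"{category}: ~{round(total_items * share):,} items")
# -> roughly 4,321, 1,711, and 2,971 items, respectively
```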
Because of these reporting problems, the inventory control points’ reporting systems contained inaccurate information on large numbers of shipment receipt notifications, thus reducing the value of the information as a means of accurately and adequately accounting for all shipped assets. Inventory control point personnel indicated that they are often forced to work around the reporting systems, and they expend considerable time and effort to collect, maintain, and analyze receipt information that should be readily available to them in these automated systems. Visibility over shipped material depends in part on accurate contractor reporting of material receipts; without adequate reporting, the Air Force cannot readily account for shipped material, making it vulnerable to theft or loss. Air Force policy also requires contractors to notify the shipping activity if a discrepancy exists between items shipped and items received. The purpose of discrepancy reporting is to determine the cause of discrepancies, effect corrective action, and prevent recurrence. Such reports also provide (1) support for adjustment of property and financial inventory accounting records, (2) information as a basis for claims against contractors, and (3) information for management evaluations. As table 3 shows, 1,829 of the items (valued at about $24.2 million) we reviewed had reportedly not been received, but only 8 of the items were reported as discrepancies and resolved. For the remaining 1,821 items, we found a number of problems in discrepancy reporting. Contractor personnel did not report the discrepancies. According to most contractor personnel, this situation occurred primarily because the shipping activity did not notify them of impending shipments, thus they did not expect the shipment and could not monitor its status. Others indicated that they simply never report any discrepancies. 
Contractor personnel reported the discrepancies, but they did not route the discrepancy reports to the appropriate shipping activity personnel who could investigate and resolve the discrepancies. Although we found that contractor personnel did not properly route shipping discrepancies to the appropriate shipping activity, they were under the impression that they had. Contractor personnel reported the discrepancies, but did not follow up when no response was received from the shipping activity. They did not follow up because they planned to reorder the material that they had not received. Contractor personnel reported the discrepancies, but when they later determined that the materials had been received, they did not cancel the discrepancy reports. This failure to comply with Air Force procedures undermines the Air Force’s ability to determine the cause of discrepancies, effect corrective action, and prevent recurrence. This situation can also result in loss of control over material, lost recovery rights, and material remaining in a questionable status for long periods of time. To ensure proper reporting and accounting of material receipts, DOD policy requires that inventory control points follow up with the contractor within 45 days from the date of shipment if they have not been notified that a shipment has arrived. The rationale behind this requirement is that until receipt is confirmed, the exact status of the shipment is uncertain and therefore vulnerable to fraud, waste, and abuse. At present, Air Force procedures do not ensure adequate follow-up on unconfirmed receipts. According to Air Force officials, inventory control points send electronic inquiries to contractors to follow up on all shipments. 
However, the Air Force has not yet established a system by which (1) the inventory control points can reconcile material shipped to contractors with material received by contractors to determine unconfirmed receipts and (2) contractors can respond to the follow-up inquiries to confirm receipts or discrepancies. Consequently, inventory control points assume that all material shipped to contractors is received by them, and they close the record on the shipments without contractor confirmation of material receipt. The inventory control point does not become aware that material has not been received unless the contractor inquires about the shipment. The result is a situation in which unconfirmed receipts are officially considered delivered, an assumption that, in turn, places this material at risk of fraud, waste, abuse, and theft. The following example illustrates how the lack of adequate follow-up on unconfirmed receipts places this material at risk. In June 2000, the Defense Distribution Depot in Warner Robins, Georgia, reportedly issued and delivered 85 electron tubes to an Air Force repair contractor, but, according to contractor personnel, the shipment was never received and the contractor never reported the discrepancy to the inventory control point. In January 2002, we requested proof of issuance and delivery from the Warner Robins depot. The depot provided proof of issuance but could not confirm delivery. According to depot personnel, a delivery signature was not obtained from the contractor’s receiving personnel at the time of delivery. Nevertheless, the inventory control point closed the record on this $3.5 million shipment, assuming the electron tubes had been received. The electron tubes remain unaccounted for. To address its deficiencies relating to proper reporting and accounting of material receipts, the Air Force plans to transition to the Department of Defense Commercial Asset Visibility System (CAV II). 
This will require a 2-year scheduled transition starting in fiscal year 2003 and ending in fiscal year 2004. Another weakness preventing effective accountability over shipped inventory relates to the Air Force’s financial management system. The Chief Financial Officers Act of 1990 requires a plan for the integration of agency financial management systems. The Federal Financial Management Improvement Act of 1996 built upon the 1990 act and required agencies to maintain an integrated system (i.e., an integrated general ledger controlled system). With such a system, accounting records and logistics records (i.e., records from the supply and repair side of inventory control points) should be updated automatically when inventory items are purchased and received. Any differences between these two sets of records should be identified periodically and research conducted to alert management at the inventory control points to possible undetected loss or theft of shipped items. As part of its latest efforts to reform its financial operations, the Department of Defense has stated that it will develop Defense-wide integrated systems. If effectively designed and implemented, these systems will be integral to ensuring effective accountability over the Air Force’s shipped inventories. To evaluate and improve supply operations and reporting performance, Air Force policy requires shipping activities to record, summarize, and report to Air Force headquarters the volume and dollar value of shipment discrepancies, and headquarters is required to analyze this data to identify the causes, sources, and magnitude of discrepancies so that corrective actions can be taken. This policy is consistent with federal government standards for internal controls that require ongoing oversight to assess the quality of performance over time and to ensure that findings of audits and other reviews are promptly resolved.
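The recording and summarization this policy requires amounts to a simple aggregation of discrepancy reports by source. The sketch below uses invented activities, causes, and dollar values purely to illustrate the kind of volume-and-value summary the policy calls for; it does not reflect any actual Air Force data or system.

```python
from collections import defaultdict

# Hypothetical discrepancy reports: (shipping activity, cause, dollar value).
# All activities, causes, and values are invented for illustration only.
reports = [
    ("Depot A", "not received",   12_000.0),
    ("Depot A", "short quantity",  3_500.0),
    ("Depot B", "not received",   40_000.0),
]

# Summarize the volume and dollar value of discrepancies by shipping activity,
# mirroring the policy requirement described above.
summary = defaultdict(lambda: [0, 0.0])  # activity -> [report count, total value]
for activity, _cause, value in reports:
    summary[activity][0] += 1
    summary[activity][1] += value

for activity, (count, value) in sorted(summary.items()):
    print(f"{activity}: {count} report(s), ${value:,.2f}")
```

Headquarters-level analysis of causes and sources would group on the cause field in the same way.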
Air Force headquarters acknowledges that it has neither requested nor collected contractor shipment discrepancy data and, as of February 2002, had not developed a definite plan of action or a target date for full implementation. The lack of program oversight may represent inadequate management emphasis. Even if the Air Force were collecting the contractor shipment discrepancy data, the data would not be meaningful because, as shown earlier, contractors are not reporting discrepancies accurately. The lack of this information impedes the Air Force’s ability to evaluate and improve supply operations as well as its ability to determine which activities are responsible for lost or misplaced items. Inventory worth billions of dollars has been vulnerable to fraud, waste, and abuse because the Air Force either did not adhere to control procedures or did not establish effective procedures. Because of these control weaknesses, repair contractors have access to items and quantities of items not specified in their contracts, and the Defense Contract Management Agency does not have the quarterly reports on shipment status that it needs to independently verify that contractors have accounted for shipments of government-furnished material. In addition, contractor receipt posting and discrepancy reporting practices produce incomplete and inaccurate information, impairing the ability of the Air Force to monitor shipments. Even if contractor records on shipment receipts were accurate, the Air Force’s system cannot reconcile material shipped to contractors with material received by contractors, so the Air Force cannot readily identify shipments with unconfirmed receipts. Consequently, the Air Force cannot readily account for these shipments, which include classified, sensitive, and pilferable items.
Finally, the Air Force has not exercised the required extent of program oversight by collecting data on contractor shipment discrepancies and using it to assess practices for safeguarding shipped inventory; as a result, it cannot identify the extent and cause of contractor shipment discrepancies or take corrective action. To improve the control of inventory being shipped, we recommend that the Secretary of Defense direct the Secretary of the Air Force to undertake the following:

Improve processes for providing contractor access to government-furnished material by listing specific stock numbers and quantities of material in repair contracts (as they are modified or newly written) that the inventory control points have agreed to furnish to contractors; demonstrating that automated internal control systems for loading and screening stock numbers and quantities against contractor requisitions perform as designed; loading stock numbers and quantities that the inventory control points have agreed to furnish to contractors into the control systems manually until the automated systems have been shown to perform as designed; and requiring that waivers to loading stock numbers and quantities manually are adequately justified and documented based on cost-effective and/or mission-critical needs.

Revise Air Force supply procedures to assign explicit responsibility and accountability for generating quarterly reports of all shipments of Air Force material and for distributing the reports to Defense Contract Management Agency property administrators.

Determine, for the contractors in our review, what actions are needed to correct problems in posting material receipts.

Determine, for the contractors in our review, what actions are needed to correct problems in reporting shipment discrepancies. 
Establish interim procedures to reconcile records of material shipped to contractors with records of material received by them, until the Air Force completes the transition to its Commercial Asset Visibility system in fiscal year 2004. Comply with existing procedures to request, collect, and analyze contractor shipment discrepancy data to reduce the vulnerability of shipped inventory to undetected loss, misplacement, or theft. In written comments on a draft of this report (see app. II), the Department of Defense concurred with six of the recommendations, non-concurred with one recommendation, and partially concurred with three recommendations. DOD did not concur with our second recommendation—to improve processes for controlling contractor access to government-furnished material by developing automated internal control systems for loading stock numbers and quantities and screening them against contractor requisitions. DOD states that the Air Force Special Support Stock Control system already has the recommended capability in place. Although the Air Force Special Support Stock Control system may be capable of loading stock numbers and quantities and screening them against contractor requisitions, we found that in practice the system was not able to carry out this function as designed. A January 2002 software change implemented to address the issue did not resolve it, and Air Force officials acknowledged in April 2002 that this system was still not working properly. To correct the weakness in its current automated internal control systems, Air Force officials stated that in April 2002 the Air Force Materiel Command planned to revise its existing procedures for an automated method of loading stock numbers into the current control systems. 
We believe that the Air Force’s actions to correct its internal control systems deficiencies are a step in the right direction, and, if the revised procedures do work as designed, they will improve the process for controlling contractor access to government-furnished material. Based on DOD’s comments, we modified our recommendation to emphasize the need to demonstrate that automated internal control systems for loading and screening stock numbers and quantities against contractor requisitions perform as designed. DOD partially concurred with the recommendation to load stock numbers and quantities for items requisitioned by contractors into the control systems manually until the automated system is implemented. DOD again stated that the Air Force Materiel Command’s current control systems already provide the capability for loading and screening national stock numbers and quantities against contractor requisitions. However, DOD directed the Air Force to determine the feasibility of establishing an interim capability until all repair contracts are written in compliance with Air Force policies and procedures. We continue to believe our recommendation will be valid until the previously discussed automated internal control system for loading stock numbers and quantities and screening them against contractor requisitions is proven to work as designed. DOD partially concurred with the recommendation to require that waivers to loading stock numbers and quantities manually are adequately justified and documented based on cost-effective and/or mission-critical needs. DOD reiterated that the current Air Force control systems already provide the capabilities for loading stock numbers and quantities and screening them against contractor’s requisitions. 
DOD further states that it will direct Headquarters, Air Force Installations and Logistics, to ensure that future decisions affecting validation of contractor orders involving government-furnished materiel or equipment are based on cost-effectiveness and/or mission-critical needs and that requests are processed in accordance with DOD policies and procedures. We agree with DOD that any future waivers should be justified and documented. However, we continue to believe our recommendation will be valid until the waiver is rescinded, because the waiver allows the inventory control point to continue to load and screen contractor requisitions at the federal stock class or stock group level rather than loading contracts at the required national stock number level. Finally, DOD partially concurred with the recommendation to comply with existing procedures to request, collect, and analyze contractor shipment discrepancy data to reduce the vulnerability of shipped inventory to undetected loss, misplacement, or theft. DOD stated that in February 2001, Headquarters, Air Force Installations and Logistics, directed all Air Force major commands to collect and analyze these types of supply discrepancies for possible trends. DOD added that each major command was tasked to provide Air Force Installations and Logistics a semi-annual report of its findings (negative reports were not required). DOD recently directed Headquarters, Air Force Installations and Logistics, to re-emphasize this requirement to all major commands and to require that all major commands submit a report on their findings for the last 12 months. Moreover, a negative report will be required if no supply discrepancies were received. While we believe this is a step in the right direction, we also believe the DOD response to our recommendation does not address the contractor discrepancy reporting issues raised in this report. 
Although some Air Force major commands may actually be collecting and analyzing shipment discrepancies at their Air Force bases, we found that similar contractor shipment discrepancy data had neither been requested nor collected. As we stated in this report, Air Force headquarters acknowledges that it has neither requested nor collected these contractor discrepancy data for shipments and, as of February 2002, had not developed a definite plan of action or a target date for full implementation. We continue to believe the conditions we reported on and our recommendation are still valid and should be addressed by DOD. Based on DOD’s comments, we have made it clear that the shipment discrepancy data referred to in this report and in the related recommendation were provided by the contractors. Appendix I contains the scope and methodology for this report. DOD’s written comments on this report are reprinted in their entirety in appendix II. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Air Force; the Director, Office of Management and Budget; and the Director, Defense Logistics Agency. We will also make copies available to others upon request. Please contact me at (202) 512-8412, or Lawson Gist, Jr., at (202) 512-4478, if you or your staff have any questions concerning this report. Other GAO staff acknowledgments are listed in appendix III. 
To assess the Air Force’s and its repair contractors’ adherence to procedures for controlling shipped inventory, we took the following steps: To identify criteria for controlling shipped inventory, we reviewed Department of Defense and Air Force policies and procedures, obtained other relevant documentation related to shipped inventory, and discussed inventory management procedures with officials at the following locations: Headquarters, Department of the Air Force, Washington, D.C.; the Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the Ogden Air Logistics Center, Hill Air Force Base, Utah; the Warner Robins Air Logistics Center, Robins Air Force Base, Georgia; the Defense Contract Management Agency, Alexandria, Virginia; and the Defense Logistics Management Standards Office, Fort Belvoir, Virginia. To identify the number, value, and types of shipped inventory, we obtained computerized supply-side records of all government-furnished material shipments and manager-directed material shipments between October 1999 and September 2000 from the Air Force Materiel Command at Wright-Patterson Air Force Base, Ohio. The records contained descriptive information about each shipment, including the document number, national stock number, and quantity shipped. We excluded broken items shipped from end-user activities to contractor repair facilities and repaired material returned from a contractor repair facility to a storage activity or end user because the Air Force Materiel Command could not readily identify and provide the descriptive information. To determine the security type of selected shipments in fiscal year 2000, we identified the national stock number for all shipments of government-furnished material and manager-directed material. We then matched the national stock number with security classification codes in the Department of Defense Federal Logistics Information System. 
To select contractors and items shipped to them, we used computerized shipment data obtained from the Air Force Materiel Command. To develop our methodology, we conducted a preliminary review using three judgmentally selected contractors; two contractors were chosen on the basis of their proximity to the inventory control points, and the third was selected because of the substantial volume of shipments between it and all of the inventory control points. For these initial contractors, we selected 214 government-furnished material items and 1,159 manager-directed material items, based on such factors as the national stock number of the items and the number of items and/or dollar value of the shipments. Subsequently, we judgmentally selected an additional nine repair contractors, three for each inventory control point, that had either the largest dollar value or the largest number of government-furnished and manager-directed items shipped to them. For these contractors, we then selected 188 government-furnished material items and 7,442 manager-directed material items based on the military sensitivity of the items in the shipments and the unit price and/or dollar value of the shipments. Because the number of selected contractors and shipments was limited and judgmentally selected, the results of our analysis cannot be projected to all Air Force repair contractors and shipments. To assess whether shipments had been received and entered into the inventory control points’ repair-side reporting system, we obtained from the inventory control points their computer-generated shipment receipt histories. The receipt histories contained descriptive information about each shipment, including the document number, national stock number, and quantity reported as received. 
We did not independently verify the overall accuracy of the databases from which we obtained data, but used them as a starting point for selecting shipments that we then tracked back to records and documents on individual transactions. Because our conclusions are based only on those shipments that we tracked back to documents, use of these data is reasonable for our purposes. To determine whether contractors had accounted for our selected shipments, we then matched the Air Force Materiel Command supply-side records of shipments to inventory control points’ repair-side receipt histories. When we identified discrepancies, we followed up with the repair contractors and inventory control points by tracking items back to contractor inventory records and by holding discussions with officials at the following locations: BAE Flight Systems, Mojave, California; Boeing, San Antonio, Texas; Boeing Electronic Systems, Heath, Ohio; Heroux, Inc., Quebec, Canada; ITT Avionics, Clifton, New Jersey; Lockheed Martin, Marietta, Georgia; Lockheed Martin, San Antonio, Texas; Lockheed Martin Lantirn, Warner Robins, Georgia; Northrop Grumman, Baltimore, Maryland; Northrop Grumman, Warner Robins, Georgia; PEMCO Aeroplex, Birmingham, Alabama; Teledyne Electronic Technologies, Warner Robins, Georgia; and the Defense Distribution Depots (located in Warner Robins, Georgia, and Oklahoma City, Oklahoma). To determine what happened to selected items that had reportedly not been received by contractors, our Office of Special Investigations followed up with commercial carriers by obtaining proof of delivery information and by holding discussions with officials at the following locations: ABF Freight Systems, Inc., Fort Smith, Arkansas; Associated Global Systems, Inc., New Hyde Park, New York; CorTrans Logistics, LLC, Wooddale, Illinois; Emery Worldwide, Ontario, California; Federal Express Corporation, Somerset, New Jersey; and United Parcel Service, Washington, D.C. 
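The matching step described above, comparing supply-side shipment records against repair-side receipt histories on the document number, national stock number, and quantity fields, can be sketched as a simple reconciliation. The record layout and sample values below are illustrative assumptions, not the Air Force's actual data schema.

```python
# Illustrative sketch of the shipment-to-receipt reconciliation described in
# the methodology. Field names and sample records are hypothetical.

def reconcile(shipments, receipts):
    """Match supply-side shipments to repair-side receipts by document
    number; flag shipments that are unconfirmed or received short."""
    received = {r["document_number"]: r for r in receipts}
    discrepancies = []
    for s in shipments:
        r = received.get(s["document_number"])
        if r is None:
            discrepancies.append({**s, "issue": "no receipt posted"})
        elif r["quantity"] < s["quantity"]:
            short = s["quantity"] - r["quantity"]
            discrepancies.append({**s, "issue": f"short {short} units"})
    return discrepancies

shipments = [
    {"document_number": "FB2029-0001", "nsn": "1560-01-234-5678", "quantity": 4},
    {"document_number": "FB2029-0002", "nsn": "5998-00-111-2222", "quantity": 10},
]
receipts = [
    {"document_number": "FB2029-0001", "nsn": "1560-01-234-5678", "quantity": 4},
]
print(reconcile(shipments, receipts))  # reports the unposted shipment
```

Each flagged record corresponds to a shipment that would then be tracked back to contractor inventory records and carrier proof-of-delivery information, as the methodology describes.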
To learn whether issues associated with unaccounted-for shipments were adequately resolved, we reviewed Department of Defense, Air Force, and Air Force Materiel Command implementing guidance. Such information provided the basis for conclusions regarding the adherence to procedures for controlling shipped inventory. To determine whether the Air Force had emphasized shipped inventory as part of its assessment of internal controls, we reviewed assessments from the Department of the Air Force, the Oklahoma City Air Logistics Center, the Ogden Air Logistics Center, and the Warner Robins Air Logistics Center for fiscal years 1999 and 2000. Our work was performed from May 2001 through April 2002 in accordance with generally accepted government auditing standards. We conducted our other investigative work during March 2002 and April 2002 in accordance with investigative standards established by the President’s Council on Integrity and Efficiency. Key contributors to this report include Sandra F. Bell, George Surosky, Susan Woodward, Jay Willer, David Fisher, Norman M. Burrell, and John Ryan. Performance and Accountability Series: Major Management Challenges and Program Risks—Department of Defense. GAO-01-244. Washington, D.C.: January 2001. High-Risk Series: An Update. GAO-01-263. Washington, D.C.: January 2001. Defense Inventory: Plan to Improve Management of Shipped Inventory Should Be Strengthened. GAO/NSIAD-00-39. Washington, D.C.: February 22, 2000. Department of the Navy: Breakdown of In-Transit Inventory Process Leaves It Vulnerable to Fraud. GAO/OSI/NSIAD-00-61. Washington, D.C.: February 2, 2000. Defense Inventory: Property Being Shipped to Disposal Is Not Properly Controlled. GAO/NSIAD-99-84. Washington, D.C.: July 1, 1999. DOD Financial Management: More Reliable Information Key to Assuring Accountability and Managing Defense Operations More Efficiently. GAO/T-AIMD/NSIAD-99-145. Washington, D.C.: April 14, 1999. 
Defense Inventory: DOD Could Improve Total Asset Visibility Initiative With Results Act Framework. GAO/NSIAD-99-40. Washington, D.C.: April 12, 1999. Defense Inventory: Navy Procedures for Controlling In-Transit Items Are Not Being Followed. GAO/NSIAD-99-61. Washington, D.C.: March 31, 1999. Performance and Accountability Series: Major Management Challenges and Program Risks—Department of Defense. GAO/OCG-99-4. Washington, D.C.: January 1999. High-Risk Series: An Update. GAO/HR-99-1. Washington, D.C.: January 1999. Department of Defense: Financial Audits Highlight Continuing Challenges to Correct Serious Financial Management Problems. GAO/T-AIMD/NSIAD-98-158. Washington, D.C.: April 16, 1998. Department of Defense: In-Transit Inventory. GAO/NSIAD-98-80R. Washington, D.C.: February 27, 1998. Inventory Management: Vulnerability of Sensitive Defense Material to Theft. GAO/NSIAD-97-175. Washington, D.C.: September 19, 1997. Defense Inventory Management: Problems, Progress, and Additional Actions Needed. GAO/T-NSIAD-97-109. Washington, D.C.: March 20, 1997. High-Risk Series: Defense Inventory Management. GAO/HR-97-5. Washington, D.C.: February 1997. High-Risk Series: Defense Financial Management. GAO/HR-97-3. Washington, D.C.: February 1997.

GAO has considered Department of Defense (DOD) inventory management to be a high-risk area since 1990 because inventory management systems and procedures are ineffective. This report evaluates the Air Force's inventory control procedures for material shipped to contractors for repair or for use in repair. The Air Force and contractor personnel have not complied with DOD and Air Force inventory control procedures designed to safeguard material shipped to contractors, placing items worth billions of dollars at risk of fraud, waste, and abuse. The Air Force's three inventory control points have not restricted repair contractors' access to the specific items and quantities of government-furnished material needed to accomplish the contract. Quarterly reports on the status of shipped material have not been sent to property administration officials at the Defense Contract Management Agency. Contractors receiving shipped material have not (1) properly entered the receipt of shipments into their records and into the inventory control points' reporting systems or (2) routinely reported shipment discrepancies. Air Force procedures for following up on shipments that contractors have not confirmed as received are ineffective, leaving the status of the shipments uncertain. The Air Force has not provided adequate program oversight because it does not request and analyze data on contractor shipment discrepancies to identify their extent and cause so that corrective action may be taken.
HUD, through its Federal Housing Administration (FHA), helps finance home purchases by insuring private lenders against losses on mortgages for single-family (SF) and multifamily (MF) homes. If a borrower defaults on a loan and the loan is subsequently foreclosed, the lender may file a claim for most of its losses with HUD. After an insurance claim is paid from one of HUD’s various mortgage insurance funds, HUD assumes title to the property and the property becomes part of the HUD property inventory. HUD’s mortgage insurance funds support a wide variety of MF and SF insured loan activities, including management of HUD properties until they are sold. HUD’s mortgage insurance funds are financed by annual appropriations from the Congress, upfront and periodic mortgage insurance premiums from transactions with the public, or interest revenue. In 1994, we first designated HUD’s programs as high risk due to serious, long-standing, departmentwide management problems. In January 2001, we reduced the number of programs deemed to be high risk from all HUD programs to two of its major program areas. One of the two programs was SF mortgage insurance, which includes the management of property inventory. In fiscal year 2003, we designated HUD’s acquisitions management, including contractor monitoring, as a new major management challenge because of HUD’s extensive and growing reliance on contractors. We first discussed our evaluation of payments related to HUD properties in our October 2002 testimony on our review of, among other things, fiscal year 2001 payments to a contractor responsible for managing HUD multifamily properties. We reported that this contractor engaged in questionable billing practices that resulted in potentially fraudulent payments. The contractor split construction renovation charges into multiple projects to stay below the $50,000 threshold at which HUD approval is required. 
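A split-billing pattern of this kind can be surfaced with a simple data-mining screen: group a contractor's invoices and flag groups whose individual invoices each fall below the approval threshold while their combined total exceeds it. The $50,000 threshold comes from the report; the invoice data and grouping fields below are hypothetical examples, not HUD's actual records.

```python
# Illustrative screen for possible split billings. Invoice records and the
# (contractor, property) grouping key are hypothetical examples.
from collections import defaultdict

APPROVAL_THRESHOLD = 50_000  # HUD approval threshold cited in the report

def flag_possible_splits(invoices):
    """Return (group key, total) pairs where multiple sub-threshold
    invoices together exceed the approval threshold."""
    groups = defaultdict(list)
    for inv in invoices:
        groups[(inv["contractor"], inv["property_id"])].append(inv["amount"])
    flagged = []
    for key, amounts in groups.items():
        if (len(amounts) > 1
                and all(a < APPROVAL_THRESHOLD for a in amounts)
                and sum(amounts) >= APPROVAL_THRESHOLD):
            flagged.append((key, sum(amounts)))
    return flagged

invoices = [
    {"contractor": "A", "property_id": "P-101", "amount": 48_000},
    {"contractor": "A", "property_id": "P-101", "amount": 47_500},
    {"contractor": "B", "property_id": "P-202", "amount": 12_000},
]
print(flag_possible_splits(invoices))  # flags contractor A on property P-101
```

A flag from such a screen is only a lead for further review, not proof of improper splitting; legitimate work can also generate several small invoices against one property.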
We identified about $10 million of this contractor’s invoices that individually were less than $50,000. We also found cases in which HUD paid this contractor for goods or services that were not received. In its SF property program, HUD contracts with six property management firms that are responsible for all activities associated with managing and marketing properties. Each of the contracts includes (1) having the properties appraised; (2) securing the properties to prevent unauthorized entry; (3) inspecting the properties to ensure that they are clean and in presentable condition; (4) performing routine maintenance, as well as repairs and renovations necessary to preserve and protect the property; (5) listing the properties for sale; and (6) selling them. Each contract covers a different geographic area that is under the jurisdiction of one of what are referred to as HUD’s homeownership centers. Contractors may have agreements in more than one geographical area. HUD also contracts with a support services contractor to facilitate payment to these management firms and other vendors. The homeownership centers are located in Atlanta, Georgia; Denver, Colorado; Philadelphia, Pennsylvania; and Santa Ana, California. Figure 1 shows the geographical jurisdiction of each of the four centers. The centers report directly to HUD’s Deputy Assistant Secretary for Single-Family Housing who, in turn, reports to the Assistant Secretary for Housing–Federal Housing Commissioner. The Director of the Real Estate Owned Division in each of the four centers is responsible for monitoring contractors’ performance in the respective center’s jurisdiction. Homeownership center staff manage and conduct the monitoring and prepare monthly assessments of contractors’ performance. The homeownership centers have a number of resources upon which they can draw to aid them in making these assessments. 
For instance, to assist in HUD’s oversight, third-party contractors are to inspect 10 percent of the properties handled by each management and marketing contractor. Also for oversight purposes, another national contractor is to follow a HUD checklist of procedures to be performed for the review of 10 percent of the management and marketing contractors’ property case files each month. In fiscal year 2002, HUD paid contractors $11 million for assistance in reviewing and processing payments and performing quality control oversight of management contractor performance. In addition, each center’s program support staff are to conduct follow-up property inspections and file reviews, as well as a monthly on-site review at the contractors’ offices. As part of the analysis, the HUD staff assign a risk rating of low, medium, or high to the contractor’s performance on each of 11 performance dimensions, such as claims review, property maintenance, and sales procedures. According to agency data, payments for SF property expenses totaled more than $310 million in fiscal year 2002. The SF Acquired Asset Management System (SAMS) payment process reimburses management contractors for costs incurred in managing and marketing HUD properties, makes direct payments to certain other vendors, such as oversight contractors, and pays management fees to the management contractors. The HUD payment process, as designed, includes four key steps: (1) preparation of the payment request by the property management contractor or other vendor seeking funds; (2) initial review of the payment request by a support services contractor; (3) HUD approval, including a technical review by a person appointed as the government technical monitor (GTM) and final authorization by a government technical representative (GTR); and (4) payment, either by electronic transfer or paper check. 
In its MF program, HUD contracts for property management services, such as on-site management, rent collection, and maintenance for the multifamily properties it acquires through the foreclosure process. In 1994, due to a substantial increase in HUD’s inventory of MF properties, HUD entered into an agreement with a state housing agency for the renovation, disposition, and interim management of certain MF properties located in one geographical region. HUD officials told us this program was intended to be a demonstration or pilot program, to determine if this type of agreement is feasible as a common practice within HUD. Under this pilot, HUD is responsible for providing all the money needed to fund the program, and the state agency is responsible for developing and monitoring the project. HUD’s internal controls did not provide reasonable assurance that improper payments would not occur or would be detected in the normal course of business. We identified fundamental weaknesses in the four-step process used to pay for SF property expenses. Our Standards for Internal Control in the Federal Government include (1) establishing a positive control environment throughout the organization, (2) performing control activities, which are an integral part of an entity’s accountability for stewardship of government resources, and (3) monitoring to assess the quality of performance over time. However, we found that HUD did not delegate functions in a way that supported a positive control environment; specifically, the agency routinely relied on a support services contractor to prepare management contractors’ and other vendors’ payment requests and perform technical reviews of payment requests. HUD also routinely failed to require or ensure that all transactions were clearly documented, which is a control activity that helps ensure accountability for resources. 
HUD monitoring of contractors’ performance, particularly the review of the nature and amount of expenses incurred, was also inadequate. In addition, HUD did not respond appropriately to identified vulnerabilities that increased the risk of unsatisfactory performance. These internal control weaknesses made the HUD SF property program highly susceptible to fraud, waste, and abuse. HUD delegated oversight functions in a manner that weakened its control environment and resulted in established controls not being followed. We found that HUD routinely relied on a support services contractor to perform key elements of the first three steps in the four-step payment process (fig. 2) for SF property expenses. These delegated oversight functions included preparing payment requests (step 1), performing the administrative review (step 2) and performing a technical review (step 3.1) of payment requests. HUD relied on the support services contractor to perform the functions in both step 1 and step 2 of the payment process when HUD staff or the support service contractor determined that the original request needed to be modified. When payment requests received by HUD from the property management firms or other vendors needed modification to either the amount requested or property to be charged with the expense or other information, the support service contractor would make the change and prepare a revised payment request. This function is analogous to voiding and then replacing a check in a manual payment system. HUD’s written policies require that the management contractor create its own payment requests and those needing modification be returned to the requesting contractor for any changes. That is, the contractor requesting the payment was responsible for resubmitting the payment request after addressing the issues that caused the need for modification. 
However, HUD regional officials told us that they were aware this control was not followed, primarily to avoid delays in processing payments. In these cases, the support service contractor created the revised payment requests using the same transmittal number that was used for the original request. HUD also relied on the support service contractor to prepare payment requests (step 1) and perform the administrative review (step 2) of payment requests when the vendors requesting payment did not have access to the electronic payment system—HUD SAMS. While the property management contractors have access to SAMS, other vendors who routinely request payment from HUD, for example closing agents, do not. In some of the cases, the vendors without access to SAMS were submitting requests for payments directly to the support services contractor, rather than to the property manager of the underlying property or a HUD official who would be in a better position to monitor the completion and quality of work. In addition, HUD requires that vendors provide a signed request form as assurance that the information submitted on payment requests is true and accurate. We found numerous requests created by the support services contractor that did not have this signature. Instead, the support services contractor signed the request form.

“Transactions and other significant events should be executed by persons acting within the scope of their authority.” (This text box, as well as those following in this section of the report, quotes the U.S. General Accounting Office, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1999).)

Payments at two of the four homeownership centers were not properly approved due to a lack of technical review. Specifically, we found that the support services contractor was routinely performing the technical review (step 3.1) reserved for the HUD-appointed GTM at two of the four HUD homeownership centers. 
In these situations, HUD permitted the support services contractor to act outside the contractor’s scope of authority granted through the HUD control structure by conducting the review, which requires a HUD-appointed individual with specific technical expertise. That is, these requests for payment should not have been approved, given the absence of the GTM technical review. HUD’s delegation of the oversight functions for three of the four steps of the SF property payment process significantly weakened the control environment. The overreliance on the support service contractor resulted in a control environment in which the controls over rejected payment requests and the approval of requests were not followed. When oversight functions are delegated in a manner that does not support a positive control environment, the control process may not be effective in detecting and preventing improper payments. On the basis of our statistical sample, we estimated that about 42 percent of the total number of SF property payments at the four homeownership centers were not adequately supported. That is, the minimum support necessary for a third party to determine the validity of the payment was not included in the documentation provided with the payment. Control activities, such as clearly documenting all transactions, are an integral part of an entity’s accountability for stewardship of government resources. HUD did not enforce consistent programwide documentation requirements, but rather allowed each HUD approving official to determine the adequacy of supporting documentation. As a result, the nature and extent of acceptable supporting documentation was inconsistent from region to region. For example, two of the four HUD homeownership centers accepted “manual” payment requests that were created outside of the HUD automated system. 
This deviation from the written internal control policy created inconsistencies in the payment request process among the contractors for that region as well as across the four homeownership centers. These “manual” payment requests did not have all the supporting data elements (e.g., payee Social Security number or tax identification number, address, remittance address) that the system-generated payment request included. Therefore, edit checks in the automated system, such as limitations on who was authorized to change the payment remittance address, were lost when manual payment requests were created. We also found payments that lacked adequate support, such as evidence that goods or services had been received or that competitive bids had been obtained prior to the work being performed. Some supporting documentation lacked evidence of any validation of the charges. For example, payments to the contractor responsible for spot inspections of properties typically would be based on an invoice that reflected a fixed rate per property inspected and a list of the properties inspected. However, the support for these payments was devoid of any indication that the reviewer had verified the rate used, HUD’s ownership of the properties inspected, or otherwise determined the validity of the amount and relevant terms of the payment request. We also found that invoices and other supporting documentation were not effectively canceled to prevent unauthorized or inadvertent reuse as support for subsequent billings, and that documents other than originals were used to support payments without any indication as to why an original invoice was not provided. We advised HUD officials when adequate supporting documentation was missing and we could not determine the validity of a payment selected for review.
HUD management advised us that the support for certain amounts paid was not included with the payment documentation; instead, the reviewing, approving, and certifying officials were simply to review the contract file to verify the accuracy of the charges. We saw no evidence of this contract file review. For example, as discussed later, through data mining we identified $15.2 million in payments for contract modifications with insufficient supporting documentation. The support for these payments was typically limited to copies of e-mails to the homeownership center from headquarters directing that payment be made, incomplete standard contract modification forms, and spreadsheets detailing by property only an insignificant portion of the total amount paid. We also identified cases where the HUD approving official at one of the homeownership centers was not requiring a contractor to provide specific support for payments to subcontractors, even though certain minimum support was required by the terms of the management contract, such as evidence that the contractor had paid the subcontractor before requesting reimbursement from the government. Adequate support for these amounts is critical because the payment is a reimbursement for the amount paid by the contractor to the subcontractor. These and other examples of payments without adequate support that were identified through data mining are discussed in more detail later in this report. Neither HUD headquarters personnel nor the regional staffs systematically performed detailed analytical reviews of the millions of dollars in expenses generated by payments to contractors and other vendors. Monitoring the quality of performance over time is a critical control activity. Detailed analytical review of expenses focusing on key data elements is a way for management to assess performance and identify areas of risk.
Although monitoring was deficient, HUD did perform some program-wide analysis of certain financial performance indicators, and limited analysis was done on a region-by-region basis. For example, the average holding costs per property and the average time held in inventory were calculated. HUD headquarters officials stated that when a region had a “spike” in one of its performance indicators, a conversation to identify potential causes would take place. However, without specific analytical review of expenses, the real causes of the “spikes” may not be identified. Analytical reviews include focusing on key data elements, such as property number, vendor name, and expense classification, to identify patterns or anomalies that may require further inquiry or analysis. The results of detailed reviews can lead to cost-saving opportunities, the identification of unusual patterns, and ultimately the discovery of instances of fraud, waste, and abuse. The automated SF payment system captures expenses by (1) case number and (2) expense category. These system features have the potential to assist HUD in strengthening its oversight of contractors. For example, totaling expenses by property provides HUD with the ability to compare and analyze property expenses over time, from acquisition through sale of the property, in a variety of ways, including by geographical region and contractor. Further, analyzing expenses by category, such as board-up, general repair, and clean-up expenses, would provide HUD with meaningful oversight information. Also, HUD management may find focused expense analysis work, similar to that which we performed for this review, to be an effective and efficient method for assisting in preventing and detecting improper payments. Our detailed analytical reviews of HUD payment data identified patterns that led us to specific improper payments.
For example, figure 3 illustrates one of the basic types of analyses we performed to determine areas of high risk, which allowed us to focus on the areas we viewed as most vulnerable to improper payments. Our analysis, as depicted in figure 3, focused our attention on determining the reason for the relatively high expense per property in the Philadelphia homeownership center when compared to other regions, and then on one particular contractor within that center. We ultimately identified significant potentially fraudulent payments made at the New York City properties that are discussed later in the report. Our Executive Guide: Strategies to Manage Improper Payments explains how data mining and other forensic auditing techniques analyze data for relationships that have not previously been discovered. The guide also provides examples of various federal and state agencies that had performed such analysis. HUD officials indicated that a lack of resources was the primary reason that they did not perform detailed expense analysis. We found that one potential roadblock to a meaningful detailed analytical review was HUD’s lack of control over expenses that it classified as allocated costs (AC). This expense category was intended to be used to accumulate expenses that could not be directly charged to a property and then allocate those expenses over all properties or those that received some benefit from the expense. For instance, the expense incurred for bonding coverage and file reviews for the entire program would be properly chargeable to AC and then allocated to all properties. However, HUD routinely used AC for expenses that should have been charged to specific properties. For example, we identified renovation charges for a specific property being classified as allocated costs. The accountability for HUD resources and ultimately the monitoring of the contractors’ performance were negatively affected when expenses were not consistently and accurately classified.
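The kind of region-level expense screening described above can be sketched in a few lines of code. The following is a minimal illustration only, not HUD’s actual system or data: the record layout, dollar amounts, and the 1.5x outlier ratio are assumptions made for the example.

```python
from collections import defaultdict

# Hypothetical payment records: (region, property_id, expense_category, amount).
# The layout and values are illustrative, not HUD's actual data.
payments = [
    ("Atlanta", "P-100", "clean-up", 450.0),
    ("Atlanta", "P-101", "repair", 1200.0),
    ("Denver", "P-200", "board-up", 300.0),
    ("Denver", "P-201", "repair", 900.0),
    ("Philadelphia", "P-300", "repair", 4800.0),
    ("Philadelphia", "P-301", "repair", 5200.0),
]

def avg_expense_per_property(records):
    """Average total expense per property, grouped by region."""
    totals = defaultdict(float)   # region -> total dollars paid
    props = defaultdict(set)      # region -> distinct properties seen
    for region, prop_id, _category, amount in records:
        totals[region] += amount
        props[region].add(prop_id)
    return {r: totals[r] / len(props[r]) for r in totals}

def flag_outlier_regions(averages, ratio=1.5):
    """Flag regions whose per-property average exceeds `ratio` times
    the mean of all regional averages (an assumed screening rule)."""
    overall = sum(averages.values()) / len(averages)
    return sorted(r for r, v in averages.items() if v > ratio * overall)

averages = avg_expense_per_property(payments)
print(flag_outlier_regions(averages))
```

With these illustrative figures, the Philadelphia center stands out, mirroring the pattern the figure 3 analysis surfaced; the flagged region would then be drilled into by contractor and expense category.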
HUD’s internal control monitoring of its contractors did not ensure timely and effective action in response to identified risks. For example, we found there was not an effective property inspection program that linked physical inspections to work billed. We also found that HUD made payments in its single-family program to a contractor for 1 year after we testified that the same contractor was engaging in abusive billing practices in HUD’s multifamily program. Although HUD held numerous meetings with the contractor over several years since shortly after the inception of the contract in June 2001, HUD did not promptly or effectively address the identified risk by implementing compensating controls over this contractor’s activities. We will discuss these issues in greater detail later in this report. The lack of fundamental internal controls over the process used to pay SF property expenses likely contributed to $16.3 million of questionable and $181,450 of potentially fraudulent payments that we identified through the use of forensic auditing techniques, including data mining and document analysis. We found questionable payments for invoices that had not been appropriately reviewed and authorized, that lacked adequate support and documentation, and where one person falsified a key support document. In addition, HUD did not monitor contractor performance and take prompt action to correct known deficiencies. As a result, we found a number of instances where HUD paid for contractor services that were substandard or not performed at all. These potentially fraudulent billings were all made by a contractor we identified in our previous work on certain fiscal year 2001 MF property payments as carrying out highly questionable billing practices. HUD recently took action to end its use of this contractor. 
The $16.5 million of questionable and potentially fraudulent payments made to contractors and other vendors during fiscal years 2002 and 2003 demonstrates the program’s unacceptably high vulnerability to such payments. We classified payments as questionable if they were not supported by sufficient documentation to enable an objective third party to determine whether each payment was a valid use of government funds. For the $16.3 million in payments we classified as questionable, we could not determine, as applicable, one or more of the following: (1) the nature of the goods or services HUD was paying for, (2) whether the quantity and cost of the goods or services were correct for each item purchased, (3) whether the government received the goods or services, (4) whether a valid contract or other agreement existed to support the payment, (5) whether the payment was for a valid obligation of the program, (6) whether competitive bids had been obtained for the work, and (7) whether there was a legitimate government need for the goods or services. Table 1 summarizes these questionable payments. For illustrative purposes, we provide specific examples of the actual support for eight payments. The documents reproduced in the examples were provided to us by HUD as support for these payments and are the same support that HUD officials relied on to review and approve the payments. We identified $15.2 million of questionable payments to contractors for contract change orders with inadequate support for the payments. For example, five property management contractors received $10.6 million in payments for change orders when either no standard contract modification agreement supported the payments or the modification agreement was not signed by one or both parties. In addition, details of the amount charged for each property were not provided for most of the amounts included in each payment.
Frequently, the payments were for services performed over long periods of time prior to the date of payment, and the supporting documentation did not explain the delay. For example, a change order issued in March 2001 led to payments over a year later of fixed amounts totaling millions of dollars, without an adequate explanation for the delay in payment included in the supporting documentation. In January 2004, HUD headquarters officials advised us that fully executed contract modification agreements existed at the time each of the payments was made. While HUD officials acknowledged that the agreements and underlying detail support by property were not in the payment files, they stated that the reviewer, approver, and certifying official reviewed all documentation in order to verify the accuracy of the charges. We found no evidence that the reviewers, approvers, or certifying officials had located these documents to validate the payments. Figure 4 shows that the only support for a single payment of $1,318,692 in June 2002 was an invoice that included two lines of explanation, for $452,000 and $862,661, with the description: “Lump sum Payment for Change Order.” No other support was provided for these two line items, such as a copy of a contract modification agreement signed by both parties, a list of the property numbers for the properties that received the goods or services, the time period covered by the payments, or an explanation as to what the government was paying for. Allocated costs, a pooled expense category, was charged with $1,314,661 of the total payment. The balance of $4,031 was charged to specific properties. HUD internal control policies require that specific properties be charged for all identifiable expenses. Further, there was no indication that the approver sought to determine the time period of the charges and relate that period to the dates HUD owned the properties for which the payments were made.
Example 2–Water and Sewer Services We identified a $206,597 payment to one contractor for water and sewer charges related to 31 HUD properties, an average of $6,664 per property. HUD acquired the majority of the properties in 2001 or 2002. Substantially all of the $206,597 paid was for services provided prior to HUD’s ownership. We considered this payment questionable because the support was inadequate. HUD regional officials informed us that the payments were made to protect the properties from liens by the water authority. We found no indication in the payment support as to why HUD was paying for services provided even before the period of ownership covered by the most recent HUD-insured mortgage. While recognizing HUD’s concern to protect the properties from liens, we found no indication that HUD pursued the question of why these charges had not been identified at the time of settlement and acquisition of the properties, or that the contractor or HUD had pursued negotiating a settlement with the water authority. Further, our review of the charges identified numerous large amounts given the nature of the property and the time period involved. For example, one invoice dated July 2002 was for $35,756 for water and sewer services from May 1995 through June 2002 for a property HUD acquired in June 2001. Our research identified three prior owners of this property during the period covered by the bill paid by HUD. Another invoice was for $18,530 for services from January 1999 through May 2002 on a property HUD acquired in May 2002. Furthermore, at least two of the prior owners received FHA loans to purchase this property, even though at the time of purchase there were outstanding water and sewer bills related to the property. Yet HUD’s payment files contain no indication that HUD officials reviewed the charges for accuracy, despite the unusually large amounts.
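A simple automated check could have surfaced the pre-ownership charges described above by comparing each invoice’s billed service period against the property’s acquisition date. The sketch below is a hypothetical illustration of that comparison; the record layout is an assumption, though the dates loosely mirror the report’s examples.

```python
from datetime import date

# Hypothetical records: (property_id, service_start, service_end, date_acquired).
# Dates loosely mirror the invoices described in the report; layout is assumed.
charges = [
    ("A", date(1995, 5, 1), date(2002, 6, 30), date(2001, 6, 1)),
    ("B", date(1999, 1, 1), date(2002, 5, 31), date(2002, 5, 1)),
    ("C", date(2002, 1, 1), date(2002, 6, 30), date(2001, 12, 1)),
]

def pre_ownership_charges(records):
    """Return IDs of properties whose billed service period begins before
    the acquisition date -- charges that should be questioned by a reviewer."""
    return [pid for pid, start, _end, acquired in records if start < acquired]

print(pre_ownership_charges(charges))
```

Properties A and B would be flagged for review; property C, billed entirely after acquisition, would pass.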
Example 3–Lead-Based Paint Abatement Program Our review found eight payments totaling $268,800 that HUD headquarters directed its regional offices to make to a contractor for partial reimbursement of claimed expenses of $529,682 to develop a lead-based paint abatement program. The contractor claimed that the lead-based paint abatement program was developed in response to a HUD request. Support for these payments did not include a signed contract modification form or other agreement for the contractor to develop such a program, or any indication of the amount or basis for settlement against the claimed expense of $529,682. The support provided to us for two payments totaling $99,000 had no signatures by HUD officials indicating the requests had been reviewed, approved, or certified for payment. Further, the entire amount of the payments was charged to allocated costs, a pooled expense category, and not to specific properties as required by HUD policy. Although we started asking for support for these payments in August 2003 and received some information from HUD over time, it was not until January 2004 that HUD headquarters provided us with documentation that included e-mails from a HUD contracting official to the contractor indicating that the contractor agreed to accept $240,000 as the remaining payment on the lead-based paint services, that invoices for this amount would be handled as pass-through expenses similar to what was done for the initial payment of $300,000, and that “we won’t need to issue contract modifications this way.” No explanation was provided as to how the $540,000 settlement was reached or why that amount exceeded the total amount claimed by the contractor. Further, the management contract provides that all costs of performance are at the expense of the contractor unless otherwise specifically identified as pass-through costs in the contract. HUD must approve any additional pass-through expenses prior to the expense being incurred.
We found no evidence that development of a lead-based paint abatement program was specifically identified as a pass-through cost or that HUD granted approval prior to the contractor incurring the expense. Example 4–Signs and Airfreight Delivery We identified 10 payments totaling $58,343 for the purchase and shipping of over 14,950 “for sale” signs for use by contractors on HUD properties. The airfreight fee to ship the signs from Texas, the contractor’s home office, to field offices in various states, including California, Tennessee, and Illinois, totaled $6,805. The contractor did not obtain the required competitive bids. Further, we found no evidence that (1) local supply sources had been considered, (2) the quantity of signs paid for had been reviewed for reasonableness, or (3) the large airfreight charges had been questioned. Also, we could not reconcile some amounts paid to the invoices used to support the payments. In January 2004, HUD headquarters officials advised us that they had agreed to be responsible for all costs incurred by the contractor for developing the signs and for the accelerated delivery and that a contract modification agreement was executed. However, they did not provide an explanation as to why local supply sources had not been considered, nor did they address the other issues described above. We identified five payments totaling $30,366 to reimburse a management contractor for fencing installed at multiple locations by a single vendor. Figure 5 shows one such fence. There was no evidence that the contractor obtained the required written competitive bids. The representatives of the contractor whom we interviewed in August 2003 told us that a city ordinance requires that this particular fencing vendor be used. In January 2004, HUD headquarters officials advised us that the vendor was awarded the work after a competitive bidding process. However, HUD could not locate the supporting documentation because the bidding process had occurred some years earlier.
We identified one payment of $98,695 and two subsequent payments of $98,696 each, for a total of $296,087, to a contractor for “Records Management.” The support for the first payment (fig. 6) was the billing from the contractor and an annotation from a HUD employee indicating that it was “OK to pay.” There was no contract included with the support for any of the payments. There also was no indication that the amount of the payments had been compared to a contract. HUD regional staff advised us that the “OK to pay” notation on one of the invoices by the division director was sufficient to process the payment. In January 2004, HUD headquarters officials advised us that a valid agreement for the services was in place at the time of payment, but that neither the contract for the services nor the modification to the agreement with the contractor is required to be attached to the payment. However, we found no evidence that these documents were reviewed or considered prior to payment of the invoices. We identified a payment of $1,300 to reimburse a contractor for a fee paid to an individual to vacate a property not in the HUD inventory. The support for the payment was an internal e-mail questioning whether the property was in HUD’s inventory, a generic invoice that was unsigned and did not include the FHA property number–a HUD requirement for all payments (fig. 7), an unsigned “Agreement to Vacate” (fig. 8), and a final approval e-mail (fig. 9) without explanation. HUD regional staff subsequently told us that a signed “Agreement to Vacate” existed; however, we found no evidence of the signed agreement with the support for the payment. HUD charged the payment to allocated costs, a category for pooled expenses. The property located at the address on the form did not have an FHA property number, which is required to submit expenses for a HUD property. In January 2004, HUD regional officials confirmed that at the time of payment the property was not in the HUD automated payment system.
Example 8–Lawn Service and Repairs During our document analysis work we identified suspicious documents supporting a number of payments. Specifically, we found numerous work orders, one of which is illustrated in figure 10, that were initialed three times by the same person, certifying (1) receipt of competitive bids, (2) completion of the work by the subcontractor, and (3) inspection of the work performed. During our site visit to Santa Ana, California, we interviewed representatives of the contractor and asked for an explanation of the initials on these work orders. We were told that they had not noticed the similarity in the initials and did not know the identity of “S. C.,” the person whose initials were on the work orders. Later the same day, the contractor advised HUD by phone that the work orders had been falsified to support disbursement requests. We suggested to HUD that it perform an extensive review of payments meeting certain criteria to identify any additional potentially improper payments. HUD advised us in August 2003 that it was seeking reimbursement of approximately $23,000 in payments that had been made based on similar falsified work orders. However, approximately 2 months later, HUD reversed its position and decided not to seek reimbursement for these payments because the contractor assured HUD that the work had been performed. In January 2004, HUD headquarters officials advised us that they supported the decision not to seek reimbursement from the contractor because the work was “verified for completion.” However, the verification of performance of the work was provided–months after the date of payment–by the contractor that had falsified the documentation; HUD has not independently verified that the work was performed. In addition to the previous eight examples, we identified 22 other questionable payments totaling $228,773. These included payments for steel roll-up doors, appraisal services, newspaper advertising, and utilities.
The common issue with these payments, like others classified as questionable, was the lack of adequate supporting documentation included with the payment. Without this support, we could not determine whether these payments were a valid use of government funds. Because we tested only a small portion of the transactions that appeared to be high risk and HUD internal controls did not provide reasonable assurance that improper payments would not occur or would be detected in the normal course of business, there are likely other questionable payments that we have not identified. HUD’s failure to monitor contractor performance and institute additional control activities in response to known risks resulted in at least $181,450 of potentially fraudulent payments. We identified $163,965 of potentially fraudulent payments made in fiscal year 2002 and $17,485 made in fiscal year 2003. We classified payments as potentially fraudulent when the scope or quality of the work appeared to be misrepresented by the contractor or the work appeared not to have been done at all. Through data mining, we initially identified 287 invoices, totaling $476,104, for single-family construction renovations that were submitted to HUD by the contractor that our prior work identified as using highly questionable billing practices, including (1) alleging that construction renovations were emergencies, thus not requiring HUD preapproval, and (2) splitting renovations into multiple projects to stay below the dollar threshold requiring HUD approval. Each of the 287 invoices supported fiscal year 2002 payments and was for an amount less than the $2,500 threshold requiring HUD approval. We selected properties to test the validity, by physical inspection, of some of these $476,104 in payments, focusing on those that appeared to be for tangible goods that we could readily identify. In total, we tested the validity of payments totaling $136,264. 
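The data mining described above, flagging properties with clusters of invoices that each fall just under the approval limit, can be sketched as follows. This is a simplified illustration only; the record layout, dates, dollar amounts, and the 7-day clustering window are assumptions for the example, not HUD’s actual data or screening criteria.

```python
from collections import defaultdict
from datetime import date

APPROVAL_THRESHOLD = 2500.00  # dollar limit above which HUD approval was required

# Hypothetical invoice records: (property_id, invoice_date, amount).
invoices = [
    ("NY-001", date(2002, 3, 4), 2400.00),
    ("NY-001", date(2002, 3, 5), 2450.00),
    ("NY-001", date(2002, 3, 6), 2300.00),
    ("NY-002", date(2002, 4, 1), 1800.00),
]

def flag_possible_splits(records, threshold=APPROVAL_THRESHOLD, window_days=7):
    """Flag properties with two or more sub-threshold invoices inside
    `window_days` whose combined total exceeds the approval threshold,
    a pattern consistent with splitting work to avoid approval."""
    by_prop = defaultdict(list)
    for prop_id, inv_date, amount in records:
        if amount < threshold:  # only sub-threshold invoices matter here
            by_prop[prop_id].append((inv_date, amount))
    flagged = []
    for prop_id, items in by_prop.items():
        items.sort()
        for anchor_date, _ in items:
            cluster = [a for d, a in items
                       if 0 <= (d - anchor_date).days <= window_days]
            if len(cluster) >= 2 and sum(cluster) > threshold:
                flagged.append(prop_id)
                break
    return sorted(flagged)

print(flag_possible_splits(invoices))
```

In this toy data set, the three closely dated NY-001 invoices, each just under $2,500 but totaling $7,150, would be flagged for physical inspection, while the lone NY-002 invoice would not.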
In June 2003, we visited nine HUD-owned single-family properties in New York City being managed by the contractor referred to above. HUD staff responsible for oversight of this contractor accompanied us to the properties. At each of the nine properties we visited, we noted discrepancies between what was represented on selected invoices and what was actually received, and we determined that all of the $136,264 in payments tested were potentially fraudulent. We took photographs to support our observations when possible. All of the invoices that we tested indicated that the work was purportedly for emergency repairs, meaning that no HUD preapproval was required, nor was the property manager required to obtain competitive bids for the work. Many of the work projects for the same addresses were split among multiple invoices, most likely to stay below the dollar threshold requiring HUD approval–as in the case we reported last year involving the multifamily construction renovation work performed by this contractor. The labor charge was always $91 an hour–whether for cleanup and debris removal or for a project typically requiring a specialized skill, such as masonry. We noted serious discrepancies between what was represented on invoices and what was actually received at each of the nine properties we visited. For illustrative purposes, we are providing specific examples of some of the discrepancies noted at five of the nine properties. On the basis of physical inspection, we determined that HUD paid at least $30,701 in fiscal year 2002 for goods or services related to this property that were incomplete or do not appear to have been provided at all by the contractor. For example, HUD paid (1) over $4,000 for replacement of the entire apartment floor, including the bathroom, (2) $2,320 for a new ceiling and bathroom door, (3) $2,170 to have four workers repair and install new Sheetrock®, and (4) $1,590 for a small kitchen cabinet. The photographs (figs.
11 through 14) show little or no evidence that this work was performed. As illustrated in figures 11 through 14, we saw no evidence of new flooring in the apartment; in fact, most of the floors were missing tiles or otherwise very worn. The “new” ceiling was severely damaged and caved in, and there was no new bathroom door. We found no new Sheetrock®, but about two square feet of wall had been roughly patched. While there was a new cabinet, we found a cabinet similar to the one pictured at a large retailer for a price of less than $50. On the basis of our physical inspection, we determined that HUD paid at least $11,176 in fiscal year 2002 for goods or services related to this occupied property that were incomplete or do not appear to have been provided at all by the contractor. For example, HUD paid $2,060 for “emergency repairs” to a bathroom and $1,082 for repairs to a stairway. The photographs (figs. 15 through 17) show the condition of the “repaired” bathroom and the minimal work performed in the stairway at the time of our physical inspection. As illustrated in figure 17, the bathroom was in total disrepair. The repairs to the stairway were merely two wooden dowels that replaced missing balusters. On the basis of physical inspection, we determined that HUD paid at least $9,538 for goods or services related to this property that were incomplete or do not appear to have been provided at all by the contractor. Specifically, HUD paid (1) $2,265 for new ceilings, (2) $3,560 for repairing and painting walls and ceilings, (3) $3,162 for floor repairs and replacement, and (4) $551 for a new refrigerator. The photographs (figs. 18 through 20) show the general condition of the ceilings, walls, and floors throughout this property after the repairs. As shown in the preceding pictures, it appeared that new Sheetrock® had been installed on the kitchen ceiling; however, the job was not completed–the ceiling had not been sanded or painted.
The dining room ceiling was caved in, and the floors were old and in poor condition. In addition, the new refrigerator was missing. On the basis of a physical inspection, we determined that HUD paid at least $32,677 for goods or services related to this property that were incomplete or do not appear to have been provided at all by the contractor. For example, HUD paid $2,292 for four new metal doors and installation. We found only one metal door, in the basement, shown in figure 21, and it did not appear to be new. In addition, HUD reimbursed the contractor for five invoices, totaling $8,407, for additional work performed in the basement, including cleanup and debris removal and replacement of a wooden floor. The occupant we spoke with said the only work he was aware of being done to the basement was the installation of the one old metal door. HUD also paid $3,978 for repairs to the front entrance stoop (fig. 22). Although we did observe patches of relatively new concrete, it appeared that HUD was overcharged for this work. In addition, HUD paid $3,200 for cleaning and removing debris from the backyard. The occupant said no one had cleaned the backyard, and we noted that the backyard was still covered with debris, including old broken bicycles and large broken slabs of concrete. On the basis of physical inspection, we determined that HUD paid at least $5,021 for goods or services related to this property that were incomplete or never received. The contractor was reimbursed $1,048 for “emergency” repair and painting of the public hall. The photograph (fig. 23) is of the public hall. As shown in figure 23, only portions of the walls were roughly painted. In addition, HUD paid $2,167 for repairs, including plastering and painting the walls and ceiling in the living room and dining room of one of the apartments on this property. The photograph (fig.
24) shows the condition of the ceiling and part of the wall in one of the rooms where this work was said to have been performed. Similar conditions were observed in the other rooms purported to have been repaired. We noted similar discrepancies, totaling $47,151, at the other four properties we visited. In total, based on our June 2003 physical inspection, our work indicated that 82 invoices, totaling $136,264, were most likely fraudulent. In June 2003, we met with HUD officials in headquarters to discuss the results of our June visit. The HUD Philadelphia office officials who had accompanied us on our physical inspection participated in the meeting by teleconference. We used the photographs included in this report to help communicate the severity of the deficiencies we noted. We also discussed the results of our June visit with Committee staff, which resulted in an expansion of our work (1) to determine whether HUD had made changes to its internal controls to address the causes of the potentially fraudulent payments that we had identified in June 2003 and (2) to test for additional potentially fraudulent payments. Our work included additional tests for receipt of goods and services for payments made in fiscal year 2002, as well as certain payments made in fiscal year 2003. As a result, we found another $45,186 in potentially fraudulent payments, consisting of $25,657 of fiscal year 2002 payments and $19,529 of fiscal year 2003 payments. We determined that HUD had not implemented new controls or modified existing controls to address the weaknesses previously identified. For example, HUD did not institute monitoring policies that would increase the frequency or scope of inspections to verify that goods or services paid for had, in fact, been received. Furthermore, HUD officials told us that the contractor had not been directed to perform the services for which HUD had paid $136,264 and received little in return.
In November 2003, we attempted to physically inspect the same apartments in each of the nine previously visited properties. However, we only had access to the apartments within each property where the occupants allowed us to enter. In total, we gained access to seven of the nine properties that we visited in June. At these seven properties, we saw no evidence that any attempt was made to correct the work for which HUD paid $39,686 and which we previously identified as incomplete or not performed at all. The additional $45,186 in potentially fraudulent payments that we found included $13,138 for repairs and renovations to the properties we visited in June. The remaining $32,048 of additional potentially fraudulent payments was related to nine additional properties that we visited. The same contractor managed these properties. During this second visit, we again noted numerous discrepancies between what was represented on invoices and what was actually received. For illustrative purposes, we are providing specific examples of a few of the discrepancies noted. At one of the properties that we revisited, HUD paid a total of $2,759 for “emergency” repairs to a bathroom wall and floor tiles ($1,756) and bathtub repairs ($1,003). As evidenced by the photograph (fig. 25), the only indication that the bathroom wall or floor tiles had been repaired was that a few tiles on the wall by the toilet had been replaced. As shown in the photograph (fig. 26), the “repaired” bathtub was old and rusted and did not appear to have received $1,003 in repairs. At yet another revisited property, we found an additional $2,977 that HUD paid the contractor for repairs to the entrance lobby and public hallway. As discussed previously in this section, the entrance lobby had not been repaired and only portions of the public hall had been roughly painted. It was not evident that any further work had been done. 
The remaining $32,048 of additional potentially fraudulent payments was related to the nine new properties that we visited. We determined that HUD paid at least $19,763 for goods and services related to one of these properties that did not appear to have been delivered. For example, HUD paid $1,813 for installing new tiles to the stairs pictured (fig. 27). As evidenced by the photograph above, the stairway was not retiled. At this same property, HUD paid the contractor $2,008 to install a new ceramic tile bathroom floor, a shower rod, and a medicine cabinet. The floor we saw (fig. 28) appeared to be several years old. Furthermore, the occupant stated that he purchased and installed the medicine cabinet. At another property, we determined that HUD paid at least $7,420 in potentially fraudulent payments, including $1,847 for installing a new bathroom floor. As indicated in the photograph (fig. 29), portions of the bathroom floor were missing, and clearly had not been recently replaced. Our analysis of supporting documentation indicated that the contractor might have used the same scheme in the SF payment process that it had used to circumvent controls in the MF payment process, which we reported in October 2002. The scheme involved (1) alleging that construction renovations were emergencies, thus not requiring multiple bids or HUD preapproval, and (2) splitting renovations into multiple projects to stay below the dollar threshold of HUD-required approval. We referred the improprieties that we previously identified to the HUD OIG. A HUD OIG investigator told us that these improprieties have been referred to the U.S. Attorney’s Office. As illustrated in figure 30, HUD hired the contractor in July 1997 to manage a portfolio of MF properties. During the first year of the contract, HUD became concerned about the contractor’s billing practices. 
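The circumvention scheme described above, splitting work into multiple invoices that each fall under the dollar threshold requiring HUD approval, is a pattern that can be flagged programmatically. The following is a minimal sketch, using a hypothetical threshold and illustrative invoice records; nothing here reflects HUD's actual systems, thresholds, or data:

```python
from collections import defaultdict

# Hypothetical approval threshold; the actual HUD threshold is not given in the text.
APPROVAL_THRESHOLD = 2_500

# Illustrative invoice records: (property_id, vendor, amount in dollars).
invoices = [
    ("P-101", "ContractorA", 2_400),
    ("P-101", "ContractorA", 2_300),
    ("P-101", "ContractorA", 2_450),
    ("P-202", "ContractorB", 900),
]

def flag_possible_splitting(invoices, threshold, min_count=2):
    """Flag (property, vendor) pairs with multiple invoices that each fall
    under the approval threshold but together exceed it."""
    groups = defaultdict(list)
    for prop, vendor, amount in invoices:
        # Only invoices that individually avoid the approval requirement matter here.
        if amount < threshold:
            groups[(prop, vendor)].append(amount)
    return {
        key: amounts
        for key, amounts in groups.items()
        if len(amounts) >= min_count and sum(amounts) >= threshold
    }

flags = flag_possible_splitting(invoices, APPROVAL_THRESHOLD)
```

Grouping by property and vendor and summing the sub-threshold amounts surfaces cases where work that should have required approval may have been billed piecemeal; any flagged group would still require manual review of the underlying invoices.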
HUD questioned the contractor about the rotation of vendors, determination of fair pricing, and reasonableness of work orders issued. HUD documented its many concerns, including that “it is evident that a great amount of money is being spent with little control in place.” In June 2001, despite serious performance deficiencies, including questionable procurement practices, HUD increased the contractor’s responsibilities by modifying the contract to include certain SF properties in New York City. In November 2001, HUD began to question the contractor about its billing practices related to the SF properties. In October 2002, we testified about the potentially fraudulent fiscal year 2001 billing practices of this contractor. According to HUD officials, the MF contract expired in February 2003. According to HUD officials, the agency aggressively monitored the contractor’s performance; identified performance deficiencies from the onset of the task order; and identified deficiencies in services, products, and billings to the contractor’s management. A HUD memorandum summarizing its efforts to improve the contractor’s performance indicated that in November 2001, it began meeting with the contractor to review issues of concern about the SF properties. HUD noted that one of the obstacles to the contractor’s successful performance was that the contract did not clearly define the details of work to be performed. In addition, the statement of work for the contract did not take into account the special nature of the 203(k) property challenges, such as sites dispersed throughout New York City and extensive legal and municipal involvement. The memorandum also stated that HUD considered terminating the contract. However, since the agency (1) had not issued any formal notices of concern to the contractor and (2) would have had difficulty quickly finding a qualified replacement contractor, HUD agreed to one final effort to improve the contractor’s performance. 
HUD and the contractor agreed to revise the statement of work to reflect a clearer and mutually agreed upon basis by which to measure the contractor’s performance. The new statement of work was issued in June 2003. Within 60 days of the issuance of the new statement of work, HUD noted serious findings, including unacceptable work, unreasonable prices, and some work that appeared not to have been done at all. Throughout all of this period, HUD continued to pay bills from this contractor. On October 23, 2003, HUD issued an amendment to the contractor’s task order to end the contractor’s management responsibilities on October 31, 2003. A new contract was put in place on October 17, 2003, which requires the new contractor to absorb the cost of all routine maintenance and repairs. HUD officials stated that the agency held back the payment of recently submitted billings from the prior contractor that HUD deemed questionable. However, HUD officials also told us that the agency was not seeking reimbursement for any previously paid billings, including those identified by our audit for which HUD received little in return. HUD paid this contractor $2 million in fiscal year 2002 and over $2.5 million in fiscal year 2003 for SF property expenses. We found HUD’s monitoring of a major multifamily pilot program with a state housing agency to be insufficient. HUD entered into an agreement that made it responsible for providing all the money needed to complete the program, while the state agency was responsible for developing and monitoring the program. HUD viewed the program as a way for the state agency to employ innovative management and disposition methods and entered into a sole source agreement with an initial development budget in the amount of $187.5 million. However, HUD did not fully assess the program’s inherent risks under the terms of the agreement and design compensating internal controls to address these risks. 
In addition, HUD did not implement internal controls appropriate for monitoring the escalating risks, while the cost of the program, borne by one of HUD’s mortgage insurance funds, climbed to over $537 million from inception in 1994 through September 30, 2003. Additional oversight by HUD may have helped prevent at least some of the more than half a billion dollars in program costs—$286,400 expended per apartment unit—for the renovation, interim management, and ultimate disposition of 1,875 apartment units. The National Housing Act authorizes the Secretary of HUD to delegate to state agencies the performance of management and disposition-related functions. HUD determined that it was in the public interest for it to enter into a sole source agreement with a state housing agency for interim property management, to include renovation and ultimate disposition of 18 HUD-owned multifamily properties within the agency’s home state. HUD is providing all the money for the program and the state agency is responsible for all spending, including amounts for construction, renovation, and day-to-day operations of the properties. Although the program agreement did not list an expected completion date, HUD officials told us that the program was intended to take approximately 3 years. It is currently in its 10th year. Our publication, Standards for Internal Control in the Federal Government, states that (1) management needs to comprehensively identify risks and should consider all significant interactions between the entity and other parties, (2) internal control monitoring should be performed to assess the quality of performance over time, and (3) appropriate internal controls should be implemented to improve accountability. However, we found that HUD’s monitoring was limited to the approval of property budgets. Internal controls should be designed to ensure that ongoing monitoring occurs in the course of normal operations. 
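The per-unit figure cited above is simply the total program cost divided by the number of apartment units; a quick arithmetic check using the totals from the text:

```python
# Figures from the text: over $537 million expended for 1,875 apartment units.
total_cost = 537_000_000  # program cost through September 30, 2003, in dollars
units = 1_875             # apartment units renovated, managed, and disposed of

cost_per_unit = total_cost // units
```

The division is exact and reproduces the $286,400-per-unit figure reported in the text.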
When, as here, the terms of the agreement charge one party with authority over virtually all spending decisions, including to whom, in what amount, and under what terms large construction contracts will be let, and the other party is responsible for paying all the bills, strong monitoring controls are necessary to control spending and encourage financial accountability. In spite of the inherent risks that stemmed from the terms of the underlying agreement with the state housing agency, HUD did not analyze these risks and design controls to address them either initially or in reaction to escalating costs over a 9-year period. HUD did not incorporate adequate spending controls that may have served to limit its financial exposure before entering into the program agreement with the state agency. Spending controls that may have been appropriate considering the terms of the agreement with the state agency include: specifying performance penalties for missed completion dates, requiring that feasibility studies be conducted prior to undertaking major contractual commitments, providing a cost-sharing formula that would assign some economic risk to the program developer, and limiting the amount that HUD would pay for specific line items by project, such as a ceiling on the amount that would be reimbursed for tenant upgrades by property. Furthermore, despite significant spending in excess of the original budget, HUD’s oversight of the program never evolved beyond approval of the properties’ initial development budgets. HUD also did not establish processes to routinely estimate and compare projected development costs to total estimated costs per the program agreement or to consider the impact of unanticipated occurrences, such as expenses to mitigate environmental hazards. The largest categories of expenditures were for general construction and other contractor charges. 
The state agency was responsible for all aspects of the contracting process including the competition plan, which typically is a key function in ensuring that the government receives the best combination of price and quality. These general construction payments totaled approximately $178 million of the total incurred cost of $481 million through September 30, 2002. The state agency awarded 23 separate construction contracts to rehabilitate the 16 properties in the program that ranged in amount from $49,600 to over $45 million. The construction contractors received periodic payments based on the percentage of work completed, which was reflected in monthly requisitions. The state agency was solely responsible for reviewing and approving the monthly requisitions and construction contractors’ periodic payment requests. HUD paid all of these expenses, while providing no oversight of the construction contractor’s monthly payments beyond the approval of initial development budgets. HUD paid significant amounts for expenses in excess of amounts in the original contract award due to unforeseen natural conditions, tenant requests, and other contract amendments for changes in the architectural scope of work. For example, HUD paid an additional $8.9 million in contract amendments at one property, which had an initial construction contract of over $45 million. Contract amendments were granted when the architectural specifications included in the original construction contract did not include certain work being performed by the construction contractor. These payments were for unforeseen conditions, such as the need to address environmental hazards and requests from tenants for tenant upgrades. 
Tenant upgrades at one 236-unit property included the following: installation of molding on stairs, $101,779 ($431 per unit); labor and materials for two coats of varnish on stair molding, $115,000 ($487 per unit); upgrades to ceiling light fixtures, $114,648 ($189 per bedroom); ceramic tile back splashes, $71,775 ($304 per unit); soap dispensers at sinks for $19,430 ($82 per unit); and upgrades of door hardware from satin chrome to satin brass for $18,650 ($79 per unit). HUD was not in a position, due to its limited monitoring of the program, to challenge or otherwise determine the validity of these payments. HUD also granted the state agency the flexibility to make payments “off-budget.” The expenses designated as off-budget included such costs as environmental hazards, consulting and monitoring, and tenant relocation expenses. The state agency’s annually adjusted asset management fee was also considered off-budget. In addition, for one property we found that HUD directed the classification of charges associated with extraordinary site development and building demolition as environmental hazard expenses. This classification allowed the costs to be considered off-budget. Since inception of the program, payments made by HUD that were classified as off-budget totaled over $241 million (see table 2), including environmental hazard expenses of approximately $58 million, and expenses related to tenant relocation of $46 million. When HUD granted the state agency the ability to charge off-budget items and then did not define or limit this type of spending, it substantially weakened the effectiveness of the limited control stemming from its approval of the original development budget for each property. As of the close of fiscal year 2002, the program has cost HUD in excess of $481 million, almost $300 million more than the original development budget, and remains a work in progress. 
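The per-unit amounts in the tenant upgrade list above are the item totals divided by the property's 236 units, rounded down to whole dollars (the light fixtures, priced per bedroom, are omitted because the bedroom count is not given in the text). A quick check of the arithmetic:

```python
units = 236  # apartment units at the property, per the text

# Item totals in dollars, taken from the text.
upgrades = {
    "stair molding": 101_779,
    "varnish on stair molding": 115_000,
    "ceramic tile back splashes": 71_775,
    "soap dispensers": 19_430,
    "door hardware upgrade": 18_650,
}

# Integer division reproduces the whole-dollar per-unit figures in the text.
per_unit = {item: total // units for item, total in upgrades.items()}
```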
In addition, HUD reported that an additional $56 million was expended for this program in fiscal year 2003. HUD officials have advised us that they do not plan on entering into any future agreements with similar terms. Internal controls tailored to address the inherent risk, including additional oversight by HUD, may have prevented some of the cost escalation and would have provided management with a reasonable basis for ensuring that the more than half a billion dollars in program payments were properly supported as a valid use of government funds. The problems we identified with internal controls and risk management over HUD single-family and multifamily property programs leave the agency vulnerable to wasteful, fraudulent, or otherwise improper payments. This vulnerability was capitalized upon by at least one contractor and potentially others during the period of our review, as evidenced by the potentially fraudulent and questionable payments we identified in the SF program. Even after HUD officials became fully aware of this improper activity, they did not take timely action to stop the flow of money being paid to this contractor for substandard or nonexistent services. Further, HUD failed to establish any kind of control over money it provided for the major multifamily program, even though costs escalated to triple the original development budget. Improper payments increase the expense of program delivery and may reduce the quality of program services. This additional expense must be funded either by a decrease in spending, whether in the affected program area or in other FHA programs, or by an increase in revenue from congressional appropriations or mortgage insurance premiums paid by those buying homes through the FHA SF program. Because of the long-term nature of funding decisions for the HUD mortgage insurance funds, including the rates charged for mortgage insurance, the impact of improper payments might not be visible to policymakers and managers. 
Such hidden expenses are nevertheless real and cumulative in effect. HUD must take steps to identify and manage its improper payments in order to minimize costs to FHA mortgage holders and taxpayers and maximize funds available to carry out its programs. To improve internal controls over HUD’s single-family property program, we recommend that the Secretary of Housing and Urban Development direct the Assistant Secretary for Housing-Federal Housing Commissioner to take the following 22 actions to address the weaknesses within the single-family program discussed in this report. Establish policies and procedures that create a positive control environment for all key steps in the single-family payment process. These policies and procedures should require management contractors to prepare all payment requests for which they are the payee, including any revised payment requests that may be required; provide adequate controls over preparation, review, and approval of payment requests for vendors that do not have access to the automated HUD Single-Family Acquired Asset Management System; specify that the technical review of payment requests be performed solely by HUD-appointed individuals with the requisite training and experience; and require HUD monitoring at prescribed time intervals to ensure that these control features are being consistently implemented at all payment review locations. Establish policies and procedures over single-family payments to contractors and other vendors that ensure all such payments are clearly documented and the documentation is readily available for appropriate officials to consider at the time they review and approve payment requests. 
Depending on the type of payment, these policies and procedures should necessitate evidence of the nature of the goods or services the payment is for; call for documenting that the quantity and cost of the goods or services are correct and that each item purchased was received and has been reviewed; require annotated verification that the amount and timing of the payment are supported by a valid contract or other agreement signed by both parties; require documentation that the payment is for a valid obligation of HUD; stipulate that competitive bids be obtained and evaluated before the purchase of goods or services; require confirmation that the goods or services are for a legitimate government purpose; require that all invoices and other supporting documentation be effectively canceled to prevent reuse; and require that only original documents be used to support payments, or that there be evidence of compliance with policies concerning the use of reproduced documents. Establish policies and procedures over single-family payments to contractors and other vendors that will improve the effectiveness of HUD’s oversight of contractor performance. These policies and procedures should establish standard business metrics for comparing contractor performance, including expense data by contractor, total expenses per property, and expenses per expense classification; ensure that these metrics are prepared and reviewed regularly to identify cost-saving opportunities, unusual patterns that require attention, and potential instances of fraud, waste, and mismanagement; and establish specific guidelines for when single-family payments to contractors and other vendors may be classified as allocated costs. Establish consistent practices for single-family payment processes, including the preparation of payment requests, review and approval of payment requests, and minimum supporting documentation standards for all payments, that will clarify what policies and procedures must be adhered to by headquarters and all homeownership centers. 
Follow up on each of the payments we identified as questionable or potentially fraudulent to determine if the payments are a valid use of government funds; identify the causes that allowed these payments to occur and go undetected in the ordinary course of business; and pursue recovery of amounts paid, as appropriate. Perform a risk assessment of single-family payments to contractors and other vendors to determine the nature and extent of HUD’s exposure to improper payments. The risk assessment should include a comprehensive review and analysis of operations to determine where risks exist and what those risks are, including assessing the need for linking property inspections with billed amounts for goods and services provided; measure the potential or actual impact of identified risks on program operations; and establish compensating internal controls to address areas of vulnerability identified through the risk assessment process. To address the significant internal control weaknesses that we identified related to monitoring the multifamily program with a state housing agency under a sole source agreement, we recommend that the Secretary of Housing and Urban Development direct the Assistant Secretary for Housing-Federal Housing Commissioner to take the following two actions. Implement risk-based oversight and monitoring policies and procedures to reduce HUD’s vulnerability to fraud, waste, abuse, and mismanagement in the multifamily program with the state housing agency. Consider requesting the HUD Office of Inspector General to review the propriety of the use of funds under the program with the state housing agency. In written comments on a draft of this report from HUD’s Assistant Secretary for Housing-Federal Housing Commissioner, which are reprinted in appendix II, HUD agreed with some of our findings and recommendations and disagreed with others. 
In particular, HUD (1) disagreed with our classification of certain payments, including $15.2 million of inadequately supported payments for contract change orders, as questionable payments; (2) agreed that the contractor for the New York properties failed to provide certain services or provided unacceptable services, but stated it had held back certain payments to the contractor that included amounts we reported as potentially fraudulent; and (3) regarding our recommendations related to the MF pilot program, acknowledged that its agreement with the state agency did not contain the necessary controls and oversight protocols to preclude the types of problems we identified and agreed to examine opportunities to enhance its oversight over the remaining life of this particular program; however, it did not agree to enlist the HUD IG’s support to review the propriety of the use of the funds at this time. HUD did not specifically comment on the other 22 recommendations related to its SF program. With regard to contract change orders, HUD stated that it is inappropriate for us to consider these 23 payments totaling $15.2 million as questionable. HUD stated that for each of these payments (1) it had provided us signed copies of the contract modifications, (2) the payments are clearly supported by contract modifications issued by approved HUD contracting specialists, (3) agency staff adhered to appropriate procedures in the review and payment for the services identified in the modifications, and (4) annotating the respective invoices to confirm the existence of the contract modification prior to payment is not a prescribed procedure, so the absence of such notations should not cause these payments to be classified as questionable. We disagree. We classified payments as questionable if they were not supported by sufficient documentation to enable an objective third party to determine if the payment was a valid use of government funds. 
We found each of these 23 payments totaling $15.2 million to be inadequately supported at the time the payment was made. As stated in the report, these payments were made without basic support such as standard contract modification or other agreements signed by the contractors and HUD, indication of the timing, quantity, and nature of the goods and services provided, and identification of the specific properties covered by the payments. In January 2004, several months after our initial request for supporting documentation for these payments and after our fieldwork was completed, HUD headquarters officials advised us that fully executed contract modification agreements existed at the time each of these payments was made. These officials acknowledged that the agreements and underlying detail support by property were not in the payment files but that the reviewing, approving, and certifying officials reviewed all documents in order to verify the accuracy of the charges. However, we found no evidence of this during our site visits. In fact, not one HUD reviewing, approving, or certifying official indicated to us that they went beyond the documentation contained with the payment request to ascertain the propriety of the payments we reviewed. Also, in January 2004, HUD headquarters officials forwarded us signed copies of some, but not all, of the contract modifications. The fact that the signed contract modifications may have existed at the time payments were made is peripheral to one of our core points: that HUD reviewing, approving, and certifying officials should have available, and consider at the time they review and approve a payment, sufficient supporting documentation to determine that the payment is a valid use of government funds. It is clear from our review that this did not occur in the case of these payments totaling $15.2 million for contract change orders. In addition, the existence of the contract modifications does not negate our other core point. 
Without specific documentation, which HUD did not provide, indicating such things as (1) the properties the charges relate to, (2) the time period the charges were incurred, (3) an explanation of what goods or services were provided, and (4) that HUD owned the properties at the time the goods or services were provided, we could not determine the validity of these payments. Therefore, they remain questionable. HUD raised similar issues with regard to each of the other eight categories of payments totaling $1,113,266 that we classified as questionable. We address these other issues in our more detailed comments in appendix II, where we reaffirm our position that all of these payments are questionable. Regarding the potentially fraudulent payments, HUD stated, and we agree, that the department is obligated by government contracting procedures to work with a sub-performing contractor to improve performance, rather than move to immediate termination. However, it is not clear to us why, knowing of these serious performance deficiencies, including questionable procurement practices that became apparent within the first year of the contract, HUD continued for over 6 years to pay this contractor over $425 million for charges related to SF and MF properties without instituting additional controls to determine whether the goods and services billed for had actually been provided at the properties. In addition, we disagree with HUD’s statement that its “hold back” included disbursements reported as potentially fraudulent payments in this report. First, we assume that HUD means that it is recouping some of these payments by holding back payment on newly received invoices from this contractor. However, HUD staff responsible for oversight of the contractor informed us that the held back payments relate to a barrage of old invoices, some going back 2 or 3 years, that the contractor submitted during the contract termination process. 
Many of these invoices had previously been rejected by HUD, and the contractor merely resubmitted them. Therefore, holding back payment on these invoices, while helping HUD avoid making additional potentially fraudulent payments, does not result in the recoupment of previously made payments for invalid charges. With regard to our MF program recommendation to consider requesting the HUD IG to review the propriety of the use of funds under the program, HUD stated it has already initiated enforcement actions and claims against certain architects and contractors for failing to perform satisfactorily and plans to vigorously pursue all necessary enforcement actions that arise related to this program. However, HUD said it would not consider requesting the IG to review the propriety of the use of the funds for this program at this time, as we recommended, but may elect to refer unsatisfactory performance issues to the IG for further review. HUD also said that it intends to complete a full evaluation of the program goals and implementation at the end of the program and that the current HUD administration would not recommend that the program be repeated. We share HUD’s concern regarding this type of program and continue to believe that, given the hundreds of millions of dollars in budget overruns and the minimal oversight by HUD, an independent review by the IG’s office should be considered part of HUD’s fiduciary responsibilities over the funds. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies to the Ranking Minority Member of the House Committee on Government Reform; the Secretary of Housing and Urban Development; and other interested parties. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
Should you or your staff have questions on matters discussed in this report, please contact me at (202) 512-9508 or [email protected] or Robert Owens, Assistant Director, at (202) 512-8579 or [email protected]. Major contributors to this report are acknowledged in appendix III. To assess internal controls over HUD SF property payment transactions and determine whether they provide reasonable assurance that improper payments will not be made or will be detected in the normal course of business, we reviewed HUD SF policies and procedures, property management and marketing contracts and amendments, our previous reports, and reports issued by HUD’s IG, a financial management consultant, and an independent contractor; conducted walk-throughs of transactions and interviewed officials at HUD headquarters, each of the four homeownership centers, and three contractor offices; and tested internal controls using a statistically selected sample of transactions. Specifically, we selected a stratified random probability sample of 145 single-family disbursement transactions from a population of 238,411 fiscal year 2002 transactions. We stratified the population into HUD’s four regions (Atlanta, California, Denver, and Philadelphia) on the basis of SF disbursement transactions made during fiscal year 2002. The sampling unit was one disbursement transaction with a posting date between October 1, 2001, and September 30, 2002. Our estimates were calculated using a 95 percent confidence level. In other words, we are 95 percent confident that each of the confidence intervals in the report includes the true values in the population. We tested the following attributes: (1) proper approval and (2) validity of payment. We provided HUD with the transactions selected and obtained and reviewed related supporting documentation. 
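The stratified sampling design described above, drawing a fixed number of transactions from each of the four regions so the total sample is 145, can be sketched as follows. The stratum sizes and per-region allocation here are illustrative, not the actual fiscal year 2002 counts or GAO's actual allocation:

```python
import random

# Illustrative stratum sizes; the actual population was 238,411 transactions.
population = {
    "Atlanta": list(range(1_000)),
    "California": list(range(800)),
    "Denver": list(range(600)),
    "Philadelphia": list(range(700)),
}

# Illustrative allocation summing to 145, the reported total sample size.
allocation = {"Atlanta": 45, "California": 40, "Denver": 30, "Philadelphia": 30}

rng = random.Random(0)  # fixed seed so the draw is repeatable
# Draw a simple random sample (without replacement) within each stratum.
sample = {
    region: rng.sample(transactions, allocation[region])
    for region, transactions in population.items()
}
total_sampled = sum(len(s) for s in sample.values())
```

Sampling independently within each stratum guarantees every region is represented, which is the point of stratifying by region rather than drawing 145 transactions from the pooled population.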
To determine whether payments are properly supported as a valid use of government funds, we performed data mining on the database of HUD's fiscal year 2002 disbursements for HUD SF properties to identify potentially improper and questionable payments. We discussed the results of our analysis with HUD regional and headquarters managers and requested that they provide specific written responses to the payments that we identified as potentially improper and questionable. We considered the responses we received to assess whether in fact the payments were improper--that is, questionable or potentially fraudulent. We also nonstatistically selected certain payments described in the payment records as incurred for tangible goods and services and physically inspected the properties to test whether the work described in the books and records was fully performed and the tangible goods received. In June 2003, on the basis of the results of the above-described work, we communicated to HUD and representatives of your Committee on Government Reform that we had identified certain potentially fraudulent payments. In November 2003, at the request of the Committee, we expanded our work to (1) determine whether HUD had made changes to its internal controls to address the causes of the potentially fraudulent payments that we had identified in June 2003 and (2) test for additional potentially fraudulent payments. We performed this work through interviews with officials at HUD headquarters and one homeownership center, data mining, and physical inspection of properties. Our data mining and physical inspection work included additional tests for receipt of goods and services for payments made in fiscal year 2002 as well as certain payments made in fiscal year 2003.
To assess internal controls over HUD SF properties and to identify generally accepted principles and practices for a sound internal control environment, we used our Standards for Internal Control in the Federal Government, Internal Control Management and Evaluation Tool, Guide for Evaluating and Testing Controls Over Sensitive Payments, and Strategies to Manage Improper Payments. While we identified some improper payments--questionable and potentially fraudulent--our work was not designed to identify all improper payments made in the HUD SF property program. To assess HUD's monitoring of the multifamily program with a state housing agency, we reviewed HUD's MF policies and procedures, the state housing agency's policies and procedures, contracts and agreements including HUD's contract with the state housing agency, our previous reports, and reports issued by HUD's IG. We also conducted walk-throughs of transactions and interviewed officials at HUD headquarters, a field office, and the state housing agency and its contractor offices to identify what controls had been established to manage the inherent risk of the program as well as monitor payments over time. We also performed analytical reviews of the payment activity from the program's inception in 1994 through September 30, 2002. Specifically, we developed a template of program expenses for each property by expense line items, such as general construction expense, environmental abatement expenses, and expense per housing unit. We compared total costs per property to amounts per the initial contract with the general contractor as well as subsequent amendments. We discussed the results of our analysis with HUD regional and headquarters managers and requested that they provide specific written responses to issues and questions identified by our analysis. We considered the responses we received, in writing and orally, in assessing HUD's performance in monitoring the program.
We conducted our review in accordance with generally accepted government auditing standards, as well as the investigative standards established by the President's Council on Integrity and Efficiency, from December 2002 through January 2004 at HUD headquarters, a field office, and homeownership centers in Atlanta, Ga.; Philadelphia, Pa.; and Santa Ana, Calif. We also visited contractor offices in Atlanta, Ga.; Philadelphia, Pa.; Santa Ana, Calif.; and Falls Church, Va. We requested written comments on this report from the Acting Secretary of HUD or his designee. Written comments were received from the Assistant Secretary for Housing–Federal Housing Commissioner and are reprinted in appendix II. The following are GAO's comments on the Department of Housing and Urban Development's letter dated February 19, 2004. 1. During the course of our work, we considered whether changes had been made to HUD's processes and procedures. For example, we identified HUD's overreliance on a support services contractor in the payment process for fiscal year 2002 payments and determined that this practice continued through the conclusion of our work. In addition, in November 2003, we updated our work to determine whether HUD had made changes to its internal controls to address the causes of the potentially fraudulent payments that we had identified in June 2003. Again, no changes had been made. 2. See "Agency Comments and Our Evaluation" section. 3. We understand that it may be necessary to pay for services provided before HUD owned the properties in order to avoid liens. However, given the unusually large amounts involved and the nature of the properties and time period covered by the bill, we continue to believe that HUD officials should have questioned these charges before payment. As stated in our report, we found no indication that HUD attempted to find out why these charges were so large, or why they had not been identified at the time of settlement and acquisition of the properties.
We also found no indication that the contractor or HUD had pursued negotiating a settlement with the water authority or recovery from other parties who may have been responsible for the charges. 4. The draft of this report sent for agency comment included 6 payments totaling $169,800 that were paid to a contractor for developing a lead-based paint abatement program, which we classified as questionable because they were not adequately supported at the time the payments were made. Based on recently provided HUD documents, we shifted $99,000 previously included in the "other" questionable payments category to this category (lead-based paint abatement program). We had originally classified it as "other" because we had received no support for the payments and thus had no basis for knowing that the $99,000 related to the paint abatement program. We initially requested support for the 2 payments totaling $99,000 in October 2003. On February 17, 2004—for the first time—HUD provided some documentation for these two payments indicating that they were made to the same contractor for developing the lead-based paint abatement program. On this basis, we changed our report to clarify that the questionable payments to a single contractor for developing a lead-based paint abatement program consist of 8 payments totaling $268,800. None of the $268,800 in payments was adequately supported because, among other things, there was no evidence of a contract modification or other agreement for the contractor to develop such a program. In addition, there was no indication of the total amount to be paid by HUD to satisfy the amount claimed by the contractor, or the basis for reimbursing the contractor for these types of costs, which, according to written provisions in HUD's management agreement with the contractor, were not allowable unless approved by HUD in advance. 5.
Our review found no indication of the emergency nature of the charges in HUD's supporting documentation for any of these payments. The payments we identified took place over an extended period of time in fiscal year 2002 and were not limited to a narrow "emergency" period. Our stated concerns about duplicate invoices used to support payments and invoices not matching amounts paid also are unresolved. Further, we continue to question why local supply sources were not considered, which would have avoided the incurrence of significant airfreight charges. 6. Each of the five payments we tested was made not only without documentation of the competitive bids, but also without any indication by the reviewing, approving, or certifying official that they were even aware of the possible existence of competitive bids, or whether the billing contractor had in fact been the successful bidder. Therefore, we continue to view these payments as questionable. 7. HUD provided us with a copy of the signed modification on January 26, 2004, more than 3 months after our initial request for documentation to support the payments totaling $296,087 that HUD paid for records management services. As stated in the report, there was no contract (or modification) included with the support for these payments or any indication that the payment had been compared to a contract (or modification) prior to payment to confirm that HUD had authorized these services at the amount charged. Rather, HUD regional staff advised us that the "OK to pay" notation on one of the invoices by the division director was sufficient to process the payment. Such action represents circumvention of HUD's payment process controls, and therefore these payments continue to be questionable. 8. HUD's response does not address the points raised in our report regarding the lack of proper documentation to support the validity of the payment, including whether HUD owned the property or a related FHA loan existed at the time of payment.
9. As described in the report, it is the actions of the management contractor that are at issue, not the actions of a subcontractor. 10. Regarding HUD's statement that "it is important to note that in these two Centers, two of the three internal controls reviews did occur," our view is that internal controls over payments are not one event, but rather a sequential process, with each action being dependent on the preceding steps having been satisfactorily performed. The flaw we identified in these cases relates to a fundamental control for authorizing payments, whether in the public or private sector. Without confidence that payment requests are justified based on contracts or other agreements for those specific services at the prices billed and that the work has been satisfactorily completed, there is no basis for payment. 11. It is unclear what HUD means by "detailed analytical reviews of vouchers." Our point is that detailed analytical reviews of expenses did not take place. As stated in our report, such reviews would be an efficient and effective way of analyzing expenses to identify anomalies and cost-saving opportunities. It was just such a review that alerted us to potential improprieties in payments related to the New York City properties. 12. Neither our report nor our recommendations address the timing for completion of construction and rehabilitation of the MF program. However, given that the program has now extended over 10 years and the costs are in excess of $500 million through fiscal year 2003, we agree with HUD's stated goal to complete the program by the end of this year. Staff members who made key contributions to this report include Sharon Byrd, Stephanie Chen, Lisa Crye, Bonnie Derby, Carmen Harris, Kelly Lehr, Sharon Loftin, Julia Matta, Irvin McMasters, Andrew O'Connell, Lien To, Estelle Tsay, and Brooke Whittaker.
In our 2003 performance and accountability report on the Department of Housing and Urban Development (HUD), we continued to identify HUD's single-family (SF) mortgage insurance program as high risk--an area we have found to be at high risk for fraud, waste, abuse, and mismanagement. Also, for years, GAO and HUD's Office of Inspector General (OIG) have reported weaknesses in HUD's contract administration and monitoring for both SF and multifamily (MF) programs.
Given these known risks and the millions of dollars in disbursements made by the agency each year, GAO was asked to review payments related to the single-family property program and determine whether (1) internal controls provide reasonable assurance that improper payments will not be made or will be detected in the normal course of business and (2) payments are properly supported as a valid use of government funds. We also assessed HUD's monitoring of a major multifamily project with a state housing agency. Significant internal control weaknesses in the process used to pay for SF property expenses made HUD vulnerable to, and in some cases resulted in, questionable payments and potential fraud. These weaknesses included (1) delegation of oversight functions in a manner that weakened the control environment, (2) lack of key control activities, including proper documentation and approvals, and (3) limited monitoring of contractor performance. These weaknesses likely contributed to the $16.5 million in questionable and potentially fraudulent payments that we identified using data mining, document analysis, and other forensic auditing techniques. GAO classified $16.3 million of payments as questionable because they were not supported by sufficient documentation to determine their validity. GAO also classified $181,450 of payments as potentially fraudulent after visiting single-family properties being managed by a certain contractor. At all the properties visited, GAO noted discrepancies between what was represented on paid invoices and what was actually received. The photographs below were taken at one of the occupied properties after HUD paid $2,060 for bathroom repairs. These potentially fraudulent payments for single-family properties were made to the same contractor that was engaging in potentially fraudulent billing practices related to our earlier work on the HUD MF property program.
HUD paid this contractor $2 million in fiscal year 2002 and $2.5 million in fiscal year 2003 for SF property expenses. GAO also identified insufficient HUD monitoring of a major MF program with a state housing agency. While HUD provided all the funding for the program, it provided little oversight and instead relied on the state housing agency to perform oversight functions. Ten years into the program, actual costs totaled over $500 million, almost triple the original development budget.
Fishery products, including wild catch, aquaculture, and processed fish products, are one of the most traded commodities in the world today. More than half of this commodity originates in developing countries, and almost 75 percent of it ends up in the EU, Japan, or the United States. Not only is the United States importing more of the seafood it consumes today than it did 10 years ago, but more of those imports are from fish farms. Currently, the United States imports 84 percent of the seafood it consumes, and about 50 percent of that is from aquaculture. Figure 1 shows the proportion of imports to the United States from the top six countries exporting seafood to the United States. Concerns regarding the use in aquaculture operations of veterinary drugs that are unapproved in the United States and the misuse of approved drugs have increased substantially as the aquaculture industry has grown, according to FDA documents. While antimicrobials, including antibiotics, are used to treat diseases in animals, including seafood, the use of unapproved antibiotics in aquaculture has raised significant public health concerns. For example, nitrofurans are specifically not allowed for use in seafood, among other foods, by the United States because they have been shown to have a carcinogenic effect after prolonged exposure. However, some drugs that remain unapproved by FDA, such as emamectin benzoate and oxolinic acid, may be used in aquaculture by other countries. Another concern associated with the use of drugs in animals used for food, including seafood, is the extent of their contribution to antimicrobial resistance. HACCP regulations require seafood processors to conduct a hazard analysis and to develop and implement HACCP plans for hazards whenever an analysis shows that one or more hazards are reasonably likely to occur, including hazards resulting from drug residues.
Processors must verify that their HACCP plans are adequate to control the identified significant hazards and are being effectively implemented. This verification must include, at a minimum, a periodic reassessment of the plan as well as ongoing verification activities, such as regular testing of the product. Processors are responsible for addressing hazards that may have been introduced into the products before they reach the processors, which could include hazards resulting from drugs unapproved by FDA for use in aquaculture. According to FDA documents, the agency targets countries for inspection based on the volume of imports from that country, the nature of the product (high- or low-risk potential), and violation history, among other things. According to FDA officials, the agency also targets facilities for inspection based on, among other things, their history of violations and seafood products refused entry into the United States. FDA has guidance that provides instructions on the inspection of foreign seafood processing facilities and products. From fiscal years 2005 through 2010, FDA inspected, on average, 84 foreign processing facilities annually out of an estimated 17,000 worldwide. (See app. II for additional information on the foreign facilities FDA inspected.) In addition, FDA inspects importers of seafood products to ensure their compliance with HACCP requirements. HACCP regulations require importers to demonstrate, through documentation, that the seafood they import into the United States complies with HACCP requirements. Under HACCP, every importer of seafood products must either (1) obtain its seafood products from foreign firms in countries that have an agreement with FDA that documents the equivalency or compliance of the foreign inspection system with the U.S.
system for imported products or (2) maintain written verification procedures that include product specifications designed to ensure that the product is not adulterated and take at least one of six affirmative steps to document that the foreign firms supplying the seafood products comply with HACCP requirements. We discuss the most commonly used affirmative steps later in this report. According to FDA officials, the agency currently has no such agreements with any foreign countries. FDA has guidance that provides direction on the inspection of seafood importers. From fiscal years 2005 through 2010, FDA inspected, on average, 217 importers annually out of about 3,900 importers registered with FDA. The Department of Commerce's NMFS also has a role in promoting seafood safety and quality. Under the Federal Agricultural Marketing Act of 1946, as amended, NMFS' Seafood Inspection Program provides inspection services on a fee-for-service basis to assist in marketing seafood products. NMFS services include inspections for safety, wholesomeness, and proper handling, as well as seafood grading, laboratory analysis, training, and product inspection and certification. In 2010, NMFS had contracts with 123 domestic processing facilities under its HACCP Quality Management Program, which requires NMFS to provide, at a minimum, quarterly HACCP-based inspections. NMFS also had contracts with 37 foreign seafood processing facilities to provide HACCP inspections. According to NMFS officials, NMFS inspects about one-third of all seafood consumed in the United States. The 1974 MOU outlined actions for each agency regarding, among other things, FDA's agreement to notify NMFS before taking regulatory action and to conduct periodic joint meetings to develop collaboration efforts. Despite the MOU, however, FDA did not take advantage of NMFS inspection services or results to reduce its own inspection workload.
In particular, from fiscal years 2005 through 2009, we found that FDA inspected 315 facilities that NMFS also inspected. In addition, in 2005, FDA considered taking legal action against NMFS officials because FDA believed NMFS was interfering with its responsibilities, according to senior NMFS officials. In the end, FDA did not pursue this course of action. According to NMFS officials, as a result of this incident, FDA and NMFS began negotiating an update of the 1974 MOU that was finalized in October 2009. According to NMFS officials, since the signing of the 2009 MOU, there have been instances where NOAA and FDA have worked closely together to address safety issues that arose from the Gulf of Mexico oil spill as well as to coordinate on FDA regulatory actions. Provisions included in the FDA Food Safety Modernization Act, enacted in January 2011, may affect FDA's role in ensuring the safety of seafood. For example, the act requires FDA to increase every year the number of inspections of foreign food facilities. This may include additional inspections of foreign seafood processing facilities. In addition, the act includes provisions to encourage interagency cooperation with regard to seafood inspections. This includes FDA coordinating with the Secretary of Commerce on the inspections of foreign seafood facilities and using Department of Commerce employees to conduct inspections for FDA. The act's provisions also give FDA the authority, as part of a third-party accreditation program, to review a foreign country's food safety programs, systems, and standards to determine that the foreign government is capable of ensuring foods certified for export to the United States meet the requirements of the Federal Food, Drug, and Cosmetic Act.
In addition, the act requires the Secretary of Health and Human Services to issue guidance to assist importers in developing a foreign supplier verification program to help importers perform risk-based activities to verify that imported goods comply with U.S. requirements. Facilities that are required to comply with seafood HACCP regulations are exempt from the supplier verification program. FDA also noted that the act gives the agency important new tools, such as suspension of a facility's registration, to ensure that imported seafood is as safe as domestic seafood. (See app. IV, where the Department of Health and Human Services provides details on these tools in its comments on our report.) FDA's program to ensure the safety of imported seafood from residues of unapproved drugs is limited because the agency's primary oversight program generally involves reviews of documents at individual foreign processing facilities and importers for HACCP compliance. In contrast, the EU reviews foreign government structures, food safety legislation, and the foreign country's fish farm inspection program to ensure imported seafood products come from countries with seafood safety systems equivalent to that of the EU. Moreover, FDA's sampling program is limited in scope, is not effectively implemented, and does not fully use the capabilities of FDA's laboratories. FDA's program to ensure the safety of imported seafood against unapproved drugs is generally limited to the HACCP regulations it enforces. While the EU also requires compliance with HACCP, it conducts a wide-ranging review of the food safety system of any foreign country that wants to export its seafood products to the EU. In order to export seafood to the United States, foreign processors must meet the same HACCP regulations as domestic processors, and FDA inspects some foreign seafood processors each year to ensure compliance.
These inspections involve reviewing the processors' HACCP plans and other records to ensure the processors have considered drug residues as a hazard that is reasonably likely to occur if the seafood products they receive are from fish farms. In general, as part of foreign HACCP inspections, FDA inspectors do not visit fish farms to evaluate drug use or controls. FDA inspectors also do not evaluate the capability, competence, and quality controls of laboratories used to sample seafood from fish farms to determine if they contain unapproved drugs because these facilities are not considered processors under the regulations and are therefore not covered by HACCP. We reviewed the 15 FDA inspection reports from fiscal years 2007 through 2009 for seafood processing facilities in four countries exporting seafood to the United States—Bangladesh, Chile, China, and Thailand. According to the reports, during their visits to these processing facilities, the inspectors generally conducted these inspections as described above. In contrast, the EU includes inspection visits to farms and other pertinent areas, such as laboratories, to undertake a more comprehensive review of a foreign country's food safety system. The EU conducts a review of the country's relevant legislation; the government's structure for implementing it; and the country's implementation of its national residues monitoring plan, which the EU directs its trading partners to submit. Foreign countries that trade with the EU are directed to implement the monitoring plan and sample for drugs of specific concern to the EU. Once implemented, these foreign countries are to provide an annual report on the sampling results. In addition, the EU also reviews a sample of farms and processing facilities, and the capabilities and quality of the country's laboratories.
The EU also requires that foreign countries exporting seafood to the EU maintain seafood safety systems that meet EU requirements or equivalent conditions, or meet specific requirements provided in an agreement between the EU and the foreign country. In addition to FDA's HACCP inspections, the agency conducts foreign country assessments to gather information about other countries' aquaculture programs, including the country's competent authority and regulatory infrastructure. During these assessments, FDA officials visited, among other places, some farms where aquaculture products originated to evaluate veterinary drug use and reviewed some laboratories that analyzed the seafood products for drug residues for processors. FDA officials stated that these visits are planned and tailored for each country and conducted in a systematic and consistent manner. The information the agency collects during these assessments results in a written report and can be used to direct future foreign facility HACCP inspections and FDA's sampling program for imported seafood. However, according to FDA officials, the agency does not have any written operating procedures or any criteria or standards that it uses for these assessments to evaluate a country's regulatory infrastructure; farms; or the capabilities, competence, and quality controls of foreign laboratories. Without policies and procedures or guidance to direct the implementation of these assessments and criteria or standards to evaluate foreign systems, it may be difficult for FDA to conduct foreign country assessments that are either systematic or consistent and that result in valid findings. By systematically and consistently conducting its foreign country assessments, FDA can better assure that it is using its resources effectively and efficiently. FDA has conducted such foreign country assessments in five countries: Chile, China, India, Indonesia, and Vietnam.
FDA conducted its first foreign country assessment in April 2006, and according to FDA officials, each assessment cost about $45,000. About a week after our closing meeting, FDA provided us with newly prepared standard operating procedures for conducting its foreign country assessments. FDA prepared these procedures almost 5 years after conducting its first assessment, in Vietnam. These new procedures include the purpose of the assessments, the country selection process, provisions on conducting the assessments, and the structure of the assessment reports, among other things. We did not evaluate the newly prepared procedures. Still, FDA has not documented (1) the assessments on its Web site, including in any program guidance manuals, and (2) the link between these assessments and its HACCP inspections of foreign facilities or its imported seafood sampling program. The following are examples of some of the limitations of FDA's oversight approach of reviewing records and other documentation of foreign processors as required by HACCP, and of the limited effectiveness of its foreign country assessments. As described in FDA's inspection reports for three Chilean salmon processing facilities in 2008, FDA's review of their records during the inspectors' visits to these facilities revealed that, contrary to HACCP regulations, the facilities had received fish farm products that had been treated with oxolinic acid, flumequine, or emamectin benzoate—drugs unapproved for use in aquaculture in the United States. According to FDA documents, the agency placed all three facilities on an import alert for failing to comply with HACCP. FDA removed one of these facilities from the import alert 14 days later and the other two facilities several weeks later, after they made changes to their respective HACCP plans. FDA, however, could not provide documents detailing the changes these facilities made in order for FDA to remove them from the import alert.
Two of the facilities then shipped salmon to the United States, where it was accepted for import. While this approach is in concert with FDA's routine inspection process, FDA had no assurance that the changes the facilities made to their HACCP plans were implemented, since it did not reinspect the facilities to conduct follow-up reviews of their records. In March and April of 2009, FDA officials conducting a foreign country assessment visited Chile to gather information about Chile's measures to control drug residues in aquaculture seafood products it exported to the United States. These officials found that the same unapproved drugs were still in use in the country. According to these officials, Chilean officials told them that the Chilean government could not prohibit the export of products containing residues of drugs approved for use in Chile without a special agreement with the importing country. According to FDA officials, the agency has not taken steps to develop such an agreement. Chile represents about 4 percent of seafood imported into the United States, and in 2009 it was the largest source of farmed salmon imports into the United States. In addition to the 15 inspection reports, FDA documented the results of its officials' visit to Vietnam in September 2008, part of a foreign country assessment, to gather information about the country's drug residues control program. The documentation indicated that all processing facilities' HACCP plans stated that if a drug unapproved by the EU is found in a seafood product, that product should be diverted to another market. The FDA officials concluded that this HACCP plan requirement could result in such products being imported into the United States. In addition, the documentation indicates these FDA officials found that Vietnam permitted 38 drugs, most of which are unapproved by the United States, to be used in aquaculture.
For example, FDA’s documentation on this visit stated that fish farms were likely using fluoroquinolones. FDA officials asked that the Vietnamese government notify processors that seafood products purchased from farms using this drug could not be exported to the United States. FDA also asked the government to test 100 percent of seafood products destined for the United States for unapproved drugs such as nitrofurans and chloramphenicol. The Vietnamese government responded that it performed 100 percent testing only for products intended for countries with which it had a bilateral agreement, of which the United States was not one. The government stated, however, that it was taking other actions that would preclude the need for this level of testing, such as disseminating information on unapproved drugs, providing training to local authorities, and disciplining violators. According to FDA officials, the agency has not taken steps to develop such an agreement. Vietnam represents about 5 percent of the seafood imported into the United States. In 2009, Vietnam was the largest source of farmed catfish-pangasius imports and the third largest source of farmed shrimp imports into the United States. In addition to foreign processors, FDA also inspects the records of importers that bring seafood products into the United States to make sure they follow HACCP regulations. Among other things, these regulations require importers to maintain documents showing either that the imported products come from foreign suppliers that have themselves complied with HACCP regulations or that the products come from a country with an active agreement with FDA, covering the products, that documents equivalence or compliance with the U.S. system. We found limitations with this aspect of FDA’s program as well.
According to FDA officials, importers most frequently comply with this regulation in one of three ways:

(1) Importers obtain a copy of a foreign processor’s HACCP plan and an attestation that the foreign firm processes its seafood products in compliance with HACCP regulations. Importers review the HACCP plan they get from their foreign suppliers and determine whether all of the hazards the importers identified in their specifications are controlled in the plan. However, according to a senior FDA official, a foreign processor can obtain a HACCP plan that is not associated with its own operation, defeating the purpose of importers’ acquiring a copy of the plan unless the importers also visit the foreign processor to validate the information in the plan and confirm that it is being implemented. FDA does not require importers to visit foreign processors to ensure they effectively implement their HACCP plans.

(2) Importers obtain inspection certificates from what FDA calls a “competent authority,” such as the Canadian Food Inspection Agency, that attests to the safety of the seafood product. However, FDA has not made any formal judgments about any entity’s capability to declare that any foreign seafood products meet U.S. safety standards, nor has it concluded any agreements on a foreign certification program.

(3) Importers obtain seafood products from Canadian or Japanese firms that those governments state are in “good standing” and that are listed on an FDA Web site as processing seafood in accordance with HACCP regulations. However, FDA has neither evaluated the Canadian or Japanese seafood safety systems to determine the extent to which these countries’ systems meet U.S. standards nor verified the lists or the information on them. For example, from fiscal years 2005 through 2010, FDA inspected for HACCP compliance 4 Canadian and 22 Japanese seafood processing facilities out of an estimated total of 944 and 2,697 facilities in each country, respectively.
The EU not only requires individual processors to meet HACCP requirements, but also requires the foreign countries that want to export farmed seafood to the EU to demonstrate that their seafood safety systems meet EU or equivalent requirements, or meet requirements specified in an agreement between the EU and the exporting country. The EU Web site provides information for foreign countries on the EU standards for food products, including seafood, destined for the EU. These standards are used to evaluate foreign food safety systems. The EU publishes its foreign country inspection reports on its Web site, along with the foreign country’s comments and its plan to address the inspection report’s recommendations. To ensure continuous compliance with EU requirements, EU inspectors periodically conduct follow-up reviews of foreign countries’ seafood safety systems. If inspectors identify deficiencies, they recommend solutions and ask the government in question to develop an action plan to address the recommendations. Using this approach, the EU has been able to persuade foreign governments to take appropriate action to address recommendations. For example: The EU inspected Indonesia in November 2009 to evaluate, in part, the country’s measures to control drug residues in animal products, including seafood. The inspectors concluded that the effectiveness of the system to control drug residues was compromised by failings in the planning and implementation of Indonesia’s national residue control plan and by problems in laboratory performance, including questionable validation of methods to detect drug residues in aquaculture products. According to the EU inspectors, the system to control drug residues did not provide guarantees equal to those required by EU regulations.
The EU inspectors made specific recommendations to resolve the problems, including aligning the Indonesian limits for drug residues with those of the EU and ensuring that government controls on the distribution and use of veterinary medicinal products were carried out throughout the distribution chain. The Indonesian government developed an action plan to address all the recommendations. Nevertheless, as a result of the inspection report findings, the EU imposed a 20 percent sampling requirement at the EU ports of entry for all farmed fish imports because it believed that there was a risk that imported farmed products from Indonesia contained residues of chloramphenicol, nitrofurans, and tetracyclines. In November 2008, the EU inspected Bangladesh, in part, to evaluate the country’s programs to control drug residues in seafood and review the implementation of corrective actions promised by the Bangladesh government to address previous EU recommendations. EU inspectors found that Bangladesh was making changes to its sampling and laboratory analysis, among other things. Nevertheless, the inspectors concluded that despite the steps taken by the Bangladesh government to eliminate all sources of nitrofurans and chloramphenicol from farmed fish, the high detection rate of these drugs identified by Bangladesh’s own national monitoring program suggested that fish farms were still using these drugs. According to the EU inspectors, the Bangladesh system to control residues did not provide assurances equal to those required by EU regulations, among other things. In part because of the findings of this inspection, the Bangladesh government imposed a voluntary ban on the export of freshwater shrimp to the EU from May 2009 until January 2010. The Bangladesh government recognized that it had a problem with nitrofurans in freshwater shrimp and took this action to avert any potential ban by the EU. 
The EU placed Bangladesh on special import conditions in 2008, which required 100 percent testing of all shrimp bound for the EU for chloramphenicol, tetracycline, nitrofurans, malachite green, and crystal violet in Bangladesh prior to export. In addition, 20 percent of all shrimp imports must also be tested at EU ports of entry at the importers’ expense. In contrast, FDA inspected five Bangladesh seafood processing facilities in February 2009, and a review of the inspection reports indicated that FDA inspectors did not identify the continued use of nitrofurans and chloramphenicol by the fish farms. Because FDA’s focus was on HACCP compliance—which required the review of documents to ensure consideration was given to whether potential hazards were reasonably likely to occur as a result of drug residues, among other things—rather than the review of elements of the Bangladesh seafood safety system, FDA was unable to identify this issue. Although the Bangladesh government considered the EU findings from 2008 significant enough to impose a ban of shipments of freshwater shrimp to the EU about 3 months after the FDA inspections, Bangladesh officials present at FDA’s inspections did not provide information on the EU findings of the continued use of unapproved drugs by fish farms to FDA. Moreover, Bangladesh did not impose a similar ban on shipments to the United States, and according to FDA officials, the agency, at the time, had no knowledge of the Bangladesh ban on shipments to the EU. Had the FDA inspectors had this information, they could have more effectively scrutinized the methods processors used to ensure the safety of the seafood products they received from fish farms. FDA inspectors could have also discussed Bangladesh government efforts to eradicate the use of unapproved drugs by the fish farms. With information on the use of nitrofurans by Bangladesh shrimp farms, FDA inspectors could have helped direct FDA’s import sampling program to target these products. 
Because it lacked this information, FDA did not adjust its sampling program to take into account the likelihood that shrimp exports from Bangladesh would be contaminated. In fact, from June through December 2009—the period of the ban—FDA analyzed only four shrimp samples from Bangladesh for nitrofurans. Finally, equipped with this information, the United States could potentially have received consideration from the Bangladesh government similar to that given to the EU with regard to the ban. Like the EU, the Department of Agriculture’s FSIS places greater responsibility on the foreign country that wants to export meat, poultry, or processed egg products to the United States. More specifically, imported meat, poultry, and processed egg products are not eligible for export to the United States unless FSIS has determined that the exporting country has a food safety system equivalent to that of the United States. The FSIS Web site provides information on its equivalence process and on the standard for eligibility of foreign countries to export FSIS-regulated products to the United States. FSIS audit reports also provide information on the criteria used for its audits. In addition, FSIS publishes its foreign country audit reports on its Web site. FSIS staff not only review documents provided by foreign governments to ensure that their food safety regulations and oversight are adequate and that processors implement HACCP, among other things, but also conduct onsite evaluations of the governments’ inspections of slaughter and processing facilities and their audits of laboratories and of controls over, among other things, drug residues, sanitation, and animal diseases of public health concern. In addition to these reviews and onsite evaluations, FSIS also conducts drug residue sampling, microbiological sampling, and labeling verification, among other things, at U.S. ports of entry to promote compliance.
FSIS’ program and the requirements it places on foreign governments wishing to export food products to the United States may affect how countries react to problems that FSIS identifies with their products. The potential effect that FSIS’ oversight approach can have on the food safety actions of other countries is illustrated by the situation that occurred with Brazilian beef. In May 2010, as part of FSIS’ port-of-entry inspection program, the agency analyzed samples of cooked beef products from a Brazilian plant and identified levels of ivermectin, an antiparasitic agent, above allowed limits. FSIS increased its testing of cooked beef products from this plant and continued to find drug residue problems. Consequently, FSIS refused entry of cooked beef products into the United States from this plant and expanded its sampling effort to include Brazilian cooked beef products already in commerce and cooked beef products from other Brazilian plants. The testing data indicated that cooked beef products from other Brazilian plants also had levels of ivermectin above allowed limits. Given the consistency of the data, FSIS concluded that the Brazilian government’s oversight program—including its residue sampling and control programs—had broken down. The U.S. government communicated its findings to the Brazilian government and asked that it resolve this violation of U.S. regulations. Although FSIS had the authority to deny entry into the United States of the products of all of these Brazilian plants if this issue was not resolved appropriately, the Brazilian government voluntarily stopped exporting cooked beef products from 24 plants and prepared and submitted a plan to FSIS for how it intended to address the issue. According to FSIS, on December 28, 2010, FSIS accepted Brazil’s corrective action plan, and Brazil removed its voluntary suspension to allow 12 of the 24 plants to resume exporting cooked beef products to the United States.
In addition, to verify that Brazil’s corrective actions are adequate and effective in preventing a recurrence of this situation, FSIS will request that Brazil provide documentation demonstrating that its residue plan is working. FDA’s sampling program for detecting residues from unapproved drugs in imported seafood products is limited in scope. Although FDA tests for residues of 16 unapproved drugs, some other countries importing from the same countries as the United States test for up to 57 drugs. In addition, although the 16 drugs include drugs such as flumequine and oxolinic acid, which are approved in certain other countries, FDA is not testing for residues of other drugs, such as emamectin benzoate or tetracycline, that are approved in other countries but unapproved in the United States. Thus, FDA generally does not test for drugs that some countries and the EU have approved for use in aquaculture. Because these drugs may be used in countries with which the United States conducts considerable trade, seafood products containing these unapproved drugs may be entering the country. For example, China, a major seafood exporter to the United States, approves the use of tetracycline in aquaculture, although the United States does not. Vietnam, also a major seafood exporter to the United States, approves the use of neomycin in aquaculture, but the United States does not. Both tetracycline and neomycin have been determined to be highly important antimicrobials in human medicine; according to the World Health Organization, the overuse of these drugs in food animals could contribute to increasing the risk of antibiotic-resistant bacterial infections in humans. In 2007, Japan detected excessive levels of tetracycline residues in the shrimp products it imported from China, and in 2010 the EU detected excessive levels of neomycin in imported catfish from Vietnam.
Because FDA does not include tetracycline and neomycin in its sampling program, it has no assurance that seafood containing these drug residues has not entered the United States. In addition, FDA does not effectively implement its limited sampling program. According to FDA officials, the equipment and personnel the agency dedicates to its sampling program are sufficient to complete its assignment plan in its entirety. However, FDA did not meet the performance goals it set for its targeted unapproved drugs for fiscal years 2006 through 2009: the agency planned to collect on average 975 import samples annually for testing but collected an average of about 680 samples (or about 70 percent). According to FDA officials, the agency may not achieve its goals because a specific seafood product may not come into the country as anticipated or there may be a need to shift laboratory resources to handle other urgent tasks, such as testing imported honey for chloramphenicol. Moreover, FDA’s planned number of import samples to collect represents a small portion of the annual seafood imports into the United States. Thus, in fiscal year 2009, the seafood samples FDA reported it collected for drug residue testing amounted to 0.1 percent of all the seafood products imported into the United States. In addition, although FDA’s import sampling program states that it prioritizes the testing of all shrimp and all catfish and catfish-related species for residues of nitrofurans, during fiscal years 2006 through 2009 FDA analyzed 279 shrimp samples out of the 1,060 shrimp samples collected for residues of nitrofurans and did not analyze any catfish samples for nitrofurans. In fiscal year 2008, according to its annual work plan, FDA planned to collect 125 shrimp samples for nitrofurans analysis. Although FDA collected a total of 349 shrimp samples, it tested only 34 for residues of nitrofurans, and 6 (18 percent) of these samples were found to contain nitrofurans. 
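The shortfalls described above follow from simple arithmetic. As an illustrative check, using only the figures reported in this section, the percentages can be reproduced as follows:

```python
# Illustrative check of the sampling figures reported above.
# All input numbers come from this report; nothing else is assumed.

planned_per_year = 975   # import samples FDA planned to collect annually, FY2006-2009
collected_avg = 680      # average samples actually collected per year
print(f"Share of annual goal met: {collected_avg / planned_per_year:.0%}")  # about 70%

# Fiscal year 2008 shrimp figures for nitrofurans testing
shrimp_collected = 349   # shrimp samples collected
shrimp_tested = 34       # samples actually analyzed for nitrofurans
positives = 6            # samples found to contain nitrofurans
print(f"Shrimp samples tested for nitrofurans: {shrimp_tested / shrimp_collected:.0%}")
print(f"Positive rate among tested samples: {positives / shrimp_tested:.0%}")  # 18 percent
```

The 18 percent positive rate among the few samples actually analyzed underscores why the small fraction tested (roughly a tenth of the shrimp samples collected that year) leaves substantial uncertainty about the products that entered commerce untested.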
Because of FDA’s limited sampling, some of the more than 2.5 million metric tons of shrimp and 156,000 metric tons of catfish imports that entered the United States during fiscal years 2006 through 2009 could have contained residues of nitrofurans. In addition to the limitations of FDA’s sampling program for drug residues, the agency does not effectively use its laboratory resources. For example, while some other countries have increased their laboratory capabilities through programs to accredit commercial laboratories, FDA relies on 7 of its 13 laboratories to conduct all of its aquaculture drug residue testing. According to FDA officials, the number of laboratories participating in the sampling program is not important because sufficient laboratory capacity and capabilities are developed to meet obligations. However, as discussed above, FDA has not met its sampling performance goals in past years, and the number of laboratories participating in the sampling program may play a part in this. Moreover, not all seven of the laboratories that FDA uses for its sampling program can test for all of the drugs included in the program. For example, only one laboratory can test for residues of chloramphenicol, and four can test for nitrofurans; three of those four can also test for malachite green, gentian violet, fluoroquinolones, and quinolones. Further, FDA lacks some of the analytical methods that its laboratories need to test for specific drugs used in aquaculture. For example, FDA has no method to detect residues of emamectin benzoate, a drug unapproved for use in U.S. aquaculture but used in Chile, as noted above, and approved for use in other countries as well. Moreover, although FDA can test for nitrofurans in four of its laboratories, it has only one method for testing for nitrofurans in catfish samples.
FDA’s laboratory capabilities are also limited by the personnel available to perform the tests. Although FDA has assigned personnel to its sampling program, these resources can be shared across FDA’s food programs. Consequently, FDA can divert personnel to other programs that it may consider higher priority when the need arises, which could result in a lag in the turnaround time for drug residue testing. For example, according to FDA officials, FDA allows 14 calendar days to test a sample for drug residues, although time frames for completing analyses under the sampling program vary by residue and species. We found that the average time between sample collection and testing was about 22 calendar days. In one instance, testing for one sample was completed 154 calendar days after it was collected; in another, FDA took 56 days to complete the analysis of two separate samples—both of which turned out to contain residues of unapproved drugs. In contrast with FDA’s import sampling program, the sampling programs of Canada, the EU, and Japan test for significantly more drugs: Canada tests its imported seafood products for more than 40 different drugs, select EU member countries test for 50 drugs, and Japan tests for 57. In addition, Canada and Japan test for levels of drugs they have approved for use in aquaculture as well as for drugs that are unapproved in their own countries but approved in others. Moreover, Canada, the EU, and Japan generally test more samples of seafood and have more extensive laboratory capabilities than FDA. For example, Canada routinely tests at least 5 percent of all seafood imports, Japan tested about 11 percent of its seafood imports in fiscal year 2009, and select EU member countries test as much as 4 percent of their seafood imports. In addition, the EU requires more testing for countries that produce larger quantities of seafood, because larger volumes pose an increased risk of adulterated products.
Further, unlike FDA, which relies only on its own laboratory capabilities, Canada, the EU, and Japan have systems in place to accredit commercial laboratories, which may be involved in the testing for drug residues in seafood products. For example, Belgium has 8 national laboratories as well as a network of 62 EU member state and commercial laboratories to assist with drug residue testing. FDA and NMFS have made limited progress in implementing the 2009 MOU, resulting in a lack of systematic collaboration between the agencies. Since March 2010, the agencies have collaborated to some extent in developing procedures for certain MOU responsibilities, specifically FDA notification of regulatory action. In addition, while FDA and NMFS effectively collaborated and successfully leveraged each other’s resources during the 2010 Gulf of Mexico oil spill emergency, FDA has not yet fully met its MOU responsibility to utilize NMFS’ foreign and domestic inspection resources in a systematic manner. The NMFS Seafood Inspection Program describes its mission as ensuring the safety and quality of the seafood it inspects, and FDA officials stated that training NMFS inspectors would bring them to the level that FDA requires of its own inspectors. By effectively utilizing NMFS inspection resources to help reduce its own inspection workload, FDA could inspect other facilities that have not yet been inspected. During a meeting to discuss the MOU in March 2010, the agencies agreed to create standard operating procedures for certain MOU responsibilities. FDA officials told us that in September 2010 they sent NMFS a letter notifying it of an FDA regulatory action, which is one of the MOU responsibilities. According to NMFS officials, this letter was the first prior notification of regulatory action FDA had ever provided.
NMFS officials added that communication between the agencies has consisted of periodic conference calls that included discussions of the oil spill. Frequent communication among collaborating agencies is a means to facilitate working across agency boundaries and prevent misunderstanding; without such communication, enhanced collaboration may not be sustained. According to NMFS officials, NMFS has developed guidance for its staff regarding its 2009 MOU responsibilities. Similarly, according to FDA officials, the agency has developed some guidance, such as the notification letter template. However, the agencies have not developed guidance for items of mutual responsibility. As we previously reported, agencies need to address the compatibility of standards, policies, and procedures in order to facilitate collaboration. The agencies have agreed to develop standard operating procedures for information sharing and cross-training of personnel, but they have not yet done so. Even with FDA’s and NMFS’ success in leveraging each other’s resources during the 2010 Gulf of Mexico oil spill, FDA has yet to fully meet its responsibility under the MOU to utilize NMFS inspection resources or results in a systematic manner. The leveraging of resources played a crucial role in FDA’s and NMFS’ ability to address the effects of the Gulf of Mexico oil spill. Using guidance developed from previous oil spills, the agencies quickly and jointly developed a protocol to reopen oil-impacted areas closed to seafood harvesting. The emergency nature of the spill meant that implementing the protocol required timely collaboration between FDA and NMFS.
The agencies successfully implemented their reopening protocol by, among other things, sharing staff and laboratory resources and cooperating efficiently. In accordance with the reopening protocol, the agencies jointly organized the seafood sampling plan and agreed upon the use of NMFS’ sensory testing protocol following FDA review. The agencies successfully coordinated the chemical testing of samples for oil residue among their respective laboratories. As agreed upon in their reopening protocol, both agencies reviewed all sample results and consulted with each other before NMFS communicated the results to the states. Going forward, however, FDA has not developed a process to leverage NMFS’ domestic and foreign inspections or their results in order to maximize its limited resources and inspect other facilities that have not yet been inspected. Both the 1974 and 2009 MOUs address the leveraging of NMFS’ inspections by FDA in order to maximize the use of available resources. As we stated in our October 2005 report, collaborating agencies bring different levels of resources and capacities to a collaborative effort and can leverage each other’s resources to obtain additional benefits that would not be available if they were working separately. FDA’s inspection work plan does not consider establishments under contract with NMFS in determining the facilities FDA plans to inspect in any given year. In addition, by not effectively utilizing NMFS inspection resources or results, FDA has allowed some processing facilities to go without an inspection. A 2010 audit by the Department of Health and Human Services’ Office of Inspector General found that 56 percent of domestic food facilities had gone 5 years or more without an FDA inspection. The audit report pointed out that FDA cannot ensure that these facilities are complying with applicable laws and regulations if it does not routinely inspect them.
The need to leverage NMFS inspection resources or results was especially critical in China, which accounts for 23 percent of seafood imports into the United States. FDA has inspected 41 of 2,744 (or 1.5 percent of) Chinese seafood processing facilities in the last 6 years. FDA officials provided new information during our closing meeting concerning the agency’s plans to use NMFS inspection results. According to these officials, the agency first needs to increase the level of training of NMFS inspectors. Toward that goal, FDA has begun to train NMFS inspectors, using an advanced FDA course, to increase their inspection capabilities to the level that FDA requires of its own inspectors. For example, FDA plans to train at least 16 NMFS inspectors during fiscal year 2011. NMFS officials confirmed that NMFS inspectors are attending FDA’s training in order to meet FDA’s training requirement and advance the MOU’s provision on leveraging resources. In addition, the NMFS inspectors who completed the training and took the FDA exam passed it. However, according to NMFS officials, NMFS training and its inspectors’ capabilities are already equivalent to those of FDA inspectors. According to FDA officials, once NMFS inspectors are trained, the agency plans to inspect some Chinese seafood processing facilities jointly with NMFS to evaluate the NMFS inspectors’ capabilities. FDA officials also noted that once NMFS inspection capabilities reach FDA’s required level, the agency will consider using NMFS inspection results as another source of information feeding into FDA’s risk analysis process for determining the facilities to inspect in any given year. However, FDA has yet to fully develop this risk-based approach, and no time frames or documentation exist for its full development.
FDA noted that the use of NMFS inspection results would be part of FDA’s implementation of any third-party certification program, which is mandated by the FDA Food Safety Modernization Act. Therefore, at this point, FDA has not documented how it plans to use NMFS inspection results. FDA has previously provided other reasons for not using NMFS inspection resources or results. In 2005, we recommended that FDA recognize the results of NMFS inspections when the agency determined the frequency of its seafood inspections. In response, FDA stated that it would assess this issue. However, FDA officials also stated that the agency did not rely on NMFS’ inspection information because NMFS could have conflicts of interest due to its fee-for-service inspection approach and because FDA did not know what facilities NMFS was inspecting. A year earlier, we recommended that FDA and NMFS develop an MOU so that FDA, in part, would use and leverage NMFS inspection services to more efficiently and effectively monitor the safety of imported seafood. In response, FDA stated that it would explore additional opportunities to better leverage NMFS inspection resources and more efficiently and effectively protect the public health. We also noted that an FDA official raised concerns about potential conflicts of interest with NMFS inspections, but that other officials thought these concerns could be addressed in an agreement. With about a 20 percent increase in the consumption of imported seafood in the last 10 years, FDA’s responsibility for ensuring the safety of the nation’s food supply, including imported seafood, has also increased. However, FDA still uses the same approach it developed more than 10 years ago to ensure the safety of imported seafood, even though the United States’ reliance on imported seafood has increased and aquaculture has emerged as a major source of those imports.
FDA’s approach is generally focused on reviewing records of foreign processors and importers and does not consider other pertinent areas of a foreign country’s food safety system. Its foreign country assessments have been limited by the lack of a formal structure and of necessary policies, guidance, and criteria. In addition, FDA’s sampling program does not give appropriate consideration to testing for the drugs approved for use in aquaculture by major U.S. seafood trading partners but unapproved by the United States, and FDA does not effectively use its laboratory resources. Practices employed by other entities with regulatory responsibilities similar to FDA’s, including another U.S. government agency, show potentially more effective alternatives to the current FDA approach. The recently enacted food safety legislation provides FDA with new authorities that may enable it to more comprehensively review a foreign country’s seafood safety system and implement the practices that other entities employ to ensure the safety of imported food products. For example, the EU requires foreign countries with which it trades to maintain seafood safety systems that meet EU requirements or equivalent conditions, or that meet specific requirements provided in an agreement between the EU and the foreign country, before the EU will accept seafood imports from that country. Also, the EU specifically directs that the foreign country submit a national residues monitoring plan, which provides information on the sampling for drugs of concern to the EU in seafood products destined for the EU. That monitoring plan must have an effect at least equivalent to that of the plans required within the EU. To facilitate consideration and implementation of a different oversight approach to ensure the safety of imported seafood, FDA must utilize its current resources in the most efficient manner.
However, FDA is not using its resources efficiently when it does not effectively implement the 2009 MOU with NMFS and fully utilize the resources of NMFS’ Seafood Inspection Program, a program dedicated solely to ensuring the quality and safety of seafood. According to FDA officials, training NMFS inspectors would bring their capabilities to a level commensurate with FDA requirements. Furthermore, although FDA worked effectively with NMFS to ensure the safety of domestic seafood during the Gulf of Mexico oil spill, it lacks systematic collaboration with that agency. Provisions in the new food safety legislation also provide FDA with more specific direction and opportunity for greater collaboration with NMFS through, in part, more effective use of its inspection resources or results. To better ensure the safety of seafood imports, we recommend that the Secretary of Health and Human Services direct the Commissioner of FDA to take the following three actions: (1) study the feasibility of adopting other practices used by other entities, such as requiring foreign countries that want to export seafood to the United States to develop a national residues monitoring plan to control the use of aquaculture drugs, to more efficiently ensure the safety of imported seafood, and report the findings to the Secretary; (2) develop a more comprehensive import sampling program for seafood by more effectively using its laboratory resources and taking into account the imported seafood sampling programs of other entities and countries; and (3) develop a strategic approach with specific time frames for enhancing collaborative efforts with NMFS and better leveraging NMFS inspection resources. We provided the Departments of Agriculture, Commerce, and Health and Human Services (HHS) a draft of this report for their review and comment. 
We also provided a draft of this report as a courtesy to the Department of Homeland Security, the Department of State, and the Office of the United States Trade Representative. On March 23, 2011, we received written comments from HHS, which are reproduced in appendix IV; HHS neither agreed nor disagreed with the findings and recommendations in the report. The Departments of Agriculture and Commerce did not provide written comments. HHS notes that our report represents a baseline against which FDA can measure its ongoing progress. The department also states, however, that reading our report may not result in a full understanding of FDA’s multifaceted and risk-informed seafood safety program, which relies on information from various sources, and it provided additional information in this regard. (See app. IV for our response to this and other general comments.) In addition, while HHS did not explicitly agree or disagree with our recommendations, the department provided information in its written comments on actions in process or planned related to each of the recommendations we made in our draft report. The additional information related to each of our three recommendations follows: Study the feasibility of adopting other practices used by other entities, such as requiring foreign countries that want to export seafood to the United States to develop a national residues monitoring plan to control the use of aquaculture drugs, to more efficiently ensure the safety of imported seafood and report its findings to the Secretary: HHS stated that, as part of implementing the Food Safety Modernization Act, FDA will determine whether the legislation supports the kind of precondition for export to the United States that, FDA stated, our recommendation envisioned. 
Develop a more comprehensive import sampling program for seafood by more effectively using its laboratory resources and taking into account the imported seafood sampling programs of other entities and countries: HHS stated that FDA agrees that effective use of laboratory resources and import sampling programs are important facets of a comprehensive and risk-informed program to ensure seafood safety. HHS stated that FDA is evaluating proposed research to further expand residue and species coverage and identify areas for improved laboratory testing efficiencies. Develop a strategic approach with specific time frames for enhancing collaborative efforts with NMFS and better leveraging NMFS inspection resources: HHS stated that FDA agrees that it is important for the agency to maintain and foster this collaborative and effective working relationship with NMFS. Further, FDA will work with NMFS to develop strategic approaches for enhancing collaboration and better leveraging seafood inspection resources. However, the agency did not comment on its intent to establish specific time frames for this enhanced collaboration, which we believe remains essential to help ensure accountability for and expeditious implementation of this strategic approach. HHS and the Department of Commerce also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Health and Human Services, Agriculture, Commerce, Homeland Security, and State; the United States Trade Representative; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The Department of Health and Human Services’ Food and Drug Administration (FDA) has responsibility for ensuring the safety of seafood imports. The Department of Commerce’s National Marine Fisheries Service (NMFS) provides voluntary fee-for-service inspections to ensure compliance with FDA’s Hazard Analysis and Critical Control Point (HACCP) regulations, among other things. To assess the extent to which FDA ensures the safety of seafood imports against residues from unapproved drugs, we analyzed information on FDA’s oversight mechanism for seafood imports—importer and foreign country processing facilities inspections—and its seafood import sampling program. In particular, we analyzed information on the major components and requirements of FDA’s importer and foreign facility HACCP inspections. Specifically, we reviewed FDA’s inspection reports for seafood processing facilities from major seafood exporting countries to the United States—Bangladesh, Chile, China, and Thailand—and focused our review on 15 FDA inspection reports for facilities that processed aquaculture seafood products during fiscal years 2007 through 2009. We analyzed fiscal years 2006 through 2009 data on FDA’s import sampling program’s test results to determine the magnitude and scope of the program. As part of our data request, we asked FDA to provide the drug residue being tested for in each analysis. However, information on drug residue, country of origin, and type of seafood was in data fields combined with other information and not easily analyzable. Consequently, we used a statistical program searching for key words to analyze the data. After this preliminary identification of the drug being analyzed, country, and seafood type, we independently verified that the information was correct. 
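The keyword search described above can be sketched in a few lines. This is an illustrative sketch only: the keyword lists, the function name, and the sample field contents are hypothetical and are not drawn from FDA's actual data.

```python
# Illustrative keyword-based extraction from a combined free-text field.
# The keyword lists below are hypothetical examples, not FDA's actual terms.
DRUG_KEYWORDS = ["malachite green", "nitrofuran", "chloramphenicol", "fluoroquinolone"]
COUNTRY_KEYWORDS = ["bangladesh", "chile", "china", "thailand"]
SEAFOOD_KEYWORDS = ["shrimp", "catfish", "tilapia", "eel"]

def extract_fields(record_text):
    """Return (drugs, countries, seafood types) whose keywords appear in one combined field."""
    text = record_text.lower()
    find = lambda terms: [t for t in terms if t in text]
    return find(DRUG_KEYWORDS), find(COUNTRY_KEYWORDS), find(SEAFOOD_KEYWORDS)
```

Hits produced this way are only preliminary, which is why each identification was independently verified afterward.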
In addition, we conducted several data checks, including reviewing the data for missing or incomplete information and testing for obvious errors in accuracy and completeness, to ensure the reliability of the data. Furthermore, we interviewed knowledgeable FDA officials to discuss the database’s internal controls and other measures used to ensure the reliability of the data. We determined that the data were sufficiently reliable for our purposes. We reviewed documents regarding the seafood importing programs of the European Union (EU), the largest importer of seafood worldwide; Japan, the second largest importer of seafood worldwide; and Canada, a major provider of seafood to the United States. We reviewed the EU’s importing program to determine whether its practices for ensuring the safety of seafood imports have the potential to enhance FDA’s practices. As part of this effort, we reviewed the EU’s inspection reports on select foreign countries that are major providers of seafood products. We also reviewed the imported seafood sampling programs of Canada, the EU, and Japan to determine whether their sampling practices had the potential to enhance FDA’s practices. We reviewed information on import refusals and alerts identified by Canada, the EU, and Japan to determine the types of drug residues identified in these countries’ seafood imports. In addition, we reviewed the approach the Department of Agriculture’s Food Safety and Inspection Service (FSIS) uses to ensure the safety of imported meat and poultry products to identify promising practices used by another federal agency responsible for the safety of imported food products. We visited the European Commission (Brussels, Belgium) and its inspection office—the Food and Veterinary Office (Grange, Ireland)—to gain a better understanding of its programs and oversight controls for seafood imports. 
During the visit, we met with officials from the Belgian and Irish governments to learn about their drug residue testing programs for seafood imports. In addition, we visited government laboratories in Ghent, Belgium, and Rinville, Ireland, to learn about the analytical methods available to detect drug residues in seafood products. We also visited the Port of Antwerp (Antwerp, Belgium), the largest port of entry for seafood products in the EU, to learn about oversight controls for seafood imports. We visited the Port of New York/Newark in Newark, New Jersey, the largest port of entry for seafood products on the East Coast, and met with Customs and Border Protection to learn about its activities related to ensuring the safety of seafood imports. We also visited a cold storage facility in close vicinity to the New York/Newark port, where FSIS inspectors are stationed, to learn about the measures FSIS uses to ensure the safety of imported meat and poultry products. During the same trip, we visited FDA’s Northeast laboratory in Jamaica, New York, and a Customs and Border Protection laboratory in Newark to learn about the analytical methods available to detect drug residues in seafood products. We visited FDA’s and NMFS’ laboratories that specialize in seafood research—FDA’s Gulf Coast Seafood Laboratory (Dauphin Island, Alabama) and NMFS’ National Seafood Inspection Laboratory (Pascagoula, Mississippi)—to learn about the research the agencies are conducting on drug residues in seafood products. We visited a state actively involved in testing seafood imports—touring the Florida Department of Agriculture’s laboratory (Tallahassee, Florida) and the Florida Agricultural and Mechanical University’s Research and Extension facility (Quincy, Florida)—to learn about fish farming practices. 
We interviewed knowledgeable officials from Canada; FDA’s Center for Food Safety and Applied Nutrition, Office of Regulatory Affairs, and Center for Veterinary Medicine; FSIS; and Japan to better understand how their respective programs function. For informational purposes, we spoke with representatives from the states of Alabama and Mississippi because of their testing programs for imported seafood and their proximity to the Gulf of Mexico. To gain various stakeholders’ perspectives on the safety of seafood imports, we also spoke with representatives from industry (Charm Sciences, Inc.; Costco; Darden; and SGS, a third-party entity that certifies seafood farms and processors), trade associations (the Catfish Farmers of America; National Aquaculture Association; National Fisheries Institute; and Southeastern Fisheries Association, Inc.), and consumer advocacy groups (the Center for Science in the Public Interest and Food and Water Watch). To assess the extent to which FDA and NMFS have implemented the 2009 memorandum of understanding (MOU) to enhance federal oversight of seafood, we analyzed relevant agency documents on its implementation. Specifically, we obtained and reviewed the 1974 MOU, letters of notification between the agencies, and MOU guidance provided by each agency to its respective field offices. We focused on two of the eight practices identified in our previous work to enhance cooperation between federal agencies in order to determine the extent to which a collaborative working relationship exists between FDA and NMFS: (1) establish policies and procedures to facilitate systematic collaboration across agency lines and (2) identify potential ways to leverage resources to maximize and sustain collaborative effort. 
We did not address the remaining practices: (1) define and articulate a common outcome; (2) establish mutually reinforcing or joint strategies; (3) agree on roles and responsibilities; (4) develop mechanisms to monitor, evaluate, and report on results; (5) reinforce agency accountability for collaborative efforts; and (6) reinforce individual accountability for collaborative efforts. We did not address the first three of these practices because the agencies had already implemented them; additionally, due to the lack of compatible policies and leveraging of resources, we did not expect the agencies to have developed mechanisms for evaluation or for agency and individual accountability. We obtained and reviewed the 2009 MOU implementation plan and compared lists of establishments that received FDA or NMFS inspections for fiscal years 2005 through 2009 to determine the extent of inspection duplication. To present information on possible duplication for background purposes, we matched facility names and addresses using a statistical program; for any potential but nonexact matches, we independently verified whether they were correct. We determined that the inspection data were sufficiently reliable for our purposes. We interviewed knowledgeable FDA and NMFS headquarters officials to determine their progress in implementing the 2009 MOU. We reviewed the National Oceanic and Atmospheric Administration’s report on ensuring seafood safety after an oil spill and the jointly written 2010 protocol for reopening oil-impacted areas to assess the cooperation between FDA and NMFS in response to the oil spill in the Gulf of Mexico. We interviewed officials at NMFS’ laboratory in Pascagoula, Mississippi, as well as at FDA’s mobile laboratory in Tallahassee, Florida, and its Gulf Coast Seafood Laboratory in Dauphin Island, Alabama, to determine the extent to which the agencies coordinated efforts and leveraged resources during this emergency situation. 
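The facility-matching step described above, in which nonexact matches are flagged for manual verification, can be sketched as follows. The normalization rules, record field names, and similarity threshold are assumptions for illustration, not details from our methodology.

```python
from difflib import SequenceMatcher

def normalize(s):
    """Lowercase and strip punctuation and extra spaces so trivial differences don't block a match."""
    return " ".join(s.lower().replace(",", " ").replace(".", " ").split())

def similarity(a, b):
    """Return a 0.0-1.0 similarity score between two normalized strings."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def is_potential_match(rec_a, rec_b, threshold=0.85):
    """Flag two facility records (dicts with 'name' and 'address') as likely duplicates."""
    return (similarity(rec_a["name"], rec_b["name"]) >= threshold
            and similarity(rec_a["address"], rec_b["address"]) >= threshold)
```

Records flagged by such a routine would still be verified by hand, as described above.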
We conducted this performance audit from April 2010 to April 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following are GAO’s comments on the Department of Health and Human Services’ (HHS) letter dated March 23, 2011. 1. We acknowledge that FDA has a multifaceted seafood safety program, and our report discusses various measures that the agency uses to ensure the safety of imported seafood. For example, our report discusses facility and importer HACCP inspections, FDA’s drug residue sampling program, and foreign country assessments. As we note in the report, these measures are limited when compared to more comprehensive reviews conducted by the EU and the Department of Agriculture’s FSIS. FDA notes that another measure is information from its overseas offices. In our September 2010 report on FDA’s overseas offices, however, we found that although the offices have engaged in a variety of activities to help ensure the safety of all FDA imported products, overseas FDA officials report facing a variety of challenges that may limit their ability to enhance agency oversight. 2. HHS notes that FDA is also implementing the Predictive Risk-based Evaluation for Dynamic Import Compliance Targeting (PREDICT), which the department states will improve its current electronic screening system by targeting higher risk products for exam and sampling. The department notes that PREDICT will make more efficient use of FDA’s import resources and allow the agency to adjust its import sampling level for seafood products over time. 
In our April 2010 report, we found that, according to FDA officials, the agency had delayed a nationwide rollout of PREDICT due primarily to information technology infrastructure problems, such as server crashes and overloads. 3. HHS describes the role of FDA’s foreign country assessments in ensuring the safety of imported seafood, stating that the assessments evaluate a foreign country’s aquaculture systems and controls and assess products. We state in our report, however, that until recently, FDA had not developed written standard operating procedures for conducting its foreign country assessments. In its comments, HHS states that, during a foreign country assessment, FDA assesses a foreign country’s laws and their implementation for the control of animal drug residues in the aquaculture products it ships to the United States. However, in the absence of written criteria, standards, and program policies, it may be difficult for FDA to carry out such an effort in a systematic or consistent manner. In its comments, HHS describes the breadth and value of FDA’s foreign country assessments as part of its import oversight program, but these assessments are not identified in FDA’s publicly available information as is its HACCP inspection program. FDA also has not documented that these assessments are linked to any inspection or sampling program. In addition to the individual named above, Jose Alfredo Gomez (Assistant Director), David Moreno (Analyst-in-Charge), David Adams, Nancy Crothers, Diana Goody, Christine Ramos, and Kiki Theodoropoulos made key contributions to this report. Important contributions were also made by Kevin Bray, Michele Fejfar, and Catherine Hurley.

About half of the seafood imported into the U.S. comes from farmed fish (aquaculture). Fish grown in confined aquaculture areas can develop bacterial infections, which may require farmers to use drugs such as antibiotics. The residues of some drugs can cause cancer and antibiotic resistance. 
The Department of Health and Human Services' (HHS) Food and Drug Administration (FDA) is charged with ensuring the safety of seafood against residues from unapproved drugs, and the Department of Commerce's National Marine Fisheries Service (NMFS) provides inspection services on request. In 2009, these agencies signed a memorandum of understanding (MOU) to enhance seafood oversight and leverage inspection resources. GAO was asked to assess the extent to which (1) FDA's program is able to ensure the safety of seafood imports against residues from unapproved drugs and (2) FDA and NMFS have implemented the 2009 MOU. GAO reviewed data and documents from each agency and interviewed agency officials and other key stakeholders. FDA's oversight program to ensure the safety of imported seafood from residues of unapproved drugs is limited, especially as compared with the European Union (EU). FDA's program is generally limited to enforcing the Hazard Analysis and Critical Control Point--the internationally recognized food safety management system--by conducting inspections of foreign seafood processors and importers each year. These inspections involve FDA inspectors reviewing records to ensure the processors and importers considered significant hazards, including those resulting from drug residues if the seafood they receive are from fish farms. The inspectors generally do not visit the farms to evaluate drug use or the capabilities, competence, and quality control of laboratories that analyze the seafood. In addition, FDA has conducted foreign country assessments in five countries to gather information about those countries' aquaculture programs. However, these assessments have been limited by FDA's lack of procedures, criteria, and standards. 
In contrast, the EU reviews foreign government structures, food safety legislation, and the foreign country’s fish farm inspection program, and it visits farms to ensure that imported seafood products come from countries with seafood safety systems equivalent to that of the EU. In addition, the scope of FDA’s sampling program, which supplements its oversight program, is limited. Specifically, the sampling program does not generally test for drugs that some countries and the EU have approved for use in aquaculture. Consequently, seafood containing residues of drugs not approved for use in the United States may be entering U.S. commerce. Further, FDA’s sampling program is ineffectively implemented. For example, for fiscal years 2006 through 2009, FDA missed its assignment plan goal for collecting import samples by about 30 percent. In addition, in fiscal year 2009, FDA tested about 0.1 percent of all imported seafood products for drug residues. Moreover, FDA’s reliance on 7 of its 13 laboratories to conduct all of its aquaculture drug residue testing raises questions about the agency’s use of resources. FDA and NMFS have made limited progress in implementing their 2009 MOU. The agencies have developed procedures for certain MOU activities, such as notifying NMFS of pending FDA regulatory actions. However, because FDA believes NMFS inspectors need training to conduct inspections according to FDA standards, it has not utilized NMFS’ inspection resources or results in a systematic manner. Better leveraging available resources is critical, especially in places like China, where FDA has inspected 1.5 percent of Chinese seafood processing facilities in the last 6 years. GAO recommends that FDA study the feasibility of adopting practices used by other entities to better ensure the safety of imported seafood, enhance its import sampling program, and develop a strategic approach for enhancing collaboration with NMFS and better leveraging resources. 
HHS neither agreed nor disagreed with GAO's recommendations but cited actions in process or planned that are generally responsive to them.
In August 1993, the Congress enacted the Omnibus Budget Reconciliation Act of 1993 (OBRA 1993, P.L. 103-66), which established the EZ/EC program. The act specified that an area to be selected for the program must meet specific criteria for characteristics such as geographic size and poverty rate and must prepare a strategic plan for implementing the program. The act also authorized the Secretary of Housing and Urban Development and the Secretary of Agriculture to designate the EZs and ECs in urban and rural areas, respectively; set the length of the designation at 10 years; and required that nominations be made jointly by the local and state governments. The act also amended title XX of the Social Security Act to authorize the special use of Social Services Block Grant (SSBG) funds for the EZ program. The use of SSBG funds was expanded to cover a range of economic and social development activities. Like other SSBG funds, the funds allotted for the EZ program are granted by the Department of Health and Human Services (HHS) to the state, which is fiscally responsible for the funds. HHS’ regulations covering block grants (45 C.F.R. part 96) provide maximum fiscal and administrative discretion to the states and place full reliance on state law and procedures. HHS has encouraged the states to carry out their EZ funding responsibilities with as few restrictions as possible under the law. After the state grants the funds to the EZ or the city, the EZ can draw down the funds through the state for specific projects over the 10-year life of the program. The Clinton administration announced the EZ/EC program in January 1994. The federal government received over 500 nominations for the program, including 290 nominations from urban communities. On December 21, 1994, the Secretaries of Housing and Urban Development and Agriculture designated the EZs and ECs. 
All of the designated communities will receive federal assistance; however, as established by OBRA 1993, the EZs are eligible for more assistance through grants and tax incentives than the ECs. After making the designations, HUD issued implementation guidelines describing the EZ/EC program as one in which (1) solutions to community problems are to originate from the neighborhood up rather than from Washington down and (2) progress is to be based on performance benchmarks established by the EZs and ECs, not on the amount of federal money spent. The benchmarks are to measure the results of the activities described in each EZ’s or EC’s strategic plan. When we issued our December 1996 report, all six of the urban EZs had met the criteria defined in OBRA 1993, developed a strategic plan, signed an agreement with HUD and their respective states for implementing the program, signed an agreement with their states for obtaining the EZ/EC SSBG funds, drafted performance benchmarks, and established a governance structure. However, the EZs differed in their geographic size, population, and other demographic characteristics, reflecting the selection criteria. In addition, the local governments had chosen different approaches to implementing the EZ program. Atlanta, Baltimore, Detroit, New York, and Camden had each established a nonprofit corporation to administer the program, while Chicago and Philadelphia were operating through the city government. At the state level, the types of agencies involved and the requirements for drawing down the EZ/EC SSBG funds differed. HHS awarded the funds to the state agency that managed the regular SSBG program unless the state asked HHS to transfer the responsibility to a state agency that dealt primarily with economic development. Consequently, the funds for Atlanta and New York pass through their state’s economic development agency, while the funds for the other EZs pass through the state agency that manages the regular SSBG program. 
Each urban EZ also has planned diverse activities to meet its city’s unique needs. All of them have planned activities to increase the number of jobs in the EZ, improve the EZ’s infrastructure, and provide better support to families. However, the specific activities varied, reflecting decisions made within each EZ. According to HUD, the EZs had obligated over $170 million as of November 1996. However, the definition of obligations differed. For example, one EZ defined obligations as the amount of money that had been awarded under contracts. Another EZ defined obligations as the total value of the projects that had been approved by the city council, only a small part of which had been awarded under contracts. As of September 30, 1997, the six EZs had drawn down about $30 million from the EZ/EC SSBG funds for administrative costs, as well as for specific activities in the EZs. We interviewed participants in the urban EZ program and asked them to identify what had and had not gone well in planning and implementing the program. Our interviews included EZ directors and governance board members, state officials involved in drawing down the EZ/EC SSBG funds, contractors who provided day-to-day assistance to the EZs, and HUD and HHS employees. Subsequently, we surveyed 32 program participants, including those we had already interviewed, and asked them to indicate the extent to which a broad set of factors had helped or hindered the program’s implementation. While the survey respondents’ views cannot be generalized to the entire EZ/EC program, they are useful in understanding how to improve the current EZ program. 
In the 27 surveys that were returned to us, the following five factors were identified by more than half of the survey respondents as having helped them plan and implement the EZ program: community representation on the EZ governance boards, enhanced communication among stakeholders, assistance from HUD’s contractors (called generalists), support from the city’s mayor, and support from White House and cabinet-level officials. Similarly, the following six factors were frequently identified by survey respondents as having constrained their efforts to plan and implement the EZ program: difficulty in selecting an appropriate governance board structure, the additional layer of bureaucracy created by the state government’s involvement, preexisting relationships among EZ stakeholders, pressure for quick results from the media, the lack of federal funding for initial administrative activities, and pressure for quick results from the public and private sectors. From the beginning, the Congress and HUD have made evaluation plans an integral part of the EZ program. OBRA 1993 required that each EZ applicant identify in its strategic plan the baselines, methods, and benchmarks for measuring the success of its plan and vision. In its application guidelines, HUD amplified the act’s requirements by asking each urban applicant to submit a strategic plan based on four principles: (1) creating economic opportunity for the EZ’s residents, (2) creating sustainable community development, (3) building broad participation among community-based partners, and (4) describing a strategic vision for change in the community. These guidelines also stated that the EZs’ performance would be tracked in order to, among other things, “measure the impact of the EZ/EC program so that we can learn what works.” According to HUD, these four principles serve as the overall goals of the program. 
Furthermore, HUD’s implementation guidelines required each EZ to measure the results of its plan by defining benchmarks for each activity in the plan. HUD intended to track performance by (1) requiring the EZs to report periodically to HUD on their progress in accomplishing the benchmarks established in their strategic plans and (2) commissioning third-party evaluations of the program. HUD stated that information from the progress reports that the EZs prepare would provide the raw material for annual status reports to HUD and long-term evaluation reports. HUD reviews information on the progress made in each EZ and EC to decide whether to continue each community’s designation as an EZ or an EC. At the time that we issued our December 1996 report, all six of the urban EZs had prepared benchmarks that complied with HUD’s guidelines and described activities that they had planned to implement the program. In most cases, the benchmarks indicated how much work, often referred to as an output, would be accomplished relative to a baseline. For example, a benchmark for one EZ stated that the EZ would assist businesses and entrepreneurs in gaining access to capital resources and technical assistance through the establishment of a single facility called a one-stop capital shop. The associated baseline was that there was currently no one-stop capital shop to promote business activity. The performance measures for this benchmark included the amount of money provided in commercial lending, the number of loans made, the number of consultations provided, and the number of people trained. Also by December 1996, HUD had (1) defined the four key principles, which serve as missions and goals for the EZs; (2) required baselines and performance measures for benchmarks in each EZ to help measure the EZ’s progress in achieving specific benchmarks; and (3) developed procedures for including performance measures in HUD’s decision-making process. 
However, the measures being used generally described the amount of work that would be produced (outputs) rather than the results that were anticipated (outcomes). For example, for the benchmark cited above, the EZ had not indicated how the outputs (the amount of money provided in commercial lending, the number of loans made, the number of consultations provided, and the number of people trained) would help to achieve the desired outcome (creating economic opportunity, the relevant key principle). To link the outputs to the outcome, the EZ could measure the extent to which accomplishing the benchmark increased the number of businesses located in the zone. Without identifying and measuring desired outcomes, HUD and the EZs may have difficulty determining how much progress the EZs are making toward accomplishing the program’s overall mission. HUD officials agreed that the performance measures used in the EZ program were output-oriented and believed that these were appropriate in the short term. They believed that the desired outcomes of the EZ program are subject to actions that cannot be controlled by the entities involved in managing this program. In addition, the impact of the EZ program on desired outcomes cannot be isolated from the impact of other events. Consequently, HUD believed that defining outcomes for the EZ program was not feasible. Concerns about the feasibility of establishing measurable outcomes for programs are common among agencies facing this difficult task. However, because HUD and the EZs have made steady and commendable progress in establishing an output-oriented process for evaluating performance, they have an opportunity to build on their efforts by incorporating measures that are more outcome-oriented. Specifically, HUD and the EZs could describe measurable outcomes for the program’s key principles and indicate how the outputs anticipated from one or more benchmarks will help achieve those outcomes. 
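The distinction drawn above between outputs and outcomes can be made concrete with a small sketch. The data model, field names, and figures below are invented for illustration and are not HUD's; the one-stop capital shop example follows the benchmark described earlier.

```python
# Illustrative sketch of the output-vs-outcome distinction discussed above.
# Field names and figures are hypothetical, not drawn from HUD documents.
from dataclasses import dataclass, field

@dataclass
class Benchmark:
    activity: str
    baseline: str
    outputs: dict = field(default_factory=dict)           # work produced
    outcome_measures: dict = field(default_factory=dict)  # results achieved

def is_outcome_oriented(b: Benchmark) -> bool:
    """A benchmark links its outputs to the program's goals only if it
    also defines at least one outcome measure."""
    return bool(b.outcome_measures)

shop = Benchmark(
    activity="Establish a one-stop capital shop",
    baseline="No one-stop capital shop currently exists",
    outputs={"loans_made": 42, "people_trained": 310},  # hypothetical counts
)
assert not is_outcome_oriented(shop)  # outputs alone are not outcomes

# Adding an outcome measure ties the benchmark to a key principle, e.g.
# creating economic opportunity measured by business growth in the zone.
shop.outcome_measures["net_new_businesses_in_zone"] = 15
assert is_outcome_oriented(shop)
```

In this sketch, a reviewer can tell at a glance whether a benchmark is output-only, which is the gap the report identifies in the EZs' measures.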
Unless they can measure the EZs’ progress in producing desired outcomes, HUD and the EZs may have difficulty identifying activities that should be duplicated at other locations. In addition, HUD and the EZs may not be able to describe the extent to which the program’s activities are helping to accomplish the program’s mission. Madam Chairman, this concludes our prepared remarks. We will be pleased to respond to any questions that you or other Members of the Subcommittee might have. Pursuant to a congressional request, GAO discussed the Empowerment Zone and Enterprise Community (EZ/EC) program, focusing on: (1) the status of the program's implementation in the six urban empowerment zones, which are located along the east coast and in the mid-west regions of the United States; (2) the factors that program participants believe have either helped or hindered efforts to carry out the program; and (3) the plans for evaluating the program.
GAO noted that: (1) all six of the urban EZs had met the criteria defined in the program's authorizing legislation, developed a strategic plan, signed an agreement with the Department of Housing and Urban Development (HUD) and their respective states for implementing the program, signed an agreement with their states for obtaining funds, drafted performance benchmarks, and established a governance structure; (2) however, the EZs differed in their geographic and demographic characteristics, reflecting the selection criteria in the authorizing legislation; (3) many officials involved in implementing the program generally agreed on factors that had either helped or hindered their efforts; (4) for example, factors identified as helping the program's implementation included community representation within the governance structures and enhanced communication among stakeholders; (5) similarly, factors identified as hindering the program's implementation included preexisting relationships among EZ stakeholders and pressure for quick results; (6) from the beginning, the Congress and HUD made evaluation plans an integral part of the EZ program by requiring each community to identify in its strategic plan the baselines, methods, and benchmarks for measuring the success of its plan; and (7) however, the measures being used generally describe the amount of work that will be produced (outputs) rather than the results that are anticipated (outcomes).
OPM is the central human resources agency for the federal government, tasked with ensuring the government has an effective civilian workforce. To carry out this mission, OPM delivers human resources products and services, including personnel background investigations, to agencies on a reimbursable basis. These investigations are the responsibility of OPM’s FIS division. FIS conducts approximately 90 percent of all personnel background investigations for the federal government. FIS provides the results of the investigations to agencies for use in determining individuals’ suitability or fitness for federal civilian, military, or federal contract employment as well as eligibility for access to classified national security information. FIS also has responsibility for developing and implementing uniform policies and procedures to ensure the proper completion of investigations. For example, FIS issued internal agency guidance, called the Investigator’s Handbook, to direct its federal and contract investigators as they conduct investigations. In fiscal year 2009, FIS conducted over 2 million investigations of varying types. In addition to background investigations, FIS conducts other types of investigations and checks, including—among others—credit searches of all three major credit bureaus regarding financial responsibility and periodic reinvestigations (generally for moderate or high-risk positions). Many of these may be limited to contacting other federal agencies or private institutions for information and may not require an investigator to conduct traditional investigation activities such as interviewing individuals familiar with the subject. FIS’s investigations staff consists of approximately 2,300 federal employees and 6,000 contractor staff. 
To conduct these investigations, FIS officials use information technology systems located at FIS headquarters, known as the Federal Investigations Processing Center (FIPC), to coordinate investigative activities and store all of the information generated by such investigations. At FIPC, officials store and maintain electronic, microfilm, and paper records of OPM-conducted background investigations. Officials at FIPC make security clearance information available to federal personnel offices through a Web portal. FIPC receives requests for investigations from federal agencies, processes the requests through an automated system, and fields questions about its process and ongoing investigations. Security clearances are required for access to national security information, which may be classified at one of three levels: confidential, secret, and top secret. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national security. Unauthorized disclosure could reasonably be expected to cause (1) “damage,” in the case of confidential information; (2) “serious damage,” in the case of secret information; and (3) “exceptionally grave damage,” in the case of top secret information. Background investigations allow federal agencies to make decisions about both suitability for employment and access to national security information. The scope of information gathered in an investigation depends on the purpose of the investigation, such as whether it is being conducted for an employment suitability determination, an initial clearance, or a clearance renewal. For example, investigators collect information from agencies such as the Federal Bureau of Investigation (FBI) for all initial and renewal clearances.
However, for initial top secret clearances, investigators also need, among other things, to corroborate the subject’s education and interview educational sources, as appropriate. For an investigation for a confidential or secret clearance, investigators gather much of the information electronically. For an investigation for a top secret clearance, investigators gather additional information through more time-consuming efforts such as conducting in-person interviews to corroborate information about a subject’s employment and education. In 2009, OPM estimated that approximately 6-10 labor hours were needed for each investigation for a secret or confidential clearance, and 50-60 labor hours were needed for an investigation for an initial top secret clearance. The primary laws that provide privacy protections for personal information accessed or held by the federal government are the Privacy Act of 1974 and the E-Government Act of 2002. These laws describe, among other things, agency responsibilities with regard to protecting PII. The Privacy Act places limitations on agencies’ collection, disclosure, and use of personal information maintained in systems of records. A system of records is a collection of information about individuals under control of an agency from which information is retrieved by the name of an individual or other identifier. The E-Government Act of 2002 requires agencies to assess the impact of federal information systems on individuals’ privacy. Specifically, the E-Government Act strives to enhance the protection of personal information in government information systems and information collections by requiring agencies to conduct privacy impact assessments (PIA). A PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system.
Specifically, according to Office of Management and Budget (OMB) guidance, the purpose of a PIA is (1) to ensure handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) to determine the risks and effects of collecting, maintaining, and disseminating information in identifiable form in an electronic information system; and (3) to examine and evaluate protections and alternative processes for handling information to mitigate potential privacy risks. The Privacy Act of 1974 is largely based on a set of internationally recognized principles for protecting the privacy and security of personal information known as the Fair Information Practices. A U.S. government advisory committee first proposed the practices in 1973 to address what it termed a poor level of protection afforded to privacy under contemporary law. The Organization for Economic Cooperation and Development (OECD) developed a revised version of the Fair Information Practices in 1980 that has, with some variation, formed the basis of privacy laws and related policies of many countries—including the United States, Australia, and New Zealand—and the European Union. These practices are now widely accepted as a standard benchmark for evaluating the adequacy of privacy protections. The eight principles of the Fair Information Practices are shown in table 1. The Fair Information Practices are not precise legal requirements. Rather, they provide a framework of principles for balancing the need for privacy with other public policy interests, such as national security, law enforcement, and administrative efficiency. Ways to strike that balance vary among countries and according to the type of information under consideration. The OPM Privacy Office is tasked with ensuring that the agency is in compliance with privacy laws by providing guidance on how to implement privacy provisions needed to protect personal information. 
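As a rough illustration of how the eight principles in table 1 might serve as an evaluation checklist, the sketch below checks a set of documented procedures against the standard OECD principle names. The principle names are the widely cited OECD ones; the example mapping of procedures to principles is invented.

```python
# Checklist sketch: which of the eight Fair Information Practices does a
# given set of documented procedures address? Principle names follow the
# OECD formulation; the procedure-to-principle mapping is hypothetical.

FAIR_INFORMATION_PRACTICES = {
    "collection limitation", "data quality", "purpose specification",
    "use limitation", "security safeguards", "openness",
    "individual participation", "accountability",
}

def uncovered_principles(procedures: dict) -> set:
    """Return the principles that no documented procedure addresses."""
    covered = {p for principles in procedures.values() for p in principles}
    return FAIR_INFORMATION_PRACTICES - covered

# Hypothetical mapping of documented procedures to addressed principles.
procedures = {
    "published SORNs": {"openness", "individual participation"},
    "two-barrier storage rule": {"security safeguards"},
    "questionnaire disclaimer language": {"purpose specification"},
}
gaps = uncovered_principles(procedures)
```

A gap set like this mirrors the kind of alignment analysis the report performs later, where individual FIS procedures are matched to specific principles.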
To oversee its implementation of privacy protections, OPM has designated its Chief Information Officer (CIO) as its senior agency official for privacy. The CIO, in turn, uses the Privacy Program Manager to assist in providing oversight to ensure the agency is complying with privacy policies and guidance. Among other things, the Privacy Program Manager is responsible for developing policies and procedures for the development of PIAs as well as reviewing and recommending their approval. Within each OPM division, information system owners are responsible for implementing OPM’s privacy policies and guidance. To assist division-level officials in assessing potential privacy risks and protecting personal information, OPM’s Privacy Office established guidance for conducting PIAs. The guidance includes a template consisting of two parts: (1) an initial screening assessment tool to determine whether system owners are required to complete a PIA and (2) the PIA itself, which requires system owners to answer seven basic questions about the nature of their systems in addition to their intended uses and purposes for collecting personal information. Upon completion of the PIA template, system owners are required to submit PIAs to the Privacy Program Manager for evaluation and recommendation for approval to the CIO. According to OPM guidance, the CIO is responsible for reviewing and signing all OPM PIAs, which signifies that a PIA is complete and can be posted to OPM’s Web site for public viewing. Additionally, OPM has developed and issued an agency-wide information security and privacy policy for both its federal and contractor employees to follow in protecting information resources from loss, theft, misuse, and unauthorized access.
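The two-part PIA template described above, an initial screening step followed by seven basic questions, can be pictured as a simple gate. The screening criterion and the question texts below are placeholders; the report does not enumerate OPM's actual questions.

```python
# Sketch of OPM's two-part PIA template as described above: an initial
# screening determines whether a full PIA is required; if so, the system
# owner answers seven basic questions. The screening rule and question
# texts here are placeholders, not OPM's actual wording.

PIA_QUESTIONS = [
    "What information will the system collect?",
    "Why is the information being collected?",
    "What are the intended uses of the information?",
    "With whom will the information be shared?",
    "What notice is provided to individuals?",
    "How will the information be secured?",
    "Does the collection create a Privacy Act system of records?",
]

def screening(system_handles_pii: bool) -> bool:
    """Part 1: does this system need a full PIA? (Simplified criterion.)"""
    return system_handles_pii

def start_pia(system_handles_pii: bool):
    """Part 2: if screening says yes, produce the blank question set
    for the system owner to complete."""
    if not screening(system_handles_pii):
        return None
    return {q: None for q in PIA_QUESTIONS}

pia = start_pia(True)
```

The gate structure matches the workflow the report describes: only systems that fail the screening step out of the process; everything else flows to the Privacy Program Manager and then the CIO for approval.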
To supplement guidance provided by the OPM Privacy Office, FIS also has developed a Policy on the Protection of Personally Identifiable Information (PII) to provide employees, including contractors, with a description of their responsibilities in protecting PII and reporting PII breaches. FIS also requires its investigators to adhere to its Investigator’s Handbook for procedures and policies related to conducting personnel background investigations for the federal government. These two documents guide federal and contract investigators in protecting PII during the course of their work and specify procedures that align with the Fair Information Practices. For example, the documents direct investigators to protect PII they possess at their duty stations using a “two-barrier” approach, such as storing it in a locked desk inside a locked house, which aligns with the security safeguards principle. In addition to its policies and guidance, FIS promotes awareness of privacy protection requirements through PII training and agency newsletters. For example, to support the agency’s initiative to reduce privacy breaches, employees participated in a “no breach” week initiative to help ensure that FIS policies and guidance were being followed. In April 2009, the OPM Office of the Inspector General (OIG) completed an audit of the security of PII within the FIS division and made nine recommendations to improve the protection of these data. The OPM OIG reviewed FIS controls for the storage, security, and transmission of PII. The OIG’s report identified, among other things, that (1) required security awareness and PII training had not been completed by all FIS employees and contractor staff; and (2) FIS did not have adequate controls for ensuring that PII incidents were reported by FIS employees and contractors in a timely manner.
In response to the OIG’s recommendations, FIS recently established a security and PII training program and required all employees and contractors to complete PII awareness training. Furthermore, to better ensure PII incidents are properly reported, FIS updated its incident response procedures to require supervisors to ensure that employees and contractors report incidents to the OPM Situation Room—the agency’s central repository for PII incidents—within 30 minutes of identifying a breach or loss. FIS conducts background investigations using extensive amounts of PII collected from a variety of sources. FIS uses a combination of automated and manual steps during the course of a background investigation. These steps can be categorized into four distinct phases: (1) Questionnaire Submission, (2) Scheduling and Initiation, (3) Investigation, and (4) Review. Figure 1 provides an overview of the background investigation process delineating these four phases. The following sections outline detailed steps and how PII is used within each of the phases of FIS’s background investigation process and the measures taken within each phase to protect PII. In order to initiate an investigation, a questionnaire must be submitted with the required information and accepted by FIS. Figure 2 shows detailed steps in the questionnaire submission phase. 1. A security officer at the requesting agency forwards to the subject—the individual who will be investigated—an investigative questionnaire, which seeks information on the subject’s personal history and includes identifying information such as the subject’s first and last name, Social Security number, and place and date of birth. In addition, subjects are asked to provide personal information on family members, friends, and other contacts. The questionnaire can be completed either electronically using OPM’s Electronic Questionnaires for Investigations Processing (e-QIP) system or in paper form.
Most questionnaires are currently completed electronically. 2. The completed questionnaire is reviewed by the originating agency’s security office and then sent with supporting documentation, such as fingerprints, to FIS. If a questionnaire is submitted electronically using e-QIP, it is automatically uploaded into the Personnel Investigations Processing System (PIPS), a FIS system containing over 15 million background investigation records of federal employees, military personnel, and contractors used for the automated entry, scheduling, case control, and closing of background investigations. Should FIS receive a paper questionnaire, the information is manually entered into PIPS. 3. Once a questionnaire is received at FIPC, a physical case file is created that contains the questionnaire, a summary sheet, and any documentation provided as a supplement to the questionnaire. 4. Before the investigation is initiated, the questionnaire must pass a review by a FIS contractor for completeness and identification of any obvious errors. If information is missing or erroneous, or if required attachments, such as fingerprints, are missing, FIS contractors first attempt to correct this with the agency. If this is unsuccessful, the investigation request is returned to the agency. If the questionnaire is deemed complete, the contractor completes the online screening or data entry process in PIPS to initiate the investigation. After a questionnaire is accepted by FIS, the associated investigation is scheduled and initiated. Figure 3 represents detailed steps in this phase. Once online screening or data entry is completed, PIPS initiates a four-step scheduling process: 1. Goals and milestones are established for the initial security clearance investigation to comply with statutory requirements.
Investigation timelines are based on provisions of the Intelligence Reform and Terrorism Prevention Act of 2004, which required adjudicative agencies to develop plans to ensure that, to the extent practical, determinations could be made on at least 90 percent of all applications for a security clearance within 60 days, with no longer than 40 days allotted for the investigation and 20 days allotted for the adjudication. 2. PIPS requests information through a National Agency Check (NAC): a set of queries sent to national record repositories, such as OPM, the FBI, and Department of Defense (DOD) investigation databases; and a fingerprint-based criminal history check through the FBI. Once the agencies have manually or electronically checked their databases for the information, the results are returned to FIS headquarters and stored in PIPS or in the physical case file after being scanned into PIPS. The results returned to FIS can include FBI fingerprint and investigation records, DOD investigations records, and the subject’s credit history. 3. PIPS automatically prepares scannable inquiry forms that are mailed to a variety of entities—including universities and local law enforcement—and to individuals listed as contacts by the subject. The inquiries include questions concerning the subject’s character and what association an entity or individual had with the subject. Once a recipient returns the completed scannable inquiries, FIS uses high-speed scanners to upload these data into PIPS. 4. PIPS automatically assigns the investigation to a field office based on the zip code for the activities to be covered. A supervisory agent in charge at the office assigns the items to be completed to a specific investigator. Often, work is assigned to multiple investigators who are responsible for conducting the investigation. Processes exist to reassign a case if there is a better-located investigator.
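The statutory timeline in step 1 reduces to simple date arithmetic. The 40-day and 20-day splits are the figures the report cites from the Intelligence Reform and Terrorism Prevention Act of 2004; the receipt date below is arbitrary.

```python
# Date arithmetic for the IRTPA goals described above: determinations on
# at least 90 percent of applications within 60 days, with up to 40 days
# for the investigation and 20 days for the adjudication.
from datetime import date, timedelta

INVESTIGATION_DAYS = 40
ADJUDICATION_DAYS = 20

def milestones(received: date) -> dict:
    """Compute the investigation and determination due dates for a case."""
    inv_due = received + timedelta(days=INVESTIGATION_DAYS)
    return {
        "investigation_due": inv_due,
        "determination_due": inv_due + timedelta(days=ADJUDICATION_DAYS),
    }

m = milestones(date(2009, 1, 1))
# The two windows sum to the 60-day overall goal.
assert (m["determination_due"] - date(2009, 1, 1)).days == 60
```

Goals and milestones of this kind are what PIPS establishes automatically at scheduling time, per the report's description.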
The investigators assigned to conduct the field work for the investigation may be contractors or federal employees. When the investigator receives the assignment, he or she is provided the case papers in hard copy or electronic form. The investigator may also receive a summary of the NAC items once they have been completed. Once assigned to the case, an investigator receives the case information and conducts the investigation of the subject. The detailed steps for the Investigation phase are displayed in Figure 4. 1. When an investigator has been assigned a case in PIPS, he or she can access the case information maintained in the system. The investigator can input the results of the interviews and record checks into templates in PIPS-Reporting (PIPS-R)—a computer application housed on the investigator’s laptop computer, which is used to electronically document the investigation and transmit the investigation report electronically to FIPC. PIPS-R temporarily stores the report of investigation, while the physical case file is maintained at FIPC. 2. Investigators gather information on the subject including data about the subject received during interviews with the contacts listed in the questionnaire. Investigators share limited personal information on a subject with identified contacts during an interview. Information obtained from these interviews includes character descriptions and details of any criminal activities. The information is used to determine the accuracy of subject-provided information and generate further leads to complete an investigation. This part of the process may take several weeks, as investigators attempt to contact and interview multiple contacts. PIPS-R requires the investigators to enter information into templates that allow PIPS-R to compile the information into a report. 3. Upon completion of the investigation, the investigator closes the case in PIPS-R and electronically transfers the data into PIPS. 
The investigator then delivers the case notes to an assigned regional investigations office, where the notes are shredded 30 days after the case is closed. The report in PIPS-R is manually deleted by the investigator 30 days after the case is closed. Upon the completion of the field work by the investigators, a case review is initiated to ensure the investigative report is complete. Figure 5 outlines detailed steps in the Review phase. 1. A case reviewer at FIPC determines the completeness of the investigation and identifies any inconsistencies, errors, and omissions in the investigator’s report. For example, if the investigator did not corroborate the subject’s education, the investigator may need to interview educational sources. 2. Should the reviewer identify any discrepancies or omissions, the case is returned to the investigator for correction, sometimes through additional field work. 3. If the reviewer determines that the case is completed, FIS closes the case and provides a summary report to the agency that requested the investigation for adjudication. Currently this is done by mailing a hard copy of the report to the agency or using electronic delivery with agencies that have signed up for electronic dissemination. 4. The agency may return an investigation to FIS for further work if it does not provide the information necessary to make an adjudication decision. 5. The investigation information is kept by FIS for varying time periods. The main case file within FIPC is scanned and saved as an electronic image within 30 days of a case closing. After 30 days, the physical case file, the investigator’s notes, and PIPS-R records are destroyed. The scanned file is maintained either electronically or on microfilm, according to OPM’s retention guidelines, for 16 years, or for 25 years if potentially actionable issues exist, unless the record becomes part of a new investigation.
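The retention rules in step 5 amount to a small schedule. The 30-day and 16/25-year periods come from the report; the helper below, and its reading of "16 or 25 years" as 25 years when potentially actionable issues exist, are illustrative.

```python
# Sketch of the record-retention rules described above: the physical case
# file, investigator notes, and PIPS-R records are destroyed 30 days after
# a case closes, while the scanned file is kept 16 years, or 25 years if
# potentially actionable issues exist (one reading of the report's rule).
from datetime import date, timedelta

def retention_schedule(closed: date, actionable_issues: bool) -> dict:
    keep_years = 25 if actionable_issues else 16
    return {
        "destroy_physical_and_pipsr": closed + timedelta(days=30),
        "retain_scanned_until_year": closed.year + keep_years,
    }

s = retention_schedule(date(2010, 6, 1), actionable_issues=True)
```

The sketch ignores the exception for records that become part of a new investigation, which would suspend destruction.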
FIS has taken steps to incorporate key privacy principles into policies and procedures that guide and direct agency officials in performing background investigations. Specifically, FIS has complied with requirements of the Privacy Act and E-Government Act by publishing information on its use of PII and by conducting privacy impact assessments of its major information systems. However, it has not assessed the risks associated with the use of PII, an important element of conducting a privacy impact assessment. In addition, while FIS policies and practices for conducting investigations generally align with the Fair Information Practices, the agency has exercised only limited oversight of the use of PII by its field investigators and customer agencies. The major requirements for the protection of personal privacy by federal agencies come from two laws, the Privacy Act of 1974 and the privacy provisions of the E-Government Act of 2002. Under the Privacy Act, federal agencies must issue public notices, known as System of Records Notices (SORN), in the Federal Register identifying, among other things, the type of data collected, the types of individuals about whom information is collected, and procedures that individuals can use to review and correct personal information. To address Privacy Act requirements, OPM published two SORNs that apply to FIS’s information systems, known as the Central 9 and Internal 16 notices. These notices include—among other things—a description of FIS’s purpose for collecting and using personal information and how individuals can access and correct information maintained about them. For example, both SORNs state that individuals can request access to records by writing to FIPC. In addition to notice requirements established by the Privacy Act, federal agencies are tasked by the E-Government Act to conduct privacy impact assessments (PIA) to ensure the protection of PII.
As described earlier, a PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system. In response to these requirements, OMB has developed guidance for agencies on conducting PIAs. Assessing privacy risks is an important element of a PIA intended to help program managers and system owners determine appropriate privacy protection policies and techniques to implement those policies. A privacy risk analysis should be performed to determine the nature of privacy risks and the resulting impact if corrective actions are not in place to mitigate those risks. For example, in ensuring that personal information is used only for specified purposes—the use limitation principle—system owners should identify potential ways in which unauthorized use could occur and implement privacy controls to prevent disclosure of personal data for such uses. OPM has developed assessments for a number of systems throughout the agency. For example, assessments for key FIS systems such as PIPS and e-QIP have been developed and approved by OPM’s Chief Privacy Officer. These assessments were last revised in August 2007. Although OPM developed PIAs for each of the key FIS background investigation systems, it did not assess the risks associated with the handling of PII within the systems or identify mitigating controls to address risks. For example, the assessment prepared for PIPS provided general descriptions of system functions—such as that sources of information will be “directly from the person to whom the information pertains, from other people, other sources, such as databases, Web sites, etc.”—but did not include analysis of privacy risks associated with this broad collection of personal information. Without analyzing privacy risks, agency officials may be forgoing opportunities to identify measures that could be taken to mitigate them and enhance privacy protections.
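The kind of risk analysis the report says the PIAs lack could be as simple as pairing each identified privacy risk with its mitigating controls and flagging any risk left uncovered. The example risks and controls below are invented; they echo, but are not drawn from, OPM's actual assessments.

```python
# Minimal sketch of a PIA privacy-risk analysis: each identified risk is
# paired with mitigating controls, and unmitigated risks are flagged for
# the system owner. The risks and controls listed here are hypothetical.

risks = {
    "unauthorized use of case data by agency users":
        ["daily review of PIPS access logs"],
    "disclosure of PII beyond the stated purpose":
        ["disclaimer language on inquiry forms"],
    "loss of paper case files in the field":
        [],  # no mitigating control identified yet
}

def unmitigated(risk_register: dict) -> list:
    """Return risks for which no mitigating control is recorded."""
    return [r for r, controls in risk_register.items() if not controls]

open_risks = unmitigated(risks)
```

Even a register this simple forces the identification of controls per risk, which is the analytical step the report finds missing from the PIPS and e-QIP assessments.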
Current OPM guidance on PIAs does not instruct divisions to conduct privacy risk analysis. Instead it directs officials to answer general questions for each system to aid OPM’s Privacy Office in assessing potential privacy risks. While OPM guidance emphasizes the need for system owners to provide detailed information in response to questions, the guidance does not instruct system owners to assess privacy risks. Until the current guidance is revised to require risk analysis and new and existing PIAs are updated to include risk analyses, OPM will continue to have limited assurance that PII contained in its systems is being properly protected from potential privacy threats. FIS has taken steps to include privacy protections in its procedures for conducting background investigations. Privacy protections can be categorized in relation to the Fair Information Practices, which, as discussed earlier, form the basis for privacy laws such as the Privacy Act. In a number of cases, the protections instituted by FIS can be aligned with the Fair Information Practices. For example, the agency’s publication of privacy notices addresses the openness and individual participation principles. The principles can be applied in varying degrees to all FIS activities that involve PII. The following are selected FIS procedures that illustrate specific ways in which the Fair Information Practices have been addressed. Collection limitation. FIS investigators are directed to limit the PII they collect and include in their investigation reports to information directly relevant to the assigned investigation. Investigators do not report PII in the investigation reports unless they develop information that varies from the subject-provided information. If an investigator collects information that is not vital, he or she is to destroy the information at the end of the investigation. 
This information is included with the investigator’s notes and returned to the supervisor’s office when the investigator has completed his or her portion of the case. The information is then destroyed 30 days after the case is closed. This aligns with the principle that the collection of PII should be limited. Data quality. When FIS receives a hard copy questionnaire, two staff members independently enter the same PII data into PIPS. The system then confirms that both inputs match exactly before uploading the questionnaire data into PIPS, thus helping to ensure that the information provided in the hard copy questionnaire is correctly transferred to the electronic system. Additionally, FIS officials review the final investigation report prior to its delivery to the customer agency in order to ensure that the investigator took all of the steps necessary to conduct the investigation and that there are no errors or omissions in the report. Finally, in an effort to ensure completeness of an investigation, a customer agency can request that additional investigative work be conducted by FIS if it identifies inaccuracies in the final investigation report or areas that require additional information prior to making an adjudication decision. This aligns with the principle that the collected information should be accurate and complete. Purpose specification. Questionnaire forms used by FIS—such as the Standard Form 86—include disclaimer language that informs the subject that the information he or she provides will only be used for the purpose of the specific background investigation and lists the reasons the information may be disclosed. Further, automated inquiry forms sent out during the Scheduling and Initiation phase contain disclaimer language that specifies that information provided on the forms will be used solely for the related investigation. This aligns with the principle that the purposes of an information collection should be disclosed before collection. Use limitation.
FIS agreements with customer agencies limit how background investigation reports may be used by stating that information provided by FIS should be used only for the purpose of adjudication. Additionally, all attempts to access case files within PIPS (e.g., viewing or editing) are recorded in an automated log file. These logs are reviewed daily by FIS personnel to identify unauthorized access attempts that violate agency restrictions on use. This aligns with the principle that the information should not be disclosed or used for anything other than the specified purpose. Security safeguards. FIS uses a collection of security safeguards to protect and control access to PII located physically at FIPC. Physical security controls and processes include (1) screening individuals with metal detectors and x-ray machines prior to entry to the facility; (2) using electronically coded cards and badges to grant access to the room containing hard copies of active case files; (3) checking manifests of case files mailed to other facilities to ensure that the contents of the files have not changed; and (4) ensuring the proper destruction of investigative materials with locked disposal bins and supervised shredding by a FIS official. FIS officials also reported that a number of information security measures are used to protect personal information maintained in FIS systems. For example, FIS policy requires that access to PIPS is to be limited to officials who are authorized by their respective agencies’ security offices and have appropriate background investigations. The system is also to restrict agency user access to information from cases they have been specifically authorized to review. 
Furthermore, officials stated that annual security assessments are conducted on all FIS systems to ensure that they are compliant with governmentwide information security control standards, including National Institute of Standards and Technology (NIST) Special Publication 800-53 and Federal Information Processing Standard (FIPS) 140-2. This aligns with the principle that information should be protected with security safeguards against risks such as unauthorized access, use, or modification. Although FIS has established a number of privacy protection measures for its investigations program that reflect the Fair Information Practices, it has taken limited steps to oversee its field investigators and customer agencies to ensure they are implementing the measures appropriately. Such oversight would align with the accountability principle, which states that individuals controlling the collection or use of PII should be accountable for ensuring the implementation of the Fair Information Practices. Without such oversight, it is unclear whether the agency’s protection measures are being properly implemented. In recent years, field investigators have been involved in over 80 percent of reported incidents of lost or stolen paper files in the FIS division (see figure 6). As previously discussed, the more than 7,000 field investigators who conduct background investigations for OPM collect and are responsible for safeguarding extensive amounts of PII. As a result, these field investigators are key to ensuring that PII is properly protected, especially when it is in paper form. Recently, FIS has taken steps to promote better accountability for the protection of personal information provided to and received from investigators. This includes providing training to all employees and holding a “No PII Loss Week,” during which all staff were encouraged to focus on proper handling and storing of PII in their possession. 
Oversight of these investigators and FIS employees can ensure that appropriate protections are being implemented for the PII contained in investigative files. Recent recommendations by the OPM OIG highlight the importance of such oversight. In response to recommendations by the OIG to conduct oversight, FIS officials began conducting periodic checks of documents received from investigators once an investigation is closed to encourage a full and proper accounting of PII. However, FIS officials had not monitored whether investigators are following agency policies described in the Investigator’s Handbook and the Policy On The Protection Of Personally Identifiable Information (PII) for handling PII while investigative activity is underway. Officials from the agency’s oversight groups responsible for federal and contract investigators said they used other methods for determining investigators’ adherence to PII protection requirements. For example, officials stated the investigators are required to report to their supervisors daily on the case information or other PII they have with them during the course of their work. This is to account for the information they have on hand if there is a loss or the investigator becomes incapacitated due to an accident or medical emergency. The tallies provided by the investigators are intended to allow their supervisors to account for all such information. In addition, officials from FIS oversight units recently began conducting physical audits of regional field offices to determine compliance with PII requirements. Although these recent efforts may increase assurance that investigators are adequately accounting for the investigative files in their possession, no process currently exists to monitor investigators’ compliance with FIS privacy protection policies as they perform their field work. 
For example, FIS does not have procedures for examining how investigators protect information while traveling to conduct interviews or how they ensure that only appropriate information is being gathered. Without an oversight mechanism to ensure investigators’ adherence to PII protection policies during investigations—such as through periodic, structured evaluations by supervisors—the agency lacks assurance that sensitive information is being handled appropriately during this critical phase of the background investigation process. We previously reported on the federal legal framework for privacy protection, including issues and challenges associated with ensuring compliance with privacy protections when PII is transferred among agencies. We highlighted the need for an effective oversight structure to monitor how PII is protected. For example, requiring agencies to establish agreements with external government entities before sharing PII is a practical method that enables an agency’s privacy controls to be extended to its recipients, thus offering assurance that personal information is adequately protected from privacy risks following the data transfer. Designating entities within those agreements that are responsible for ensuring the proper implementation of privacy requirements is also consistent with the Fair Information Practice of accountability, which calls for those who control the collection or use of personal information to be held accountable for taking steps to ensure it is protected. FIS relies on memoranda of understanding (MOU) with its customer agencies to establish procedures and policies for protecting PII related to background investigation case files, and these agreements specifically designate OPM as being responsible for ensuring that customer agencies comply with the requirements of the Privacy Act when handling PII received from OPM.
Within these agreements, FIS outlines, among other things, system security controls, appropriate uses of investigative information, and other provisions for adherence to the Privacy Act. For example, the agency’s e-Delivery system—an information system used to electronically assemble and deliver closed case files from FIS to requesting agencies—includes a description of security and privacy expectations and responsibilities necessary for agencies to utilize the system. However, OPM has not taken any steps to carry out its responsibility for ensuring that personal information is protected at customer agencies. Specifically, it does not monitor customer agencies’ adherence to the requirements agreed upon through the MOUs. FIS officials stated that they visit customer agencies on a recurring basis to review other aspects of the agreements but that reviews of customer agencies’ privacy protection measures take place only if a potential compromise of PII has been identified. Although these frequent visits to customer agencies provide opportunities for OPM to ensure that customer agencies are protecting PII properly, without focusing on privacy protections outlined within the MOUs as a key element of its established process, OPM may not be meeting its responsibility to ensure that agencies comply with the requirements of the MOU. As a result, OPM may not have reasonable assurance that the personal information contained within background investigation files is being appropriately used and adequately protected by customer agencies. OPM and FIS have incorporated key privacy principles into their processes and documentation that guide agency officials in the performance of background investigations. Key agency activities include measures addressing the Fair Information Practices, and steps have been taken to meet requirements of the Privacy Act and the E-Government Act. 
However, limited oversight of the implementation of key processes reduces assurances that PII is properly protected. Current OPM guidance does not require assessments of the privacy impact of FIS systems to be accompanied by privacy risk analyses. Until the guidance requires privacy risk analyses with PIAs and existing PIAs are revised to include privacy risk analyses, OPM will continue to have limited assurance that PII contained in its systems is being properly protected. While FIS has policies and procedures to protect PII used by its field investigators, there is no process to assess the level of protection of PII provided by these investigators while investigative activity is underway. Without an oversight mechanism that directly assesses investigators’ adherence to OPM PII protection policies, the agency lacks assurance that PII is being properly protected. Finally, OPM does not actively monitor customer agency adherence to requirements for protecting PII as established in MOUs it has with its customers. As a result, FIS may not have reasonable assurance that the personal information contained within background investigation files is being appropriately used and adequately protected by customer agencies. 
To ensure that appropriate privacy protections are in place during all stages of a background investigation, we recommend that the Director of OPM take the following four actions: develop guidance for privacy impact assessments that directs agency officials to perform an analysis of privacy risks and identify mitigating techniques for all FIS systems that access, use, or maintain PII; ensure that all existing PIAs are revised to adhere to this guidance; perform periodic, structured evaluations to ensure that field investigators handle and protect PII according to agency policies and procedures while conducting their investigations; and develop and implement procedures for monitoring customer agencies’ adherence to the privacy provisions agreed to within memoranda of understanding. In written comments on a draft of this report, transmitted via e-mail by the GAO audit liaison, OPM agreed with our recommendations. However, OPM disagreed with the report’s finding regarding protection of PII by field investigators, stating that it was written in a way that suggested that there is no oversight or monitoring. OPM noted that it recently implemented procedures for checking federal and contract investigators’ compliance with agency PII protection requirements. OPM requested that language in the report be modified to recognize these recent efforts. We adjusted language within our report to clarify the nature of OPM’s oversight activities at the time of our review. In addition, the draft report highlighted such recent efforts by FIS to monitor investigator compliance, including daily checks by supervisors of investigator inventories of case information and the division’s recently developed program for conducting physical audits of regional field offices to determine compliance with PII requirements. Nevertheless, these recent efforts by FIS have yet to demonstrate that investigators are monitored for compliance while conducting investigations.
For example, FIS had yet to develop procedures for examining how investigators protect information while traveling to conduct interviews or how they ensure that only appropriate information is being gathered. In addition, OPM provided technical comments that were addressed as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies of this report to interested congressional committees and the Director of the Office of Personnel Management. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6244 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Our objectives were to determine (1) how the Office of Personnel Management (OPM) uses personally identifiable information (PII) in conducting background investigations and (2) the extent to which OPM’s privacy policies and procedures for protecting PII related to investigations meet statutory requirements and align with widely accepted privacy practices. To address our first objective, we identified key steps in the agency’s background investigation process by analyzing OPM and Federal Investigative Services (FIS) division policies, procedures, and guidance; conducting site visits of FIS headquarters at the Federal Investigations Processing Center (FIPC) in Boyers, Pennsylvania; and interviewing FIS officials involved in overseeing and conducting key steps in the process located at FIPC and at OPM headquarters. We compiled a four-phase description of the investigation process and confirmed the accuracy of its contents with FIS officials in an iterative fashion.
To address our second objective, we reviewed OPM and FIS privacy policies and procedures and analyzed agency actions to (1) comply with the Privacy Act of 1974 and the E-Government Act of 2002 and (2) align with the Fair Information Practices, a set of widely accepted privacy principles. We interviewed OPM’s Chief Information Officer in order to obtain information on OPM policies and procedures on the protection of PII and how OPM monitors compliance with its privacy policies and procedures. We also interviewed key FIS officials, including those from the agency’s Field Management Oversight Group, Contract Development and Oversight Group, and the Memorandum of Understanding/Liaisons Group, to discuss their practices and procedures for protecting personal information when performing their oversight responsibilities. Additionally, we reviewed previous GAO and OPM Office of the Inspector General reports pertinent to engagement objectives. We conducted this performance audit from October 2009 to September 2010 in the Washington, D.C., and Boyers, Pennsylvania, areas, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, John de Ferrari, Assistant Director; Sherrie Bacon; Neil Doherty; Matthew Grote; Nicholas Marinos; Lee McCracken; David Plocher; and Jeffrey Woodward made key contributions to this report.

Approximately 90 percent of all federal background investigations are provided by the Office of Personnel Management's (OPM) Federal Investigative Services (FIS) division.
In fiscal year 2009, FIS conducted over 2 million investigations of varying types, making the organization a major steward of personal information on U.S. citizens. GAO was asked to (1) describe how OPM uses personally identifiable information (PII) in conducting background investigations and (2) assess the extent to which OPM's privacy policies and procedures for protecting PII related to investigations meet statutory requirements and align with widely accepted privacy practices. To address these objectives, GAO compared OPM and FIS policies and procedures with key privacy laws and widely accepted practices. FIS, a component of OPM, conducts background investigations using extensive amounts of PII. Specifically, FIS collects PII from the individual being investigated, government agencies holding relevant data on the subject, and contacts familiar with the subject of the investigation. It uses this information during the four phases of the investigation process: (1) Questionnaire Submission, when requesting agencies submit a questionnaire completed by the individual who will be investigated; (2) Scheduling and Initiation, during which goals and milestones are set, automated information requests occur, and an investigator is assigned; (3) Investigation, during which an investigator gathers information from the automated requests and from interviews and prepares a report; and (4) Review, during which a reviewer determines if a report is complete before allowing it to be sent to the requesting agency. FIS has taken steps to incorporate key privacy laws and widely accepted privacy practices into policies and procedures for conducting background investigations. For example, field investigators are directed to limit collection of PII to only information relevant to an investigation, and several procedures are in place to ensure that such information is recorded as accurately as possible in OPM's systems. 
However, the agency has conducted limited oversight of FIS's development of privacy impact assessments (PIA), investigators' implementation of privacy protection guidance, and customer agencies' adherence to privacy agreements. A PIA is an analysis of how personal information is collected, stored, shared, and managed in a federal system. It is required by the E-Government Act of 2002. Related Office of Management and Budget guidance emphasizes the need to identify and assess privacy risks in concert with developing a PIA. However, OPM's guidance for PIAs does not require that privacy risks be analyzed or mitigation strategies be identified for those risks. Consequently, OPM cannot be sure that potential risks associated with the use of PII in its information systems have been adequately assessed and mitigated. Additionally, widely accepted privacy practices call for accountability to ensure privacy-protection policies are implemented to safeguard personal information from potential risks. Such accountability includes monitoring to ensure proper implementation of privacy protection measures. However, although FIS tracks PII that is provided to and received from field investigators, it had not monitored investigators' adherence to its policies and procedures for protecting PII while investigations are underway. Further, while FIS has developed agreements with customer agencies related to the protection of PII contained in investigation case files, it does not monitor customer agencies' implementation of these policies, even though its agreements state it is responsible for doing so. Without oversight processes for monitoring investigators' and customer agencies' adherence to its PII protection policies, OPM lacks assurance that its privacy protection measures are being properly implemented. 
GAO is recommending that the Director of OPM (1) develop guidance for analyzing and mitigating privacy risks in privacy impact assessments, and (2) develop and implement oversight mechanisms for ensuring that investigators properly protect PII and that customer agencies adhere to agreed-upon privacy protection measures. OPM agreed with GAO's recommendations.
DOD oversees a worldwide school system to meet the educational needs of military dependents and others, such as the children of DOD’s civilian employees overseas. The Department of Defense Education Activity (DODEA) administers schools both within the United States and overseas. In school year 2006-07, DODEA had schools within 7 states, Puerto Rico, Guam, and in 13 foreign countries. DOD has organized its 208 schools into three areas: the Americas (65), Europe (98), and Pacific (45). Almost all of the domestic schools are located in the southern United States. The overseas schools are mostly concentrated in Germany and Japan, where the U.S. built military bases after World War II. Given the transient nature of military assignments, these schools must adapt to a high rate of students transferring into and out of their schools. According to DOD, about 30 percent of its students move from one school to another each year. These students may transfer between DOD schools or between one DOD school and a U.S. public school. Although DOD is not subject to the No Child Left Behind Act of 2001 (NCLBA), it has its own assessment and accountability framework. Unlike public schools, DOD schools receive funding primarily from DOD appropriations rather than through state and local governments or Department of Education grants. U.S. public schools that receive grants through the NCLBA must comply with testing and reporting requirements designed to hold schools accountable for educating their students and making adequate yearly progress. DOD has adopted its own accountability framework that includes a 5-year strategic plan, an annual report that measures the overall school system’s progress, and data requirements for school improvement plans. The strategic plan sets the strategic direction for the school system and outlines goals and performance measures to determine progress. 
In annual reports, DOD provides a broad overview of its students’ overall progress, including the results of standardized tests. On DOD’s Web site, DOD publishes more detailed test score results for each school at each grade level. DOD also requires each school to develop its own improvement plan that identifies specific goals and methods to measure progress. School officials have the flexibility to decide what goals to pursue but must identify separate sources of data to measure their progress in order to provide a more complete assessment. For example, if a school chooses to focus on improving its reading scores, it must identify separate assessment tests or other ways of measuring the progress of its students. DOD is subject to many of the major provisions of the Individuals with Disabilities Education Improvement Act of 2004 (IDEIA) and must include students with disabilities in its standardized testing. However, unlike states and districts subject to NCLBA, DOD is not required to report publicly on the academic achievement of these students. States and public school districts that receive funding through IDEIA must comply with various substantive, procedural, and reporting requirements for students with disabilities. For example, they must have a program in place for evaluating and identifying children with disabilities, developing an individualized education program (IEP) for such students, and periodically monitoring each student’s academic progress under his or her IEP. Under IDEIA, children with disabilities must be taught, to the extent possible, with non-disabled students in the least restrictive environment, such as the general education classroom, and must be included in standardized testing unless appropriate accommodations or alternate assessments are required by their IEPs. Although DOD schools do not receive funding through IDEIA, they generally are subject to the same requirements concerning the education of children with disabilities.
However, unlike states and districts that are subject to NCLBA, DOD schools are not required to report publicly on the performance of children with disabilities on regular and alternate assessments. Definitions of dyslexia vary from broad definitions that encompass almost all struggling readers to narrow definitions that apply only to severe cases of reading difficulty. However, DOD and others have adopted a definition developed by dyslexia researchers and accepted by the International Dyslexia Association, a non-profit organization dedicated to helping individuals with dyslexia. This definition describes dyslexics as typically having a deficit in the phonological component of language (the individual speech sounds that make up words), which causes difficulty with accurate or fluent word recognition, poor spelling ability, and problems in reading comprehension that can impede growth of vocabulary. Recent research has identified a gene that may be associated with dyslexia and has found that dyslexia often coincides with behavior disorders or speech and language disabilities and can range from mild to severe. Nevertheless, the percentage of people who have dyslexia is unknown, with estimates varying from 3 to 20 percent, depending on the definition and identification method used. Research promotes early identification and instruction for dyslexics to help mitigate lifelong impacts. DOD offers professional development to all staff to help them support students who struggle to read, including those who may have dyslexia, and used designated funds to supplement existing training efforts across its schools. This professional development prepares teachers to assess student literacy skills and provides strategies to help instruct struggling readers.
DOD used funds designated to support students with dyslexia for the development of two new online training courses containing modules on dyslexia, for additional seats in existing online courses, and for additional literacy assessment tools. DOD offers professional development to all staff who teach struggling readers, including students who may have dyslexia, primarily through online courses. The department offers online training courses through a professional development series known as Scholastic RED. These courses are DOD’s primary professional development on literacy for general education teachers. According to DOD, the department began offering the courses during the 2003-04 school year. DOD officials told us that since that time about half of the nearly 8,700 teachers in DOD schools have taken at least one Scholastic RED online course. Of the school principals who responded to our survey, almost all indicated that some of their staff members, including administrators and general and special education teachers, had participated in Scholastic RED training. Beyond Scholastic RED courses, DOD officials we interviewed told us that general education teachers also receive literacy development through instructional training in subject areas other than reading. For example, professional development on teaching at the middle school level may include guidance on how to enhance students’ reading skills through the study of a particular science. Most professional development for staff working with struggling readers focuses on the assessment of student literacy skills and presents strategies for instructing students who struggle to read, some of whom may have dyslexia. Scholastic RED online courses train teachers in five basic elements of reading instruction: phonemic awareness, comprehension, phonics, fluency, and vocabulary. Research suggests that both phonics and phonemic awareness pose significant challenges to people who have dyslexia. 
According to course implementation materials, the training is designed to move beyond online course content and allow participants the opportunity to apply new skills in site-based study groups as well as in the classroom. Some principals and teachers indicated their schools follow this model with groups of teachers meeting to discuss best practices for applying Scholastic RED knowledge and resources in their classrooms. DOD districts and schools sometimes offer their own literacy training through a localized effort or initiative. Professional development unique to a DOD district or school may be offered by a district’s special education coordinator. For example, the special education coordinator in a domestic district told us she offers literacy training to all staff, explaining that she tries to create a broader base of professionals who can more accurately identify and instruct students who are struggling readers. Regarding overseas schools, administrators in Korea told us they offer in-service workshops to help teachers improve student literacy, reading comprehension, and writing. DOD designed and provided additional training on literacy instruction for most special education teachers and other specialists under a special education initiative. The training provided these staff members with courses on how students develop literacy skills and how to teach reading across all grade levels. According to a 2004-05 DOD survey on the initiative, over half of special educators and other specialists said they had completed this training. Since the 2003-04 school year, special education teachers and other specialists have received training on topics such as the evaluation of young children’s literacy skills and adjusting instruction based on student performance. 
The department also provided speech and language pathologists specialized training to help them assist struggling readers, including guidance on basic elements of literacy instruction and development, such as phonological awareness and vocabulary development. DOD offers another literacy professional development program for special education teachers and other specialists known as Language Essentials for Teachers of Reading and Spelling (LETRS). According to the department, LETRS is designed to give teachers a better understanding of how students learn to read and write, showing instructors how to use such knowledge to improve targeted instruction for every type of reader. According to our survey results, about 10 percent of schools had staff who had taken this course. The LETRS course is based on the concept that once teachers understand the manner in which students approach reading and spelling tasks, they can make more informed decisions on instructional approaches for all readers. Much like the other literacy training DOD offers, LETRS modules contain reading instruction approaches on areas that may present challenges for those who have dyslexia: phonemic awareness, vocabulary, and reading comprehension. Overall, DOD staff told us the literacy training the department offered was useful for them, with some indicating they wanted additional training. In responding to our survey, more than 80 percent of the principals who said their staff used Scholastic RED courses rated them as very useful for specialized instruction. Principals we interviewed told us their teachers characterize Scholastic RED concepts as practical and easy to apply in the classroom. While teachers we interviewed told us Scholastic RED training is helpful, some special education teachers indicated the course material is basic and better suited to the developmental needs of general education teachers than to those of special education teachers.
For example, one special education teacher we spoke to said Scholastic RED courses do little to enhance the professional skills of special education teachers because many of these teachers have already received advanced training on reading interventions. Special education teachers did indicate, however, that training offered through the department’s special education initiative has provided them with identification strategies and intervention tools to support struggling readers. Regarding the impact of the initiative’s training, a DOD survey of special education teachers and other specialists found that over half of respondents said they had seen evidence of professional development designed to maximize the quality of special education services, and most had completed some professional development. The department did report, however, that respondents working with elementary school students frequently requested more training in areas such as phonemic awareness, while respondents working with high school students requested more professional development in a specific supplemental reading program used at DOD schools: Read 180. Moreover, teachers we interviewed in both foreign and domestic locations said they would like additional training on identifying and teaching students with specific types of reading challenges, including dyslexia. For example, one special education teacher we interviewed told us this specific training could help general education teachers to better understand the types of literacy challenges struggling readers face that in turn could help teachers better understand why students experience difficulties with other aspects of coursework. DOD reported it had fully obligated the $3.2 million designated for professional development on dyslexia, with about $2.9 million for online courses and literacy assessment tools. 
Between fiscal years 2004 and 2006, the conference committee on defense appropriations designated a total of $3.2 million within the operation and maintenance appropriation for professional development on dyslexia. As of September 2007, DOD reported it had obligated these funds for professional development in literacy, including online training courses containing components on dyslexia. Reported obligations also included tools to help teachers identify and support students who struggle to read, some of whom may have dyslexia. DOD obligated the remaining designated funds for general operations and maintenance purposes. All related obligations, as reported by the department, are outlined in table 1. The online training included two newly developed courses that may be too new to evaluate and the purchase of extra seats in existing Scholastic RED training courses. The first of the new training courses to be fully developed was Fundamentals of Reading K-2. According to DOD, this course was designed to present teachers with strategies for instructing struggling readers in the early K-2 grade levels and contains six modules on the components of reading, including a specific module on dyslexia. The K-2 course was first made available in January 2006 to teachers who participated in a pilot project. DOD then opened the course to all teachers in February 2007. According to our survey results, 29 percent of the schools serving grades K-2 had used the course by the end of the school year. Nearly half of those school principals who indicated their staff used the course, however, did not indicate the extent to which it had been helpful in supporting struggling readers. It is possible the course is still too new for DOD schools to evaluate, as some principals indicated on our survey that they had not heard of the course or were not aware it was available to their staff. 
The second of the new online training courses, Fundamentals of Reading Grades 3-5, is not fully developed for use at this time. According to DOD officials, the course will be available to all staff in the 2007-08 school year and will also contain six modules on the components of reading, including a module on dyslexia. Additionally, DOD reported purchasing another 1,100 seats in selected Scholastic RED online training courses. The department also added a page entitled “Help your Students with Dyslexia” to its main online resource site that is available to all teachers. DOD also reported using designated funds to purchase electronic literacy assessment tools and other instruments that were widely used in DOD schools, one of which received mixed reviews on its usefulness. DOD reported obligating about one-third of the designated funds for the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) assessment tool. The DIBELS assessment allows a teacher to evaluate a student’s literacy skills in a one-on-one setting through a series of one-minute exercises that can be administered via pen and paper or through the use of a hand-held electronic device. By using the exercises, teachers can measure and monitor students’ skill levels in concepts such as phoneme segmentation fluency, a reading component that often gives students with dyslexia significant difficulty. DIBELS was used to help identify struggling readers in at least half of the schools serving grades K-2, according to our survey results, and DOD plans to begin use of the assessment in additional locations during the 2007-08 school year. However, school officials and teachers had mixed reactions regarding the ease and effectiveness of using DIBELS to help identify struggling readers. 
In responding to our survey, about 40 percent of principals whose schools used DIBELS to help identify struggling readers indicated it was very or extremely useful, about 30 percent indicated it was moderately useful, and about 20 percent indicated it was either slightly or not at all useful. Several principals we surveyed indicated that they liked the instant results provided by the DIBELS assessment. For example, one principal called the assessment a quick and easy way to assess reading skills, saying it provides teachers with immediate feedback to help inform decisions about instruction. Others indicated the assessment is time-consuming for teachers. One kindergarten teacher we interviewed said that it is challenging to find the time to administer the test because it must be individually administered. Another principal expressed concern about the difficulty in using the electronic hand-held devices, saying the technology poses the greatest challenge to teachers in using the DIBELS assessment. According to DOD officials, the agency is currently evaluating its use of DIBELS, searching for other assessment tools, and will use the results to determine whether to continue using DIBELS or replace it with another tool. DOD purchased four other instruments to aid teachers in the evaluation of literacy skills; however, the tools are targeted to specific reading problems. According to DOD officials, they selected these tools because they measure specific skills associated with dyslexia. Table 2 shows reported use of each literacy assessment tool across DOD schools. DOD schools identify students who have difficulty reading and provide them with supplemental reading services. DOD uses standardized tests to determine which students are struggling readers, although these tests do not screen specifically for dyslexia. DOD then provides these students with a standard supplemental reading program. 
For those children with disabilities who meet eligibility requirements, DOD provides a special education program in accordance with the requirements of IDEIA and department guidance. Schools primarily determine students’ reading ability and identify those who struggle through the use of standardized assessments. DOD uses several standardized assessments, including the TerraNova Achievement Test, and identifies those students who score below a certain threshold as having the most difficulty with reading and in need of additional reading instruction. DOD requires that schools administer these reading assessments starting in the third grade. However, some schools administer certain assessments as early as kindergarten. For example, some schools used DIBELS to identify struggling readers in grades K-2. In an effort to systematically assess students in kindergarten through second grade, DOD plans to identify assessment tools designed for these grades during school year 2007-08 and require their use throughout the school system. In addition to assessments, schools also use parent referrals and teacher observations to identify struggling readers. Several school officials with whom we spoke said that feedback from parents about their children and teachers’ observations of students are both helpful in identifying students who need additional reading support. Like officials in many U.S. public school systems, DOD school officials do not generally use the term “dyslexia.” However, DOD officials told us they provided an optional dyslexia checklist to classroom teachers to help determine whether students may need supplementary reading instruction and whether they should be referred for more intensive diagnostic screening. According to our survey results, 17 percent of schools used the checklist in school year 2006-07. 
DOD schools provide a supplemental reading program for struggling readers, some of whom may have dyslexia, a program that has some support from researchers and has received positive reviews from school officials, teachers, and parents we interviewed. The program, called READ 180, is a multimedia program for grades 3 through 12. It is designed for 90-minute sessions during which students rotate among three activities: whole-group direct instruction, small-group reading comprehension, and individualized computer-based instruction. The program is designed to build reading skills such as phonemic awareness, phonics, vocabulary, fluency, and comprehension. In responding to our survey, over 80 percent of school principals indicated it was very helpful in teaching struggling readers. Several school administrators stated that it is effective with students due to the nonthreatening environment created by its multimodal instructional approach. Several teachers said the program also helped them to monitor student performance. Several parents told us that the program increased their children’s enthusiasm for reading, improved their reading skills, and boosted their confidence in reading and overall self-esteem. Some parents stated that their children’s grades in general curriculum courses improved as well, since the children were not having difficulty with course content but rather with reading. At the secondary level, however, school officials stated that some parents chose not to enroll their child in READ 180 because of the stigma they associate with what they view as a remedial program. According to the Florida Center for Reading Research, existing research supports the use of READ 180 as an intervention to teach 6th, 7th, and 8th grade students comprehension skills; however, the center recommends additional studies to assess the program’s effectiveness. 
Certain districts and schools have implemented additional strategies for instructing struggling readers, such as using literacy experts, offering early intervention reading programs, and prioritizing reading in annual improvement plans. In the Pacific region and the Bavaria district, literacy experts work in collaboration with classroom teachers and reading specialists to design appropriate individualized instruction for struggling readers and monitor student performance. All of the elementary schools in the Pacific region offer reading support to struggling readers. Some schools offer early reading support in grades K-2. Certain districts offer early intervention to first and second graders in small groups of five and eight students, respectively. Some schools in Europe provide intensive instruction to students in first grade through Reading Recovery, a program in which struggling readers receive 30-minute tutoring sessions from specially trained teachers for 12 to 20 weeks. According to the Department of Education’s What Works Clearinghouse, Reading Recovery may have positive effects in teaching students how to read. Several superintendents and principals we interviewed said that improving reading scores was one of the goals in their annual school improvement plans, which is in line with DOD’s strategic plan milestone of having all students in grades three, six, and nine read at their grade level or higher by July 2011. For example, to improve reading scores, officials in the Heidelberg District developed a literacy program requiring each school to identify all third grade students who read below grade level and develop an action plan to improve their reading abilities. Those students whose performance does not improve through their enrollment in supplemental reading programs or who have profound reading difficulties may be eligible to receive special education services. 
DOD provides this special education program in accordance with the requirements of department guidance and the IDEIA, although DOD is not subject to the reporting and funding provisions of the act. According to our survey results, almost all schools provided special education services in the 2006-07 school year. The level of special education services available to students with disabilities varies between districts and schools and may affect where some service members and families can be assigned and still receive services. DOD established the Exceptional Family Member Program to screen and identify family members who have special health or educational needs. It is designed to assist the military personnel system in assigning military service members and civilian personnel to duty stations that provide the types of health and education services necessary to meet their family members’ needs. In general, parents with whom we spoke said that they were pleased with the services their children received in DOD schools at the duty locations where they were assigned. DOD conducts a comprehensive multidisciplinary assessment to evaluate whether a student is eligible to receive special education services under any of DOD’s disability categories. A student who is identified as having a disability receives specific instruction designed to meet the student’s academic needs. A team composed of school personnel and the student’s parents meets annually to assess the student’s progress. While the majority of parents we interviewed were complimentary of DOD’s special education program, a few expressed concern that their children were not evaluated for special education eligibility early enough despite repeated requests to school personnel that their children be evaluated for a suspected disability. 
According to DOD officials, department guidance requires school officials to look into parent requests, but officials do not have to evaluate a child unless they suspect the child has a disability. However, they must provide parents with written or oral feedback specifying why they did not pursue the matter. Students with dyslexia may qualify for special education services under the specific learning disability category, but they must meet specific criteria. To qualify as having a specific learning disability, students must have an information-processing deficit that negatively affects their educational performance on an academic achievement test, resulting in a score at or near the 10th percentile (or the 35th percentile for students of above-average intellectual functioning). There must also be evidence through diagnostic testing to rule out the possibility that the student has an intellectual deficit. DOD schools provide children with disabilities instruction through two additional programs that have some research support. Fifteen percent of our survey respondents were principals of schools that used the Lindamood Phoneme Sequencing Program (LiPS), a program that helps students in grades prekindergarten through 12 with the oral motor characteristics of individual speech sounds. According to the What Works Clearinghouse, one research study it reviewed in 2007 suggested the LiPS program may have positive effects on reading ability. Our survey results indicated that 37 percent of schools serving grades 7 through 12 used a program called Reading Excellence: Word Attack and Rate Development Strategies, which targets students who have mastered basic reading skills but who are not accurate or fluent readers of grade-level materials. According to a Florida Center for Reading Research report, there is research support for the program, but additional research is needed to assess its effectiveness. 
DOD assesses the academic achievement of all students using standardized tests. The department administers the TerraNova Achievement Test to students in grades 3 through 11. Test scores represent a comparison between the test taker and a norm group designed to represent a national sample of students. For example, if a student scored at the 68th percentile in reading, that student scored higher than 68 percent of the students in the norm group; the national average is the 50th percentile. DOD uses these scores to compare the academic achievement of its students to the national average. In addition, DOD schools participate in the National Assessment of Educational Progress (NAEP), known as “the nation’s report card,” which provides a national picture of student academic achievement and a measure of student achievement among school systems. According to an agency official, DOD administers NAEP to all of its fourth and eighth grade students every other year. The NAEP measures how well DOD students perform as a whole relative to specific academic standards. Overall, DOD students perform well in reading compared to the national average and to students in state public school systems, as measured by their performance on standardized tests. The latest available test results showed that DOD students scored above average and in some cases ranked DOD in the top tier of all school systems tested. According to the 2007 TerraNova test results, DOD students scored on average between the 60th and 75th percentiles at all grade levels tested. The 2007 NAEP reading test results ranked the DOD school system among the top of all school systems. Specifically, on the eighth grade test, DOD tied for first place with two states among all states and jurisdictions, and on the fourth grade test, it tied with one state for third place. All students, including those with disabilities, participate in DOD’s systemwide assessments using either the standard DOD assessment or alternate assessments. 
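The norm-group comparison described above amounts to a simple percentile-rank calculation. The sketch below illustrates it; the scores and the norm group are invented for illustration and are not actual TerraNova data:

```python
# Illustrative sketch of a percentile rank: the share of a norm group
# scoring below a given student. Scores are invented examples, not
# actual TerraNova data.

def percentile_rank(score, norm_group):
    """Percentage of norm-group scores that fall below `score`."""
    below = sum(1 for s in norm_group if s < score)
    return 100.0 * below / len(norm_group)

# A hypothetical norm group of 100 reading scores, 1 through 100.
norm_group = list(range(1, 101))

# A student scoring 69 outscores 68 of the 100 norm-group members,
# placing the student at the 68th percentile.
print(percentile_rank(69, norm_group))  # 68.0
print(percentile_rank(51, norm_group))  # 50.0
```

With a norm group like this one, a score just above the median lands at the 50th percentile, which corresponds to the national-average benchmark the report describes.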
In some cases, students who require accommodations to complete the standard assessment may need to take the test in a small group setting, get extended time for taking the test, or have directions read aloud to them. Some students with severe disabilities may take an alternate assessment if required by the student’s individualized education program. An alternate assessment determines academic achievement by compiling and assessing certain documentation, such as a student’s work products, interviews, photographs, and videos. According to an official from DODEA’s Office of System Accountability and Research, DOD provides an alternate assessment to fewer than 200 of its roughly 90,000 students each year. For use within the department and in some districts and schools, DOD disaggregates TerraNova test scores for students with disabilities. DOD officials reported that they disaggregate scores for the entire school system, each area, and each district to gauge the academic performance of students with disabilities. DOD’s policy states that DOD shall internally report on the performance of children with disabilities participating in its systemwide assessments. According to DOD officials, they use the data to determine progress toward goals and to guide program and subject area planning. According to our survey results, over 90 percent of DOD schools disaggregate their test scores by gender and race, and about 85 percent disaggregate for students with disabilities for internal purposes. Some school officials told us they use test data to track students’ progress, assess the effectiveness of services they offer students, identify areas of improvement, and assess school performance. For example, one superintendent who shared her disaggregated data with us showed how third-grade students with disabilities made up over half of those who read below grade level in her district. 
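The kind of subgroup disaggregation described above can be sketched in a few lines. The records, group labels, and minimum-group-size guard below are illustrative assumptions rather than DOD's actual procedure:

```python
# Minimal sketch of disaggregating test scores by subgroup, with a
# minimum-group-size guard of the kind used to protect student privacy.
# Records, labels, and the threshold are illustrative assumptions.
from collections import defaultdict

MIN_GROUP_SIZE = 20  # suppress averages for smaller groups

def disaggregate(records, key):
    """Average score per subgroup; None where the group is too small."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["score"])
    return {
        g: (sum(s) / len(s) if len(s) >= MIN_GROUP_SIZE else None)
        for g, s in groups.items()
    }

# Hypothetical records: 25 students without disabilities, 5 with.
records = [{"disability": False, "score": 70} for _ in range(25)]
records += [{"disability": True, "score": 55} for _ in range(5)]

print(disaggregate(records, "disability"))
# {False: 70.0, True: None} -- the small group's average is suppressed
```

A guard like MIN_GROUP_SIZE reflects the privacy concern the report raises: published averages for very small groups can make individual students identifiable.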
DOD does not generally report disaggregated test scores for students with disabilities. DOD’s annual report provides data at each grade level, and test scores posted on its Web site provide data for each school. DOD also reports some results by race and ethnicity for the NAEP test. However, DOD does not disaggregate its TerraNova test data for students with disabilities or other subgroups. A primary goal of its strategic plan is for all students to meet or exceed challenging academic content standards, and DOD uses standardized test score data to determine progress towards this goal. Disaggregating these data provides a mechanism for determining whether groups of students, such as those with disabilities, are meeting academic proficiency goals. However, unlike U.S. public school systems that are subject to the No Child Left Behind Act, DOD is not required to report test scores of designated student groups. According to DOD officials, they do not report test results for groups of fewer than 20 students with disabilities because doing so may violate their privacy by making it easier to identify individual students. Where there are groups of 20 or more students with disabilities, DOD officials said they do not report it publicly because it might invite comparisons between one school and another when all of them do well compared to U.S. public schools. DOD officials did not comment on any negative implications of such comparisons. On the whole, DOD students perform well in reading compared with public school students in the United States, and in some cases DOD ranks near the top of all school systems, as measured by students’ performance on standardized tests. DOD has programs and resources in place to provide supplemental instruction to students who have low scores on standardized tests or who otherwise qualify for special education services, some of whom may have dyslexia. The department generally includes these students when administering national tests. 
Nevertheless, by not reporting specifically on the achievement of students with disabilities, including those who may have dyslexia, DOD may be overlooking an area that might require attention and thereby reducing its accountability. Without these publicly reported data, parents, policymakers, and others are not able to determine whether students with disabilities as a whole are meeting academic proficiency goals in the same way as all other students in the school system. For example, high performance on the part of most DOD students could mask low performance for students with disabilities. To improve DOD’s accountability for the academic achievement of its students with disabilities, including certain students who may have dyslexia, we recommend that the Secretary of Defense instruct the Director of the Department of Defense Education Activity to publish separate data on the academic achievement of students with disabilities at the systemwide, area, district, and school levels when there are sufficient numbers of students with disabilities to avoid violating students’ privacy. We provided a draft of this report to DOD for review and comment. DOD concurred with our recommendation. DOD’s formal comments are reproduced in appendix II. DOD also provided technical comments on the draft report, which we have incorporated when appropriate. We will send copies of this report to the Secretary of Defense, the Director of the Department of Defense Education Activity, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributions to this report are listed in appendix III. 
Our objectives were to determine: (1) what professional development DOD provides its staff to support students with dyslexia and how the fiscal year 2004-to-2006 funds designated for this purpose were used, (2) what identification and instructional services DOD provides to students who may have dyslexia, and (3) how DOD assesses the academic achievement of students with disabilities, including dyslexia. To meet these objectives, we interviewed and obtained documentation from DOD and others, conducted a Web-based survey of all 208 DOD school principals, and visited or interviewed by phone officials and parents in six school districts. We conducted our work between January 2007 and October 2007 in accordance with generally accepted government auditing standards. To obtain information on how schools support students with dyslexia, we interviewed officials from the Department of Defense Education Activity (DODEA) and the Department of Education, as well as representatives from the International Dyslexia Association and the National Association of State Directors of Special Education. We obtained several DODEA reports, including a 2007 report to Congress on DODEA’s efforts to assist students with dyslexia, a 2006 evaluation of DODEA’s English and language arts instruction, and a 2005 survey of DODEA special education personnel. We reviewed relevant federal laws, regulations, and DOD guidance, and also obtained information on DOD’s obligation and disbursement of funds designated for professional development on dyslexia. We also reviewed the DODEA Web site for schools’ student performance data to determine how DOD assesses the academic achievement of students with disabilities. We also obtained summary reports on the scientific evidence regarding the effectiveness of DODEA’s supplemental reading programs from the Department of Education’s What Works Clearinghouse and the Florida Center for Reading Research, two organizations that compile and evaluate research on reading. 
To gather information concerning students with dyslexia in DODEA schools, including how DODEA schools identify such students and the instructional services provided to them, we designed a Web-based survey. We administered the survey to all 208 DODEA school principals between May 10, 2007, and July 6, 2007, and received completed surveys from 175 school principals—an 84 percent response rate. In order to obtain data for a high percentage of DOD schools, we followed up with principals through e-mail and telephone to remind them about the survey. We also examined selected characteristics to ensure that the schools responding to our survey broadly represent DODEA’s school levels, geographic areas, and special education population. Based on our findings, we believe the survey data are sufficient for providing useful information concerning students with dyslexia. Nonresponse (in the case of our work, DOD school principals who did not complete the survey) is one type of nonsampling error that could affect data quality. Other types of nonsampling error include variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, and data collection and processing errors. We included steps in developing the survey and in collecting, editing, and analyzing survey data to minimize such nonsampling error. In developing the Web survey, we pretested draft versions of the instrument with principals at various American and European elementary, middle, and high schools to check the clarity of the questions and the flow and layout of the survey. On the basis of the pretests, we made slight to moderate revisions to the survey. Using a Web-based survey also helped remove error in our data collection effort. 
By allowing school principals to enter their responses directly into an electronic instrument, this method automatically created a record for each principal in a data file and eliminated the need for, and the errors (and costs) associated with, a manual data entry process. In addition, the program used to analyze the survey data was independently verified to ensure the accuracy of this work. We visited school officials and parents of struggling readers in two of the three areas (the Americas and Europe) overseen by DODEA and contacted schools in the third area (the Pacific) by phone. For each location we interviewed the district superintendent or assistant superintendent, school principals, teachers, and special education teachers. At each location we also interviewed parents of struggling readers. Each group had between two and seven parents, and in some cases we interviewed a parent individually. To see how DOD schools instruct struggling readers, we observed several reading programs during classroom instruction, including READ 180, Reading Recovery, and Reading Improvement, as well as the use of literacy tools such as the Dynamic Indicators of Basic Early Literacy Skills (DIBELS). We selected 6 of DOD’s 12 school districts, 2 from each area, using the following criteria: (1) geographic dispersion, (2) representation of all military service branches, (3) variety of primary and secondary schools, and (4) range in the proportion of students receiving special education services. Harriet Ganson, Assistant Director, and Paul Schearf, Analyst-in-Charge, managed this assignment. Farah Angersola and Amanda Seese made significant contributions throughout the assignment, and Rebecca Wilson assisted in data collection and analysis. Kevin Jackson provided methodological assistance. Susan Bernstein and Rachael Valliere helped develop the report’s message. Sheila McCoy provided legal support. 
Many of our nation's military and civilian personnel depend on Department of Defense (DOD) schools to meet their children's educational needs. These schools provide a range of educational services including programs for students with disabilities and those who struggle to read, some of whom may have a condition referred to as dyslexia. To determine how DOD supports students with dyslexia and how it used $3.2 million in funds designated to support them, GAO was asked to examine: (1) what professional development DOD provides its staff to support students with dyslexia and how the fiscal year 2004-to-2006 funds designated for this purpose were used, (2) what identification and instructional services DOD provides to students who may have dyslexia, and (3) how DOD assesses the academic achievement of students with disabilities, including dyslexia. To address these objectives, GAO conducted a survey of all school principals and interviewed agency officials, school personnel, and parents in six school districts. DOD provides a mix of online and classroom training to teachers who work with students who struggle to read, and DOD used 2004-to-2006 funds designated for professional development on dyslexia, in particular, to supplement these efforts. Most of the online and classroom professional development prepares teachers and specialists to assess student literacy and provides them with strategies to teach students who have particular difficulties. For the 2004-to-2006 funding for professional development on dyslexia, DOD supplemented its existing training with online courses that include specific modules on dyslexia and tools to assess students' literacy skills. DOD identifies students who struggle to read--some of whom may have dyslexia--through standardized tests and provides them with supplemental reading instruction. 
DOD uses standardized tests to screen its students and identify those who need additional reading instruction, but these schools do not generally label them as dyslexic. To teach students they identify as struggling readers, DOD schools primarily employ an intensive multimedia reading program that is highly regarded by the principals, teachers, and parents GAO interviewed. Those students whose performance does not improve through their enrollment in supplemental reading programs or who have profound reading difficulties may be eligible to receive special education services. DOD is subject to many of the requirements of the Individuals with Disabilities Education Improvement Act of 2004 on the education of students with disabilities. Students with dyslexia may qualify for these services, but they must meet program eligibility requirements. DOD uses the same standardized tests it uses for all students to assess the academic achievement of students with disabilities, including those who may have dyslexia, but does not report specifically on the outcomes for students with disabilities. A primary goal of DOD's strategic plan is for all students to meet or exceed challenging academic standards. To measure progress towards this goal, DOD assesses all students' academic achievement and school performance by comparing test scores to a national norm or to a national proficiency level. Overall, students perform well in reading compared to U.S. public school students. DOD disaggregates test scores for students with disabilities but does not report such information publicly. In contrast, U.S. public school systems under the No Child Left Behind Act of 2001 must report such data. Without this information, it is difficult for parents, policymakers, and others to measure the academic achievement of students with disabilities relative to all other students in the DOD school system.
In recent years, both we and IRS have reported problems with IRS’ written communications to taxpayers. Instances of IRS correspondence being incorrect, incomplete, unclear, and nonresponsive have been documented by GAO and IRS. The need to improve the formats of notices has also been identified. In a recent annual report, IRS acknowledged that taxpayers too often find its notices confusing. Describing its written communications with taxpayers as a “seemingly intractable problem,” IRS made a commitment to improve the clarity of these documents. Its current Business Master Plan, a strategic planning document, establishes measurable clarity improvements in notices as one of its goals. IRS master file notices may request payment, seek information, inform taxpayers of account activity, or provide instructions related to account settlement. Many notices concern discrepancies identified during the processing of returns or result from collection efforts, examination of returns, and related audit activities. Under IRS procedures, notices are composed of standard paragraphs written by staff in IRS functional units such as Returns Processing, Collections, Examination, and IRS field offices. NCU reviews notices to ensure that the text is clear and understandable. IRS’ master files contain specific account information for each taxpayer, and IRS relies on these data to generate notices. The Individual Master File (IMF) and the Business Master File (BMF) contain histories of transactions maintained by IRS, including returns submitted by taxpayers, information returns submitted by third parties, and payments made. IMF and BMF notices may be generated, for example, when a discrepancy occurs between information reported on a taxpayer’s return and data stored in the master file. At that point, the notice is automatically printed and sent to the taxpayer. 
Appendix I contains a flowchart of a common situation precipitating the issuance of a notice—a mathematical error made by the taxpayer. The flowchart shows the steps involved in processing the notice. IRS has 94 different IMF-generated notices that it sends to taxpayers. However, as figure 1 shows, in 1993, 13 IMF notices accounted for approximately 71 percent of all IMF-generated notices sent that year. Figure 2 shows the distribution of the 152 different BMF-generated notices during 1993. A total of 31 BMF notices accounted for 92 percent of all BMF-generated notices sent to business taxpayers. Because a notice’s content and format may affect the taxpayer’s ability and willingness to comply, it is important that notices be clear, informative, and comprehensive. If a notice is unclear, a taxpayer may become less willing to respond out of frustration with IRS. IRS recognized the need to improve the quality of its written communication to taxpayers and established NCU in 1990 to initiate clarity reviews. NCU was tasked with evaluating notice revisions proposed by functional units as well as examining notices suspected of confusing taxpayers. This unit, composed of approximately eight professional staff, analyzes notices for clarity, readability, and logical presentation of material. According to IRS officials, functional units are required to obtain NCU’s approval of new notices and text revisions to existing notices before computer programs containing text will be created or altered. Appendix II depicts the notice revision process and the various IRS units involved. Our objectives were to (1) review a group of commonly used notices and offer suggestions to enhance their clarity where appropriate and (2) determine whether IRS’ process for issuing notices produces clear notices. To address the first objective, we examined 47 high-volume IMF and BMF notices. 
We selected the notices that were most frequently sent to taxpayers by IRS, excluding those we previously reviewed for our April 1993 report. These 47 notices resulted in the issuance of more than 33 million notices in 1993, or almost 52 percent of all IMF and BMF notices sent to taxpayers that year. We reviewed the versions of the notices currently being sent to taxpayers as well as any revisions to these notices proposed by NCU but not yet implemented. In reviewing these notices for clarity, understandability, and usefulness, we considered whether more specific language, clearer references, and consistent use of terminology would enhance these documents. We assessed whether the material was logically presented, whether sufficient information and detail were provided so taxpayers could evaluate their situations, and whether the taxpayer could resolve the matter without additional guidance. We also evaluated the notice’s format, the suitability of the notice’s title, the directions or guidance provided in enclosures or remittance forms, and whether IRS provided the taxpayer with all pertinent information in a single notice or whether additional notices would have to be sent to resolve the situation. Each notice was independently reviewed by at least two GAO evaluators. 
They considered the same factors in determining whether the notices clearly conveyed the message IRS wanted to convey to taxpayers, including whether the text of the notice contained IRS’ intended message; title of the notice was consistent with the text; tax statement or statement of adjustment or other transaction was easy to read and compare to the taxpayer’s return; notice made any assumptions and, if so, whether they were clearly explained; terminology in the notice was easy to understand and logically presented; notice clearly explained what, if any, action was expected of the taxpayer; notice provided the taxpayer with sufficient, but not excessive, information regarding the situation; and notice provided the taxpayer a telephone number to call or address to write to should he or she have questions or need additional guidance. We used appropriate guidance found in IRS’ Taxpayers Service’s Handbook and the Catalog of Federal Tax Forms, Form Letters and Notices to verify the purposes of the 47 notices. We also discussed all of our concerns and suggestions with the NCU Chief. However, we did not attempt to determine if the notices we reviewed were appropriate given a taxpayer’s particular circumstances. In addressing the second objective, we also gathered information to help us assess whether IRS had established a workable process for adopting and implementing notice text improvements. We also obtained data on the number of notice revisions proposed by NCU and the number implemented. However, data were not available on the length of time IRS took to implement the revisions. We identified the computer programming changes required to implement the revisions. We also gathered information concerning how IRS set priorities for requested computer programming changes, including notice revisions, and obtained information on proposed revisions that were rejected. 
Finally, we identified IRS’ efforts to improve the quality of notices and documented its recent testing of notice production on the Correspondex computer system, which may make revisions more efficiently. We did our work at IRS’ National Office in Washington, D.C., from August 1993 to June 1994 in accordance with generally accepted government auditing standards. We provided a draft of this report to pertinent IRS officials including the National Director of Planning, the Chief of NCU, and representatives from the Information Systems Management Division (ISM) and other organizational units involved in the notice development and review process. We met with these officials on September 26, 1994, to discuss this report. They suggested several minor technical modifications, which we adopted, but generally agreed with the facts presented as well as our conclusions and recommendations. Our review of 47 IMF and BMF notices revealed problems with both the language and format of 31 of these notices. For example, we found that many of the 31 notices would have been improved by more specific language, clearer references, consistent terminology, logical presentation of material, and sufficient information and guidance. Format problems included instances where the attached remittance form contained directions that conflicted with those found in the body of the notice. Another format problem we identified was IRS’ inability to issue notices that addressed multiple or inter-related tax problems with a taxpayer’s account in a single piece of correspondence. Instead of a single detailed, comprehensive notice, taxpayers would receive several notices in a relatively short period, each addressing a different problem with their tax accounts. This could confuse and frustrate taxpayers and give them the impression that IRS is unsure of its position. 
IRS’ computer system is old and inefficient and is largely responsible for the delay in implementing notice language changes. Because of the time-consuming nature of the programming required to make notice text revisions along with other program changes, a bottleneck occurs. Consequently, IRS must evaluate and prioritize program requests. Notices are presently maintained on an aging computer system, which uses an old computer programming language—known as assembler language—that is difficult to change. Each master file notice exists as a separate program and, because of the technical difficulties involved in implementing language changes, minor revisions can result in major reprogramming efforts. Unlike modern word processing technology, which processes text changes almost as quickly as typing, the assembler language uses an older programming technique that requires each letter of every word and every character to be separately programmed. This character-by-character programming is known as hard coding and affects all IMF and BMF notices. Consequently, these notice revisions are not simple or quick to do. A single change in a word or punctuation mark would require that every subsequent character be reprogrammed. This is time-consuming and inefficient and serves as a deterrent to improving notices. IRS established the National Automation Advisory Group (NAAG) in 1992 to facilitate establishment of programming priorities. According to IRS officials, NAAG is composed of representatives from IRS’ Returns Processing, the major initiator of programming changes, and computer programming officials from ISM. NAAG allows Returns Processing to establish its own priorities, in view of limited resources and the technical difficulties specifically associated with the proposed changes. Faced with numerous demands to alter existing operational programs, ISM has found that it does not have the resources to respond to all requests. 
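The contrast between hard-coded notice text and template-based text can be sketched in miniature. The following is an illustrative analogy only, using Python in place of the assembler-era system; all notice wording, names, and data are hypothetical and do not come from IRS code.

```python
# Illustrative analogy only; all text and names are hypothetical.

# Hard-coded style: every character of the notice is a separate program
# element, so changing one early word invalidates everything after it and
# the whole sequence must be re-entered. This mimics why a single word or
# punctuation change forced major reprogramming on the old system.
hard_coded_notice = list("We changed your 1993 return.")

def revise_hard_coded(new_text):
    # No partial edit is possible; the full character sequence is rebuilt.
    return list(new_text)

# Template style (the direction a letter-writing system points toward):
# fixed wording is one string kept apart from the taxpayer data merged
# into it, so a wording revision is a single-string change.
template = "We changed your {year} return."

def render(record):
    # Merge taxpayer-specific data into the fixed notice text.
    return template.format(**record)

print(render({"year": "1993"}))  # -> We changed your 1993 return.
```

The point of the sketch is structural: in the hard-coded form, a one-word revision touches every subsequent element, while in the template form it touches only the template string.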
Programming changes to process returns for the next filing season and those related to implementing new tax laws, for example, take precedence over notice text revisions. These higher priority demands for computer programming changes lessen the likelihood that notice text changes will be made. While recommended notice text changes remain unprogrammed, the old version of the notice continues to be issued to taxpayers. The cyclical nature of IRS’ programming activities further delays prompt implementation of text revisions. Because certain programming must be performed at certain times of the year, IRS schedules specific programming tasks to be performed at particular times—for example, preparation for the upcoming filing season. If the development of a notice revision does not coincide with the appropriate programming cycle, its implementation may be delayed for months, until the next available cycle. Generally, because of the high demand for programming changes, IRS staff submit programming requests 6 to 12 months in advance of their preferred implementation date to allow sufficient scheduling time. IRS officials responsible for programming notice text changes said that in most instances proposed notice revisions should be submitted at the beginning of the calendar year so they can be scheduled. Those submitted later in the year may not be considered for scheduling until the beginning of the next calendar year. Because of the high demand for computer programming changes, the submission of a revision request by a functional unit does not guarantee that the reprogramming will be done. Revisions may be assigned as priority 1 or priority 2, or the revision may be rejected outright. According to NCU officials, a priority 2 status has only a slight chance of being programmed. At NAAG’s March 1994 meeting, 59 programming requests were presented, 17 of which related to notices; only 1 notice-related request was given a priority 1 status. 
Of the 59 programming requests, only 22 received priority 1 status, 8 were granted a priority 2 status, 1 was withdrawn, and the remaining 28 were rejected. Of the 17 notice-related requests, only 1 was assigned priority 1. This request called for the establishment of new notices to accompany a new tax form. IRS needed these notices for those taxpayers with tax problems who used the new form. Four notice-related requests were designated as priority 2s. The remaining 12 requests were rejected. Although a few of these rejected requests called for changes that would improve IRS’ internal processing of notices, others involved improving the clarity or usefulness of the notices to taxpayers. Adopting these rejected improvements could have enhanced over 3 million taxpayer contacts, the 1993 volume of the notices involved. One rejected request concerned a notice sent in 1993 to nearly 1 million earned income credit (EIC) filers. NCU officials identified an erroneous reference to a section of the EIC tax form in the text of the notice. By the time this error was discovered, the notice had already been sent to a group of recipients. When the request to correct the language was brought to NAAG, it decided to retain the incorrect reference. According to NCU officials, NAAG made this decision because some taxpayers had already received the incorrect version, it seemed too late to do anything about the problem, and NAAG members from other units did not view the problem as very serious compared to other programming needs. Another request would have merged information now contained in two notices into a single notice with revised text. IRS had anticipated that this merger would not only simplify matters for taxpayers but also annually save an estimated $2.4 million in reduced processing and mailing costs. This request received a priority 2 status and was forwarded to ISM for consideration. 
Because of higher priority requests, including legislative changes, ISM determined it could not implement the change in January 1995 as requested. According to ISM computer programming officials, they could not make the large commitment of resources needed to make the change. Even when notice revisions are approved, it may be months before they are actually programmed. Because of the backlog of programming requests, the intense level of effort associated with those changes, and the cyclical nature of completing the program changes, revisions were often submitted months in advance. For example, the requester of the single notice revision that was approved at the March 1994 NAAG meeting had proposed a January 1995 implementation, as had many others requesting changes during that session. NCU’s revisions to improve the clarity of notices were not always adopted promptly. Among the notices we reviewed were several that had been revised by NCU more than a year earlier but not implemented as of May 1994. We believe that the changes NCU made will improve the clarity of these notices, but we are concerned with the length of time that has elapsed since NCU’s revisions were proposed. Although programming delays are significant, IRS has not established a tracking system that would enable it to measure the extent of the delays. There is no system for monitoring whether requested changes are made or, if approved, for tracking the progression of notice revisions from submission to implementation. Without a system to track the progress of these revisions through the computer programming stage, it is difficult to document the overall timeliness of notice revision implementation. Without this documentation, delays and other problems may go unobserved. To collect data on the implementation of its recommendations, NCU conducted a special review in March 1993 to determine the status of all its prior recommendations. 
The study revealed that 36 percent of NCU’s revisions were never implemented. Although the report did not document the extent of overall delays in implementing those revisions that were ultimately programmed, it identified several instances where revisions to high-volume notices took a year or more to implement. IRS recognizes that notices need improvement and has several initiatives in process to enhance notice quality. First, several high-volume collection notices have been programmed and tested on IRS’ Correspondex computer system, a letter-writing system used for replying to taxpayers’ correspondence. Text changes can be made more quickly and easily on the Correspondex system than on the assembler language system currently producing notices. Correspondex officials acknowledge that while this system is not as efficient as word processing technology, Correspondex can make text revisions much sooner than the 6 to 12 months that it often takes to implement assembler language system changes. These officials told us that text changes to IRS’ Correspondex letters typically take 30 days but under critical circumstances can be made within 1 day. Correspondex has the capacity to produce most IMF and BMF notices. According to Correspondex officials, only those notices with an unusually large amount of data imported from a taxpayer’s master file record appear unsuitable for transfer. Correspondex also provides the advantage of more visually appealing print features presently unavailable on the assembler language system, such as lower-case letters. Figure 3 shows an example of a commonly sent collection notice as it would look if produced by the assembler system. Figure 4 shows the same notice produced by Correspondex. The testing of notices on Correspondex has not fully demonstrated its suitability for producing IMF and BMF notices. 
Testing has been limited to the collection notices maintained on the Integrated Data Retrieval System (IDRS), which operates on the same computer system as Correspondex. This computer system is different from the computer system on which IMF and BMF operate. IDRS notices are easier to convert to Correspondex than IMF and BMF notices. However, Correspondex officials said that they are confident they can successfully produce IMF and BMF notices even though transferring these notices will be technically more difficult than the IDRS notices. While the officials said it would be fairly simple to reproduce the standard notice text on Correspondex, new computer programs would have to be written to merge taxpayer data into the appropriate places in the new Correspondex text. Assembler language system programmers would need to develop these programs and would continue to be responsible for accessing the master file. However, once this programming transition is complete, the assembler programming staff would play a smaller role in the notice process and may be able to devote more time to higher priority work. Correspondex officials also told us that they hope to test several IMF and BMF notices this year and, if successful, would like to ultimately transfer most notices, including IMF and BMF notices, to Correspondex. Even if all IMF and BMF notices could not be transferred, a substantial number of taxpayer contacts could still be improved by transferring those notices with recognized clarity problems or high volumes. As we discussed earlier, many taxpayer contacts could be improved by changing relatively few notices. Both Correspondex and NCU officials are optimistic about this testing and view it as a way to improve the clarity and format of notices, at least until more sophisticated developments arrive later this decade under TSM. However, IRS management has not committed to expanding the testing to IMF and BMF notices. 
A second effort in progress is the testing of a new notice format, which includes a revised “tax statement” modeled after a version suggested in our April 1993 report on IRS forms, publications, and notices. Taxpayers who are sent math error notices from the IRS Kansas City Service Center receive either the traditional IRS format or the new version modeled after our suggestion. Each version has a unique control number. Taxpayers calling or writing IRS about the notice provide this unique number, thereby enabling IRS to determine which version generates the most questions. This test will help IRS decide whether it would be cost beneficial to convert to the new format. Preliminary response data indicate that taxpayers who receive the traditional version contact IRS with questions at twice the rate of taxpayers receiving the new version. A third effort involves the acquisition of new printing equipment for IRS’ 10 service centers. These printers should improve the general appearance of notices. IRS prints master file notices in upper-case type because with the current equipment its lower-case type is illegible. The new printers could feature lower-case type and different fonts. Another advantage would be that the notice borders would be printed as text. These borders often contain important information regarding where taxpayers should call or write for additional assistance. Presently, borders are contained on various plastic overlays that are copied onto paper before the notice text is printed. By printing these borders as text, the likelihood that a notice would be issued with an inappropriate border, which could confuse taxpayers, should be reduced. Finally, a fourth effort involves a TSM initiative that may also lead to improved notices. TSM is exploring ways of issuing single notices that could address multiple tax issues. IRS currently sends taxpayers with multiple or inter-related tax problems a separate notice for each tax matter. 
The receipt of several notices within a brief period may both confuse and frustrate taxpayers. The master file lacks the ability to identify and address multiple tax problems in a single notice. However, TSM officials hope to be able to deliver to taxpayers comprehensive notices containing all account activity and adjustments. In addition to these ongoing efforts, IRS is considering other ways of supplementing notices so they become more useful and understandable to taxpayers. IRS is considering (1) placing commonly asked questions and answers on the back of each notice and (2) expanding the existing tele-tax system to include notice information. This system operates on a toll-free number and provides prerecorded explanations about tax return preparation. IRS officials told us that often taxpayers merely want to speak to a telephone assistor and confirm that their interpretation of a notice is correct. These officials speculated that the common questions and answers placed on the notices themselves, along with the general notice information to be put on tele-tax, may provide some taxpayers with sufficient information and a greater comfort level, thereby decreasing the number of taxpayers who require the assistance of a telephone assistor. IRS can do more to improve the clarity of its notices. We suggested clarity changes to 31 of the 47 notices we reviewed. These suggestions related to the content, appearance, and sufficiency of instructions the notices provided to taxpayers. In addition, the series of multiple notices, which may be sent to taxpayers with numerous or inter-related tax problems, is another area where gains in clarity improvement can be made. An ongoing TSM effort addressing this problem, if successful, would make a major contribution to notice clarity. 
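The Kansas City format test described earlier, in which each notice version carries a unique control number, amounts to comparing taxpayer contact rates across the two versions. The following sketch uses entirely hypothetical mailing and contact counts; the report gives only the relative rate (the traditional version drawing about twice the questions).

```python
# Hypothetical tally of taxpayer contacts by notice version, keyed on the
# unique control number each version carries. All counts are invented for
# illustration.
mailed = {"traditional": 10_000, "new_format": 10_000}
contacts = {"traditional": 400, "new_format": 200}

def contact_rate(version):
    # Fraction of recipients of a version who called or wrote with questions.
    return contacts[version] / mailed[version]

ratio = contact_rate("traditional") / contact_rate("new_format")
print(f"traditional-to-new contact ratio: {ratio:.1f}")  # -> 2.0
```

With equal mailings to each group, the ratio of raw contact counts equals the ratio of contact rates, which is why a simple per-version tally is enough to judge which format generates more questions.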
While IRS recognizes the importance of better communications with taxpayers and makes efforts to enhance taxpayer understanding of existing notices, taxpayers continue to receive notices that do not reflect the most recently recommended versions approved by NCU. These recommended notice changes include language and format modifications that are designed to improve notice clarity and usefulness. Computer limitations appear to be one of the most important causes of continued use of notices that IRS processes have identified as needing revision. Notices are generated from the IMF-BMF computer system, and this system cannot make notice revisions efficiently. Text changes require extensive and time-consuming programming efforts. Because of other high-priority programming requests and limited programming resources, computer programming priorities generally do not favor notice language changes. Thus, few changes survive this process. Those that do are made with great difficulty and may take over a year to complete because of the programming requirements. IRS has a different computer system on which Correspondex operates, and Correspondex may provide an alternative to the IMF-BMF computer system for issuing notices. Text changes can be made much more quickly and easily on Correspondex. Although Correspondex officials are confident that Correspondex can produce IMF and BMF notices, they said tests using those notices have not been made. The lack of a system to track the progress of proposed notice language changes limits IRS’ ability to oversee notice clarity improvements. Delays may not be detected and millions of unclear notices may be issued to taxpayers in the interim. We recommend that the Commissioner of Internal Revenue test the feasibility of using Correspondex to produce IMF and BMF notices and, if possible, transfer as many IMF and BMF notices as practical to the Correspondex system. 
To help the transition to Correspondex, we recommend that notices be transferred in stages and that a mechanism be established, or an existing body such as NAAG be used, to establish the order in which notices would be transferred. The ease of the transition, the costs of the transfer, and the benefits of making these transfers should all be considered in establishing the order. We recommend that the Commissioner establish a system to monitor proposed notice text revisions to oversee progress or problems encountered in improving notice clarity. This system should be able to identify when a revision was proposed and its status at all times until it is implemented. We also recommend that the Commissioner include in the monitoring system a threshold beyond which delays must be appropriately followed up and resolved. We obtained oral comments on a draft of this report from IRS officials. These comments were supplemented by a memo elaborating on remarks made during our previous discussion. IRS agreed with our comments that more can be done to improve the clarity of notices to taxpayers and also with our recommendations. IRS also suggested some technical changes that we considered in preparing the final report. Specifically, IRS has agreed to test the feasibility of using Correspondex to produce both IMF and BMF notices. IRS has also agreed to pursue the development of a system to monitor implementation of proposed notice text revisions in the context of its planned Tax System Modernization efforts and business vision-related actions. IRS intends for this system to ensure that proposed revisions are considered and implemented in a timely manner. In addition, IRS agreed to consider most of the suggested notice text revisions we offered to clarify the text of the master file notices we reviewed during the course of this assignment. 
We are sending copies of this report to other congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. Major contributors to this report are listed in appendix IV. If you or your staff have any questions concerning the report, please call me on (202) 512-9110. To assess the clarity and usefulness of IRS notices, we reviewed 47 Individual Master File (IMF) and Business Master File (BMF) notices that IRS frequently sends to taxpayers. These notices accounted for about 50 percent of all IMF and BMF notices sent to taxpayers in 1993. As explained in more detail in the objectives, scope, and methodology section of this report, we used a long list of factors to determine whether each notice clearly conveyed the message IRS wanted to convey. For example, we reviewed each notice to determine whether (1) the title of the notice was consistent with the text and (2) the terminology in the notice was easy to understand and presented in a logical order. We used these factors to judge clarity because IRS had not established guidance to determine what constitutes a clear notice. We identified items of concern in 31 of the 47 notices. Our concerns take into account the version of the notice currently being sent to taxpayers and, if applicable, the revision proposed by NCU. At the time we did our work, NCU had reviewed 46 of the 47 notices. Our concerns include the need for additional guidance, more specific language, clearer references, appropriate terminology, logical presentation of material, sufficient information or detail, and correct and consistent formats. We also identified several IMF and BMF notices that could confuse or frustrate taxpayers who may receive several of these notices instead of a single comprehensive notice that would summarize the status of their tax accounts. 
These notices are also identified in this appendix. Among the notices we reviewed were several NCU revisions proposed more than a year earlier but not yet implemented at the time we did our work. Our positions on these notices mirrored NCU’s. Our only additional concern was the length of time that had elapsed since NCU’s revision was proposed. IRS already has efforts underway that should help address some of our concerns. NCU officials generally agreed with our suggestions but typically could not specify if and when our suggestions would be adopted. The delays in implementing notice text revisions, as discussed in the body of this report, often precluded the officials from giving a more precise response. Our specific concerns with the IMF and BMF notices that we reviewed are noted in this appendix. IRS’ response immediately follows. Also, examples of these notices currently being sent to taxpayers accompany our concerns highlighting the potential problem. In some cases, we raised the same concern with more than one notice. In these instances, we described the concern in relation to a particular notice and mentioned the other notices with comparable problems. Type of change: additional guidance and specific language. This notice assumes that the Social Security Administration’s (SSA) records need correction, which may not be true. It suggests that the error was made either by the taxpayer or SSA and does not acknowledge that the error could be IRS’. We suggested IRS advise the taxpayer how to correct IRS’ information if the error was not made by SSA or the taxpayer. We also noted this concern on several other notices including CP 54B, CP 54G, CP 54Q, and CP 59. Examples of these notices are not shown in this report. Similarly, we found the notice’s title does not acknowledge the possibility that IRS records may need correction. We suggested a more suitable title such as “IRS/SSA Records Do Not Agree.” IRS agreed to consider this suggestion. 
Type of change: specific language. This notice does not stress the importance of keeping SSA’s records correct. We suggested emphasizing that correct information is needed so SSA can provide individuals with proper credit for all earned income, thereby protecting their earnings record and future social security benefits. IRS agreed to consider this suggestion. Type of change: appropriate terminology. This notice currently includes excerpts from the applicable penalty and interest sections of IRS’ Notice 746, which is a preprinted explanation of IRS’ penalty and interest policies. The explanations in this notice are extremely detailed and may be confusing for the taxpayer. Some of the explanations may not apply in every case. We suggested that a brief and clear explanation of the specific penalty and interest charges being levied against the taxpayer receiving the notice be provided. IRS agreed that our suggestion had merit. IRS had already been working on a CP 14 revision designed by one of its Service Centers as an interim step in combining the CP 14 with the relevant parts of Notice 746. Typically, Notice 746 is enclosed with the CP 14. IRS plans to discontinue Notice 746 by providing only the applicable penalty and interest explanations in the notice text. Providing only the pertinent explanations will prevent the taxpayer from searching through irrelevant narrative. IRS is also clarifying the language and developing an easier-to-read format for the CP 14. Type of change: logical presentation of material. We suggested that the tax statement be placed at the top of this notice. Placing the paragraph requesting the taxpayer to write IRS with questions after the tax statement enhances the clarity of the notice. We also offered this suggestion on a related notice, the CP 30A, concerning a reduction in the estimated tax penalty (see fig. III.4). IRS agreed to consider this suggestion. Type of change: logical presentation of material. 
We suggested that a tax statement similar to the one recommended on page 8 of our previous report be adopted. Such a tax statement would provide a better summary of what the taxpayer reported on their return and how IRS had made any needed corrections. We also noted this concern on the CP 30A noted above and the CP 132, which is an IMF math error notice. IRS agreed that our suggested tax statement is preferable. They stated, however, that IRS is not able to use this type of statement at the present time because of the limitations of its current printing equipment. IRS is presently testing a new tax statement format on two other math error notices. The testing is being conducted on special printing equipment in one of the service centers. IRS cautioned us that complete implementation of this effort could not occur until 1995. Type of change: specific language. We suggested revising the first sentence of this notice to “We reduced your Estimated Tax Penalty . . .,” deleting the words “or eliminated” because they are unnecessary. If the penalty was eliminated, it would, in fact, be reduced to zero. We believe that this change would be less confusing to taxpayers. IRS agreed to consider our suggestion. Type of change: specific language. NCU has proposed a revision to this notice. The currently programmed title is more descriptive than the proposed title, which merely refers to “another debt.” The current title specifies “other federal taxes owed.” Because “another debt” could refer to debts owed to other federal agencies and IRS already has another separate notice to address such situations, we thought the title should be as specific as possible and refer to the other federal taxes owed. IRS agreed to consider revising the title of the proposed version of this notice and using the currently programmed title. Type of change: additional guidance. The text of this notice refers to a filing status code. 
We noted that taxpayers may not understand this code and may be confused. We suggested that IRS use a brief narrative explanation rather than a numerical computer code. IRS agreed to consider this suggestion. Type of change: specific language and sufficient information and detail. We suggested clarifying both the language and tax statement portion of this notice so taxpayers would have an easier time understanding IRS’ computations. First, the notice states the amount unpaid from prior notices should reflect any credits and payments made since the last notice. We suggested revising the last sentence in the first paragraph to read: “We figured this amount as follows:”. Second, to make this clearer to the taxpayer, we suggested the statement start with the amount due from the last notice. Separate lines could show credits or payments made since that notice. This would make it easier for the taxpayer to identify credits or payments reflected in IRS’ records since the last notice. IRS advised us that they do not maintain a history of taxpayers’ prior balances. As payments or adjustments are made, the “balance due” is updated and the prior balance is deleted. However, the NCU Chief noted that IRS may be able to show payments made by the taxpayer since the last notice. This would at least provide the taxpayer with information about whether all payments had been credited to the account. IRS agreed to explore this possibility. Type of change: appropriate terminology. We found the second sentence of the second paragraph to be confusing. We suggested revising it to read: “The penalty and interest above are based on amounts you paid late plus amounts unpaid from prior notices.” IRS agreed to consider this suggestion. Type of change: clear reference. We found the “Credit Balance” and “Underpayment” references in the “Tax Statement of IRS Changes” section of the NCU’s revision of this notice confusing. 
We thought the term Credit Balance suggested that the taxpayer had overpaid the tax and hence, received a credit. Yet the term Underpayment on the next line clearly shows that the taxpayer owes money to IRS. As this is a balance due notice, we suggested eliminating "Credit Balance" and replacing it with a term less likely to confuse taxpayers, such as "Total Credits Applied." We also noted this problem on another notice, currently in use, the CP 161, a request for payment notice (example not included in this report). IRS agreed to consider revising this terminology as we suggested. Type of change: logical presentation of material. For clarity, we suggested reversing the second paragraph concerning the amount owed and the third paragraph containing payment instructions. IRS agreed to consider rearranging these paragraphs. Type of change: clear reference. We questioned why the "penalty for late payment" appeared in the list of charges when the amount charged was zero. We suggested that if the taxpayer was not charged a penalty this line in the statement should be suppressed. We also suggested that if the taxpayer was to be charged a penalty, this fact should be explained in the preceding paragraphs. IRS advised us that a programming command may be responsible for the presence of the zero balance on the penalty line. IRS agreed it would be preferable to suppress this line if no penalty is to be applied. They agreed to pursue this matter. IRS also agreed to consider adding a brief explanation in the preceding paragraphs if a penalty has been charged. Type of change: logical presentation of material. The "Tax Statement" summarizing the status of the taxpayer's account is the last item appearing on this notice. We suggested moving this statement before the payment instructions to enhance clarity. We also noted this problem on the CP 161 and CP 163, notices reminding taxpayers of balances due. Examples of these notices are not included in this report.
IRS agreed to consider adopting our suggestion. Type of change: sufficient information and detail. To calculate the penalty, it is essential to know the number of months by which the return was late or considered incomplete. We suggested that IRS provide this information on the notice so the taxpayer can understand why a penalty has been assessed, determine how IRS calculated the penalty, and then decide if they agree the penalty is appropriate. IRS advised us that after the computer calculates the penalty it does not retain a history or any information regarding dates used in that calculation. However, IRS agreed to explore the possibility of inserting the date the return was due and the date it was received. This would allow taxpayers to make this calculation themselves with the same information available to IRS. IRS said that the notice should specifically indicate whether the penalty is for a late or an incomplete return. IRS also said they are trying to eliminate notices with an "either/or" situation; in this case, the late or incomplete return. Type of change: appropriate terminology. We found the level of detail provided in this notice to be overwhelming, particularly as the notice is proposing, not assessing, a penalty. We suggested requesting the necessary information from the taxpayer and advising them that if the information is not received within a specified time, a penalty will then be assessed on the basis of available information. IRS stated they had already identified the excessive detail in this notice as a concern and raised this matter with Returns Processing, the appropriate functional unit. Although Returns Processing officials regard this information as necessary, IRS agreed to pursue this matter with a Returns Processing task force, which had recently been established to identify and resolve returns processing type problems.
IRS indicated that one alternative may be to delay sending a notice until a penalty is actually assessed, rather than when it is proposed. Type of change: sufficient information and detail. We suggested that the notice contain a record of tax deposits. This would allow taxpayers to identify a discrepancy by reconciling their records of deposits to IRS' records. IRS advised us that it is not possible for the existing computer and printing equipment to supply this kind of information on a notice. It may become possible with the acquisition of new equipment under the Tax Systems Modernization (TSM) initiative. Type of change: appropriate terminology. We found that the currently used versions of these notices were too terse and lacked a sufficient explanation for taxpayers. However, we found the NCU's proposed version to be confusing because of an excessive amount of detail. We found the statements relating to installment agreements and the charges in the computation of change statement most likely to confuse taxpayers. We suggested that NCU seek a middle ground so taxpayers were supplied with enough information to respond appropriately but were not overwhelmed with unnecessary details and technical terminology. We also noted this concern on the CP 220, a balance due adjustment notice. IRS agreed these notices are troublesome. They agreed to review them again to assess their clarity. NCU is presently working with Returns Processing to simplify these notices. Type of change: sufficient information and detail. We suggested that IRS provide additional information on the number of forms involved and the period by which they were late. This would clarify IRS' penalty calculation and make the notice easier for the taxpayer to understand. IRS agreed to consider our suggestion. They explained there is a similar notice—the CP 945, which deals with Form 1099—that is sent in the same envelope. IRS is now working to combine these notices into a single notice.
However, the computer programming involved in this change is complex and time-consuming. IRS is not sure when this effort will be completed. Type of change: specific language. We suggested that the title of this notice be supplemented with "Statement of Your Account—Payment Applied". This change would alert the taxpayer and would also be consistent with other IRS notices. IRS agreed to consider this suggestion. Type of change: appropriate terminology. We found the first sentence of this notice to be confusing. We suggested revising it to read: "We removed a credit for an amount that was incorrectly applied to your account for Form (xxxx) for Tax Year (xxxx)." IRS said NCU had not performed an in-depth review of this notice and indicated they would consider this suggestion. Type of change: sufficient information and detail. We questioned why this notice suggests that the taxpayer call the IRS number in their local directory rather than provide a number for the office that is most familiar with the taxpayer's case. IRS said they are required to put both a local and toll-free number on notices. However, they acknowledged that including both the local and toll-free numbers may be confusing, as the "local" number may actually be a long-distance call. IRS hopes to clarify that local numbers may be long distance but that taxpayers are more likely to reach a representative familiar with their case than those calling the toll-free number.

Linda Schmeer, Evaluator
Donald R. White, Evaluator

Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) taxpayer notices, focusing on: (1) possible improvements to the notices; and (2) IRS processes for ensuring that its notices clearly convey essential information to taxpayers. GAO found that: (1) 31 of the 47 most commonly used notices that it reviewed have clarity problems such as language, content, and format; (2) needed improvements to the notices include more specific language, clearer references, consistent terminology, logical presentation of material, and sufficient information and guidance on how to resolve taxpayer problems; (3) despite internal reviews and recommendations to improve the notices, IRS has delayed many notice revisions and has failed to implement others primarily due to its limited computer programming resources and higher priority programming demands; (4) IRS does not have a system to track recommended notice revisions; (5) IRS is testing whether its correspondence system can generate collection notices and make more timely notice revisions, since it has a number of capabilities that the master file notice system does not; (6) two other IRS initiatives to improve notices include testing a new notice format and acquiring new printing equipment for the service centers; (7) IRS is using its Tax Systems Modernization initiative to explore ways to issue single, comprehensive notices for taxpayers with multiple and interrelated tax problems; and (8) IRS is considering ways to supplement notices with commonly asked questions and appropriate answers to improve the quality of the notices.
The destruction, looting, and trafficking of cultural property are heightened during times of political instability and armed conflict. Destruction of cultural property entails intentional or unintentional damage, such as bombing, to sites and objects. In the context of cultural property protection, looting usually refers to the illegal removal of undocumented objects from a structure or site not already excavated. Objects documented as part of a collection may also be stolen from individuals, museums and similar institutions, and other places of origin. Looted and stolen objects may be trafficked or illicitly traded, sometimes outside the location in which the objects were looted or stolen. A Deputy Assistant Secretary of State reported that ISIS has encouraged the looting of archeological sites as a means of erasing the cultural heritage of Iraq and Syria and raising money. The State official noted that the U.S. raid to capture ISIS leader Abu Sayyaf in May 2015 resulted in the discovery of documents demonstrating that ISIS had established an Antiquities Division with units dedicated to researching known archaeological sites, exploring new ones, and marketing antiquities. According to these documents, ISIS's Antiquities Division collects a 20 percent tax on the proceeds of antiquities looting and issues permits authorizing certain individuals to excavate and supervise excavations of artifacts. Documents found during the raid also indicate that ISIS prohibited others from excavating or issuing permits without ISIS authorization. Sales receipts indicated the terrorist group had earned more than $265,000 in taxes on the sale of antiquities over a 4-month period in late 2014 and early 2015. Figure 1 depicts antiquities recovered during the raid to capture Abu Sayyaf.
While documents from the Abu Sayyaf raid show that ISIS has profited from the looting of antiquities, there are no reliable and publicly available estimates of the revenue ISIS earns from trade in stolen cultural property overall, according to the director of a State-funded project on cultural property. However, State officials have noted that, although profits from trafficking are difficult to quantify, ISIS has increasingly turned to the antiquities trade as access to revenue from other sources, such as oil, has been restricted. To address the destruction, looting, and trafficking of cultural property, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted conventions in 1954 and 1970 to protect cultural property. The 1954 convention addresses cultural property protection during armed conflict, and the 1970 convention addresses the protection of cultural property against illicit import, export, and transfer of ownership. The United States enacted the Convention on Cultural Property Implementation Act (CPIA) into law in 1983, thereby implementing provisions of the 1970 UNESCO Convention on the Means of Prohibiting and Preventing the Illicit Import, Export and Transfer of Ownership of Cultural Property (1970 UNESCO Convention). Through the CPIA, the United States has restricted the importation of certain cultural property. Cultural property is defined in the CPIA by reference to the 1970 UNESCO Convention, which defines the term "cultural property" for purposes of the convention to mean property which, on religious or secular grounds, is specifically designated by each state as being of importance for archaeology, prehistory, history, literature, art, or science and which belongs to certain categories.
According to State officials, the CPIA addresses undocumented looted materials of a State Party by providing the President the authority to enter into a bilateral or multilateral agreement with the State Party to impose import restrictions and by providing the authority to impose import restrictions if an emergency condition applies. As it relates to articles of stolen cultural property from Iraq and Syria and other Parties to the 1970 Convention, the CPIA also restricts the importation of cultural property belonging to the inventory of a museum or a religious or secular public monument or similar institution that was stolen from such museum, monument, or institution after April 12, 1983, or after the date the country of origin became a party to the Convention. In addition to the 1983 CPIA import restriction on stolen documented property, the United States has implemented other restrictions related to a wider range of cultural property from Iraq and Syria. In response to Iraq's invasion of Kuwait on August 2, 1990, the United States imposed comprehensive sanctions against Iraq. After the 2003 intervention in Iraq, the Iraq National Museum in Baghdad was looted, resulting in the loss of approximately 15,000 items, including ancient amulets, sculptures, ivories, and cylinder seals, some of which were subsequently returned to the museum. In 2007, pursuant to the Emergency Protection for Iraqi Cultural Antiquities Act of 2004, State determined the existence of an emergency condition under the CPIA, and import restrictions were put in place for cultural property illegally removed from museums, monuments, and other locations in Iraq since 1990. DHS's CBP then issued a regulation on April 30, 2008, to reflect the imposition of the import restrictions.
In issuing the regulation, CBP also issued, as general guidance, the Designated List of Archaeological and Ethnological Material of Iraq, which describes the types of articles, which State refers to as objects, to which the import restrictions apply. Furthermore, in February 2015, the United Nations Security Council unanimously adopted Resolution 2199, which notes, in part, that all member states shall take appropriate steps to prevent the trade in Iraqi and Syrian cultural property illegally removed from Iraq since August 6, 1990, and from Syria since March 15, 2011. In May 2016, the United States passed the Protect and Preserve International Cultural Property Act, which directs the President to exercise his authority under the CPIA to impose restrictions on any archaeological and ethnological material of Syria (as defined in the Act). In August 2016, CBP issued a regulation to reflect the imposition of import restrictions and issued a Designated List of Archaeological and Ethnological Material of Syria that describes the types of objects or categories of archaeological and ethnological material to which the import restriction applies. Included in the Protect and Preserve International Cultural Property Act is the sense of Congress that the President should establish an interagency committee to coordinate the efforts of the executive branch to protect and preserve international cultural property at risk from political instability, armed conflict, or natural or other disasters. According to this sense of Congress, such a committee should

1. be chaired by a Department of State employee of Assistant Secretary rank or higher, concurrent with that employee's other duties;
2. include representatives of the Smithsonian and federal agencies with responsibility for the preservation and protection of international cultural property;
3. consult with governmental and nongovernmental organizations, including the United States Committee of the Blue Shield, museums, educational institutions, and research institutions, and participants in the international art and cultural property market on efforts to protect and preserve international cultural property; and
4. coordinate core U.S. interests in (A) protecting and preserving international cultural property; (B) preventing and disrupting looting and illegal trade and trafficking in international cultural property, particularly exchanges that provide revenue to terrorist and criminal organizations; (C) protecting sites of cultural and archaeological significance; and (D) providing for the lawful exchange of international cultural property.

Pursuant to the sense of Congress, State has led the effort to create the Cultural Heritage Coordinating Committee (CHCC). After State convened an informal interagency meeting in June 2016, State chaired a formal meeting to establish the CHCC in November 2016 and chaired additional CHCC-wide meetings in March and June 2017. At its inception, CHCC participants included officials from nine U.S. federal entities. Appendix I shows these entities' reported activities related to protecting cultural property. The CHCC has also established three working groups, as follows:

1. Technology, a newly created working group that focuses on the application of new and existing technologies to combat cultural property trafficking.
2. Partnerships and Public Awareness, a newly developed working group that focuses on public outreach and public-private partnerships.
3. The Cultural Antiquities Task Force (CATF), a preexisting group that is now a third working group under the CHCC, focuses on efforts to support local governments, museums, preservationists, and law enforcement to protect, recover, and restore cultural antiquities and sites worldwide, particularly in Iraq and Afghanistan.
Specifically, the CATF has previously funded a broad range of activities in support of law enforcement efforts to combat theft, looting, and trafficking of historically and culturally significant objects worldwide. State asked participants representing nine U.S. federal entities to voluntarily participate in individual working groups. As of June 2017, the two newly formed working groups, Technology and Partnerships and Public Awareness, had each held two meetings. The CATF, which held regular meetings prior to the formation of the CHCC, met in June 2017 after the CHCC was established and the CATF became a CHCC working group. Figure 2 shows key events as of June 2017 related to the CHCC and its working groups since the passage of the Protect and Preserve International Cultural Property Act. In our prior work, we have identified key collaboration practices that could be used to assess collaboration at federal agencies. These practices can help agencies implement actions to operate across boundaries, including fostering open lines of communication. We also found that positive working relationships among participants from different agencies bridge organizational cultures and that these relationships can build trust and foster communication, which facilitates collaboration. Given many federal agencies' long-standing challenges working across organizational lines, following these practices could help agencies to enhance and sustain collaboration at all organizational levels. Figure 3 depicts these key practices. DHS and DOJ take actions in five key areas to enforce laws and regulations related to restricted Iraqi and Syrian cultural property: (1) monitoring of shipments; (2) detention, seizure, and taking forfeiture actions on items; (3) investigation of objects; (4) repatriation of cultural property; and (5) prosecution of criminal violations.
According to DHS officials, DHS has the primary role for enforcing import restrictions on Iraqi and Syrian cultural property. CBP monitors shipments, cargo, and travelers for illicit cultural property through border interdictions. CBP and U.S. Immigration and Customs Enforcement's (ICE) Homeland Security Investigations (HSI) detain, seize, and obtain forfeiture of suspected items; and ICE-HSI conducts investigations, pursues prosecutions through state and federal courts, and repatriates cultural property to rightful owners. DOJ actions to address restricted Iraqi or Syrian cultural property include detaining, seizing, and taking forfeiture action on items; conducting investigations; repatriating cultural property; and prosecuting criminal violations. According to Federal Bureau of Investigation (FBI) officials, the FBI conducts investigations and detains, seizes, obtains forfeiture of, and repatriates restricted cultural property items; and, according to DOJ officials, the U.S. Attorneys' offices or the Criminal Division within DOJ pursue potential criminal violations. See figure 4 for a list of key actions taken by DHS and DOJ on restricted Iraqi and Syrian cultural property. Monitoring of shipments. According to DHS and CBP officials, CBP monitors shipments and travelers to identify restricted Iraqi and Syrian cultural property that may have been imported in violation of U.S. customs laws or trafficked in the United States. These officials noted that CBP uses information obtained by other U.S. agencies or industry partners to identify high-risk transactions and shipments for further examination. These officials said that CBP may refer cultural property items found during its examinations to ICE-HSI for further investigation if they suspect a violation.
Additionally, in collaboration with ICE-HSI, CBP also provides training to its officers at high-risk ports of entry. CBP monitoring activities have led to the discovery of smuggled cultural property. For example, according to DHS officials, in December 2007 a CBP inspection of a shipment exiting a Chicago mail facility led to the discovery of a Babylonian clay foundation cone from Iraq dating to 2100 B.C. (see fig. 5). The person exporting the item had misclassified it using a false country of origin. CBP detained the item and referred it to ICE-HSI for further investigation. According to ICE officials, ICE-HSI ultimately obtained forfeiture of the item and repatriated it to Iraq in February 2010. Detention, seizure, and taking forfeiture actions on items. DHS and the FBI detain, seize, and take forfeiture actions on Iraqi and Syrian cultural property items that are potentially in violation of U.S. law. Within DHS, CBP detains and, if appropriate, seizes Iraqi or Syrian cultural property if that property was potentially imported into the United States contrary to U.S. law, according to CBP officials. When CBP identifies such an item, it detains the property and, if further investigation is warranted, contacts ICE-HSI, which may conduct an investigation. ICE-HSI also receives leads regarding illegally imported cultural property already within the United States from other sources, including auctions, art galleries, and museums. Once items are identified by CBP or found through other means, ICE-HSI seeks forfeiture of items of Iraqi and Syrian cultural property that have entered the United States in violation of U.S. customs law. According to ICE-HSI officials, although the import restrictions on Iraqi and Syrian cultural property are not criminal laws, the restrictions provide a legal basis for seizure and forfeiture actions by CBP and ICE-HSI. DHS actions to detain, seize, and pursue forfeiture of items that are suspected to be in violation of U.S.
cultural property laws have led to the recovery and return of cultural property to Iraq. For example, according to DHS officials, in 2005 CBP discovered an inscribed stone tablet originating from Iraq during an inspection at a FedEx facility at Newark airport. After CBP detained the item, ICE-HSI consulted with local cultural property experts to determine the authenticity and origin of the item, which, according to ICE-HSI officials, was imported using a false country of origin. ICE-HSI seized, took forfeiture action on, and ultimately repatriated the item to Iraq in February 2010. When the FBI discovers restricted items of cultural property, the FBI's Art Crime Team works to pursue forfeiture of the items, according to FBI officials. The FBI has detained, seized, and taken forfeiture actions on items of Iraqi cultural property. For example, according to FBI officials, the FBI opened an investigation after receiving a tip about an array of ancient artifacts originating from Mesopotamia for sale online (see fig. 6). Most of the items were cuneiform tablets used in Mesopotamia for record keeping, and three of the seized artifacts were inscribed foundation cones. According to an FBI document, the artifacts were looted from present-day Iraq and smuggled into the United States unlawfully. The antiquities dealer in California who held the items surrendered any right he had to the artifacts, which have been forfeited to the U.S. government. According to FBI officials, the government of Iraq asserts ownership over the items, but they have not yet been repatriated. Investigation of objects. ICE-HSI and the FBI conduct investigations into potentially restricted items of cultural property originating from Iraq and Syria. ICE-HSI conducts investigations involving the illicit importation, trafficking, and distribution of cultural property.
CBP sometimes originates ICE-HSI cultural property investigations by referring incidents of suspected criminal activity related to illicit cultural property trafficking to ICE-HSI officials. According to ICE officials, most ICE-HSI cultural property investigations are based on other information and involve items of cultural property already in the United States, which may be held in private collections, museums, galleries, auction houses, or by other entities. ICE-HSI investigates potentially related criminal violations such as smuggling or falsely classifying an item. CBP and ICE officials reported collaborating with the FBI and State on investigations into illicit trade of cultural property from Iraq and Syria. According to these officials, CBP and ICE-HSI identify appropriate subject matter experts to examine detained cultural property to make a preliminary determination regarding the authenticity of the artifact or object. ICE-HSI cultural property investigations have led to the return of cultural property to Iraq. For example, ICE-HSI opened an investigation in January 2011 after receiving a tip about an Iraqi ceremonial sword for sale at an auction in the United States (see fig. 7). ICE-HSI found that the item was brought into the United States by a U.S. citizen who had served in the military. ICE-HSI consulted with a cultural property expert to authenticate the origin of the item and seized, obtained forfeiture of, and ultimately repatriated the item to Iraq in July 2013. According to FBI officials, the FBI pursues Iraqi and Syrian cultural property items based on information from various sources, including from investigations into related matters. The FBI does not investigate or enforce import restrictions on cultural property, but FBI investigations on other criminal matters sometimes involve items of cultural property.
In addition, according to officials, the FBI receives information on cultural property items from a variety of sources, including tips from informants, findings from other criminal investigations, and foreign government contacts. Officials added that, while ICE-HSI and the FBI lead distinct investigations involving cultural property, the two agencies coordinate with each other and outside experts, when appropriate. FBI officials reported sharing information with ICE-HSI and CBP on specific information and cultural property items, when appropriate. FBI officials also told us they regularly consult with outside experts to help identify cultural property items. The FBI has investigated suspected items of cultural property from Iraq that were discovered from investigations into related matters. For example, it discovered Iraqi antiquities during an investigation into public corruption of U.S. contractors in Iraq. According to FBI documents, the artifacts, including two pottery dishes, four vases, an oil lamp, three small statues, and seven terracotta relief plaques, were illegally taken from Iraq by DOD contractors in 2004. Investigators learned that the contractors took the items and used them as gifts and bribes or sold them to other contractors who then smuggled them into the United States. According to an FBI document, two of the contractors were ultimately sentenced to prison for their roles in the fraud scheme, and the items were recovered and returned to Iraq in July 2011. Repatriation of cultural property. ICE and the FBI repatriate cultural property items to the appropriate country, including the return of multiple items to Iraq. ICE works to repatriate the stolen or smuggled cultural property items to the rightful owner after CBP or ICE-HSI detains, seizes, or takes forfeiture action on an item found to have been brought into the United States in violation of U.S. law, according to ICE officials. 
ICE has repatriated a number of cultural property items to Iraq. For example, in 2008, ICE-HSI opened an investigation into a pair of Neo-Assyrian gold earrings for sale at an auction house in the United States (see fig. 8). ICE-HSI consulted with a cultural property expert to authenticate the item and determine its origin and worked with CBP to seize, obtain forfeiture, and ultimately repatriate the item to Iraq in February 2010. According to FBI officials, when the FBI detains, seizes, or obtains forfeiture of restricted cultural property items, its Art Crime Team works to repatriate the items. The FBI has repatriated a number of cultural property items to Iraq. For example, one FBI-led investigation involved a U.S. soldier serving in Iraq who purchased eight stone seals and brought them back to the United States (see fig. 9). The soldier had the items evaluated by an expert and, upon discovering their historical value, turned the seals over to the FBI, which repatriated them to Iraq in 2005. Prosecution of criminal violations. DOJ considers prosecution for criminal violations relating to investigations involving Iraqi and Syrian cultural property. According to DOJ officials, ICE-HSI and the FBI consult with DOJ’s Criminal Division or local U.S. Attorneys’ offices, and the assigned prosecutor determines whether to pursue criminal prosecution of related violations. State and local prosecutors may also consider whether to pursue prosecution for violations related to cultural property investigations. DOJ has prosecuted criminal violations from investigations involving items of cultural property from Iraq. For example, an FBI-led investigation into a man suspected of selling forged art and fake items led to the discovery of four Iraqi cylinder seals (see fig. 10). The FBI obtained forfeiture of the items and repatriated them to Iraq in 2013. According to FBI officials, the man with the seals was prosecuted and sentenced for conspiracy and mail fraud.
The CHCC’s activities during its first year of formation reflected several key practices that can enhance and strengthen collaboration but did not demonstrate others. CHCC participants have demonstrated progress in the key areas of identifying leadership; including relevant participants; bridging organizational cultures, including developing ways to operate across agency boundaries and agreeing on common terminology; and addressing issues related to resources, including funding, staffing, and technology. However, CHCC participants could enhance their collaboration by implementing other key collaboration practices, such as developing goals, clarifying participants’ roles and responsibilities, and documenting agreements within the CHCC and its working groups. Leadership. The CHCC has followed the key collaboration practice of designating leaders, including strengthening the influence of leadership by high-level officials and establishing continuity in leadership. The CHCC has identified leadership in the full committee. Pursuant to the sense of Congress at Section 2(1) of the International Cultural Property Act that the CHCC “be chaired by a Department of State employee of Assistant Secretary rank or higher,” State’s Assistant Secretary for the Bureau of Educational and Cultural Affairs (ECA) has chaired all of the CHCC’s meetings thus far. According to State officials, the ECA Assistant Secretary will continue to chair CHCC meetings. State officials also noted that senior leadership’s involvement in the committee underscores the importance of the committee and the topic of cultural property protection. We have previously reported that the influence of leadership can be strengthened by high-level officials and that designating one leader is often beneficial because it centralizes accountability and can speed decision making. 
Each of the CHCC’s three working groups has also identified a primary entity to lead the group’s effort, such as identifying and soliciting input on agenda items for the working group meetings. The CHCC sought volunteers to lead its two newly formed working groups. DOJ’s FBI has volunteered to lead the Technology working group, and the Smithsonian serves as the lead entity of the Partnerships and Public Awareness working group. State continues to lead the preexisting CATF, and different members host regular meetings. For example, DOJ’s Criminal Division hosted the June 2017 CATF meeting. Participants. The CHCC has demonstrated our key collaboration practice of including relevant participants. We previously reported on the importance of ensuring that relevant participants are included in and have the appropriate knowledge and abilities to contribute to the collaborative effort. The CHCC invited and included several entities as participants of the committee and its working groups. For the first CHCC meeting in November 2016, State invited nine federal entities to participate and requested that these participants volunteer for the working groups. Representatives of these nine federal entities all attended and, with the exception of USAID, have attended at least one additional meeting since the committee’s inception. In July 2017, a USAID official informed us that USAID does not expect to participate in the CHCC. Most CHCC participants noted that they are confident that the members have the appropriate knowledge and commitment to contribute and participate in the committee. Most representatives who attended the first CHCC meeting also participated in the committee’s working groups. For instance, officials from six of the nine U.S. federal entities attending the first formal CHCC meeting also participated on a voluntary basis in the newly created Technology and Partnerships and Public Awareness working groups. 
These federal entities include State, DHS, DOJ, the Interior, the NEH, and the Smithsonian. Officials representing three of the nine federal entities—the Treasury, DOD, and USAID—stated that they did not volunteer for and have not participated in the new working groups because they did not clearly see how their entities could contribute to the topics of focus. In the preexisting CATF working group, four of the nine CHCC federal entities—State, DHS, DOJ, and the Interior—noted that they would continue to participate. State officials explained that DOD had been invited to CATF meetings in the past but had not participated extensively. According to the DOD representative on the CHCC, DOD had not participated in the CATF in years but attended the CATF meeting in June 2017, the first CATF meeting since the formation of the CHCC, after being asked to participate. Figure 11 depicts the participation of the nine U.S. federal entities whose officials attended the first CHCC meeting. One of the CHCC working groups has included participation from additional federal entities. Led by the Smithsonian, the participants in CHCC’s Partnerships and Public Awareness working group agreed to invite other federal entities to the group. The second meeting of the Partnerships and Public Awareness working group in May 2017 included additional federal entities that had not attended prior CHCC meetings. These federal entities included the National Endowment for the Arts, the National Archives and Records Administration, and the President’s Committee on the Arts and Humanities. According to Smithsonian officials, the Smithsonian also invited the Library of Congress, the Institute of Museum and Library Services, the National Science Foundation, DOD’s National Defense University, and the Wilson Center to participate in the working group. The CHCC and its working groups have also included the participation of nonfederal stakeholders in their activities. 
The CHCC has invited external stakeholders to participate in public events led by its Partnerships and Public Awareness working group. For example, State and the Smithsonian co-hosted an event to discuss cultural heritage protection and stabilization in northern Iraq that was open to the public and included a public panel discussion, led by the U.S. Committee of the Blue Shield, a nongovernmental organization. Other participants in this event included those representing museums, educational institutions, and research institutions. Smithsonian officials noted the importance of hearing the perspectives of these nongovernmental organizations and participants in the international art and cultural property market—organizations suggested in the sense of Congress in the International Cultural Property Act. However, the full CHCC will not likely include nonfederal stakeholders in its regular interagency meetings. As the lead of the CHCC, State officials commented that they intend to keep invitees to the full CHCC limited to U.S. federal entities because this composition facilitates the discussion of U.S. government law enforcement efforts related to cultural property protection. Other CHCC participants also expressed concerns about having nonfederal stakeholders participate in certain discussions of the CHCC and its working groups, particularly when law enforcement agencies need to discuss sensitive matters. Therefore, State officials reported that the CHCC may conduct periodic consultations with external stakeholders without making these stakeholders members of the committee. Bridging organizational cultures. CHCC participants have bridged different organizational cultures among the participating entities by establishing ways to operate across agency boundaries, a key collaboration practice that can involve developing common terminology and sharing information. For example, CHCC participants have generally agreed on common terminology in the cultural property area.
Some participants reported that federal entities agree on the definitions of “cultural property” and the “protection and preservation” of such items, even though these terms could be interpreted differently by many in academia and nongovernmental organizations. In addition, most participants stated that they have working relationships with other members of the committee, which facilitates information sharing on an ongoing basis. The missions and cultures of the nine participating federal entities may differ, ranging from those focused on law enforcement to those that fund grants to protect cultural property. Nevertheless, many participants reported that CHCC members all share a common commitment toward the goal of cultural property protection. Furthermore, most participants reported that the committee was a helpful forum for collaborating on international cultural property protection efforts. According to State officials, the formation of CHCC facilitated collaboration of different U.S. federal entities when cultural property protection issues arose internationally. For example, in March 2017, State led an interagency delegation that included DHS, DOJ, and Smithsonian representatives to participate in an international culture ministerial meeting devoted to the topic of cultural property protection. According to a State report, various federal entities also contributed significantly to a UN Security Council resolution to focus on cultural heritage preservation, which the Security Council adopted unanimously in March 2017. Resources. Despite not having dedicated financial resources, the CHCC and its working groups have identified human and technology resources for their collaborative activities. We previously reported that collaborating agencies should identify the human, information technology, physical, and financial resources needed to initiate or sustain their collaborative effort. The CHCC has identified nine U.S. 
federal entities as participants that expect to participate in meetings of the committee and its working groups without using designated funding. Some participants noted that not having dedicated resources could present certain challenges to CHCC activities. For example, one CHCC participant noted that cultural preservation training programs are resource dependent and are, therefore, difficult to plan without funding. However, this participant also noted that collaborative efforts on the CHCC have helped participants coordinate interagency training, which has helped to mitigate these challenges. Moreover, participants generally noted that even without dedicated financial resources, they were committed to participate in CHCC activities as a collateral duty to their work. Another aspect of managing resources among interagency groups is the development of technological systems and compatible tools. CHCC participants have taken steps to explore the development of technological resources to enhance collaboration. For instance, several Technology working group participants noted that they have discussed the possibilities involved in establishing compatible technological systems among the CHCC’s members. According to one participant, the working group is in the process of obtaining the status of existing technological systems of participants and is planning on vetting new technologies. Outcomes and accountability. The CHCC could benefit from addressing the key collaboration practice of organizational outcomes and accountability, which includes clearly defining short-term and long-term goals, and developing a way to track and monitor progress toward these goals. In the first formal meeting in November 2016, the chair of the committee articulated that the CHCC’s role was to coordinate antitrafficking efforts and to tackle a wide range of cultural heritage challenges worldwide. 
However, subsequent to that meeting, the CHCC has not produced documents identifying specific CHCC outcomes or goals. CHCC participants also indicated that no clear consensus on the CHCC’s stated goals has emerged from CHCC meetings. Many CHCC participants noted that the CHCC had not developed short-term and long-term goals, with some adding that the CHCC was working on doing so. Other officials had different views of the short-term and long-term goals. For example, one participant stated that a short-term CHCC goal was to establish working groups and understand the roles of the different entities, while another participant said that a long-term goal was to solidify information sharing among participants. The CHCC’s three working groups varied in their development of goals. One participant of the CHCC’s Technology working group noted that the working group has developed short-term, medium-term, and long-term goals, including target time frames for achieving them. For example, the Technology group has a short-term goal to evaluate the technological strengths and weaknesses of the members relative to their mission. However, not all of the participants in that working group were aware of these goals. The other two CHCC working groups—Partnerships and Public Awareness, and Cultural Antiquities Task Force—have not developed goals. As the lead of the Partnerships and Public Awareness working group, the Smithsonian has compiled an inventory list that catalogues all of the programs, activities, and outreach that each of the working group’s participants worked on and planned to undertake. Smithsonian officials noted that the working group could develop outcomes based on the inventory list, such as support for other working group members’ training on cultural property protection. According to State officials, the Senate Appropriations Committee directed the CATF to train U.S.
and foreign law enforcement and customs agents, and the CATF continued to fund cultural property training. However, such training has not been developed or documented as goals for the CATF. State officials explained that the full committee and its working groups were still early in their formation. Participants have been focused on other priorities and, therefore, have not yet developed goals. For example, according to Smithsonian officials, the Partnerships and Public Awareness working group has been working on creating goals as it concentrates on holding actual public awareness campaigns, but it has not established or documented any goals to share with other participants. Without clearly developed goals, participants of the CHCC and its working groups may not have the same overall interests and may even have conflicting interests and disagreement among missions while working toward the overall CHCC purpose of protecting and preserving international cultural property. We previously reported that by developing goals and outcomes based on what the group shares in common, a collaborative group can shape its own vision and define its own purpose. When articulated and understood by the members of a group, this shared purpose provides people with a reason to participate in the process. Clarity of roles and responsibilities. While participants seemed to understand each other’s activities related to international cultural property matters, CHCC participants have not clarified each participating entity’s roles and responsibilities on the committee and its working groups, a practice we have identified as helpful in enhancing collaboration. CHCC participants have discussed cultural property initiatives that each entity was carrying out, and the Partnerships and Public Awareness working group is maintaining a list of its participants’ activities. 
However, we found that there was no consensus and no clear delineation of the specific roles and responsibilities of the entities on the CHCC and its working groups. For example, representatives of one entity leading a working group described their role in initiating working group meetings, and planning and circulating meeting agendas. However, most CHCC participants said that they are unclear about their specific roles and responsibilities for CHCC, including DOD and USAID, whose representatives on the CHCC were unable to describe their roles and responsibilities on the full committee and its working groups. Furthermore, the CHCC has not clarified the roles and responsibilities of the additional federal entities that participated in one of the CHCC’s working groups, including whether these entities would be members of the full committee or participants of only one CHCC working group. The CHCC’s Partnerships and Public Awareness working group invited several other federal entities to attend its May 2017 meeting, but these entities’ roles and responsibilities in the working group had not been identified. As the lead entity of this working group, Smithsonian officials said that they did not know whether the additional participants of or invitees to the May 2017 Partnerships and Public Awareness meeting would be included as members of the full committee. The CHCC full committee meeting in June 2017 did not include these additional federal entities as invitees. According to some CHCC participants, the CHCC and its working groups spent their first year of operation working to set up CHCC meetings and determining which invitees to ask to meetings. As a result of prioritizing these activities and allowing the CHCC to take on a more fluid process, some participants told us that the committee and its working groups have yet to clarify the roles and responsibilities of its participants. 
However, the CHCC and its working groups could benefit from defining and agreeing upon participants’ respective roles and responsibilities as well as steps for decision making when working on protecting and preserving international cultural property. Without such clarity, CHCC participants could encounter barriers in organizing their joint and individual efforts on the committee and its working groups as the CHCC continues to operate beyond its first year of formation. Written guidance and agreements. Participants have not documented their agreement regarding how the CHCC will collaborate, including the short-term and long-term goals of the committee and its working groups, as well as members’ roles and responsibilities. The CHCC and at least one of its working groups have produced written notes after their meetings. For example, the Smithsonian produced a document after the May 2017 Partnership and Public Awareness working group meeting that provided details on the activities of its participants, upcoming public events, and a list of task assignments for its participants. However, these written documents did not discuss any collaborative strategies within the CHCC and its working groups. State officials said that CHCC participants have not documented written guidance and agreements for the committee and its working groups because it was too early in the formation of the CHCC to make these determinations. Our prior work on key practices for collaboration found that articulating a common outcome and roles and responsibilities in a written document is a powerful tool in collaboration. The destruction of international cultural property causes irreversible damage to our shared heritage, and the trafficking of cultural property could fund ISIS terrorist activities. To protect cultural property from Iraq and Syria at risk of looting and smuggling, the U.S.
government has imposed import restrictions, and DHS and DOJ have taken a number of actions to enforce the laws and regulations on restricted cultural property from these countries. Further, a law passed to protect and preserve international cultural property included a sense of Congress that the President should establish an interagency coordinating committee to coordinate the efforts of the executive branch. State has taken steps to establish the CHCC, and the committee’s efforts reflect several of the key practices that can enhance and strengthen collaboration. However, the CHCC could benefit from following additional practices as it moves beyond its first year. These practices include developing goals for the CHCC and all of its working groups, clarifying the roles and responsibilities of the committee’s and its working groups’ participants, and documenting these agreements among the participants. CHCC participants have noted that the CHCC is still in the early stages of establishment and have, therefore, yet to follow these additional collaboration practices. Given participants’ receptivity and commitment to the committee’s work, the CHCC could augment its current efforts as it moves forward. Using key collaboration practices could help the CHCC’s members to work collectively to better understand and respond to the destruction, looting, and trafficking of international cultural property, especially as such activities may persist with the ongoing instability in Iraq and Syria. We are making a total of three recommendations to State. Specifically: The Assistant Secretary of State for Educational and Cultural Affairs should work with other U.S. federal entities participating in the CHCC to develop goals for the CHCC and its working groups. (Recommendation 1) The Assistant Secretary of State for Educational and Cultural Affairs should work with other U.S. 
federal entities participating in the CHCC to clarify participants’ roles and responsibilities in the CHCC and its working groups. (Recommendation 2) The Assistant Secretary of State for Educational and Cultural Affairs should work with other U.S. federal entities participating in the CHCC to document agreement about how the CHCC and its working groups will collaborate, such as their goals and participants’ roles and responsibilities. (Recommendation 3) We provided a draft copy of this report to State, DHS, DOJ, the Treasury, DOD, the Interior, USAID, the NEH, and the Smithsonian for review and comments. State provided written comments that are reproduced in appendix II. State, DHS, the Treasury, the Interior, and the Smithsonian also provided technical comments, which we incorporated as appropriate. DOJ, DOD, USAID, and the NEH had no comments. In its written comments on our report, State concurred with all three of our recommendations. State noted its agreement with the need for outcomes and accountability and stated that CHCC working groups aim to draft mission statements and objectives. Following the adoption of such statements and objectives, State also foresees clarifying roles and responsibilities of CHCC participants, and documenting such goals through a memorandum of understanding. We are sending copies of this report to the appropriate congressional committees; the Secretaries of State, Homeland Security, the Treasury, Defense, the Interior, and the Smithsonian; the Attorney General of the United States; the Administrator of USAID; the Chairman of the NEH; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9601, or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III. At the first formal Cultural Heritage Coordinating Committee (CHCC) meeting, participants included officials from nine U.S. federal entities: the Departments of State, Homeland Security, Justice, the Treasury, Defense, and the Interior; the U.S. Agency for International Development; the National Endowment for the Humanities; and the Smithsonian Institution. Figures 12, 13, and 14 show these entities’ reported activities related to protecting cultural property. In addition to the contact named above, Elizabeth Repko (Assistant Director), Kim Frankena (Assistant Director), Victoria Lin (Analyst-in-Charge), and Diana Blumenfeld made key contributions to this report. The team benefited from the expert advice and assistance of Lynn Cothern, Neil Doherty, Justin Fisher, Grace Lui, Marc Molino, and Sarah Veale.

The conflicts in Iraq and Syria that began in 2003 and 2011, respectively, have led to the destruction, looting, and trafficking of cultural property by Islamic State of Iraq and Syria (ISIS) and others. The United Nations called these events the worst cultural heritage crisis since World War II and reported that ISIS has used the sale of looted Iraqi and Syrian cultural property to support its terrorist activities. Congress authorized and the President imposed import restrictions on archaeological or ethnological material of Iraq in 2008 and Syria in 2016. The act directing Syrian restrictions also includes a sense of Congress that the President should establish an interagency committee to coordinate executive branch efforts on international cultural property protection. GAO was asked to review U.S. efforts to protect Iraqi and Syrian cultural property. This report examines (1) actions DHS and DOJ have taken to enforce U.S. laws and regulations involving restrictions on such property and (2) the extent to which CHCC participants collaborate to protect cultural property.
GAO reviewed documents related to 17 DHS- or DOJ-led cultural property investigations, interviewed officials, and assessed the extent of CHCC collaboration using GAO's key practices. GAO's examination of 17 cultural property investigations shows that the Departments of Homeland Security (DHS) and Justice (DOJ) have taken a number of actions to enforce laws and regulations related to restricted Iraqi and Syrian cultural property. DHS's Customs and Border Protection (CBP) has taken actions such as monitoring shipments and detaining and seizing suspected items of restricted cultural property. CBP coordinates with DHS's Immigration and Customs Enforcement (ICE), which investigates objects; detains, seizes, and obtains forfeiture of items found to be in violation of U.S. law; and repatriates cultural property to its rightful owner. For example, ICE conducted an investigation into an Iraqi ceremonial sword for sale at an auction in the United States and then seized, obtained forfeiture of, and repatriated it to Iraq in July 2013 (see fig.). DOJ actions to address restricted Iraqi and Syrian cultural property include activities by the Federal Bureau of Investigation (FBI) and DOJ attorneys to investigate and prosecute criminal violations, as well as actions related to the forfeiture and repatriation of cultural property items.

Ceremonial Sword Repatriated to Iraq by Department of Homeland Security in 2013

The Cultural Heritage Coordinating Committee (CHCC), established in November 2016 with nine participating federal entities and led by the Department of State (State), has followed several of the key collaboration practices identified by GAO but has not demonstrated others. GAO has previously identified key practices for organizations to enhance and sustain their collaborative efforts.
The CHCC has followed key practices of identifying leadership; including relevant participants; bridging organizational cultures, such as agreeing on common terminology; and addressing resource issues. Most participants also reported that the CHCC was a helpful forum for sharing information. However, the CHCC has not fully demonstrated other key practices for enhancing collaboration. First, the CHCC and two of its three working groups have not developed short- and long-term goals. Moreover, the CHCC has not clarified participants' roles and responsibilities on the committee or its working groups. Finally, CHCC participants have not documented agreements related to collaboration, such as developing written materials to articulate common objectives. Incorporating these practices could help participants work collectively, focus on common goals, and organize joint and individual efforts to protect cultural property as the CHCC continues its efforts beyond its first year. GAO recommends that State work with other CHCC participants to (1) develop goals, (2) clarify participants' roles and responsibilities, and (3) document collaborative agreement in the CHCC and its working groups. State concurs with GAO's recommendations. |
The statutes that create federal programs may contain requirements that recipients must comply with in order to receive federal assistance. In addition, when Congress enacts a law establishing a program, it may authorize or direct a federal agency to develop and issue regulations to implement it. Congress may impose specific requirements in the statute; alternatively, it may set general parameters and the implementing agency may then issue regulations further clarifying the requirements. Most federal agencies use the informal rulemaking procedures described in the Administrative Procedure Act. Those procedures, also known as “notice-and-comment” rulemaking, generally include publishing proposed regulations for public comment before issuing final rules. Comments from the public, particularly parties that will be affected by the proposed regulations, can provide agencies with valuable information on the regulation’s potential effects. In addition to regulations, agencies also use guidance and other documents to provide advice and information to entities affected by government programs. When agencies issue guidance documents, the Administrative Procedure Act generally allows them to forgo notice-and-comment procedures. In addition, agencies must comply with other rulemaking requirements, some of which direct agencies to estimate the burden of proposed regulations or assess their potential costs and benefits (see table 1). OMB performs many functions related to federal agency rulemaking. For example, under Executive Order 12866, OMB reviews agency rulemaking to ensure that regulations are consistent with applicable law, the President’s priorities, and the principles in executive orders. OMB also ensures that decisions made by one agency do not conflict with the policies or actions taken or planned by another agency and provides guidance to agencies.
In 2003, for example, OMB revised guidelines for agencies to use when they assess the regulatory impact of economically significant regulations and provided guidance for how agencies can improve how they evaluate the benefits and costs of regulations. Title I of ESEA, as amended, provides funding to states and school districts to expand and improve educational programs in schools with high concentrations of students from low-income families. Title I funds may be used for instruction and other supportive services for disadvantaged students to increase their achievement and help them meet challenging state academic standards. To receive Title I funds, states must comply with certain requirements. For example, states must develop (1) academic assessments, to provide information on student achievement, and (2) an accountability system, to ensure that schools are making adequate yearly progress (AYP). In addition, federal civil rights laws prohibit entities that receive federal funds from discriminating against students based on their race, color or national origin, sex, disability, or other characteristics. (Separately, to implement IDEA monitoring requirements, Education developed 20 indicators; examples include the percent of youths with IEPs who graduate with a regular diploma and the percent who drop out of high school. For more information on the priority areas and indicators, see Education’s web site on the IDEA Part B State Performance Plan and Annual Performance Report: http://www2.ed.gov/policy/speced/guid/idea/bapr/index.html, accessed June 19, 2012.) Other federal agencies also administer grant programs and issue associated regulations with which states and school districts must comply. For example, USDA has issued regulations and guidance to states and school districts to implement the national school meals programs, which provide federal assistance to help provide nutritionally balanced reduced-price or free meals (breakfast, lunch, and snacks) to low-income students.
These programs, in part, aim to address the adverse effects that inadequate nutrition can have on children’s learning capacity and school performance. In fiscal year 2010, almost 32 million students participated in the largest school meal program, the National School Lunch Program. The Healthy, Hunger-Free Kids Act of 2010 revised some requirements for school meal programs, most notably by requiring USDA to update nutrition standards for meals served through the National School Lunch and School Breakfast programs. Key education stakeholders we interviewed said many federal requirements related to ESEA Title I, IDEA Part B, or national school meals programs were burdensome to states and school districts. For example, representatives from the National Governors Association identified multiple federal requirements as burdensome, such as the requirement for school districts to spend 20 percent of their Title I allocation on specified school improvement activities, including Supplemental Educational Services (SES), and the requirement to provide Title I services on an equitable basis to eligible children attending private school. (See appendix I for a description, including the sources, of these requirements as well as all other requirements cited throughout our report.) Also, representatives from the Council of Chief State School Officers told us of a study they conducted in which they found that states must comply with numerous duplicative reporting requirements. Specifically, their study found that states are required to report over 200 data elements multiple times to Education through collections such as the ESEA Consolidated State Performance Report (CSPR), the IDEA Part B Annual Performance Report, and the CRDC.
Representatives from other organizations we interviewed—such as the American Association of School Administrators, the Council of the Great City Schools, and the National Rural Education Association—identified other federal requirements as burdensome for states and school districts. These requirements include data collection and reporting requirements for IDEA Part B and monitoring of SES providers under ESEA. Officials we interviewed in 3 states and 12 school districts reported 17 federal requirements as most burdensome, and many of these were the same requirements identified by key stakeholders. The 17 requirements included in this report met the following criteria: (1) they were identified as burdensome by more than one state or school district; (2) they could potentially impact all schools, districts, or states; and (3) they are mandatory requirements established by Congress or a federal agency. Of these 17 requirements, 7 relate to ESEA Title I, 3 to IDEA Part B, and 4 to the national school meals programs. For example, multiple state and district officials identified certain data collection and reporting requirements for IDEA Part B, referred to as the IDEA Indicators, as burdensome. Education uses these indicators to monitor states on key priority areas that are identified in the IDEA, such as ensuring that students with disabilities receive a free appropriate public education. The remaining 3 requirements relate to more than one federal grant program. For example, as required by the Federal Funding Accountability and Transparency Act of 2006 and OMB guidance, recipients of federal funds totaling $25,000 or more must report basic information on awards, such as the name and location of the entity receiving the award, and the award amount. 
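The Transparency Act requirement described above reduces to a simple dollar threshold test. The following is an illustrative sketch only; the field names and data are hypothetical, not the actual federal reporting schema:

```python
# Illustrative sketch of the $25,000 reporting threshold described above.
# Field names (recipient, location, amount) are hypothetical.
FFATA_THRESHOLD = 25_000  # awards of $25,000 or more must be reported

def awards_requiring_report(awards):
    """Return the basic information that must be reported for each
    award at or above the threshold."""
    return [
        {"recipient": a["recipient"], "location": a["location"], "amount": a["amount"]}
        for a in awards
        if a["amount"] >= FFATA_THRESHOLD
    ]

awards = [
    {"recipient": "District A", "location": "KS", "amount": 24_999},
    {"recipient": "District B", "location": "MA", "amount": 25_000},
]
print(awards_requiring_report(awards))  # only District B meets the threshold
```

Because the threshold is "totaling $25,000 or more," an award of exactly $25,000 is reportable while one dollar less is not, which is why the comparison above is inclusive.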
As shown in figure 1, state and district officials we interviewed described many ways in which the identified requirements were burdensome to them: complicated, time-intensive, paperwork-intensive, resource-intensive, duplicative, and vague. Officials characterized 16 of the 17 requirements as being burdensome in multiple ways. For example, officials told us that collecting data for the IDEA Indicators requires a significant amount of time and resources because of the volume of data reported. In addition, these officials said that Education routinely changes what data is collected, which one official noted resulted in costly modifications to state and local data systems. All of the requirements identified by state and school district officials as most burdensome were characterized as being complicated, time-intensive, or both. Officials described 15 of the 17 burdensome requirements as complicated, but also identified some benefits, as illustrated by the following requirements: SES provider approval and monitoring. Under ESEA Title I, for schools that do not make AYP for 3 years, school districts must offer SES, such as tutoring and other academic enrichment activities, from state-approved providers selected by the parents of eligible students. State educational agencies must approve SES service providers and develop, implement, and publicly report on standards and techniques for monitoring the quality and effectiveness of their services. To approve providers, states told us they process applications, develop lists of approved providers, and address complaints from applicants who were not approved. A state official said that monitoring providers can also be challenging. For example, the official said it is difficult to know which providers are effective and that it is unclear whether SES has resulted in improvements in student achievement. School district officials told us they also struggle with their responsibilities under these requirements.
School districts must notify parents about the availability of services annually and enter into a service agreement with any approved provider selected by parents of an eligible student. Districts must work with providers selected by parents, which, according to one district official, is burdensome because the districts have no control over the services provided. The official said her district employs teachers to monitor the SES providers and that in some cases the district has had problems with providers. Another district official said some of the challenges his district faced include providers not responding to the district in a timely manner, not submitting timely invoices, and submitting poorly crafted student learning plans. In contrast, according to a 2008 report, most parents of children receiving SES are satisfied with those services, which may be because parents are able to select service providers. In addition, one official we interviewed said that a benefit of SES is that students receive extended learning time. However, officials indicated they would like certain improvements. For example, one district official indicated she would like more input into which providers to use and how to monitor the services provided. IEP processing. Under IDEA Part B, for each eligible student with a disability, an IEP must be in place at the beginning of each school year. The IEP must be developed, reviewed, and revised in accordance with a number of requirements. For example, the IEP must include information about the child’s educational performance and goals, and the special education and related services that will be provided. The IEP team (consisting of, at a minimum, the parents, a regular education teacher, a special education teacher, a representative of the school district, and the child, when appropriate) must consider specific criteria when developing, reviewing, or revising each child’s IEP.
Officials described this multistep process as complicated, in part because of unclear terms in the IEP paperwork. For example, an official told us that special education service providers on the IEP team often misinterpret questions on the IEP form regarding the student’s performance and progress. Another official said the paperwork required for an IEP meeting takes 2 to 3 hours to complete and the meeting itself takes another 2 to 3 hours. Although meetings can be consolidated or held via conference call, this official said that each of these time commitments takes away from classroom instruction time and provision of support services. Despite these challenges, IEPs provide benefits for students with disabilities. For example, one advocacy group noted that the IEP contains goals and includes progress reporting for parents so that the IEP team will know whether or not the child is actually benefiting from his or her educational program. Also, one district official we interviewed said that having IEPs online has allowed special education administrators to give immediate feedback to teachers and other special education service providers on changes to students’ educational needs. Other officials acknowledged that these requirements are designed to ensure that parental and student rights are protected, but believe those rights can be protected in a less-complicated way. Officials described 13 of the 17 requirements as time-intensive. For example, officials said disseminating state and district report cards is time-intensive, and according to one official this is due to the large amount of time devoted to developing data for the reports and printing and mailing them. States and districts that receive ESEA Title I funds are required to disseminate annual report cards that include, among other information, student achievement data at each proficiency level on the state academic assessments, both in the aggregate for all students and disaggregated by specified subgroups.
They also include information on the performance of school districts in making AYP and schools identified for school improvement, as well as the professional qualifications of teachers in the state. According to Education officials, state and district report cards can also include state-required information. To comply with these and other ESEA requirements, states maintain a large amount of student demographic and assessment data, which they use to provide information about the academic progress of students in the schools and districts. An official also noted that processes for collecting, verifying, and reporting these data take large amounts of state and local officials’ time and resources. In addition, these report cards can be quite long; one state official said report cards for districts in his state can be 20 to 30 pages in length. A district official we interviewed recognized that the information on state and district report cards is important to help inform parents about the academic performance of their children’s school. However, officials suggested ways to streamline the report cards, including that states and districts be allowed to distribute one page of data highlights along with a reference to where the full report is available publicly, such as online or in the school library. In its guidance on state and district report cards, Education stated that because not all parents and members of the public have access to the Internet, posting the report cards on the Internet alone is not sufficient to meet the dissemination requirement. Several of the most burdensome requirements identified are reporting requirements, which state and district officials told us contained duplicative data elements. Specifically, officials said some data collections may require the same or similar data elements to be reported multiple times.
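The duplication officials describe amounts to the same data element appearing in more than one collection’s schema, which can be pictured as an overlap between sets. A minimal sketch, using collection names from this report but illustrative, not actual, element lists:

```python
# Minimal sketch: finding data elements requested by more than one
# collection. Collection names come from the report; the element
# lists are illustrative, not the actual schemas.
collections = {
    "ESEA CSPR": {"graduation_rate_swd", "enrollment", "assessment_results"},
    "IDEA Annual Performance Report": {"graduation_rate_swd", "dropout_rate_swd"},
    "CRDC": {"enrollment", "suspensions", "expulsions"},
}

def duplicated_elements(collections):
    """Map each data element to the collections that request it,
    keeping only elements requested more than once."""
    requested_by = {}
    for name, elements in collections.items():
        for element in elements:
            requested_by.setdefault(element, []).append(name)
    return {e: sorted(names) for e, names in requested_by.items() if len(names) > 1}

print(duplicated_elements(collections))
# graduation_rate_swd is requested by both the CSPR and the IDEA report;
# enrollment is requested by both the CSPR and the CRDC
```

An inventory of this kind, run over the real schemas, is essentially what a broader review of duplicative reporting requirements would produce.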
For example, through the CSPR used for ESEA reporting as well as the Annual Performance Report used for IDEA reporting, states are required to report graduation and dropout rates for students with disabilities. Additionally, officials from eight school districts told us that the CRDC required them to provide data directly to Education that had previously been submitted to the state. Examples of data elements reported as duplicative by district officials include student enrollment; testing; and discipline, which includes suspensions and expulsions. State and school district officials characterized other burdensome federal requirements as paperwork-intensive (7 of 17), resource-intensive (6 of 17), and vague (4 of 17). For example, officials said time distribution requirements, established by OMB, are paperwork-intensive. According to these requirements, in order for state and local grant recipients to use federal funds to pay the salaries of their employees who perform activities under multiple grants, they must maintain documentation of the employee’s activities. One district said that IDEA funding is used to pay for teachers working directly with students with disabilities, but because these students are included in general education classrooms it is difficult to document exactly how much time is spent working with these students. Two officials we interviewed said that complying with time distribution requirements provided no benefit to them. Officials described requirements to administer academic assessments as resource-intensive due to the costs needed to establish and maintain appropriate data systems. However, one state official noted that, as a result of the requirements, assessment data on student performance can be provided immediately to teachers and administrators. Also, some officials said they were uncertain about requirements to implement the Healthy, Hunger-Free Kids Act of 2010, because, at the time of our interviews, some of the requirements had not gone into effect.
According to key stakeholders and state and school district officials we interviewed, states and districts do not generally collect information about the cost to comply with federal requirements. Stakeholders we interviewed said there were many reasons that states and school districts generally do not collect data on compliance costs. For example, some stakeholders told us most states and districts do not have the capacity to track spending on compliance activities. In addition, three stakeholders told us that school districts often have difficulty determining whether requirements are federal requirements or state requirements, and may not be able to separately track costs associated with federal requirements. Information provided by the states and school districts we interviewed was generally consistent with views from these key stakeholders. Specifically, state and school district officials we interviewed said they do not collect information about the costs their agencies incur to comply with federal requirements, for a variety of reasons, including: (1) capacity limitations, such as limited staff and heavy workloads; (2) states and the federal government do not require them to report it; (3) it is too burdensome to collect the information; and (4) the information is not useful for improving student achievement or program administration and evaluation (see figure 2). When we asked state and district officials whether they could provide cost estimates on one requirement, most of them said they were unable to do so, and the estimates that were provided did not meet our criteria to include in the report. Education and USDA developed plans, known as retrospective analysis plans, to identify and address burdensome regulations, as required by Executive Order 13563.
The order required agencies to develop plans to periodically review their existing significant regulations and determine whether these regulations should be modified, streamlined, expanded, or repealed to make the agencies’ regulatory programs more effective or less burdensome. Consistent with the order’s emphasis on public participation in the rulemaking process, OMB encouraged agencies to obtain public input on their plans and make their final plans available to the public. Education’s final plan, issued in August 2011, discussed its efforts to reduce the burden on states and school districts and identified a preliminary list of regulatory provisions for future review, including IDEA reporting requirements, which were mentioned as burdensome by several stakeholders and state and school district officials we interviewed. Based on their review, Education officials told us they planned to consolidate several separate IDEA Part B data collections and include them in EDFacts beginning in October 2012. Education also said it would survey departmental program offices to ask program personnel to identify requirements they consider to be burdensome. However, department officials told us this survey has been delayed due to other priorities within the department, and they now expect to administer it in the fall of 2012. ESEA authorizes the Secretary of Education to waive, with certain exceptions, any statutory or regulatory requirement of ESEA for states or school districts that receive ESEA funds and submit a waiver request that meets statutory requirements (20 U.S.C. § 7861). Under the ESEA, waivers can be effective for up to 4 years, although they may be extended. Education currently offers waivers from 10 ESEA provisions, including the timeline for 100 percent proficiency on state assessments and implementation of school improvement requirements.
States that choose to apply must request waivers from 10 provisions and may choose to request waivers from an additional 3 provisions. (For more information on the waivers, see http://www.ed.gov/esea/flexibility, accessed June 19, 2012.) To receive waivers, states must meet several principles, including implementing an accountability system that distinguishes high-performing districts and schools from those that are lower-performing, and committing to create and implement teacher and principal evaluation and support systems that will be used to continually improve instruction and assess performance using at least three performance levels. After receiving and reviewing waiver requests, Education approved waivers for 19 states, and, as of May 2012, was reviewing the requests of 17 other states and the District of Columbia. The waivers are generally for a 2-year period, beginning in the 2012-2013 school year. The waivers may be extended, but Education has not specified the length of time an extension would be in effect. Of the three states included in our review, Education has approved requests from Massachusetts and Ohio and, as of May 2012, is considering one from Kansas. ESEA waivers may address some requirements officials and stakeholders identified as burdensome. For example, as a result of obtaining a waiver, Massachusetts will no longer require that school districts implement SES requirements. These exemptions are beneficial only to states that receive a waiver; states not approved for waivers must still comply with ESEA requirements. According to Education officials, the waivers may provide relief to many school districts by reducing certain reporting requirements and requirements to provide SES, among other provisions. However, we believe it is too soon to know whether states and school districts will encounter difficulties in implementing these waivers or what the ultimate benefits may be in terms of reducing regulatory burden.
In prior work we reported that states faced challenges implementing multiple reforms and, as a result, some reform efforts have been delayed. Similar to these other efforts, states with ESEA waivers may face challenges taking the steps needed to implement the required principles. As stated in its retrospective analysis plan, USDA implemented the direct certification process, which streamlined the approval process for free school meals. Direct certification is a means to determine a child’s eligibility for free school meals based on whether the child receives benefits through the Supplemental Nutrition Assistance Program, among other criteria. For example, students from families who receive nutrition assistance through this program are eligible for free school meals without completing the school meals application. In addition, in January 2012, USDA issued a final rule implementing revisions to nutrition standards required by the Healthy, Hunger-Free Kids Act of 2010 that contained changes from the proposed rule. Among the provisions that may assist school districts in implementing the new requirements, the final rule gives school districts more time to make changes to school breakfast menus. In addition, in accordance with legislation passed in 2012, USDA removed a proposed limit on the amount of starchy vegetables that could be served. As a result of these and other changes and lower estimates for the cost of food, USDA estimates the cost of complying with the new rule will be about $3.2 billion over the next 5 years, instead of the $6.8 billion cited in the proposed rule (77 Fed. Reg. 11,778 (Feb. 28, 2012)). OMB has also announced a pilot initiative, in collaboration with Education, to reduce the burden of time distribution reports that school personnel must complete. According to Education, states, districts, and other stakeholders have repeatedly identified time distribution reports, required by OMB, as a source of administrative burden.
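The time distribution requirement ties salary charges to documented time across funding sources. A hedged sketch of the underlying arithmetic (the grant names and figures below are hypothetical, for illustration only):

```python
# Illustrative sketch of the time distribution concept: to charge an
# employee's salary to multiple grants, the recipient documents hours
# spent on each and allocates the salary proportionally. Grant names
# and figures are hypothetical.
def allocate_salary(salary, hours_by_grant):
    """Allocate a salary across funding sources in proportion to
    documented hours."""
    total = sum(hours_by_grant.values())
    return {grant: salary * hours / total
            for grant, hours in hours_by_grant.items()}

# A teacher paid $50,000 who documents 600 hours on IDEA-funded work
# and 900 hours on generally funded duties:
shares = allocate_salary(50_000, {"IDEA Part B": 600, "General fund": 900})
print(shares)  # IDEA Part B: 20000.0, General fund: 30000.0
```

The arithmetic itself is trivial; the burden officials describe lies in producing the hour counts, which is exactly the documentation the inclusion of students with disabilities in general education classrooms makes hard to generate.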
Education officials told us they solicited feedback from stakeholders as they were designing this initiative. While the OMB notice did not include a timeline for this pilot, Education officials told us they expect to issue a notice to invite states and school districts to participate in the pilot later in 2012. Education has taken some action to address duplicative reporting requirements. For example, department officials removed items from the 2009-2010 CRDC that were already collected by the department under IDEA. According to Education officials, data on how students complete high school is no longer required in the CRDC, because Education already collects that information through its EDFacts data collection. Education officials also told us of an effort to consolidate district-level ESEA and IDEA reports and implement single file reporting in the 2011-2012 school year. In an effort to reduce duplicative reporting by school districts, Education officials said they proposed that states report data required by the CRDC on behalf of their districts. However, according to department officials, only Florida has done so. Despite these efforts, department officials generally disagree with stakeholders and state and district officials about the extent to which duplicative reporting requirements exist and the burden they impose. In its July 2011 letter to Education regarding the department’s preliminary retrospective analysis plan, the Council of Chief State School Officers wrote of its on-going concerns about such requirements in the CSPR, CRDC, and other data collections. The National Title I Association and the National Association of State Directors of Special Education expressed similar concerns to the department. When we discussed the issues raised in these letters with Education officials, they told us there are few duplicative reporting requirements and that the burden they impose is minimal.
For example, states are to report the graduation rate for students with disabilities in the ESEA CSPR and the IDEA Annual Performance Report and possibly other reports. However, Education officials said states’ reporting these data twice, in their view, is not burdensome, because both reports use the same data. They also said that similar reporting requirements may be viewed as duplicative by state and district officials. For example, states are required to report not only a graduation rate for students with disabilities, but also a program completion rate, which includes students with disabilities who finish high school but do not graduate. Education also collects completion data through another departmental data collection, the Common Core of Data. However, Education officials said these data are not duplicative, because they measure different ways students finish high school. We asked Education officials why, in response to comments they received on their draft retrospective analysis plan, they did not include a broader effort to identify duplicative reporting requirements in their final plan. In response, they said Executive Order 13563 (which required the department to develop the plan) focused on regulations and, as such, any reporting requirements based in statute would have been outside the scope of the order. Education may be unable to address certain burdensome requirements in the absence of legislative changes. These include, for example, certain requirements related to IDEA Indicators and transitioning preschool children with disabilities into IDEA Part B programs as well as requirements not addressed through ESEA waivers. IDEA indicators. IDEA requires Education to monitor states and states to monitor school districts using indicators in each of three specified priority areas. In accordance with this requirement, Education has established 20 indicators under IDEA Part B.
In October 2011, Education published a Federal Register notice seeking public comments on proposed changes to the IDEA Part B data collection. Education said it planned to eliminate two Part B indicators, since states report data on those indicators in other data collections. In response to the notice, several commenters recommended that the department eliminate many other indicators, but the department did not do so; among other reasons, the department said many of the indicators are required by the IDEA. In addition, Education withdrew other modifications it had proposed to the data collection in response to input that those changes would actually increase the burden on states and districts. Education may continue to make modifications to the IDEA data collection in future years. However, Education lacks authority to eliminate certain indicators on priority areas that are required by statute. Transition of preschool students with disabilities from the IDEA Part C program to the IDEA Part B program. Every state that receives IDEA funds must have in effect policies and procedures to ensure that an IEP (or an individualized family service plan, if applicable) has been developed and is implemented by the third birthday for children participating in the IDEA Part C program who will transition into the IDEA Part B program. Two district officials told us that the transition requirements impose a burden on them, since there is no flexibility, even in the case of emergencies or other extenuating circumstances. Officials in one district told us that failure to comply with the requirement to have the IEP done by the child’s third birthday, by even one day, renders the school district out of compliance with this requirement. To comply with this requirement, officials in this district said that they begin the transition process with an assessment about 6 months in advance even though it would be better to assess the child as close to their third birthday as possible. 
(They explained that a child assessed when he or she is two and a half years old may need special education services, but, since children change more rapidly when they are young, it is possible they may not need services by the time they are three years old.) However, because the third birthday deadline is established by statute, Education lacks authority to provide exceptions to states and school districts. Requirements not addressed through ESEA waivers. Several of the ESEA Title I requirements identified as burdensome by states and school districts are also required by statute. For example, the statute specifies certain information that must be included in state and district report cards and requires that school districts spend 20 percent of their Title I allocation on SES and school choice-related transportation, unless a lesser amount is needed. Although Education does not have the authority to modify these statutory requirements, it has used its waiver authority to issue waivers exempting states and their districts from the SES and school choice requirements and from some of the state and district report card requirements. Other than offering these waivers, however, Education does not have the authority to change the underlying statute, so states and districts must still comply with the statutory requirements to the extent they are not covered by a waiver. Recent government-wide initiatives have highlighted the need to reduce the burden faced by states and school districts in complying with federal grant requirements. While stakeholders and state and district officials generally agree that requirements are necessary to ensure program integrity, transparency, and fair and equal educational opportunities for all students, there is also acknowledgement that states and districts spend considerable time and resources complying with requirements.
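The third-birthday transition deadline discussed earlier in this section is a strict calendar test: an IEP must be in place by the child’s third birthday, with no grace period. An illustrative sketch of that comparison (the helper functions are hypothetical, not part of any Education system):

```python
from datetime import date

def third_birthday(birth_date: date) -> date:
    """Deadline by which an IEP must be in place for a child
    transitioning from IDEA Part C to Part B."""
    try:
        return birth_date.replace(year=birth_date.year + 3)
    except ValueError:
        # A Feb 29 birthday rolls to Mar 1 in a non-leap target year.
        return date(birth_date.year + 3, 3, 1)

def in_compliance(birth_date: date, iep_in_place: date) -> bool:
    """Missing the deadline by even one day is noncompliance,
    as the district officials described."""
    return iep_in_place <= third_birthday(birth_date)

print(in_compliance(date(2009, 6, 15), date(2012, 6, 15)))  # True: on the birthday
print(in_compliance(date(2009, 6, 15), date(2012, 6, 16)))  # False: one day late
```

The one-day example mirrors the officials’ point: the statute leaves no room for extenuating circumstances, which is why districts start assessments roughly 6 months early.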
Education has taken some steps to alleviate burden on states and districts while, at the same time, ensuring these entities achieve program goals. Despite these efforts, additional in-depth analysis and greater collaboration between Education and key stakeholders are needed so that states and districts do not waste resources implementing overly complex processes or reporting data multiple times. Education can work with interested parties to identify requirements that can be modified or eliminated without affecting program integrity. Education cannot, however, change some requirements that states and districts find burdensome, because they are specified by statute. In these cases, statutory changes would be needed. Finding the appropriate balance between program goals and compliance can be difficult, but maintaining requirements that are unnecessary and burdensome can hinder education reform efforts. We recommend that the Secretary of Education take additional steps to address duplicative reporting and data collection efforts across major programs, such as ESEA Title I and IDEA Part B, as well as other efforts, such as the Civil Rights Data Collection. For example, Education could work with stakeholders to better understand and address their concerns and review reporting requirements to identify specific data elements that are duplicative. In addition, we recommend that the Secretary build on these efforts by identifying unnecessarily burdensome statutory requirements and developing legislative proposals to help reduce or eliminate the burden these requirements impose on states and districts. We provided a draft copy of this report to Education, USDA, and OMB for review and comment. Education’s comments are reproduced in appendix II. Education generally agreed with our recommendations.
In particular, Education agreed that it should take additional steps to address duplicative reporting and data collection efforts that are not statutorily required and said it believes additional efficiencies can be achieved in its data collections. Education noted that some data elements are required under various program statutes and said it will work with Congress on reauthorization of key laws, such as the ESEA and IDEA, to address duplication or the appearance of duplication resulting from those requirements. Education also acknowledged the importance of collaborating with stakeholders whenever the department develops regulations, such as data reporting requirements. Education and USDA provided technical comments on our report, which we incorporated as appropriate. OMB did not have any comments on our report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Education and Agriculture, the Director of OMB, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Table 2 lists the 17 federal requirements identified as most burdensome by the officials we interviewed in 3 state educational agencies and 12 school districts.
Requirements are grouped by program: Elementary and Secondary Education Act (ESEA) Title I, Part A; Individuals with Disabilities Education Act (IDEA) Part B; national school meals programs, including the National School Lunch Program and the School Breakfast Program; and other requirements related to the receipt of federal funds. The summaries and cited provisions for each requirement represent the burdens described in our interviews; therefore, they are not intended to be complete descriptions of each requirement. Additional provisions related to these requirements may apply. In some cases, a requirement may have multiple sources, such as where statutory requirements are further interpreted in a regulation or guidance document. In addition to the contact named above, the following staff members made important contributions to this report: Elizabeth Morrison, Assistant Director; Jason Palmer, Analyst-in-Charge; Sandra Baxter; Jamila Kennedy; and Amy Spiehler. In addition, Sarah Cornetto and Sheila McCoy provided extensive legal assistance. Jean McSween, Timothy Bober, Phyllis Anderson, and Kathleen Van Gelder provided guidance on the study.

States and school districts receive funding through ESEA, IDEA, and national school meals programs. Some requirements for these programs are intended to help ensure program integrity and transparency, among other purposes, but questions have been raised about whether some federal requirements place an undue burden on states and school districts. GAO was asked to (1) describe federal requirements identified as the most burdensome by selected states and school districts and other stakeholders, (2) describe information states and school districts collect on the cost of complying with those requirements, and (3) assess federal efforts to reduce or eliminate burdensome requirements. We defined burdensome requirements as those that are viewed as complicated or duplicative, among other things.
We interviewed officials in 3 states and 12 districts and obtained information on the costs to comply with selected requirements. While the results from these interviews are not generalizable, they provide insights into complying with federal requirements. We interviewed external education stakeholders and officials in the Departments of Education and Agriculture and the Office of Management and Budget. Generally consistent with the views of key stakeholders we interviewed, state and school district officials cited 17 federal requirements as most burdensome for them. These requirements were related to the Elementary and Secondary Education Act (ESEA) Title I, Part A; the Individuals with Disabilities Education Act (IDEA) Part B; national school meals programs; or other requirements related to the receipt of federal funds. Officials described the burdens associated with these requirements as complicated, time-intensive, and duplicative, among other things, and characterized most of the requirements as being burdensome in multiple ways. For example, several officials told us that collecting data for IDEA reporting requirements, such as the number of data elements collected, takes a significant amount of time and resources. State and district officials also noted benefits of some requirements, for example, that the process to create individualized education programs can help protect the rights of students with disabilities. For a variety of reasons, states and school districts generally do not collect information about the costs to comply with federal requirements, according to officials we interviewed. For example, state and district officials told us they are not required to report compliance cost data, the data are not useful to them, and collecting the data would be too burdensome, in their view.
Federal agencies have developed plans and are taking other steps to reduce burden, but stakeholders and state and district officials told us about several burdensome requirements that have not been addressed. The Department of Education’s (Education) plan identified regulatory provisions for review, including ones that were mentioned as burdensome in interviews we conducted. In addition, Education granted waivers to some states from certain ESEA requirements, such as offering supplemental educational services to eligible students in certain schools identified for improvement. To receive waivers, states had to describe how they will implement key efforts, such as college and career-ready standards. Despite these efforts, stakeholders and state and district officials said there are potentially duplicative reporting requirements that still need to be addressed. Department officials told us that there are relatively few duplicative reporting requirements and the few that exist present only a small burden on states and districts. In addition, Education’s ability to address the burden associated with some requirements, such as some IDEA provisions, may be limited without statutory changes. GAO recommends that the Secretary of Education take additional steps to address potentially duplicative reporting requirements, such as working with stakeholders to address their concerns, and develop legislative proposals to reduce unnecessarily burdensome statutory requirements. Education generally agreed with our recommendations.
Over the past five decades, mandatory spending has grown as a share of the total federal budget. For example, figure 1 shows that outlays from mandatory programs rose from approximately 49 percent of total federal spending in 1994 to about 54 percent in 2004, and to 60 percent in 2014. This growth is projected to continue at least through fiscal year 2046. Current law requires OMB to calculate the reductions to budgetary resources required each year to ultimately reduce the deficit by at least an additional $1.2 trillion. For fiscal year 2014, BBEDCA directed OMB to calculate a sequestration of mandatory spending, which was effective on October 1, 2013. A percentage reduction, or sequestration rate, is applied to programs, projects, and activities (PPA), which are generally sub-elements within accounts, to achieve the total reduction amount required for the fiscal year. The sequestration rate varies from year to year based on a formula outlined in BBEDCA. The annual reduction amount calculated by OMB ($109.3 billion) is split evenly between the defense and nondefense functions, and then allocated between discretionary appropriations and mandatory spending in each function in proportion to their share of the function. To determine the requisite percentage reduction to nonexempt budget accounts in each function pursuant to BBEDCA, OMB must define the sequestrable base. For fiscal year 2014, the base for mandatory spending was equal to the current law baseline amounts provided in the President’s Budget submission for fiscal year 2014, including unobligated balances in the defense function, and administrative expenses in otherwise exempt accounts. OMB was directed to calculate a sequestration consistent with provisions of sections 251A, 255, and 256 of BBEDCA, which limit or exempt the sequestration of certain budget authority. 
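The allocation arithmetic described above can be sketched in a few lines. In the sketch below, the base amounts are hypothetical placeholders (not actual OMB function totals), and the function name is illustrative:

```python
# Illustrative sketch of the BBEDCA section 251A allocation arithmetic
# described above. The base amounts passed in below are hypothetical
# placeholders, not actual OMB figures.

ANNUAL_REDUCTION = 109.3e9  # total annual reduction required by BBEDCA

def allocate_reduction(discretionary_base, mandatory_base):
    """Split one function's half of the annual reduction between
    discretionary appropriations and mandatory spending in proportion
    to each category's share of the function's sequestrable base, and
    return the resulting mandatory sequestration rate."""
    half = ANNUAL_REDUCTION / 2  # split evenly: defense vs. nondefense
    total_base = discretionary_base + mandatory_base
    discretionary_cut = half * discretionary_base / total_base
    mandatory_cut = half * mandatory_base / total_base
    # The sequestration rate applied to nonexempt PPAs is the required
    # cut divided by the sequestrable base.
    return discretionary_cut, mandatory_cut, mandatory_cut / mandatory_base

# Hypothetical nondefense function: $500 billion discretionary base,
# $260 billion sequestrable mandatory base.
disc_cut, mand_cut, rate = allocate_reduction(500e9, 260e9)
```

Under these placeholder bases, the two cuts sum to half of the $109.3 billion annual target, and the implied mandatory rate comes out near the 7.2 percent nondefense rate discussed later in this report.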
Under BBEDCA, many mandatory programs are exempt from sequestration, and Medicare non-administrative spending (spending to pay for services provided to Medicare beneficiaries) could not be reduced by more than 2 percent. These calculations are issued annually in OMB’s Report to the Congress on the Joint Committee Reductions. OMB provided guidance to agencies primarily through memoranda for heads of executive departments and agencies and other technical assistance. In addition, in July 2014, OMB updated Circular A-11 to include a new Section 100, providing agencies with guidance on sequestration. This added section encouraged agencies to record how sequestration was implemented to maintain consistency from year to year, inform efforts to plan for sequestration in future years, and build institutional knowledge. In fiscal year 2014, the total amount of mandatory budget authority across the federal government was approximately $2.9 trillion, spread across roughly 443 accounts. For each of these accounts, OMB applied the designations outlined in BBEDCA, which labeled certain accounts or activities exempt or subject to special rules, to determine how much budget authority, if any, was subject to sequestration and the relevant sequestration rate for calculating the amount of the reduction. OMB reported the estimated reductions for each account subject to sequestration in the OMB Report to the Congress on the Joint Committee Reductions for Fiscal Year 2014, which was released in the spring of 2013. Since this report was released prior to the start of fiscal year 2014, the report included estimates for accounts with indefinite budget authority and actual amounts for accounts with definite budget authority. We were unable to quantify the actual amount of total sequestered dollars government-wide in fiscal year 2014 because OMB staff said they do not have complete records of actual budget authority or the amount actually sequestered on an account-by-account basis.
Therefore, they cannot aggregate this data. The sequestration procedures established under BBEDCA were designed to serve as a budget enforcement mechanism and reduce the federal budget deficit. The sequestration procedures do not apply to all mandatory spending. Certain budget authority is exempt or subject to special rules. The majority of mandatory budget authority across the federal government is exempt from sequestration. Among the accounts subject to sequestration, OMB calculated reductions based on differing rates ranging from 2 percent to 9.8 percent, as determined under the provisions of BBEDCA. As shown in figure 2, about $2.2 trillion, or approximately 77 percent, of the total estimated government-wide mandatory budget authority in fiscal year 2014 was exempt from sequestration. Applying the corresponding rates to each sequestrable account yielded an estimated target of $19.4 billion in sequestration reductions government-wide in fiscal year 2014. This represented less than 1 percent of the total estimated mandatory budget authority for that year. The estimated $19.4 billion includes $11.2 billion from budget authority sequestered at the 2 percent rate, $7.4 billion from budget authority sequestered at the 7.2 percent rate, and $778 million from budget authority sequestered at the 9.8 percent rate. Every federal account is assigned a “budget function,” which identifies the national priority supported by that account. As shown in figure 3, approximately 58 percent of the $19.4 billion in estimated reductions, or $11.3 billion, came from Medicare. Although BBEDCA limits the sequestration of Medicare and certain other health programs to a rate of 2 percent, Medicare comprises the majority of the budget authority estimated to be sequestered and it is the largest sequestrable national priority. The projected increases in Medicare spending will likely cause Medicare to comprise a larger share of the sequestration reductions over time. 
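As a rough check of the per-rate figures above, the sequestrable base implied by each reported reduction can be back-computed as the reduction divided by its rate. The bases below are therefore approximations inferred from the reported figures, not official OMB account totals:

```python
# Back-of-the-envelope check of the per-rate FY 2014 reductions cited
# above. Bases are inferred from the reported reductions (reduction /
# rate), so they are approximations, not official OMB account totals.

reductions = {          # sequestration rate -> reported FY 2014 reduction
    0.020: 11.2e9,      # Medicare and certain other health programs (2% cap)
    0.072: 7.4e9,       # other nondefense mandatory budget authority
    0.098: 0.778e9,     # defense mandatory budget authority
}

# Implied sequestrable base for each rate: reduction / rate.
implied_bases = {rate: cut / rate for rate, cut in reductions.items()}

total_cut = sum(reductions.values())       # ~ $19.4 billion
total_base = sum(implied_bases.values())   # ~ $671 billion sequestrable
```

The implied total base of roughly $671 billion is consistent with the statement above that about 77 percent of the approximately $2.9 trillion in mandatory budget authority was exempt, leaving roughly 23 percent sequestrable.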
As shown in figure 3, after Medicare the next largest reductions in fiscal year 2014 came from mandatory budget authority for the administration of justice ($1.5 billion), transportation ($1 billion), health ($783 million), and national defense ($778 million). The remaining 20.7 percent of the estimated reductions were spread across 10 other national priorities. BBEDCA specifies that the same percentage reductions must be applied to each PPA within a sequestered account. However, because BBEDCA specifies exemptions and special rules for certain mandatory programs, under the law, different percentage reductions may apply to PPAs within the same budget account, and some PPAs or budget accounts may be entirely exempt. The exemptions and special rules lead sequestration to affect some areas of the federal government more than others. In fiscal year 2014, certain national priorities had a greater proportion of sequestrable budget authority. For example, nearly all mandatory budget authority for Medicare and more than 90 percent of the mandatory budget authority that supports the administration of justice was sequestrable, whereas national priorities such as social security and veterans benefits, which comprise a larger portion of the federal budget, were exempt from sequestration. As shown in figure 4, four national priorities had more than 80 percent of their mandatory budget authority subject to sequestration in fiscal year 2014. In contrast, eight national priorities had less than 25 percent of their mandatory budget authority subject to sequestration. In addition to the varying levels at which national priorities were subject to sequestration, certain national priorities were also subject to different sequestration rates. 
As described earlier, while BBEDCA limits the fiscal year 2014 sequestration of Medicare and certain other health programs to 2 percent, OMB calculated a 9.8 percent sequestration rate for mandatory budget authority that supports national defense and a 7.2 percent rate for nondefense mandatory budget authority. As with national priorities, varied proportions of federal agencies’ mandatory budget authority were subject to sequestration in fiscal year 2014. About two-thirds of federal agencies with mandatory budget authority implemented sequestration procedures in 2014. As shown in table 1, 12 agencies’ entire mandatory budget authority was subject to sequestration, while 22 agencies’ mandatory budget authority was completely exempt. Of the remaining 33 agencies that were somewhere in between, 9 agencies had 50 percent or more of their mandatory budget authority subject to sequestration and 24 agencies had less than 50 percent of their mandatory budget authority subject to sequestration. While the proportion of sequestrable mandatory budget authority varied across agencies, 45 of the 67 agencies with mandatory budget authority were responsible for administering sequestration. Some of the types of resources that agencies needed to redirect, if any, to implement sequestration are described in a later section of this report. The greatest amount of growth in mandatory spending is attributed to the effects of an aging population and rising care costs for major federal health and retirement programs such as Medicare and Social Security. Most of the mandatory spending that is subject to automatic, annual sequestration is not from the areas that have been the main drivers behind the growth in mandatory spending during the past 10 years.
While Social Security and health care are the largest contributors to the overall growth in mandatory spending, aside from Medicare and certain other health programs, these areas are either completely or largely exempt from sequestration. Medicare has a fixed rate of reduction of 2 percent through fiscal year 2024, and Social Security and 21 other agencies are exempt from sequestration. The remaining agencies with sequestrable mandatory budget authority have variable reductions based on the exemptions and rate calculation formula outlined in BBEDCA. Figure 5 shows how the amount of mandatory budget authority that is exempt from sequestration has changed over time compared to the amount that is subject to the required reductions. In addition to the annual, automatic reductions to mandatory spending, the Statutory Pay-As-You-Go Act of 2010 (PAYGO) specifies a second type of sequestration that can be triggered if certain conditions are met. The act established a permanent budget enforcement mechanism intended to prevent enactment of mandatory spending and revenue legislation that would increase the federal deficit. The act requires OMB to track costs and savings associated with enacted legislation and to determine at the end of each congressional session if net total costs exceed net total savings. If so, a separate sequestration will be triggered. Under sequestration—triggered either by BBEDCA or the PAYGO Act—the exemptions and special rules of Sections 255 and 256 of BBEDCA apply. Consequently, the same mandatory accounts that are subject to sequestration under BBEDCA could incur further reductions if a secondary PAYGO sequestration is triggered. It is unclear what effects an additional enforcement sequestration under PAYGO would have on the level of federal agencies’ operations. To provide context and perspective in terms of an individual account or program, we selected a nongeneralizable sample of six accounts for further analysis.
As shown in table 2, each of the selected accounts had mandatory budget authority subject to a 7.2 percent reduction, including one account that also had a portion of mandatory budget authority subject to a 2 percent reduction. The agencies reported that they implemented these reductions by decreasing the amount of funds or direct payments provided to other federal partners, state and local entities, or individuals. For three of the accounts in the table, the actual sequestered amount differed from the estimate because these accounts have indefinite budget authority. OMB’s guidance for the fiscal year 2013 sequestration—which was issued in the spring of 2013—was the same guidance that applied for the fiscal year 2014 sequestration. Agency officials from four of the six agencies we interviewed described aspects of implementing sequestration in fiscal year 2014 as generally less challenging because they had already experienced the 2013 sequestration. For example, these agencies had already categorized accounts based on their sequestration designation, determined how to allocate the required reductions, and modified reporting systems to implement the 2013 sequestration. Thus, these activities did not need to be repeated to implement the 2014 sequestration order. Even though they had created the administrative framework to implement sequestration during its first year in fiscal year 2013, the agency officials we spoke with indicated that implementation of the fiscal year 2014 sequestration required them to engage in additional administrative activities to ensure that reductions were applied correctly and to accommodate the changes in cash flows for programs and services. This included such things as notifying program participants, performing manual computations, and updating software systems.
For two of our six selected accounts, agency officials said it took time to clarify which fiscal year’s sequestration rate to apply when calculating payment reductions to program participants. For example, under the Build America Bonds (BABs) program, the Internal Revenue Service (IRS) administered sequestration reductions by reducing direct payments to bond issuers or reducing tax credits to taxpayers. Officials stated that payments could overlap fiscal years, which caused confusion because fiscal years 2013 and 2014 were subject to different sequestration rates, making it unclear whether the 2013 or 2014 rate should be applied to a given return. As a result of this confusion, IRS’s Office of the Chief Financial Officer developed and issued guidance describing which sequestration rate should be applied based on the fiscal year in which certain administrative actions had been completed and whether delays had occurred. In addition, until required payment programming changes could be made, IRS staff manually calculated reductions to individual issuers and individually notified payment recipients of the sequestration rate and the total reduction applied to their payment. After issuing the guidance, IRS determined that 262 payments had been made using the wrong sequestration rate, and those payments had to be corrected and reissued. In the end, IRS sequestered $263 million, which was 7.2 percent of the approximately $3.6 billion in BABs payments made in fiscal year 2014. Similarly, officials from the Farm Service Agency (FSA) within the U.S. Department of Agriculture (USDA) described their challenge of identifying which sequestration rate applied to the Commodity Credit Corporation (CCC) Fund when reducing direct payments to farmers whose crop year did not coincide with the federal fiscal year. Consequently, similar program recipients were subject to different reduction rates depending on the crop year and when their payment was obligated.
FSA officials said this meant there could be two neighboring farmers participating in the same CCC program but subject to different sequestration rates. To help ensure appropriate application of the reductions, the agency modified its software programs to incorporate the sequestration calculations for more than a dozen programs. FSA described an additional challenge of determining whether and how to apply fiscal year 2014 sequestration reductions after the Agricultural Act of 2014, known as the Farm Bill reauthorization, was enacted in February 2014. The 2014 Farm Bill created some new programs, while terminating others, which OMB and FSA staff said required time and resources to identify which programs were subject to sequestration and how to implement the required reductions. In certain cases, officials said that sequestration added further uncertainty to pre-existing budgetary restrictions on agencies’ programs. For example, at the Department of the Treasury (Treasury), officials said sequestration reductions to the Treasury Forfeiture Fund (TFF) created additional uncertainty about the availability of funds, which led to cash management concerns. Due to the combination of sequestration reductions, as well as a cancelation and rescission of budgetary resources in fiscal year 2014, Treasury’s Executive Office for Asset Forfeiture (TEOAF) had fewer funds to allocate to participating law enforcement agencies. Treasury applied sequestration reductions to the federal forfeiture program-related expenses of the member agencies, which shielded state and local partners, as well as victims, from reduced payments. Treasury staff said sequestration reduced the agency’s flexibility to cover unexpected expenses, such as unanticipated victim payments from prior year forfeitures.
While the $125 million in sequestration reductions later became available to TEOAF in fiscal year 2015, the sequestration reduced the amount of funds available in fiscal year 2014, which Treasury staff said made it difficult to manage cash flows. In some circumstances, current law allows for budget authority sequestered in one fiscal year to become available to the agencies again in a subsequent fiscal year. OMB refers to these amounts as “pop ups.” Another account where officials reported that sequestration added uncertainty to existing budget restrictions was the Highway Trust Fund. Department of Transportation (DOT) officials said the sequestration of $907 million—which would have otherwise been transferred into the Highway Trust Fund—became a complicating factor to deal with on top of the broader existing cash shortfall the Highway Trust Fund was facing because revenues from fuel taxes were insufficient to maintain authorized spending levels for highway and transit programs. In addition to the agencies managing the six selected accounts included in our review, we also spoke with staff from OMB, given its oversight role and responsibilities related to implementing sequestration across all federal agencies. OMB staff said they also had to redirect staff time and resources to meet the needs of the agencies, including a substantial amount of staff hours that could otherwise have been devoted to other agency priorities. For example, staff from OMB’s Budget Review Division said that they spent a substantial amount of time working closely with their Office of General Counsel staff to make determinations regarding the availability of sequestered amounts in subsequent years pursuant to section 256(k)(6) of BBEDCA and to document these decisions. OMB staff said this was a new issue that surfaced in fiscal year 2014 since it was the first year that such amounts would become available for obligation.
Staff also indicated that OMB had developed principles to aid in sequestration implementation; however, the process must be repeated every year as new accounts are created. OMB staff also indicated that while they and the agencies have gained more expertise in implementing sequestration, it requires resources and adds considerations that must be factored into the budget process. In March 2014, we recommended that OMB issue guidance directing agencies to formally document the decisions and principles used to implement sequestration for potential future application. In response to our recommendation, OMB revised its Circular A-11 guidance to include a new section about sequestration and directed agencies to record how they implemented sequestration to maintain consistency from year to year, inform agencies’ efforts to plan for sequestration in future years, and build institutional knowledge. OMB staff and some agency officials we spoke with said they rely primarily on apportionment records to document how sequestration was implemented and how much was actually sequestered. Selected agencies reported that program beneficiaries were affected in different ways by the sequestration reductions, including smaller direct payments, reduced services, delayed payments, and reduced tax credits. For example, the Health Resources and Services Administration (HRSA) provides services to underserved and vulnerable communities in need of health care. According to HRSA, sequestration reductions in fiscal year 2014 to the health centers and workforce programs prevented the expansion of services to an estimated 365,000 new patients. In addition, HRSA officials reported that, in the absence of sequestration, the National Health Service Corps program would have been able to increase the number of practitioners providing primary care, dental, mental and behavioral health services in the field by 358—from 9,242 to 9,600.
According to the officials, this additional staff would have been able to provide services to approximately 300,000 additional individuals in fiscal year 2014. As illustrated in figure 6, in the case of direct payment BABs, sequestration reductions were passed directly to issuers by reducing the outlay by 7.2 percent after IRS determined that a refund could be disbursed. In the case of tax credit BABs, sequestration reductions of 7.2 percent were taken from any payment owed to the taxpayer by IRS above the bond holder’s tax liabilities, but no reductions were taken from the tax credit if tax liabilities were higher than the amount of the BABs tax credit. FSA’s CCC Fund provides direct payments to farmers under a variety of farm programs to support their operational activities. FSA administered an estimated $574 million in reductions by reducing individual direct payments to farmers at the 7.2 percent sequestration rate, which FSA reported affected thousands of producers across different programs. A senior director from the National Corn Growers Association (NCGA) said the reductions further exacerbated the uncertainty growers were already experiencing from delays in final appropriations decisions for federal programs upon which they rely. FSA officials echoed this concern. In addition, the NCGA representative emphasized the need for growers to know the sequestration reduction amounts early enough to include them in the final projections needed to secure loans for production costs. For fiscal year 2014, $12.6 billion was transferred from DOT’s Payments to the Transportation Trust Fund account directly into the Highway Trust Fund, which, as we found in December 2012, had been facing increasing shortfalls because revenues from fuel taxes were insufficient to maintain authorized spending levels.
This amount was subject to the 7.2 percent reduction rate, translating into a $907 million reduction in monies available to reimburse states for highway projects. However, as a result of a subsequent appropriation of general revenues into the Highway Trust Fund of approximately $9.8 billion in August 2014, officials said DOT was able to pay states any outstanding reimbursement amounts. In other words, states received the full reimbursement, without additional reductions. The subsequent amount was appropriated under the Moving Ahead for Progress in the 21st Century Act (MAP-21) extension, the Highway and Transportation Funding Act of 2014, which was not subject to the 2014 sequestration because the extension was enacted after the sequestration order was issued for fiscal year 2014. We spoke with a senior official from the American Association of State Highway and Transportation Officials, who said the sequestration reductions accelerated the timing of the potential cash shortfall in the Highway Trust Fund. DOT officials echoed this sentiment. However, this shortfall was ultimately postponed once the MAP-21 extension was enacted. Of the agencies we spoke with, only HRSA could quantify the effects of sequestration on programs or their recipients. The others described the effects in general terms. For example, in the case of the Build America Bonds, which are issued by state and local governments, IRS officials said they do not have a system to track whether bond issuers canceled projects or refinanced them due to sequestration reductions. A senior government official from one state that issued BABs said that, while sequestration did not lead to the cancelation of any infrastructure projects, the reductions had a negative effect on the state’s budget as a whole. Moreover, it affected his perspective on the reliability of federally subsidized bond programs.
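The two BABs reduction paths described earlier (direct payment versus tax credit bonds) can be sketched as follows. The function names and example amounts are illustrative assumptions for this sketch, not actual IRS systems logic:

```python
# Sketch of the two BABs reduction paths: direct-payment bonds versus
# tax-credit bonds. Function names and example amounts are illustrative
# assumptions, not actual IRS systems logic.

SEQ_RATE_FY2014 = 0.072  # fiscal year 2014 nondefense mandatory rate

def direct_payment_babs(subsidy_payment):
    """Direct-payment BABs: the subsidy payment to the bond issuer is
    reduced by the sequestration rate once a refund can be disbursed."""
    return subsidy_payment * (1 - SEQ_RATE_FY2014)

def tax_credit_babs(credit, tax_liability):
    """Tax-credit BABs: only the refundable portion (the amount owed
    to the taxpayer above their tax liability) is reduced; a credit
    fully absorbed by the liability is not reduced at all."""
    refundable = max(credit - tax_liability, 0)
    return credit - refundable * SEQ_RATE_FY2014

# A hypothetical $100,000 issuer subsidy shrinks by 7.2 percent, while
# a $10,000 credit against a $12,000 liability is untouched.
payment = direct_payment_babs(100_000)
credit = tax_credit_babs(10_000, 12_000)
```

The second function captures the asymmetry noted above: the sequestration reduction applies only to amounts IRS would actually pay out, not to credits that merely offset a taxpayer's existing liability.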
According to Treasury officials, in addition to the $125 million reduction to comply with the sequestration order, additional amounts in the TFF were rescinded and canceled, which made it difficult to isolate the specific effects from sequestration alone. However, officials said the reductions had an operational effect on TFF member agencies. For example, officials from IRS’s Criminal Investigation unit (IRS-CI), one of the member agencies that receives the greatest amount of support from the TFF, reported that reductions in funding have limited their capacity to address emerging tax compliance and enforcement issues, such as cybercrime and identity theft. IRS-CI officials reported that lower funding levels caused them to reduce hiring, training, equipment purchases, and case support. As previously described, when the Joint Committee did not propose and Congress and the President did not enact legislation in January 2012 to reduce the deficit over 10 years by at least an additional $1.2 trillion, the sequestration process in section 251A of BBEDCA was triggered to automatically reduce spending such that an equivalent budgetary goal would be achieved. BBEDCA requires cuts totaling $109.3 billion in each year through fiscal year 2021. It is expected that reductions in both discretionary appropriations and mandatory spending will contribute to reaching this target. Reporting actual reductions may increase transparency and help ensure the deficit reduction targets are reached. The availability of amounts pursuant to section 256(k)(6) of BBEDCA could affect progress towards fiscal targets. While those sequestered amounts are counted as reductions in the fiscal year for which they are sequestered, because they can be made available for obligation again in future years, they do not result in lasting savings for the federal government.
For example, amounts from revolving, trust, and special fund accounts and offsetting collections from appropriations accounts are temporarily sequestered and may become available in the subsequent year to the extent otherwise provided in law. If the temporarily sequestered amounts become available to the agency for obligation, OMB staff said those amounts are not subject to a second level of sequestration. In response to our March 2014 recommendation, OMB revised Circular A-11 to include a description of what happens to sequestered budgetary resources, including funds that are temporarily reduced. This guidance also instructs agencies on how to record such amounts, which OMB staff said helps to avoid a re-sequestration of those same amounts in the subsequent year. More recently, we asked OMB staff to provide aggregate data to show what amount of funds were sequestered permanently versus those that may become available for obligation in future fiscal years pursuant to the statute. OMB staff said they could not provide total government-wide dollar amounts on this, in part, because the determinations vary case-by-case, depending on the specific statutory language related to each account. BBEDCA does not require OMB to tally the total amount of funds that “pop up” in a given year. However, the act established annual deficit reduction targets. In addition, providing such information is consistent with the internal control standard for information and communication, which states, among other things, that entities must have relevant, reliable, and timely information and communications to achieve their objectives. While we recognize the need for a case-by-case approach to confirm the amount of funding available from “pop ups,” actual amounts for each of these situations could be assembled after the close of each fiscal year.
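The year-end tally we describe need not be complex. The following is a minimal sketch of the idea, in our own illustration (not an OMB process), using three accounts discussed later in this report as an illustrative subset; an actual tally would cover every nonexempt mandatory account government-wide.

```python
# Illustrative sketch: tallying permanently canceled amounts separately from
# temporarily sequestered amounts (which may "pop up" in the next fiscal year).
# Amounts are in millions of dollars, drawn from accounts discussed in this report.
accounts = [
    {"account": "Build America Bonds",          "amount": 263.0, "reduction": "permanent"},
    {"account": "Commodity Credit Corporation", "amount": 646.0, "reduction": "temporary"},
    {"account": "Social Services Block Grant",  "amount": 129.0, "reduction": "permanent"},
]

permanent = sum(a["amount"] for a in accounts if a["reduction"] == "permanent")
temporary = sum(a["amount"] for a in accounts if a["reduction"] == "temporary")

# Only permanent reductions represent lasting savings toward the $1.2 trillion
# deficit reduction target; temporary amounts may become available for obligation again.
print(f"Permanently canceled: ${permanent:.0f} million")
print(f"Temporarily sequestered (may pop up): ${temporary:.0f} million")
```

Separating the two categories in this way is what would allow a government-wide total of lasting savings to be reported at year-end.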
Such information would provide insight about actual progress against the $1.2 trillion deficit reduction target, and would also provide additional transparency to Congress about the total amount of funds agencies have available in a given year. Section 251A of BBEDCA requires OMB to provide estimates of the required reductions for any fiscal year in which a sequestration of mandatory spending and reductions of discretionary spending limits has been ordered. These estimates include a listing of the reductions required for each nonexempt mandatory account and are reported each spring in OMB’s Report to the Congress on the Joint Committee Reductions. OMB staff confirmed that the reductions listed for mandatory accounts with definite budget authority—that is, accounts for which a specific amount of budget authority is determinable at the time of enactment—are equal to the actual amounts sequestered from those accounts. However, OMB staff also confirmed that the reductions listed for mandatory accounts with indefinite budget authority may differ from the amounts that were actually sequestered because the amount of budget authority for these accounts is unspecified or indeterminable at the time of enactment. This lack of a specified amount makes it difficult to determine the total amount to be sequestered in advance of the fiscal year. In addition, OMB staff said there were changes in budget authority for some indefinite accounts and their database has not been updated to reflect those changes to show the actual amounts sequestered in fiscal year 2014. As a result, the database does not reflect the actual amounts sequestered. Thus, while OMB staff calculated the amounts ordered to be sequestered, they were unable to provide an aggregate actual amount sequestered government-wide. Moreover, they are not required under BBEDCA to tally the actual amounts reduced in a given year.
This makes it difficult to determine whether progress is being made toward the required reductions. While there is uncertainty in estimating sequestration reductions for accounts with indefinite budget authority, actual amounts could be tabulated after the close of the fiscal year. These data would provide a clearer picture of the precise amount of funds that were permanently canceled, thereby representing the true savings generated from mandatory spending reductions in each year. Moreover, compiling such data could serve as a benchmark to evaluate the progress made each year toward the overall savings of $1.2 trillion required by law. Providing this information is also consistent with the internal control standard for information and communication. Among other things, this standard states that entities must have relevant, reliable, and timely information and communications to achieve their objectives. Of the approximately $2.9 trillion of estimated mandatory budget authority across the federal government in fiscal year 2014, an estimated $19.4 billion was sequestered after OMB and the agencies implementing sequestration carried out their responsibilities under BBEDCA. This represents less than one percent of mandatory budget authority in fiscal year 2014. Dozens of agencies implemented sequestration procedures in 2014 to administer the required reductions. In addition, the reductions affected certain national priorities more than others as provisions of BBEDCA provided exemptions and special rules for certain programs and accounts. Aside from Medicare and certain other health programs, the largest drivers of mandatory spending growth are statutorily exempt from sequestration. The selected agencies we spoke with reported that they were more familiar with sequestration procedures in 2014 since it was the second consecutive year of implementing the required reductions. 
However, these agencies said implementation involved additional administrative activities, and in certain cases, an additional element of uncertainty when planning and executing their budgets. The form of the reported reductions varied by program and affected beneficiaries differently, including smaller direct payments, reduced services, delayed payments, and reduced tax credits. The processes established under BBEDCA were designed to reduce the federal deficit by at least an additional $1.2 trillion over 10 years. However, in certain cases, sequestered amounts become available to agencies in subsequent fiscal years, thereby reversing the corresponding savings from those reductions. OMB does not tally the total amount of funds that are temporarily sequestered and become available in the next fiscal year, referred to as “pop ups.” Identifying these amounts would provide additional transparency to Congress about the total amount of funds agencies have available in a given year. In addition, while OMB publicly reports the estimated amount of sequestration reductions each year, it does not tabulate and report the total amount of the actual reductions government-wide at year-end. Doing so, along with reporting the amount of “pop ups,” would provide a clearer picture to decision makers of the amount of funds that were permanently canceled, thereby representing the true savings generated from mandatory spending reductions each year. Moreover, these data would increase the transparency of the process and provide annual benchmarks to measure progress toward the overall savings of $1.2 trillion required by law.
To increase the transparency to Congress about the total amount of funds agencies have available in a given year, we recommend that the Director of the Office of Management and Budget identify and publicly report the total amount of actual budget authority government-wide that is temporarily sequestered and “pops up,” or becomes available again to agencies for obligation in the subsequent fiscal year. To promote further transparency in measuring the federal government’s progress against deficit reduction targets required under current law, we recommend that the Director of the Office of Management and Budget identify and publicly report the total amount of actual reductions in budget authority government-wide each year as a result of sequestration or the reduction of discretionary spending limits under BBEDCA. We provided a draft of this report to OMB and the Departments of Agriculture, Health and Human Services, Transportation, and the Treasury for review and comment. OMB agreed with the first recommendation but disagreed with the second, as discussed below. Each of these agencies provided technical comments, which we incorporated as appropriate. In oral comments received on January 20, 2016, OMB staff agreed with the first recommendation in this report and said they have started to take action. For example, beginning with the fiscal year 2016 budget, the President’s budget contains a data field to record “pop up” amounts. OMB staff said “pop up” amounts are delineated with a particular value in the OMB MAX database and are identified so as not to re-sequester those same funds in the subsequent fiscal year. OMB staff disagreed with the second recommendation in this report and said this would be a burdensome new requirement that is not applied for other types of budget enforcement. For example, they said estimated savings are used for PAYGO enforcement, and there is no requirement for agencies to track actual PAYGO savings over time.
In addition, OMB staff said this would be problematic for programs with indefinite funding for direct payments (e.g., benefit payments) because agencies typically record indefinite budget authority equal to obligations incurred as they operate the program. Further, they said requiring agencies to capture the sequestration savings separately in their accounting records would require changes to agencies’ financial systems so that both a pre-sequestration amount and a reduction amount could be recorded for each payment. OMB staff said that their current oversight of sequestration via an annual exercise requiring agencies to certify that they are executing the reductions to payments required by the sequestration order and OMB’s review of estimated reductions is a balanced approach that focuses attention on the critical control elements needed to achieve savings. OMB staff said the attempt to provide additional precision is outweighed by the cost and confusion—and potentially erroneous conclusions—that would be engendered by this recommendation. They said collecting such data would require significant accounting changes both by OMB and agencies and would require coordination and approval by the Department of the Treasury—all of which could take years to implement. As stated earlier in this report, we acknowledge the uncertainty in estimating sequestration reductions for accounts with indefinite budget authority, thereby requiring actual amounts to be tabulated at the close of the fiscal year. In our view, identifying the actual amounts reduced at the close of each fiscal year would be consistent with the type of budgetary reporting practices OMB and agencies already follow when preparing the President’s budget each year.
Regarding OMB’s concern that this would require modifications to agencies’ financial systems, we found that the selected agencies that manage the six accounts included in our in-depth analysis were able to readily provide us with actual amounts reduced under sequestration, including cases where indefinite budget authority was involved. Moreover, there is evidence that at least these agencies have developed their own tracking mechanisms to identify the actual amounts that were reduced each fiscal year. We recognize such tracking mechanisms may vary across agencies, but OMB could request agencies to report the actual amounts as part of the annual preparation of the President’s budget, which could also be compared to apportionment records for the relevant fiscal year. Regarding OMB’s concern that collecting such data would require accounting changes across agencies and coordination with the Department of the Treasury, we recognize that it will take time to establish and refine a government-wide data set of sequestered amounts that is reliable and comparable across agencies. In the interim, however, there are data available to both OMB and the agencies that can serve as a meaningful starting point to tally sequestered amounts and calculate a government-wide total, which could be refined over time. The fact that sequestration of mandatory spending will be in effect over the coming decade and an issue that agencies will have to continue to manage heightens the significance of identifying and tracking the actual amounts reduced to promote transparency to key decision makers. Moreover, we believe that taking the next step to calculate a government-wide aggregate dollar amount that could be publicly reported would provide confirmation of the amount of funds that were permanently canceled, thereby adding transparency on the true savings generated from mandatory spending reductions each year.
We continue to believe that this could help serve as a benchmark to evaluate the progress made each year toward the overall savings of $1.2 trillion required by law. We are sending copies of this report to the appropriate congressional committees; Director of OMB; the Secretaries of Agriculture, Health and Human Services, Transportation, and the Treasury; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report examines: (1) the designation of mandatory accounts across the federal budget under the President’s sequestration order for fiscal year 2014; (2) how selected agencies implemented the fiscal year 2014 sequestration order and the effects, if any, they reported the required spending reductions had on programs and services; and (3) how continued sequestration of mandatory spending relates to the achievement of deficit reduction goals. To accomplish our first objective, we identified how mandatory accounts were designated under sequestration primarily using a data set provided by OMB staff. This data set was generated by OMB through a government-wide data collection exercise to calculate the sequestration percentage and reductions by account, and issue the report required under the Joint Committee process. The data set includes the estimated budget authority, sequestration designation, sequestration rate, and the budget subfunction for every account with mandatory budget authority in fiscal year 2014. We assessed the reliability of this data set through interviews with agency officials, review of relevant documentation, and electronic data testing. 
We found the data to be sufficiently reliable for the purpose of our report. We focused on fiscal year 2014 because mandatory accounts were sequestered but discretionary accounts were not. Additionally, fiscal year 2014 was the most recently completed fiscal year for which actual data were available for the selected accounts included in our review. We also analyzed actual budget authority data for all mandatory accounts government-wide from OMB’s MAX database and matched it with the sequestration designation data to show trends over time from fiscal year 2005 through fiscal year 2014. We assessed the reliability of the data extracted from OMB’s MAX database through electronic data testing. We found the data to be sufficiently reliable for the purpose of our report. To accomplish our second objective, we selected a nongeneralizable sample of six accounts for a more in-depth review of the implementation of sequestration and its effects. To select the six accounts, we reviewed OMB sequestration data for all mandatory accounts across the federal budget for fiscal year 2014 and sorted for those accounts with at least $50 million in estimated reductions. This analysis yielded 31 accounts, which represented over 90 percent of all sequestrable mandatory budget authority government-wide in fiscal year 2014. From these accounts, we selected six accounts representing a variety of characteristics including the amount of sequestrable budget authority, type of account, agency, budget function, and whether the account included some portion of budget authority that was exempt from sequestration. In addition, we identified which national priorities (i.e., budget function) were affected the most (in percentage terms) by the fiscal year 2014 sequestration. We eliminated accounts that were recently discussed in prior GAO work, such as Medicare. Table 3 lists the six accounts that we selected. 
Furthermore, we reviewed budget data, guidance, and documentation of any reported programmatic effects of sequestration for each of the selected accounts. We spoke with agency budget and program officials, as well as OMB staff, about their challenges and lessons learned from implementing sequestration. In addition, we spoke with a nongeneralizable selection of interest groups to gain their perspective on the effects of sequestration on programs and services. We also interviewed agency officials on how their agency implemented OMB’s revised A-11 guidance on sequestration. To accomplish our third objective, we reviewed relevant literature and the government-wide federal budget data described above that was used for our first objective. In addition, we reviewed relevant legislation, executive memoranda, OMB guidance and federal standards for internal control to identify the criteria used in our analysis. Also, to inform our analysis, we interviewed a nongeneralizable selection of budget specialists with a broad range of agency, congressional, and academic experiences including current and former congressional staffers and agency officials to obtain their perspective on the implications of sequestration. Each selected specialist had at least 15 years of experience with extensive backgrounds in federal budget and policy issues and served in a variety of positions across the federal government and academia. While the views from these selected specialists are not generalizable nor do they represent the full range of possible views on sequestration, they provided insight and perspectives about sequestration. We conducted this performance audit from February 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. As part of our review, we selected a nongeneralizable sample of six accounts and examined how the agencies responsible for managing those accounts reported implementing sequestration procedures in fiscal year 2014. These six accounts serve as case illustrations to better understand how the required spending reductions were applied and what effects, if any, these reductions had on agencies’ programs and services. We selected accounts based on several characteristics including the amount of sequestrable mandatory budget authority, budget function, type of account, agency, and whether the account included some portion of funds that were exempt from sequestration. Build America Bonds (BABs) were created as a part of the American Recovery and Reinvestment Act of 2009 (Recovery Act) to stimulate municipal infrastructure spending by reducing borrowing costs for state and local governments. BABs are taxable government bonds with federal subsidies for a portion of the borrowing costs. BAB subsidies could be either in the form of nonrefundable tax credits provided to holders of the bonds (tax credit BABs) or refundable tax credits paid to state and local governmental issuers of the bonds (direct payment BABs). The funding for this account was authorized under the Recovery Act. State and local governments had the ability to issue BABs through December 31, 2010. The BABs account supports the national priority on general purpose fiscal assistance (budget subfunction 806). In the case of direct payment BABs, sequestration reductions were passed directly through to issuers by reducing the outlay by 7.2 percent after IRS determined that a refund could be disbursed.
In the case of tax credit BABs, sequestration reductions of 7.2 percent were taken from any payment owed to the taxpayer by IRS above the bond holder’s tax liabilities, but no reductions were taken from the tax credit if tax liabilities were higher than the amount of the BABs tax credit. In fiscal year 2014, approximately $263 million was reduced and permanently canceled from this account in accordance with sequestration procedures. Sequestration reduced the amount of the subsidy that bond issuers received, thus increasing borrowing costs for state and local infrastructure projects. According to one official, the reduced borrowing subsidy, along with other factors such as changing market interest rates, may have contributed to some bond issuers prepaying their BABs. Officials from Treasury’s Office of Tax Analysis (OTA) estimated that some of the infrastructure projects funded by these prepaid BABs may have been refinanced at a lower cost and some projects may have been canceled, but IRS staff said they cannot report how many projects were canceled because issuers are not required by law to report the reason for early prepayment of bonds. More broadly, one OTA official said that sequestration of BABs reduced the credibility of federally subsidized municipal bond programs. This presented a challenge for this program since direct payment bonds were a new type of municipal financing tool. As a result, OTA officials said that the proposal for a new municipal bond program called Fast Forward Bonds, which uses a similar type of direct payment mechanism, was crafted to assure potential bond issuers and holders that sequestration would not affect the level of subsidies provided by the federal government. According to officials, implementing sequestration for BABs was administratively challenging for IRS since the reductions had to be applied to each payment manually until programming changes could be made.
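The two reduction rules described above can be sketched as follows. This is our own illustration of the stated 7.2 percent rules, not IRS’s actual implementation; the function names are ours.

```python
SEQUESTRATION_RATE = 0.072  # fiscal year 2014 rate for nonexempt mandatory accounts

def direct_payment_reduction(subsidy_payment: float) -> float:
    """Direct payment BABs: the entire outlay to the issuer is reduced by the
    sequestration rate once IRS determines a refund can be disbursed."""
    return subsidy_payment * SEQUESTRATION_RATE

def tax_credit_reduction(credit: float, tax_liability: float) -> float:
    """Tax credit BABs: only the refundable portion (the payment owed to the
    taxpayer above the holder's tax liability) is reduced; if the liability
    exceeds the credit, no reduction is taken."""
    refundable = max(0.0, credit - tax_liability)
    return refundable * SEQUESTRATION_RATE

# A $1,000 direct payment subsidy is reduced by 7.2 percent ($72).
print(direct_payment_reduction(1000.0))
# A $100 credit against a $40 liability has a $60 refundable portion to reduce.
print(tax_credit_reduction(100.0, 40.0))
# A liability larger than the credit means no reduction is taken.
print(tax_credit_reduction(100.0, 120.0))
```

The asymmetry is the key point: a nonrefundable credit fully absorbed by tax liability was untouched, while refundable payments bore the full rate.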
IRS individually notified payment recipients of the sequestration rate and total reduction applied to their payment. During the transition between fiscal year 2013 and fiscal year 2014, IRS staff said they also encountered ambiguity determining which sequestration rate to apply to payments. For example, if a tax return was received toward the end of fiscal year 2013 but was not fully processed until fiscal year 2014 had already begun, it was often unclear whether the 2013 or 2014 sequestration rate should be applied to the return. As a result of this confusion, IRS developed and issued guidance specifying in which cases to apply the current or previous year’s sequestration rate. After issuing the guidance, IRS determined that 262 payments had been made using the wrong sequestration rate, and those payments had to be corrected and reissued. The Commodity Credit Corporation (CCC) is a government-owned and government-operated entity that was created in 1933 to stabilize, support, and protect farm incomes and prices. CCC also helps maintain balanced and adequate supplies of agricultural commodities and aids their orderly distribution. The CCC has an authorized capital stock of $100 million held by the United States and the authority to have outstanding borrowings of up to $30 billion at any one time. Funds are borrowed from the U.S. Treasury. The U.S. Department of Agriculture’s (USDA) Farm Service Agency (FSA) is responsible for providing management and oversight of the CCC Fund, which supports the national priority of farm income stabilization (budget subfunction 351). FSA applied the 7.2 percent sequestration rate to the CCC Fund’s actual mandatory budget authority of $9.1 billion in fiscal year 2014. The OMB estimate of the account’s mandatory budget authority was based on the estimated program participation for that year, whereas the final reduction was based on actual program participation.
An estimated $574 million was to be sequestered, but the actual amount was approximately $646 million. While approximately $646 million was reduced from the account in fiscal year 2014 and not available for obligation during that year, this reduction was designated as temporary, meaning that the funds were neither canceled nor returned to the General Fund of the Treasury. Instead, these amounts remain in the fund or account and may be available in subsequent years only to the extent provided in appropriations or authorizing language. FSA officials said the sequestered amounts did not “pop up” the next year, nor have they been made available to the agency for obligation. The required spending reductions affected a range of program areas supported by the CCC Fund, according to officials. For example, under the Crop Direct Payments program, 1.7 million farmers received reduced payments. Under the Emerging Market program, fewer grants were funded. Agency officials reported that their software had to be modified to apply the sequestration rate to reduce each obligation or payment for 13 different programs and activities. In addition, FSA officials said it took some time to determine which sequestration rate to apply since the crop year for certain programs does not coincide with the federal fiscal year. For example, FSA officials said two neighboring farmers who participate in the same program might be subject to different sequestration reduction rates depending on which fiscal year the payment was obligated to the program participant. FSA officials characterized sequestration as a complicating factor on top of their broader existing budget constraints related to the reauthorization of the Farm Bill. In February 2014, halfway through the fiscal year, the Farm Bill was re-authorized and included the creation of new programs and the termination of others.
Agency officials said it took some time to clarify whether any of the new programs were subject to sequestration and in which fiscal years. The Health Resources and Services account supports the Health Resources and Services Administration’s (HRSA) goal of increasing access to basic health care for those who are medically underserved. The Health Resources and Services account consists of both discretionary and mandatory budget authority and supports the national priorities of health care services (budget subfunction 551) and health research and training (budget subfunction 552). The following programs are supported by mandatory budget authority: Health Center Program: Since 1965 this program has been delivering comprehensive preventive and primary health care to vulnerable populations regardless of their ability to pay. Mandatory budget authority provided funding for community health centers under the Affordable Care Act in 2010, which also established a Community Health Center Fund to provide for expanded and sustained national investment in community health centers. National Health Service Corps (NHSC): Created in 1970, the NHSC is a clinician recruitment and retention program that Congress created to reduce health workforce shortages in underserved areas. The NHSC provides scholarships and loan repayment opportunities to individuals in exchange for a commitment to serve in NHSC approved sites where there is a shortage of health professionals. In fiscal year 2014, the NHSC was fully funded by the Community Health Center Fund. Federal Capital Contribution Loan programs: HRSA administers the Health Professions and Nursing Federal Capital Contribution Loans. Through these revolving fund accounts, funds are awarded to institutions that in turn provide loans to individual students. As the loans are repaid, the account is replenished by offsetting collections. 
HRSA awards grants to more than 3,000 grantees that support the mission of the agency, including community-based organizations; colleges and universities; hospitals; and state, local, and tribal governments. To implement the sequestration reductions, officials reported that HRSA decreased the number of awards granted to recipients and made fewer loan repayments and scholarships to health professionals. These funds were permanently sequestered. In contrast, funds from the Health Resources and Services account related to loan programs were temporarily sequestered because they are financed through revolving fund accounts supported by collections, which are subject to temporary sequestration under BBEDCA. These funds became available again in fiscal year 2015 as a “pop up.” HRSA provides services to underserved and vulnerable communities in need of health care. According to HRSA, sequestration reductions in fiscal year 2014 to the health centers and workforce programs prevented an expansion of services to an estimated 365,000 new patients. In addition, HRSA officials reported that, in the absence of sequestration, the National Health Service Corps program would have been able to increase the number of practitioners providing primary care, dental, mental and behavioral health services in the field by 358, from 9,242 to 9,600. According to those officials, these additional staff would have been able to provide services to approximately 300,000 additional individuals in fiscal year 2014. Generally, institutions that have excess funds from loan repayments received from borrowers are required to return these funds to HRSA. The agency can then redistribute these funds to other institutions in need of additional funds.
Agency officials said HRSA was not able to redistribute approximately $1.2 million of these excess funds in fiscal year 2014 because they were subject to the sequestration reductions. This account functions as a mechanism to transfer appropriated funds from the General Revenue Fund of the Treasury into the Highway Trust Fund as a result of legislative action. In recent years, the Administration has proposed to rename the Highway Trust Fund to the Transportation Trust Fund. The primary funding sources for the Highway Trust Fund are federal excise taxes on motor fuels (gasoline, diesel, and special fuels taxes) and truck-related taxes (truck and trailer sales, truck tire, and heavy-vehicle use taxes). The purpose of the transfer of funds from the General Revenue Fund of the Treasury to the Highway Trust Fund is to maintain the solvency of the Highway Trust Fund throughout the reauthorization period and cover the structural deficit created by the demands of new transportation programs. The Highway Trust Fund, administered by the Department of Transportation (DOT), is the major source of funding for federal surface transportation; however, the fund’s revenues are eroding while outlays have outpaced these revenues since 2001. To maintain authorized spending levels for highway and transit programs and to cover revenue shortfalls, Congress transferred a total of about $63 billion (before sequestration) in general revenues to the Highway Trust Fund on six occasions between fiscal years 2008 and 2014. The Highway Trust Fund primarily supports surface transportation programs administered by four DOT operating administrations: the Federal Highway Administration, Federal Transit Administration, Federal Motor Carrier Safety Administration, and the National Highway Traffic Safety Administration. For fiscal year 2014, Section 40251 of the Moving Ahead for Progress in the 21st Century Act (MAP-21) appropriated $12.6 billion from general revenues to the Highway Trust Fund.
This appropriation was recorded in the Payment to the Transportation Trust Fund account. This account supports the national priority on ground transportation (budget subfunction 401). This appropriation was subject to sequestration in fiscal year 2014, which resulted in a total appropriation of approximately $11.7 billion. A subsequent appropriation of approximately $9.8 billion was provided in August 2014 under the MAP-21 extension; it was not subject to the fiscal year 2014 sequestration because the extension was enacted after the sequestration order for that year was issued. The sequestered amount of $907 million was permanently canceled in fiscal year 2014 in accordance with sequestration procedures. DOT officials indicated that sequestration did not reduce spending; instead, it only reduced the general revenue amounts initially transferred into the Highway Trust Fund. They said this reduction had the effect of hastening the eventual cash shortfall; however, Congress acted in time to infuse additional funds and thus further postpone this situation. As a result, the Federal Highway Administration did not have to reduce any payments. DOT officials characterized sequestration as a complicating factor on top of their broader existing budget constraints related to the revenue shortfalls in the Highway Trust Fund. They said that staff time and resources were focused on ensuring that there were sufficient funds available to cover obligations of the Highway Trust Fund, in addition to time spent to implement the sequestration reductions and attend high-level meetings with senior staff. The recurrence of sequestration adds to a broader concern, which is ensuring the long-term solvency of the Highway Trust Fund.
The Social Services Block Grant account (SSBG) is a federal block grant that provides funding directly to states to support a wide range of social policy goals such as promoting self-sufficiency, preventing child abuse, and supporting community-based care for the elderly and disabled. Social services activities supported with these funds include child care, foster care, protective services for adults, and special services for the disabled. SSBG funds are allocated to states according to the relative size of each state’s population. States have total discretion to set their own eligibility criteria for program participants and have broad discretion over the use of the funds. The Office of Community Services within the Department of Health and Human Services’ Administration for Children and Families (ACF) is responsible for providing management and oversight of the SSBG account, which supports the national priority on social services (budget subfunction 506). ACF implemented the sequestration reductions to the SSBG’s fiscal year 2014 mandatory budget authority of $1.8 billion by applying the 7.2 percent sequestration rate to the statutory grant formula. Approximately $129 million was reduced and permanently canceled from this account. ACF officials said that the states bore the primary challenge because they were responsible for identifying areas for reductions in services. In November 2013, ACF informed grant recipients of the sequestration reductions by announcing each state’s allocation for the first quarter of fiscal year 2014 with the sequestration rate already applied. ACF officials said that all states and territories that received SSBG funds incurred a proportional reduction in their grant amount based on the statutory formula.
Agency officials said that while they have not collected information on the effects of the sequestration reductions, they know that each state reduced funds for services and individually determined the specific service categories to be reduced. ACF officials said they do not have any information on any challenges that states may experience as a result of the continued sequestration in effect under current law. The Treasury Forfeiture Fund (TFF) is a multi-departmental fund and has four primary goals: to (1) deprive criminals of assets used in or acquired through illegal activities; (2) encourage joint operations among federal, state, and local law enforcement agencies, as well as foreign countries; (3) protect the rights of individuals; and (4) strengthen law enforcement. The Treasury Executive Office for Asset Forfeiture (TEOAF) is responsible for providing management and oversight of the TFF. The funding for this account comes from non-tax forfeitures made pursuant to laws enforced or administered by participating agencies within the Department of the Treasury (Treasury) and the Department of Homeland Security. TFF funds are used to cover expenses associated with its member agencies’ forfeiture programs, including the storage and maintenance of seized property; investigative costs leading to seizure; payments to financial victims and other third parties; and equitable sharing payments to law enforcement partners. If there is a remaining unobligated balance at the close of the fiscal year after an amount is reserved for Fund operations in the next fiscal year, Treasury may declare a “super surplus.” This balance can be used for any federal law enforcement purpose, whether or not it is related to forfeiture. Treasury officials report that super surplus is used to fund top priorities of TFF agencies, as well as initiatives supporting financial investigations. This account supports the national priority for the administration of justice (budget subfunction 751).
Treasury applied the 7.2 percent sequestration rate to the TFF’s fiscal year 2014 budget authority of $1.74 billion. OMB’s estimate of the reduction was based on anticipated forfeiture revenue for fiscal year 2014, whereas the final reduction was based on actual revenue. Although roughly $125 million was sequestered in fiscal year 2014, the law allowed for this amount to become available again in fiscal year 2015 as a “pop up.” In addition to sequestration, a portion of the TFF was rescinded and canceled during fiscal year 2014. Treasury officials emphasized that it is too difficult to parse out the effects of sequestration on the TFF separately from the rescissions and cancelation, but the combined effect of these reductions has impeded the TFF’s ability to support law enforcement. In fiscal year 2015, as a result of cumulative reductions to TFF balances, Treasury was unable to declare super surplus for the first time in the fund’s 22-year history. In fiscal year 2014, Treasury officials reported they were able to spare equitable sharing payments to state and local law enforcement partners and payments to victims from the reductions, but this meant reducing the amount of budgetary resources available to support other forfeiture-related expenses and agencies’ priorities. Rescissions and sequestration resulted in a large portion of TFF’s budget authority being unavailable for obligation, according to officials. As a result, Treasury officials reported that the TFF must rely almost solely on incoming monthly revenue from newly completed forfeiture cases to be able to allocate the funds. According to Treasury officials, every low-revenue month causes cash flow problems and delays in funding allocations for various programs. This results in interruptions and delays in TFF agencies’ operations and in the processing of equitable sharing payments and refunds, which officials said means that victims and state and local agencies must wait longer to receive their payments.
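The 7.2 percent sequestration rate can be checked against the dollar figures cited in this section for the selected accounts. The following sketch is illustrative only (the function and the account dictionary are hypothetical helpers, not part of any agency system; the rate and budget authority amounts are taken from the report text):

```python
# Fiscal year 2014 sequestration rate for non-exempt mandatory spending,
# as stated in the report.
SEQUESTRATION_RATE_FY2014 = 0.072

def sequestered_amount(budget_authority, rate=SEQUESTRATION_RATE_FY2014):
    """Return the dollar reduction from applying the sequestration rate."""
    return budget_authority * rate

# Budget authority figures (in dollars) cited in the report text.
accounts = {
    "Social Services Block Grant": 1.8e9,            # report: ~$129 million reduced
    "Treasury Forfeiture Fund": 1.74e9,              # report: ~$125 million sequestered
    "Payment to Transportation Trust Fund": 12.6e9,  # report: $907 million canceled
}

for name, authority in accounts.items():
    print(f"{name}: ${sequestered_amount(authority) / 1e6:,.1f} million")
```

The computed reductions ($129.6 million, $125.3 million, and $907.2 million) round to the approximate figures reported for each account.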
Administratively, the officials indicated that sequestration adds uncertainty to the TFF budget and reduces program flexibility to handle unexpected expenses. Though the sequestered funds may become available in the subsequent fiscal year, sequestration reduces the amount of funds available in the current year, which makes it difficult to manage cash flows.

In addition to the contact named above, Carol M. Henn, Assistant Director, and Leah Q. Nash, Analyst-in-Charge, made major contributions to this report. Also contributing to this report were Shari Brewster, Evelyn Calderon, Deirdre Duffy, Ellen Grady, Ricky Harrison, Donna Miller, John Mingus Jr., Katherine D. Morris, Kathleen Padulchick, Cindy Saunders, Timothy N. Shaw, Stewart Small, and Lou V.B. Smith.

In fiscal year 2014, federal agencies implemented the second consecutive year of sequestration reductions to mandatory spending, which are scheduled through fiscal year 2025. GAO was asked to review the implementation of sequestration on mandatory accounts and any related effects. This report examines (1) the designation of mandatory accounts government-wide under the President’s sequestration order for fiscal year 2014, (2) how selected agencies implemented sequestration and any effects they reported on programs or services, and (3) how continued sequestration of mandatory spending relates to the achievement of deficit reduction goals. GAO analyzed fiscal year 2014 budget data on sequestration; selected a nongeneralizable sample of six accounts from USDA, HHS, Treasury, and DOT based on the amount of sequestrable budget authority, budget function, and account type; reviewed documentation on sequestration; interviewed budget officials; and reviewed legislation. GAO found that in fiscal year 2014, total mandatory budget authority government-wide was approximately $2.9 trillion spread across roughly 443 accounts.
The Balanced Budget and Emergency Deficit Control Act of 1985 (BBEDCA), as amended, required the Office of Management and Budget (OMB) to apply a range of sequestration rates to non-exempt mandatory spending. This resulted in estimated reductions of $19.4 billion in fiscal year 2014, which was less than one percent of mandatory budget authority. Exemptions and special rules in BBEDCA led some areas of government to be reduced more than others. For example, 90 percent or more of mandatory budget authority for the administration of justice and transportation was subject to reduction. Veterans benefits and services were exempt. About two-thirds of the 67 federal agencies with mandatory budget authority implemented sequestration procedures in 2014. The largest drivers of mandatory spending growth—Social Security and health care—are statutorily exempt from sequestration under BBEDCA, with the exception of Medicare and certain health programs which are subject to a special rate. Agency officials responsible for managing the selected accounts in GAO's review at the Departments of Agriculture (USDA), Health and Human Services (HHS), the Treasury (Treasury), and Transportation (DOT) reported varied administrative and programmatic effects. While they said 2014 sequestration procedures were similar to the prior year, implementation involved additional administrative activities to ensure that reductions were applied correctly and to accommodate the changes in cash flows for programs and services. In certain cases, selected officials said sequestration added uncertainty when planning and executing their budgets. They also said that the required reductions affected program beneficiaries in different ways including smaller direct payments, reduced services, delayed payments, and reduced tax credits. The processes established by BBEDCA were designed to reduce the deficit over 10 years by at least an additional $1.2 trillion. 
However, temporarily sequestered budget authority in certain accounts—referred to as “pop ups” because it becomes available again in subsequent fiscal years—provides savings in the year it is sequestered but does not represent lasting savings. OMB staff said they do not tally the total amount of funds that “pop up,” nor are they required to do so. However, doing so would provide additional transparency to Congress about the total amount of funds agencies have available in a given year. In addition, actual sequestered amounts for certain types of mandatory spending cannot be determined until the end of the fiscal year due to the variable nature of indefinite budget authority—budget authority for an unspecified or indeterminable amount at the time of enactment. OMB staff said they do not aggregate government-wide data on the actual amounts sequestered, nor are they required to do so under BBEDCA. However, tabulating actual amounts after the close of the fiscal year would provide a clearer picture of the amount of funds that were permanently canceled, thereby representing the true savings generated from mandatory spending reductions each year. Moreover, compiling such data could improve transparency and serve as a benchmark to evaluate the progress made each year toward the required overall savings of $1.2 trillion. GAO recommends that OMB identify and publicly report the total amount of (1) temporarily sequestered budget authority that becomes available in subsequent fiscal years and (2) actual budget authority sequestered government-wide each year. OMB agreed with the first recommendation but disagreed with the second, citing implementation burden. GAO believes such information would enhance the transparency of achieving federal deficit reduction goals as discussed in the report.
Minority Serving Institutions vary in size and scope but generally serve a high percentage of minority students, many of whom are financially disadvantaged. In the 2000-01 school year, 465 schools, or about 7 percent of postsecondary institutions in the United States, served about 35 percent of all Black, American Indian, and Hispanic students. Table 1 briefly compares the three main types of Minority Serving Institutions in terms of their number, type, and size. The Higher Education Act of 1965, as amended, provides specific federal support for Minority Serving Institutions through Titles III and V. These provisions authorize grants for augmenting the limited resources that many Minority Serving Institutions have for funding their academic programs. In 2002, grants funded under these two titles provided over $300 million for Historically Black Colleges and Universities, Hispanic Serving Institutions, and Tribal Colleges to improve their academic quality, institutional management, and fiscal stability. Technology is one of the many purposes for which these grants can be used, both inside the classroom and, in the form of distance education, outside it. Technology is changing how institutions educate their students, and Minority Serving Institutions, like other schools, are grappling with how best to adapt. Through such methods as e-mail, chat rooms, and direct instructional delivery via the Internet, technology can enhance students’ ability to learn any time, any place, rather than be bound by time or place in the classroom or in the library. For Minority Serving Institutions, the importance of technology takes on an additional dimension in that available research indicates their students may arrive with less prior access to technology, such as computers and the Internet, than their counterparts in other schools. These students may need considerable exposure to technology to be fully equipped with job-related skills.
The growth of distance education has added a new dimension to evaluating the quality of postsecondary education programs. Federal statutes recognize accrediting agencies as the gatekeepers of postsecondary education quality. To be eligible for the federal student aid program, a school must be periodically reviewed and accredited by such an agency. Education, in turn, is responsible for recognizing an accrediting agency as a reliable authority on quality. While the accreditation process applies to both distance education and campus-based instruction, many accreditation practices focus on the traditional means of providing campus-based education, such as the adequacy of classroom facilities or recruiting and admission practices. These measures can be more difficult to apply to distance education when students are not on campus or may not interact with faculty in person. In this new environment, postsecondary education officials are increasingly recommending that outcomes—such as course completion rates or success in written communication—be incorporated as appropriate into assessments of distance education. The emphasis on student outcomes has occurred against a backdrop of the federal government, state governments, and the business community asking for additional information on what students are learning for the tens of billions of taxpayer dollars that support postsecondary institutions each year. 
While there is general recognition that the United States has one of the best postsecondary systems in the world, this call for greater accountability has occurred because of low completion rates among low-income students (only 6 percent earn a bachelor’s degree or higher), perceptions that the overall 6-year institutional graduation rate (about 52 percent) at 4-year schools and the completion rate at 2-year schools (about 33 percent) are low, and a skills gap in problem solving, communications, and analytical thinking between what students are taught and what employers need in the 21st century workplace. For the most part, students taking distance education courses can qualify for financial aid in the same way as students taking traditional courses. As the largest provider of student financial aid to postsecondary students, the federal government has a substantial interest in distance education. Under Title IV of the Higher Education Act of 1965, as amended, the federal government provides grants, loans, and work-study wages for millions of students each year. There are limits, however, on the use of federal student aid at schools with large distance education offerings. Concerns about the quality of some correspondence courses more than a decade ago led the Congress, as a way of controlling fraud and abuse in federal student aid programs, to impose restrictions on the extent to which schools could offer distance education and still qualify to participate in federal student aid programs. The rapid growth of distance education and emerging delivery modes, such as Internet-based classes, have led to questions about whether these restrictions are still needed and how the restrictions might affect students’ access to federal aid programs. Distance education’s effect on helping students complete their courses of study is still largely unknown.
Although there is some anecdotal evidence that distance education can help students complete their programs or graduate from college, school officials that we spoke to did not identify any studies that evaluated the extent to which distance education has improved completion or graduation rates. There are some variations in the use of distance education at Minority Serving Institutions and other schools. While it is difficult to generalize across the Minority Serving Institutions, the available data indicate that Minority Serving Institutions tend to offer at least one distance education course at the same rate as other schools, but they differ in how many courses are offered and which students take the courses. Overall, the percentage of schools offering at least one distance education course in the 2002-03 school year was 56 percent for Historically Black Colleges and Universities, 63 percent for Hispanic Serving Institutions, and 63 percent for Tribal Colleges, based on data from our surveys of Minority Serving Institutions. Similarly, 56 percent of 2- and 4-year schools across the country offered at least one distance education course in the 2000-01 school year, according to a separate survey conducted by Education. Minority Serving Institutions also tended to mirror other schools in that larger schools were more likely to offer distance education than smaller schools, and public schools were more likely to offer distance education than private schools. Tribal Colleges were an exception; all of them were small, but the percentage of schools offering distance education courses was relatively high compared to other smaller schools. The greater use of distance education among Tribal Colleges may reflect their need to serve students who often live in remote areas. In two respects, however, the use of distance education at Minority Serving Institutions differed from other schools.
First, of those institutions offering at least one distance education course, Historically Black Colleges and Universities and Tribal Colleges generally offered fewer distance education courses—a characteristic that may reflect the smaller size of these two types of institutions compared to other schools. Second, to the extent that data are available, minority students at Historically Black Colleges and Universities and Hispanic Serving Institutions participate in distance education to a somewhat lower degree than other students. For example, in the 1999-2000 school year, a smaller share of undergraduates at Historically Black Colleges and Universities took distance education courses than at non-Minority Serving Institutions—6 percent versus 8.4 percent of undergraduates—a condition that may reflect the fact that these schools offer fewer distance education courses. Also, at Hispanic Serving Institutions, Hispanic students had lower rates of participation in distance education than non-Hispanic students attending these schools. These differences were statistically significant. We found that Minority Serving Institutions offered distance education courses for two main reasons: (1) they improve access to courses for some students who live away from campus and (2) they provide convenience to older, working, or married students. The following examples illustrate these conditions. Northwest Indian College, a Tribal College in Bellingham, Washington, has over 10 percent of its 600 students involved in distance education. It offers distance education via videoconference equipment or by correspondence. The College offers over 20 distance education courses, such as mathematics and English, to students at seven remote locations in Washington and Idaho. According to College officials, distance education technology is essential because it provides access to educational opportunities for students who live away from campus.
For example, some students taking distance education courses live hundreds of miles from the College in locations such as the Nez Perce Reservation in Idaho and the Makah Reservation in Neah Bay, Washington. According to school officials, students involved in distance education tend to be older with dependents and, therefore, find it difficult to take courses outside of their community. Also, one official noted that staying within the tribal community is valued, and distance education allows members of tribes to stay close to their community and still obtain skills or a degree. The University of the Incarnate Word is a private nonprofit Hispanic Serving Institution with an enrollment of about 6,900 students. The school, located in San Antonio, Texas, offers on-line degree and certificate programs, including degrees in business, nursing, and information technology. About 2,400 students are enrolled in the school’s distance education program. The school’s on-line programs are directed at nontraditional students (students who are 24 years old or older), many of whom are Hispanic. In general, the ideal candidates for the on-line program are older students, working adults, or adult learners who have been out of high school for 5 or more years, according to the Provost and the Director of Instructional Technology. Not all schools wanted to offer distance education, however, and we found that almost half of Historically Black Colleges and Universities and Hispanic Serving Institutions did not offer any distance education because they preferred to teach their students in the classroom rather than through distance education. Here are examples from two such schools.
Howard University, an Historically Black University in Washington, D.C., with about 10,000 students, has substantial information technology; however, it prefers to use the technology in teaching undergraduates on campus rather than through developing and offering distance education. The University has state-of-the-art hardware and software, such as wireless access to the school’s network; a digital auditorium; and a 24-hour-a-day Technology Center, which support and enhance academic achievement for its students. Despite its technological capabilities, the University does not offer distance education courses to undergraduates and has no plans to do so. According to the Dean of Scholarships and Financial Aid, the University prefers teaching undergraduates in the classroom because more self-discipline is needed when taking distance education courses. Also, many undergraduates benefit from the support provided by students and faculty in a classroom setting. Robert Morris College is a private nonprofit Hispanic Serving Institution located in Chicago, Illinois, that offers bachelor’s degrees in business, computer technology, and health sciences. About 25 percent of its 6,200 undergraduates are Hispanic. Although the College has one computer for every 4 students, it does not offer distance education courses and has no plans to do so. School officials believe that classroom education best meets the needs of its students because of the personal interaction that occurs in a classroom setting. Among Minority Serving Institutions that do not offer distance education, over 50 percent would like to offer distance education in the future, but indicated that they have limited resources with which to do so. About half of Historically Black Colleges and Universities and Hispanic Serving Institutions that do not offer distance education indicated that they do not have the necessary technology—including students with access to computers at their residences—for distance education.
A higher percentage of Tribal Colleges (67 percent) cited limitations in technology as a reason why they do not offer distance education. Technological limitations are twofold for Tribal Colleges. The first, and more obvious, limitation is a lack of resources to purchase and develop needed technologies. The second is that, due to the remote location of some campuses, needed technological infrastructure is not there—that is, schools may be limited to the technology of the surrounding communities. All 10 Tribal Colleges that did not offer distance education indicated that improvements in technology, such as videoconference equipment and network infrastructure with greater speed, would be helpful. Minority Serving Institutions, like other schools, face stiff challenges in keeping pace with the rapid changes and opportunities presented by information technology, and Education could improve how technological progress is monitored. Minority Serving Institutions view the use of technology as a critical tool in educating their students. With respect to their overall technology goals, Minority Serving Institutions viewed using technology in the classroom as a higher priority than offering distance education. (See fig. 1.) Other priorities included improving network infrastructure and providing more training for faculty in the use of information technology as a teaching method. Minority Serving Institutions indicated that they expect to have difficulties in meeting their goals related to technology. Eighty-seven percent of Tribal Colleges, 83 percent of Historically Black Colleges and Universities, and 82 percent of Hispanic Serving Institutions cited limitations in funding as a primary reason why they may not achieve their technology-related goals.
For example, the Southwest Indian Polytechnic Institute in Albuquerque, New Mexico, serves about 670 students and uses distance education to provide courses for an associate’s degree in early childhood development to about 100 students. The school uses two-way satellite communication and transmits the courses to 11 remote locations. According to a technology specialist at the school, this form of distance education is expensive compared to other methods. As an alternative, the Institute would like to establish two-way teleconferencing capability and Internet access at the off-site locations as a means of expanding educational opportunities. However, officials told us that they have no means to fund this alternative. About half of the schools also noted that they might experience difficulty in meeting their goals because they did not have enough staff to operate and maintain information technology and to help faculty apply technology. For example, officials at Diné College, a Tribal College on the Navajo Reservation, told us they have not been able to fill a systems analyst position for the last 3 years. School officials cited their remote location and the fact that they are offering relatively low pay as problems in attracting employees that have skills in operating and maintaining technology equipment. Having a systematic approach to expanding technology on campuses is an important step toward improving technology at postsecondary schools. About 75 percent of Historically Black Colleges and Universities, 70 percent of Hispanic Serving Institutions, and 48 percent of Tribal Colleges had completed a strategic plan for expanding their technology infrastructure. Fewer schools had completed a financial plan for funding technology improvements. About half of Historically Black Colleges and Universities and Hispanic Serving Institutions, and 19 percent of Tribal Colleges have a financial plan for expanding their information technology.
Studies by other organizations describe challenges faced by Minority Serving Institutions in expanding their technology infrastructure. For example, an October 2000 study by Booz, Allen, and Hamilton determined that historically or predominantly Black colleges identified challenges in funding, strategic planning, and keeping equipment up to date. An October 2000 report by the Department of Commerce found that most Historically Black Colleges and Universities have access to computing resources, such as high-speed Internet capabilities, but individual student access to campus networks is seriously deficient due to, among other things, lack of student ownership of computers or lack of access from campus dormitories. An April 2003 Senate Report noted that only one Tribal College has funding for high-speed Internet. Education has made progress in monitoring the technological progress of Minority Serving Institutions; however, its efforts could be improved in two ways. First, more complete data on how Historically Black Colleges and Universities and Tribal Colleges use Title III funds for improving technology on campus, and thus, the education of students, would help inform program managers and policymakers about progress that has been made and opportunities for improvement. Education’s tracking system appears to include sufficient information on technology at Hispanic Serving Institutions. Second, although Education has set a goal of improving technology capacity at Minority Serving Institutions, it has not yet developed a baseline against which progress can be measured. If Education is to be successful in measuring progress in this area, it may need to take a more proactive role in modifying existing research efforts to include information on the extent to which technology is available at schools. Committee hearings such as this reinforce the importance of effective monitoring and good data collection efforts.
As the Congress considers the status of programs that aid Minority Serving Institutions, or examines creating new programs for improving technology capacity at these institutions, it will be important that agencies adequately track how students benefit from expenditures of substantial federal funds. Without improved data collection efforts, programs are at risk of granting funds that may not benefit students. Accrediting agencies have made progress in ensuring the quality of distance education programs. For example, they have developed supplemental guidelines for evaluating distance education programs and they have placed additional emphasis on evaluating student outcomes. Additionally, the Council on Higher Education Accreditation—an organization that represents accrediting agencies—has issued guidance and several issue papers on evaluating the quality of distance education programs. Furthermore, some accrediting agencies have called attention to the need for greater consistency in their procedures because distance education allows students to enroll in programs from anywhere in the country. While progress has been made, our preliminary work has identified two areas that may merit attention. While accrediting agencies have made progress in reviewing the quality of distance education programs, there is no agreed-upon set of standards for holding schools accountable for student outcomes. In terms of progress made, for example, the Council on Higher Education Accreditation has issued guidance on reviewing distance education programs. In addition, some agencies have endorsed supplemental guidelines for distance education and four of the seven agencies have revised their standards to place greater emphasis on student learning outcomes. Notwithstanding the progress that has been made, we found that agencies have no agreed-upon set of standards for holding institutions accountable for student outcomes.
Our preliminary work shows that, according to Education, one strategy for ensuring accountability is to make information on student achievement and attainment available to the public. The Council on Higher Education Accreditation and some accrediting agencies are considering ways to do this, such as making program and institutional data available to the public; however, few, if any, of the agencies we reviewed currently have standards that require institutions to disclose such information to the public. The second issue involves variations in agency procedures for reviewing the quality of distance education. For example, agency procedures differ in the degree to which agencies require institutions to have measures that allow them to compare their distance learning courses with their campus-based courses. Five agencies require institutions to demonstrate comparability between distance education programs and campus-based programs. For example, one agency requires that “the institution evaluate the educational effectiveness of its distance education programs (including assessments of student learning outcomes, student retention, and student satisfaction) to ensure comparability to campus-based programs.” The two other agencies do not explicitly require such comparisons. Finally, we found that if some statutory requirements—requirements that were designed to prevent fraud and abuse in distance education—remain as they are, increasing numbers of students will lose eligibility for the federal student aid programs. 
Our preliminary work shows that 9 schools that are participating in Education’s Distance Education Demonstration Program collectively represent about 200,000 students whose eligibility for financial aid could be adversely affected without changes to the 50 percent rule—a statutory requirement that limits aid to students who attend institutions that have 50 percent or more of their students or courses involved in distance education. As part of the demonstration program, 7 of the 9 schools received waivers from Education to the 50 percent rule so that their students can continue to receive federal financial aid. We identified 5 additional schools representing another 8,500 students that are subject to, or may be subject to, the rule in the near future if their distance education programs continue to expand. These 5 schools have not received waivers from Education. While the number of schools currently affected is small in comparison to the over 6,000 postsecondary schools in the country, this is an important issue for the more than 200,000 students who attend these schools. In deciding whether to eliminate or modify these rules, the Congress and the Administration will need to ensure that changes to federal student aid statutes and regulations do not increase the chances of fraud, waste, and abuse in federal student financial aid programs. Mr. Chairman, this concludes my testimony. I will be happy to respond to any questions you or other members of the Subcommittee might have. For further information, please contact Cornelia M. Ashby at (202) 512-8403. Individuals making key contributions to this testimony include Jerry Aiken, Neil Asaba, Kelsey Bright, Jill Peterson, and Susan Zimmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
When FAA finds that certificate holders have violated aviation regulations, it has the statutory authority to take appropriate action. FAA Order 2150.3A on compliance and enforcement provides guidance on the range of options available for responding to violations. The option chosen depends on such factors as the seriousness of the violation and the violator’s prior enforcement history and willingness to comply with regulations. FAA uses administrative actions to document incidents involving minor violations, to request future compliance, and—if appropriate—to document corrective actions violators have agreed to take. Legal actions, such as fines or certificate actions, are FAA’s strongest enforcement tools. While FAA uses certificate actions primarily against individual certificate holders (e.g., pilots, mechanics, or flight engineers), it can also take certificate action against such entities as airlines, air taxi operators, or repair stations. FAA can also refer cases to the Department of Transportation’s Office of Inspector General or to the appropriate law enforcement agency for criminal prosecution. When FAA determines that the public interest and safety require the immediate suspension or revocation of an operator’s certificate, the agency can issue an emergency order. An emergency order revoking an operating certificate is the most severe enforcement action that FAA can take against a certificate holder. An emergency order is generally used when a certificate holder is not qualified and may make use of the certificate or demonstrates a lack of care, judgment, and responsibility by, for example, operating an aircraft while under the influence of drugs or alcohol. An emergency order takes effect immediately on issuance. The certificate holder does not have an opportunity to contest the order before it is issued, and, unlike nonemergency certificate actions, the emergency order remains in effect while the certificate holder appeals. 
Emergency orders can be appealed to the National Transportation Safety Board (NTSB) and the U.S. Court of Appeals. (See app. II for more information on the process for appealing FAA’s emergency and nonemergency certificate actions.) FAA used emergency orders in a small percentage of its enforcement cases. FAA regions varied in their use of emergency orders to initiate certificate actions; these differences appear to result in part from differences in enforcement practices. Nearly 60 percent of the emergency orders revoked or suspended pilot certificates or the medical certificates pilots must also have. Of the cases FAA initiated using emergency orders, over three-quarters ultimately resulted in a suspension or revocation of the certificate. Of the 137,506 enforcement cases closed in fiscal years 1990 through 1997, FAA initiated 3 percent using emergency orders. The actual number of emergency orders ranged from a low of 322 in fiscal year 1990 to a high of 573 in fiscal year 1996. On average, FAA closed 468 cases annually in which it had initiated enforcement action using emergency orders. (See table 1.) Since fiscal year 1990, emergency orders have been used to initiate an increasing proportion of certificate actions. As FAA shifted to using administrative actions to handle less serious enforcement cases, its use of certificate actions decreased. Because the number of emergency orders remained relatively constant, emergency orders came to represent a larger proportion of the remaining certificate actions. (See table 1.) According to the Assistant Chief Counsel in the Enforcement Division, the proportion of certificate actions initiated using emergency orders grew largely because, beginning in 1990, FAA used administrative actions more frequently to handle many less serious violations, which decreased the number of certificate actions. Thus, fewer cases are now handled as certificate actions, but they are the more serious cases. 
FAA used emergency orders to initiate 18 percent of its certificate action cases, on average, for fiscal years 1990 through 1997, but three regions initiated from 28 to 38 percent of their certificate actions using emergency orders. (See table 2.) These differences among the regions reflect, among other things, (1) unusually high numbers of emergency orders to suspend or revoke medical certificates in the Eastern, Western-Pacific, and Southwest regions and (2) large numbers of emergency suspensions of mechanic certificates in the Southwest region. While most regions issued no more than a handful (one to five) of emergency orders to revoke or suspend medical certificates each year in fiscal years 1990 through 1997, the Southwest region averaged nearly a dozen annually, and the Eastern and Western-Pacific regions averaged almost 25. (See table 3.) Officials at these offices and at FAA headquarters were unsure why these regions initiated so many more emergency orders on medical certificates than did the other regions. Differences in enforcement practices among FAA’s regional offices may affect whether emergency orders are used to revoke or suspend a medical certificate. One regional counsel suggested that the staff in her region were simply efficient in processing these cases, while in other regions, the certificates of pilots that do not meet requirements may simply be allowed to expire. (Medical certificates must be renewed every 6 months to 3 years, depending on the type of pilot.) Another regional counsel suggested that some regions may handle medical certificate cases as nonemergency certificate actions. The Deputy Associate Administrator for Regulation and Certification suggested that the higher numbers of medical certificates suspended or revoked using emergency orders in certain regions may reflect the larger population of pilots in those regions. 
We agree that regions that have a higher number of pilots might have proportionately higher numbers of emergency orders against pilots’ medical certificates. However, we do not believe this fully explains the differences among FAA’s regions. For example, the Southern region, which FAA officials told us had the largest number of general aviation pilots, had only one-sixth as many emergency revocations or suspensions as the Western-Pacific and Eastern regions. FAA was better able to clarify why the Southwest region issued nearly 40 percent (174) of the 442 emergency orders to revoke or suspend mechanic certificates in fiscal years 1990 through 1997. Other regions revoked or suspended mechanic certificates between 6 and 75 times during this period. According to the information provided by the Flight Standards Service in FAA headquarters and the legal staff in the Southwest region, many of these cases resulted from problems with a designated examiner with delegated authority from FAA who did not properly administer tests to ensure that mechanics were qualified. His actions necessitated the reexamination of nearly 200 mechanics; those who did not retake or did not pass the examination had their mechanic certificates suspended on an emergency basis. The 3,742 emergency orders to revoke or suspend aviation certificates in fiscal years 1990 through 1997 affected both individual pilots and mechanics and aviation entities such as repair stations and airport operators. Of the emergency orders, nearly 60 percent affected pilots by revoking or suspending 1,563 pilot certificates and 625 medical certificates. FAA also issued emergency orders to revoke or suspend 442 mechanics’ certificates and 118 certificates of the operators of air carriers, air taxis, airports, and other aviation entities. (See fig. 2.) These numbers reflect the number of certificates issued—there are many more pilots (622,261 during 1996) than air carriers or air taxis (3,057 during 1996). 
In addition, pilots must have at least two types of operating certificates—pilot and medical. (See app. III for annual data on FAA’s use of emergency orders by certificate type.) Figure 2 shows the distribution of emergency orders by certificate type: pilot (1,563), medical (625), mechanic (442), operator (118), and repair station (56). FAA used emergency orders to initiate certificate action against a similar proportion of private pilots and pilots holding commercial and air transport certificates. (See table 4.) FAA issued emergency orders to commercial pilots nearly 75 percent more often than it did to air transport pilots, although the number of pilots in each group is similar—129,187 commercial pilots and 127,486 air transport pilots in 1996. According to FAA’s Deputy Associate Administrator for Regulation and Certification, it is not surprising that a smaller proportion of air transport pilots, particularly those flying for major airlines, receive emergency orders because they have more initial training and more extensive recurrent training on a regular basis than do commercial pilots. A high percentage of the certificate actions initiated using emergency orders ultimately resulted in revocations or suspensions. Of the 2,311 certificate revocations initiated using emergency orders in fiscal years 1990 through 1997, 86 percent resulted in the individual’s or entity’s losing the certificate. Specifically, 72 percent of the emergency revocations ultimately resulted in the certificate’s being revoked, and an additional 14 percent led to a suspension of the certificate. Less than 4 percent of the actions initiated as emergency revocations ultimately resulted in the case being dropped (no action). Similarly, of the 1,431 certificate suspensions initiated using emergency orders, 62 percent ultimately resulted in the suspension of the certificate, an additional 2 percent resulted in revocation, and 6 percent were ultimately dropped (no action). (See table 5.) 
While the final resolution of 240 of the cases could not be determined from the available data, the vast majority of the remaining cases were resolved by allowing the certificate to expire or by having operators successfully complete a reexamination of their qualifications. (See app. V.) According to FAA officials in the Enforcement Division in the Office of the Chief Counsel and in Flight Standards, the high numbers of emergency orders that were upheld for suspension and revocation reflect the fact that the agency takes emergency orders, particularly revocations, very seriously and is reluctant to initiate them without clear and convincing evidence. The Acting Director and other staff in the Flight Standards Service, the Assistant Chief Counsel in FAA’s Enforcement Division, and the nine regional counsels strongly agreed that emergency revocations are used in cases in which individuals or entities lacked the qualifications for the certificate or demonstrated a lack of care, judgment, and responsibility by, for example, falsifying material aviation records or operating aircraft while under the influence of drugs or alcohol. The Acting Director of the Flight Standards Service said that requests to initiate emergency revocations against individuals are scrutinized at the local and division levels within Flight Standards before being referred to legal staff for action. Additionally, regional legal and program office staff provide information in cases against air carriers and repair stations to the Office of the Chief Counsel and the Associate Administrator for Regulation and Certification for review and concurrence. In most cases, the Office of the Deputy Administrator and the Office of the Administrator of FAA are briefed on the recommendation before an emergency order is issued. A change to FAA’s policy broadened the circumstances in which the agency uses emergency orders. 
Although the policy change applied to both emergency revocations and emergency suspensions, FAA officials focused on the rule’s impact on the agency’s use of revocations. According to several regional counsels we interviewed, prior to 1990, many revocation actions had been taken on a nonemergency basis. In 1990, FAA concluded that an emergency order is appropriate when a revocation is warranted in the interest of public safety because the certificate holder lacks qualifications. Under these conditions, the revocation should be taken immediately unless it is unlikely that the holder will use the certificate. The Assistant Chief Counsel of the Enforcement Division pointed out that, if the revocation is not taken immediately, the certificate holder can continue to operate for months or even years until the appeal process is completed. Furthermore, because of FAA’s responsibility to protect the public safety, such potentially unsafe operating situations cannot be allowed to continue for a long period of time. FAA informally implemented this policy change in 1990 and 1991 before formally incorporating it into FAA Order 2150.3A in February 1992. As a result, FAA increased the use of emergency orders to initiate revocations from 184 in fiscal year 1990 to between 264 and 382 annually thereafter. (See table 6.) The use of emergency orders is intended to expedite the handling of serious certificate actions. For half of the 3,742 emergency actions we analyzed, however, more than 4 months elapsed between the time FAA learned of the violation and the time it issued the emergency order. During this period, FAA inspection staff investigated the violation, reached a preliminary determination that an emergency suspension or revocation was warranted, and then transferred the case to legal staff for the review and preparation of the case and the issuance of the emergency order. In most cases, FAA may not envision the use of an emergency order at the outset of the investigation. 
Time is needed to investigate the facts and evaluate whether the evidence demonstrates a lack of qualification sufficient to support the issuance of an emergency order. The time that elapses between the violation and the issuance of the emergency order raises questions about safety because the certificate holder, such as a pilot or mechanic, can continue to operate until the emergency order is issued. In addition, some aviation attorneys in the private sector question whether it is appropriate or necessary for FAA to handle some cases as emergencies, especially if the violations occurred years before. These two positions reflect the tension between FAA’s need to act swiftly in cases that present an immediate threat to safety or a demonstrated lack of qualifications and its need to act prudently to protect the rights of certificate holders by thoroughly investigating alleged violations before revoking or suspending a certificate that may be essential to the livelihood of an individual or the employees of an airline, repair station, or other aviation entity. For half of the enforcement cases in which FAA used emergency orders in fiscal years 1990 through 1997, more than 4 months elapsed between the time FAA learned of the violation and the time it issued the emergency order. Once FAA learned about the violations, it completed its investigation, prepared the case, and issued the emergency order within 10 days for 4 percent of the cases and within a month for 11 percent of the cases. Half of the cases, however, required more than 4 months (132 days) from the time FAA learned of the violation until it issued the emergency order. (See table 7.) Cases remained in the program offices for investigation for most of this time. (See tables IV.3 and IV.4 in app. IV for times spent on investigation and case preparation.) While it may be clear as soon as FAA learns of some types of violations that they merit the use of an emergency order, other cases may not be so clear-cut. 
According to the Deputy Associate Administrator for Regulation and Certification, the use of an emergency order is not necessarily envisioned when FAA first learns of a violation and initiates its investigation. She added that only after investigation do the FAA inspector and managers determine in some cases that an emergency order is warranted because of a lack of qualifications on the part of the certificate holder. She said that FAA generally processes emergency cases very quickly, often within a few days. While FAA’s databases do not have a field for recording when inspection staff initially determine that an emergency order is warranted, the Enforcement Information System (EIS) provides some data on how long it takes to issue an emergency order once inspection staff recommend that action. Specifically, EIS tracks the day FAA’s legal staff receive a case and the type of emergency action recommended by the program office. In about one-third of the cases in which inspection staff recommended emergency suspension or revocation, FAA’s legal staff issued the emergency order within 10 days of receiving the case. Half of the emergency orders were issued in 20 days or less, and 94 percent were issued within 6 months; the remaining 6 percent took longer than 6 months to issue. (See table IV.5.) Without an extensive review of individual cases—which was beyond the scope of our review—it is impossible to determine how much time FAA expended on investigation, particularly in more complex cases. According to the Acting Director of the Flight Standards Service, inspectors conduct investigations while simultaneously carrying out many other responsibilities, such as accident investigations and inspections. Similarly, FAA legal staff have many nonenforcement responsibilities, including work on procurement issues and contract disputes. 
In addition, some complex cases may require more time for legal review, while other cases may require additional investigation to have sufficient evidence to support the issuance of an emergency order, according to the Assistant Chief Counsel for Enforcement in FAA’s Office of the Chief Counsel. The fact remains, however, that months often elapse between the occurrence of a violation, the time FAA learns of that violation, and the date the agency issues an emergency order of suspension or revocation. During this time, a certificate holder who lacks qualifications or who represents a threat to safety can continue to operate. FAA regions varied widely in the number of days used to investigate the violations that led to the issuance of emergency orders in fiscal years 1990 through 1997. Four of FAA’s regions (Aeronautical Center, Alaskan, Central, and Northwest Mountain) issued emergency orders within about 2 to 3 months of learning about violations in half the cases they handled. In contrast, other regions (Eastern, European, Great Lakes, New England, Southern, Southwest, and Western-Pacific) took anywhere from almost 4 months to over 8 months to issue the emergency order. (See table 8.) Much of the variation occurred in the time needed for investigation. For example, the Central region turned half its cases over to FAA’s regional legal staff to prepare the emergency order within 40 days of learning of the violation, while half the cases in the Eastern region remained with the program office for over 6 months (197 days). (See table 9.) According to the Acting Director of the Flight Standards Service, such variations in the time needed for investigation may reflect differences in the type and complexity of the cases handled. For example, he said that the Eastern region may need additional time to investigate cases generated by the three international field offices located within its boundaries. 
He also suggested that the large number of repair stations and manufacturing operations in the Eastern region produces many cases that can be complex to investigate. FAA often spends months investigating violations, determining whether they merit emergency action, preparing cases, and issuing emergency orders. We interviewed a number of aviation attorneys from the private sector who raised key questions about FAA’s use of emergency orders: Do the cases really need to be handled as emergencies, especially if the violations occurred years before? Does FAA use emergency orders to handle cases that it might otherwise not be able to prosecute? Does FAA use the planned issuance of an emergency order to pressure certificate holders into voluntarily surrendering their operating certificates? We discussed these issues with officials from FAA and NTSB. They provided a variety of opinions that reflected the tension between FAA’s responsibility to act prudently in investigating thoroughly before revoking or suspending a certificate and its responsibility to act swiftly in cases that present an immediate threat to safety or a demonstrated lack of qualifications. The scope of our review of FAA’s use of emergency orders did not permit the kind of case analysis that would determine whether FAA had struck the appropriate balance between these competing responsibilities. Several of the private sector attorneys questioned whether it is appropriate for FAA to use emergency orders for some violations that are years old or for cases that have required months to investigate and issue. While these attorneys acknowledged the need for an enforcement tool that allows FAA to act swiftly when aviation safety is a concern, one questioned the immediacy of the safety threat in some violations he has handled and another questioned whether FAA uses emergency orders to process violations when the investigation is not completed promptly. 
However, FAA officials cited situations involving older violations or long investigation time frames that they believe merited the use of emergency orders. For example, the Manager of the Compliance and Enforcement Branch in FAA’s Civil Aviation Security Division said that FAA may not learn for months or years that an inactive pilot who has returned to flying has had several drunk driving convictions. Although the violations are older, he said that they raise potential safety issues, as well as questions about the pilot’s judgment if the pilot has falsified information about these convictions when applying for a medical certificate or has failed to report these convictions to FAA within 60 days, as required. Similarly, the Acting Director of the Flight Standards Service said that some complex cases involving the use of unapproved parts for aircraft repairs may take months or years to investigate before FAA has sufficient evidence to initiate an emergency order. He said that, once the evidence is clear and convincing, the case becomes an emergency if it potentially affects safety. According to the Assistant Chief Counsel in FAA’s Enforcement Division, FAA’s position is that the revocation must be taken immediately in cases like these. For such situations, he said that FAA prefers to use an emergency action rather than allowing the certificate holder to operate for months or years until the case could be resolved using a nonemergency certificate action. Two aviation attorneys we interviewed suggested that FAA may use emergency orders in cases in which the agency has exceeded NTSB’s 6-month time frame for processing cases against individual airmen, mechanics, or other certificate holders. For example, one attorney cited a case in which a policeman had notified FAA of alleged alcohol use by a pilot on the night of the incident, but FAA did not issue the emergency order until 18 months later. 
NTSB’s rule states that FAA must notify the alleged violator of the violation within 6 months of the date of the violation. In an emergency case, the emergency order itself fulfills the notification requirement. Under NTSB’s rule, the case must generally be dismissed after 6 months. However, NTSB has no deadline for initiating cases when an individual’s basic qualifications to hold the operating certificate are in question. If FAA shows in nonemergency cases that it had good cause for its delay in notifying the violator, NTSB can determine that the case is not too old and hear it. According to the Manager of the Compliance and Enforcement Branch in FAA’s Civil Aviation Security Division, NTSB sometimes makes this determination if FAA learns about the violation well after it occurred. He said that NTSB’s judges have heard, and FAA has prevailed in, several recent cases in which FAA did not learn about pilots’ multiple drunk driving convictions until many months after they had occurred. FAA sometimes allows individuals or aviation entities to voluntarily cease operations rather than face emergency revocation of their certificates. Several FAA regional counsels interviewed said that small carriers or repair stations in their regions have occasionally done so. As one regional counsel explained, when a certificate holder voluntarily ceases operations and negotiates a consent order with the agency, FAA inspectors can focus on monitoring the entity’s efforts to come back into compliance rather than on preparing a legal case against the entity. The Assistant Chief Counsel in FAA’s Enforcement Division characterized this approach as less harsh than revoking a carrier’s certificate—an approach that could have more serious, long-term economic consequences for the carrier because it must reapply to begin operations after its certificate has been revoked. 
Two of the aviation attorneys we interviewed raised questions about the appropriateness of an aviation entity’s voluntarily surrendering its operating certificate when confronted with the probable issuance of an emergency order. One attorney suggested that the notification of the probable issuance of an emergency order might be a way for FAA to avoid the due process that would be required for a nonemergency certificate action, for which hearings are held before a certificate is revoked or suspended. The other attorney suggested that it might be appropriate for FAA to issue a letter of investigation and give the aviation entity 10 days to prepare a formal response. FAA does not concur that such notification is needed because certificate holders generally receive a notice of investigation when the agency initiates its investigation, according to FAA’s Deputy Associate Administrator for Regulation and Certification. According to the Assistant Chief Counsel in FAA’s Enforcement Division and the Acting Director of the Flight Standards Service, once evidence of a potentially serious safety situation or lack of qualifications has been gathered, FAA would be remiss in allowing the individual or entity to continue to operate.

FAA’s emergency authority exists to provide the agency with a mechanism for acting swiftly in cases in which aviation-related activities jeopardize public safety or an operator’s qualifications are in question. In responding to violations of aviation safety and security regulations, FAA uses emergency orders rarely—in only 3 percent of enforcement cases. The time needed to investigate violations and issue emergency orders has raised some concerns about the urgency and diligence with which FAA pursues these serious certificate actions. These concerns reflect the need for FAA to strike a delicate balance in each case between prompt action to protect safety and judicious action to protect the rights and, frequently, the livelihood of a certificate holder.
In addition, our analysis has raised questions about the consistency with which certain types of violations are handled across FAA’s regions. How well FAA achieves balance and consistency can ultimately be judged only through a review of individual cases, a level of review that was beyond the scope of this study. Nevertheless, FAA’s historical success in sustaining emergency actions through internal and external review can be read as indirect evidence of the appropriateness of the initial decision to use its emergency powers. Most cases begun as emergency actions eventually result in a cessation of operations through the suspension, revocation, or expiration of the certificate. Very few of these cases are later dropped because FAA determines that no violation was committed or has insufficient evidence to prove a violation.

We provided FAA with a draft of this report for its review and comment. We met with FAA officials, including the Deputy Associate Administrator for Regulation and Certification, the Acting Deputy Director of the Flight Standards Service, and officials from the Office of the Chief Counsel and the Office of Civil Aviation Security Operations. FAA generally concurred with the facts presented and provided clarification on how the investigative process works. Specifically, FAA said that the number of days elapsed between FAA’s learning of a violation and issuing an emergency order should not be equated with the time needed to process an emergency order. FAA explained that when it first investigates a violation, it may not even envision an emergency order, and only makes the decision after an investigation, when it has determined that a lack of qualifications or other immediate threat to safety warrants an emergency order. Even then, FAA explained, the recommended emergency order must be reviewed by legal staff, and additional investigation may be required before FAA issues the emergency order.
FAA also provided additional possible explanations for the regional variations we observed in issuing emergency orders, and for the number of emergency orders issued to pilots. For example, we noted FAA’s observation that the higher numbers of medical certificates suspended or revoked using emergency orders in certain regions may reflect the larger population of pilots in those regions. We added information or revised the report, where appropriate, to reflect these suggestions.

To determine the extent to which FAA used emergency actions in fiscal years 1990 through 1997, we analyzed data from FAA’s Enforcement Information System (EIS) database. We also used this database to analyze the types of certificate holders affected by these emergency orders and the time frames for issuing the orders. While we were unable to verify the accuracy of all the data FAA provided, we did undertake several validation procedures to ensure the quality of the data. First, we performed extensive checks of the internal consistency of EIS in the fields used. In several cases, we uncovered blank fields and coding errors. We discussed the resolution of these discrepancies with the FAA staff responsible for the database. Second, we reviewed available information from an internal FAA study on EIS in evaluating the reliability of the data we used.

We discussed our findings, the circumstances under which FAA uses emergency orders, and changes to FAA Order 2150.3A that might have affected the agency’s use of emergency orders for fiscal years 1990 through 1997 with the following FAA personnel: the Assistant Chief Counsel and other staff in FAA’s Enforcement Division, all nine counsels in FAA’s regions, the Acting Director of the Flight Standards Service and members of his staff, the Manager of the Compliance and Enforcement Branch in the Civil Aviation Security Division, and the managers of the Medical Specialties and Aeromedical Certification Divisions in the Office of Aviation Medicine.
In addition, we discussed the appeals process with NTSB’s Deputy General Counsel. We also discussed FAA’s use of emergency orders with several aviation attorneys from the private sector. These attorneys, who have defended individuals or aviation entities in cases in which FAA used emergency orders to revoke or suspend their certificates, had experience with FAA’s Office of the Chief Counsel, are members of the NTSB bar, and/or serve on state aviation commissions. We conducted our review from February 1998 through June 1998 in accordance with generally accepted government auditing standards.

As you requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. We will then send copies to the appropriate congressional committees; the Secretary of Transportation; the Administrator, FAA; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. If you have any questions about this report or need additional information, please call me at (202) 512-3650. Major contributors to this report are listed in appendix VI.

Certificate holders have several options for appealing nonemergency and emergency certificate actions. Certificate actions are adjudicated by a National Transportation Safety Board (NTSB) administrative law judge. The certificate holder may then appeal the case before the full Board or seek review in a federal court of appeals. In the case of a nonemergency action, the certificate holder may continue to operate until the appeal process has been completed. In contrast, an emergency order takes effect on issuance. The certificate holder does not have the opportunity to contest the order before it is issued, and, unlike nonemergency certificate actions, the emergency order remains in effect while the certificate holder appeals.
When faced with an emergency order, a certificate holder has several appeal options. First, the certificate holder can appeal the emergency nature of the order. The certificate holder may seek a direct review of the Federal Aviation Administration’s (FAA) emergency determination by a federal court of appeals. In such cases, the certificate holder petitions the court for a review of the emergency order or seeks a stay of the order. According to the Assistant Chief Counsel in FAA’s Enforcement Division, such cases are generally decided by the federal court of appeals within 5 to 7 working days. The certificate holder may also appeal the emergency order to NTSB. The certificate holder must appeal within 10 days after receiving the emergency order from FAA. NTSB is required to set a hearing date no later than 25 days after the certificate holder received the emergency order. The presiding administrative law judge’s initial decision is made orally at the end of the hearing and is final unless appealed. Any appeal by the certificate holder or FAA of the initial decision must be filed with NTSB within 2 days of the hearing, and the entire matter must be resolved within 60 days of the date on which the FAA Administrator advised NTSB of the emergency nature of the order. Further appeals are available to both FAA and the certificate holder in the federal courts of appeals. Figure II.1 shows the steps in initiating and appealing an emergency order.
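The appeal clock described above can be sketched as a simple deadline calculator. This is an illustrative sketch only: the function name is invented, and treating the day counts as calendar days is an assumption, since the report does not specify calendar versus business days.

```python
from datetime import date, timedelta

def emergency_appeal_deadlines(order_received: date, emergency_advised: date) -> dict:
    """Illustrative deadlines for appealing an FAA emergency order to NTSB.

    Day counts follow the time frames described in the report; counting
    them as calendar days is an assumption for this sketch.
    """
    return {
        # The certificate holder must appeal within 10 days of receiving the order.
        "appeal_due": order_received + timedelta(days=10),
        # NTSB must set a hearing date no later than 25 days after receipt.
        "hearing_by": order_received + timedelta(days=25),
        # The entire matter must be resolved within 60 days of the date the
        # FAA Administrator advised NTSB of the order's emergency nature.
        "resolution_by": emergency_advised + timedelta(days=60),
    }

# Hypothetical dates: order received and NTSB advised on the same day.
deadlines = emergency_appeal_deadlines(date(1998, 3, 2), date(1998, 3, 2))
print(deadlines["appeal_due"])  # 1998-03-12
```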
In fiscal years 1990 through 1997, for most violations that led to emergency orders, many months passed in investigating the violation, issuing the order, and resolving the case. For more than half the cases, over 13 months passed between the date of the violation and the final resolution of the case. Once FAA learned about the violations, about 2 percent were resolved within a month and 63 percent within a year, while the remaining 37 percent of the cases took more than a year to resolve. (See table IV.1.) After the issuance of the emergency order, cases were not resolved until any appeals were completed and certificates were returned to FAA. At each step, the process was potentially subject to delays, some of which were not under FAA’s control.

In 70 percent of the cases in which FAA issued emergency orders, the agency did not learn of the violation on the date that it occurred. FAA learned about approximately 30 percent of the violations on the date that they occurred and nearly half of the violations within a month of their occurrence. But discovering violations often took months or years: While FAA learned of 87 percent of the violations within a year of their occurrence, it did not learn of the remaining 13 percent for periods ranging from just over a year to nearly 17 years after the date of occurrence. (See table IV.2.) FAA learned more quickly about violations related to some types of certificates than about those related to other types. While a pilot’s deviation from an assigned flight altitude may be detected promptly by an air traffic controller, FAA might not learn about a falsification of maintenance records until years after the repair was made, according to the Acting Director of Flight Standards. FAA became aware within 5 days of half of the violations that resulted in the issuance of emergency orders to revoke or suspend pilot licenses.
These time frames were significantly longer for cases involving medical certificates (74 days) or mechanic certificates (131 days).

FAA’s investigation of violations that led to emergency orders and the issuance of those orders generally took months to complete. FAA completed its investigation and case preparation and issued the emergency order to revoke or suspend the operating certificate within a month for 11 percent of the cases, but about one-third of the cases took longer than 6 months. While FAA does not always learn of violations promptly and has little control over the time needed for resolution once it issues an emergency order, the agency has more control over the time its program office staff needs to investigate a possible violation and its legal staff needs to prepare and issue the emergency order. As discussed below, however, many factors may influence the amount of time needed for investigation or review and preparation of the case by legal staff. For half the cases closed in fiscal years 1990 through 1997, less than 3 months elapsed between the time that FAA learned of the violation and the time that the program office completed its investigation and gave the case to FAA’s legal staff to prepare the emergency order. About 19 percent of the investigations were completed in 30 days or less, and about three-quarters were completed within 6 months; the remaining one-quarter required more than 6 months. (See table IV.3.) FAA Order 2150.3A describes the process for program offices to follow in investigations once a potential violation has been identified. Inspection staff gather evidence; interview witnesses, if appropriate; prepare the draft enforcement case file; and have the proposed emergency revocation or suspension reviewed by local and regional program office managers. Typically, we found that cases were with the program office for investigation four times as long as they were with the legal office preparing the emergency order.
Not all of this time was necessarily spent on the investigation, however. According to the Acting Director of the Flight Standards Service, safety inspectors usually have many other ongoing responsibilities, including inspections, accident investigations, and recurrent training, as well as other enforcement cases. Some types of violations may take longer to investigate. For example, it may take time to obtain and review records to determine whether an aircraft was actually available and used to perform required flight training as claimed in an airline’s training records, according to the Acting Director of the Flight Standards Service. In addition, he said that violations involving the falsification of records may require a court order and search warrant to obtain documents. Finally, certain types of cases, such as those in which unapproved parts were alleged to have been used, may involve a number of different customers and suppliers, as well as extensive coordination with the Federal Bureau of Investigation or other law enforcement agencies. Similarly, if FAA learns from a comparison of medical certificates with data in the National Driver Register that a pilot may have drunk driving convictions, weeks or even months may be needed to obtain the corroborating evidence from state or local court records, according to the Manager of the Compliance and Enforcement Branch in FAA’s Civil Aviation Security Division. Our analysis showed that violations related to certain types of certificates generally required longer to investigate. While the program office took about 60 days to investigate half of the cases to revoke or suspend pilot certificates, investigation time frames were longer for half the cases involving mechanic certificates (3 months) and medical certificates (nearly 8 months). (See table IV.4.)

Half of the cases processed in fiscal years 1990 through 1997 spent 20 days or less with FAA’s legal staff for case preparation and the issuance of an emergency order.
About one-third of the cases took 10 days or less from the time the legal staff received the case until it issued the emergency order, and emergency orders were issued within 6 months for 94 percent of the cases. The remaining 6 percent of the cases took longer than 6 months from the date the legal staff received the case until it issued the emergency order. (See table IV.5.) According to the Assistant Chief Counsel for Enforcement in FAA’s Office of the Chief Counsel, even after cases are forwarded to the legal staff, they sometimes require additional investigation to have sufficient evidence to support the issuance of an emergency order. In such cases, the legal staff must request additional documentation from the program office’s investigative staff. Typically, FAA’s legal staff had a case for about one-fourth as much time as the program office needed for the investigation. The time needed to issue emergency orders varied less by certificate type than did the time needed for investigation. For all types of certificates, FAA legal staff issued the emergency order for over half the cases within 30 days of receiving it.

Once FAA issued an emergency order, it needed additional time to resolve a case. In fiscal years 1990 through 1997, half the cases were resolved within 73 days of the issuance of the emergency order. Nearly one-third of the cases were resolved within 30 days, and 72 percent were resolved within 6 months; the remaining 28 percent required longer than 6 months to resolve. (See table IV.6.) Resolution time frames were somewhat longer for half the cases involving mechanic certificates (over 96 days) and medical certificates (over 70 days). According to the Assistant Chief Counsel of FAA’s Enforcement Division, several factors may delay case resolution. First, it may be some days or weeks before the individual or aviation entity returns the operating certificate to FAA and the case can be closed out.
In addition, he said that cases may be appealed before NTSB and the U.S. Court of Appeals. NTSB administrative law judges hear appeals, and their decisions may be appealed again by the violator or FAA before the full Board. NTSB’s rules call for a decision within 60 days. The Assistant Chief Counsel said, however, that some violators waive their right to this expedited review of emergency cases and have their cases reviewed together with other nonemergency certificate actions, which may take 1 to 2 years before a final ruling is issued. NTSB heard appeals on 1,277 emergency order cases in fiscal years 1990 through 1997. Violators may also appeal to the U.S. Court of Appeals, which often requires a year or more before a decision, according to the Assistant Chief Counsel of FAA’s Enforcement Division. He noted that the decision to appeal and the time needed for case resolution following the issuance of an emergency order are not within FAA’s control.

Major contributors to this report: Bonnie Beckett-Hoffmann, Curtis L. Groves, David K. Hooper, Julian King, and Robert White.
| Pursuant to a congressional request, GAO reviewed the Federal Aviation Administration's (FAA) use of emergency orders during fiscal years 1990 through 1997, focusing on: (1) the extent to which FAA used emergency orders, including data on regional variation in their use, the types of certificate holders affected, and the final outcomes of cases initiated using emergency orders; (2) the ways in which changes in FAA's policies might have affected the agency's use of emergency orders; and (3) the time needed for FAA to investigate alleged violations and issue emergency orders. GAO noted that: (1) FAA used emergency orders to initiate action to revoke or suspend operating certificates in 3 percent (3,742) of the 137,506 enforcement cases closed during fiscal years 1990 through 1997; (2) as FAA moved to handling less serious enforcement cases through administrative actions rather than certificate actions, the number of certificate actions decreased, and emergency orders came to represent a larger proportion of the more serious certificate actions that remained, increasing from 10 percent in 1990 to an annual average of nearly 20 percent over the following 7 years; (3) emergency orders as a percentage of certificate actions varied by FAA region, resulting from differences in enforcement practices and from unusual circumstances in an individual case; (4) in fiscal years 1990 through 1997, nearly 60 percent of the emergency orders revoked or suspended pilots' operating certificates or the certificates of their medical fitness to fly; (5) FAA initiated a substantially higher proportion of certificate actions with emergency orders for pilots with commercial operating certificates than for air transport pilots; (6) over three-quarters of the enforcement cases initiated using emergency orders resulted in the suspension or revocation of the certificate holder's operating certificate, and fewer than 5 percent resulted ultimately in FAA's dropping the case because it determined 
that no violation was committed or had insufficient evidence to prove a violation; (7) during fiscal years 1990 through 1997, FAA implemented a formal change in its policy on emergency actions that is reflected in the increased number of revocations using emergency orders; (8) in 1990, FAA decided that, for those cases in which revocations are based on a demonstrated lack of qualification to hold the relevant certificate, the certificate generally should be revoked immediately and not after the lengthy appeal process that other nonemergency certificate actions can be subject to; (9) FAA informally implemented this policy change in 1990 and 1991 before formally incorporating it into its compliance and enforcement guidance in 1992; (10) FAA initiated 184 revocations using emergency orders in fiscal year 1990, after which this number increased, ranging between 264 and 382 annually; and (11) although the use of emergency orders is intended to expedite the handling of serious enforcement cases in which operating certificates are revoked or suspended, the time needed for FAA to investigate violations and issue emergency orders varied widely.
Secret Service has two missions—conducting criminal investigations and providing protection. The criminal investigative mission includes conducting investigations in areas such as financial crimes, identity theft, counterfeiting, computer fraud, and computer-based attacks on banking, financial, and telecommunications infrastructure, among other activities. As part of the protective mission, Secret Service protects, among others, the sitting President and Vice President and their families; major presidential and vice presidential candidates and, within 120 days of the general presidential elections, their spouses; the President- and Vice President–elect; and former presidents and their spouses. In addition to day-to-day protection activities, Secret Service is required to provide protection for National Special Security Events (NSSE). The NSSE designation was established by statute in December 2000, for “special events of national significance” requiring significant law enforcement presence. The kinds of events categorized as NSSEs include presidential inaugurations, international summits held in the United States, major sporting events attended by protected persons, and presidential nominating conventions. For instance, during the 2008 presidential campaign and the 2009 Inauguration, a number of events were designated as NSSEs, including both the Democratic and Republican Nominating Conventions, and the concert celebrating the Inauguration on the National Mall. Designations are at the discretion of the President, signed by the Secretary of DHS, generally on the basis of the size of the event, its significance, and importance of anticipated attendees. Since fiscal year 2007, Secret Service has received $1 million annually in appropriations towards NSSE funding, which is available until expended. 
Like other federal agencies receiving annual appropriations, Secret Service must comply with a variety of fiscal laws, that is, laws related to the control and use of public funds. Specifically, the Antideficiency Act and section 503 outline requirements that must be met in the management of, and reporting on, funds, such as the funds for 2008 presidential candidate protection. The Antideficiency Act prohibits the making or authorizing of “an expenditure or obligation exceeding an amount available in an appropriation or fund for the expenditure or obligation.” Section 503 states that “None of the funds … shall be available for obligation or expenditure for programs, projects or activities through a reprogramming of funds in excess of $5,000,000 or 10 percent, whichever is less … unless the Committees on Appropriations … are notified 15 days in advance of such reprogramming of funds.” Reliable financial systems are critical to meeting the reporting requirements associated with the Antideficiency Act and section 503.

Since October 2004, Secret Service has been using the “Travel Manager, Oracle, PRISM, Sunflower” system (TOPS) to manage its financial business processes. TOPS is an integrated financial management system comprising four applications: Travel Manager—input and management of travel vouchers; Oracle Financials—core financial and general ledger system; PRISM—procurement activities; and Sunflower—property management. Secret Service maintains financial data within TOPS by project code and object class. In addition, Secret Service uses the Manhours system to capture hours worked by its agents and certain support staff. According to Secret Service officials, prior to fiscal year 2005, Secret Service had one appropriation account, the Protection Services and Activity account, to manage appropriated funds for salaries and expenses.
In fiscal year 2005, the Conference Report accompanying the 2005 DHS Appropriations Act itemized specific amounts for activities supported by Secret Service’s Salaries and Expenses appropriation account. The itemizations were made at Secret Service’s Program, Project, or Activity (PPA) level. Secret Service uses PPAs as subaccounts to capture and track financial data such as funds allotted, obligations, and expenditures. According to Secret Service, three PPAs were used to fund 2008 campaign-related protection activities: Presidential Candidate Nominee Protection, which is for the protection Secret Service provides to major presidential and vice presidential candidates and their spouses; NSSE, which is used for Secret Service planning and implementing security for designated NSSEs to ensure the physical protection of the President, the public, and other Secret Service protectees who participate in NSSEs; and Protection of Persons and Facilities, which operates to ensure the personal safety of certain designated individuals, such as the President and Vice President and former presidents and their spouses, to protect the buildings and grounds where these individuals reside and work, and to protect foreign heads of state visiting the United States. Table 1 shows the PPAs for Secret Service’s Salaries and Expenses account and the related fiscal year 2009 itemizations for each PPA.

The unpredictable and changing nature of protectee activities creates ongoing challenges for Secret Service. These challenges include 1. generally short notice—sometimes 2-3 days—of protectees’ schedules and frequent schedule changes, which makes it difficult to budget for costs in advance; 2. newly scheduled events requiring shifts in personnel to maintain current assignments, often resulting in unexpected or additional overtime costs; 3. personnel cost information not being in real time due to delays in completion of travel vouchers; 4.
the unanticipated increase in pace of the 2008 presidential campaign compared to previous campaigns upon which the fiscal year 2009 budget was based—for instance, the preinaugural events following the 2008 campaign included a three-stop train trip and a concert on the National Mall, not part of previous campaigns; and 5. the venue and activity being at the discretion of the protectee, to which Secret Service must adapt its protection services.

In this context, Secret Service received $41 million in appropriated funds within its Presidential Candidate Nominee Protection PPA for fiscal year 2009. The amounts designated for PPAs are found in the explanatory statement accompanying DHS’ fiscal year 2009 Appropriations Act. Section 503(e) of the Appropriations Act provides that “such dollar amounts specified in this Act and accompanying explanatory statement shall be subject to the conditions and requirements ... of this section.” Early in fiscal year 2009, Secret Service realized that, due to the increased pace of the campaign and the large crowds, it might have a shortfall but believed at the time it could cover the additional expenses with funds from other PPAs. In January 2009, Secret Service contacted DHS and requested assistance to cover the shortfall. In May 2009, DHS directed Secret Service to submit a reprogramming request for the funding, which, after revision by DHS and Secret Service, was submitted to the Senate and House Appropriations Committees on June 30, 2009. Figure 1 outlines the key events pertaining to the fiscal year 2009 shortfall. In light of these events, the Conference Report accompanying the fiscal year 2010 DHS Appropriations Act required the DHS CFO and the Secret Service Assistant Director for Administration to brief the Appropriations Committees on the process to be implemented in fiscal year 2010 to ensure the problems related to the fiscal year 2009 shortfall did not reoccur.
Prior to the briefing, DHS and Secret Service developed a corrective action plan (CAP) to address the issues surrounding the shortfall. The CAP includes measures to “increase visibility,” “improve funds control,” and “increase the rigor of internal and external reprogrammings.” See appendix II for the full text of the CAP.

Secret Service financial management personnel use an undocumented, manual process to prepare two key reports used to monitor obligations, manage its funds by PPA, and report to Congress: the Monthly Execution and Staffing Report and the Presidential Campaign Cost Report. The Monthly Execution and Staffing Report provides data, by account and PPA, on enacted funding, unobligated carryover(s), obligations and expenditures to date, and staffing levels. Secret Service provides the Monthly Execution and Staffing Report to other external parties, such as the Appropriations Committees. The Presidential Campaign Cost Report is used internally to monitor costs (budgeted and actual) during the presidential campaign. Secret Service financial management personnel manually integrate information from several sources to prepare the Monthly Execution and Staffing Report. Each month, staff draw financial data from (1) 16 reports generated from TOPS—the Secret Service financial management system, (2) information from the Manhours system, which tracks work hours associated with each project, and (3) information from other accounting department reports and the SF-133 to prepare the Monthly Execution and Staffing Report. Furthermore, TOPS is set up to maintain and report financial data by project code and object class. As a result, the financial data needs to be manually adjusted in order to be presented by PPA in the Monthly Execution and Staffing Report.
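The manual adjustment described, rolling project-code and object-class data up to PPA totals, amounts to a simple mapping and aggregation step, which can be sketched as follows. The project codes, the code-to-PPA mapping, and the dollar amounts below are hypothetical illustrations, not Secret Service's actual data.

```python
# Sketch of rolling project-code-level obligations up to PPA totals.
# Project codes, the code-to-PPA mapping, and amounts are hypothetical.
from collections import defaultdict

# Hypothetical mapping from TOPS project codes to PPAs.
PROJECT_TO_PPA = {
    "P-101": "Presidential Candidate Nominee Protection",
    "P-102": "Presidential Candidate Nominee Protection",
    "P-201": "NSSE",
    "P-301": "Protection of Persons and Facilities",
}

def obligations_by_ppa(tops_rows):
    """Aggregate (project_code, object_class, amount) rows into PPA totals."""
    totals = defaultdict(float)
    for project_code, _object_class, amount in tops_rows:
        totals[PROJECT_TO_PPA[project_code]] += amount
    return dict(totals)

# Hypothetical rows as they might come out of TOPS extracts.
rows = [
    ("P-101", "21.0 Travel", 1_500_000.0),
    ("P-102", "11.1 Salaries", 2_500_000.0),
    ("P-201", "25.2 Services", 800_000.0),
]
print(obligations_by_ppa(rows))
```

Encoding the code-to-PPA mapping in one documented place, rather than adjusting figures by hand each month, is the kind of step documented procedures could capture.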
Secret Service officials acknowledged that they had not documented the procedures for developing and reviewing the Monthly Execution and Staffing Reports, and they agreed that it would be beneficial to have those procedures documented. Standards for Internal Control in the Federal Government state that internal controls need to be clearly documented and the documentation should appear in management directives, administrative policies, or operating manuals. Such documentation is useful to managers in controlling their operations and to any others involved in evaluating or analyzing operations. Documenting the process for preparing the Monthly Execution and Staffing Report would be useful to managers in controlling operations, as relying on an undocumented manual process to pull together information for that report increases the risk of errors. For example, as a result of human error, the Monthly Execution and Staffing Report for September 2009 originally sent to the Appropriations Committees overstated current year obligations for one PPA by $3 million while understating obligations for another PPA by $3 million. Similarly, an error in the Monthly Execution and Staffing Report for March 2009 occurred because the expenditures-to-date amount for one account was not updated and therefore the amount from the previous month was incorrectly carried forward for that account. Also, we noted several instances on the Monthly Execution and Staffing Reports that we reviewed where some formulas were inadvertently missing from columns such as unobligated authority and unexpended obligations. Secret Service could decrease the risk of reporting incomplete, inaccurate information by having documented procedures in place for its staff to prepare and review the Monthly Execution and Staffing Report.
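The specific errors described above—a stale amount carried forward from the prior month, and formulas missing from derived columns such as unobligated authority and unexpended obligations—are exactly the kind a recomputation check could catch before a report is sent. The sketch below is hypothetical: the report does not publish the spreadsheet's layout, so the column names, the formulas for the derived columns, and the figures are assumptions.

```python
# Hypothetical consistency check for derived columns in a budget report row.
# Assumed formulas: unobligated authority = enacted + carryover - obligations;
# unexpended obligations = obligations - expenditures. All figures illustrative.
def derived_columns(row):
    """Recompute the two derived columns from the source columns."""
    unobligated_authority = (
        row["enacted"] + row["unobligated_carryover"] - row["obligations"]
    )
    unexpended_obligations = row["obligations"] - row["expenditures"]
    return unobligated_authority, unexpended_obligations

def validate(row, tolerance=0.01):
    """Flag rows whose reported derived values disagree with a recomputation,
    the signature of a missing or stale spreadsheet formula."""
    ua, uo = derived_columns(row)
    errors = []
    if abs(ua - row["reported_unobligated_authority"]) > tolerance:
        errors.append("unobligated authority mismatch")
    if abs(uo - row["reported_unexpended_obligations"]) > tolerance:
        errors.append("unexpended obligations mismatch")
    return errors

row = {
    "enacted": 41_082_000, "unobligated_carryover": 0,
    "obligations": 45_000_000, "expenditures": 44_000_000,
    "reported_unobligated_authority": 0,          # stale: formula missing
    "reported_unexpended_obligations": 1_000_000,  # consistent
}
problems = validate(row)
```

Run against every row before release, a check like this would have surfaced both the March 2009 carried-forward amount and the missing-formula columns.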
While other controls may also assist in helping to ensure Secret Service reports complete and accurate information, documenting these procedures to prepare and review the Monthly Execution and Staffing Report is a key first step. Secret Service also does not have documented procedures in place for how to split out costs for protection activities that could cut across multiple PPAs. Congress itemizes specific amounts from the Salaries and Expenses appropriation to individual PPAs. According to Secret Service, the activities associated with PPAs are not discrete because activities and costs related to PPAs may overlap. For example, according to Secret Service budget staff, during fiscal year 2009 they split costs for the January 2009 Inauguration across multiple PPAs—Presidential Candidate Nominee Protection ($4.1 million), Protection of Persons and Facilities ($1.0 million), and NSSE ($5.6 million). Budget staff explained that it charged some Inauguration costs to the Presidential Candidate Nominee Protection PPA because President-Elect Obama was in attendance. Similarly, some costs were charged to the Protection of Persons and Facilities PPA because former presidents and President Bush attended the Inauguration. Also, because the Inauguration was designated as an NSSE, some costs of the Inauguration were charged to the NSSE PPA. Similarly, during fiscal year 2010, Secret Service had another instance when it could justify charging costs across multiple PPAs but before doing so it had to seek clarification from DHS on the appropriate process and any necessary documentation. To help cover costs for the April 2010 Nuclear Security Summit, an NSSE, Secret Service used funds from the Protection of Persons and Facilities PPA ($1.9 million) because the summit included costs such as fencing and construction. 
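The Inauguration split described above ($4.1 million, $1.0 million, and $5.6 million across three PPAs) illustrates the arithmetic a documented cost-split procedure would need to enforce: every dollar of an event's cost lands in exactly one PPA, and the per-PPA amounts reconcile to the event total. A minimal sketch follows; the explicit per-PPA allocation rule shown is an assumption for illustration, not Secret Service's actual procedure.

```python
# Hypothetical validation of an event cost split across PPAs, using the
# Inauguration figures cited in the report. The allocation mechanism
# (explicit per-PPA amounts that must sum to the event total) is assumed.
def split_event_cost(total, allocation):
    """Check that a per-PPA allocation fully accounts for an event's cost."""
    allocated = sum(allocation.values())
    if abs(allocated - total) > 0.005:  # tolerate sub-cent rounding only
        raise ValueError(f"allocation {allocated} does not match total {total}")
    return allocation

inauguration = split_event_cost(
    total=10.7e6,
    allocation={
        "Presidential Candidate Nominee Protection": 4.1e6,
        "Protection of Persons and Facilities": 1.0e6,
        "National Special Security Event": 5.6e6,
    },
)
```

The value of the check is less the arithmetic than the record it forces: a written allocation that reviewers can trace back to the justification for charging each PPA.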
While Secret Service staff have charged costs to multiple PPAs in some cases, they expressed concern because they were not certain whether this was the correct procedure to follow. As with the documentation of the process for preparing the Monthly Execution and Staffing Reports, and in accordance with Internal Control Standards, documented policies and procedures for charging costs in situations where more than one PPA is applicable would be useful to managers in controlling their operations. Establishing policies and procedures for charging costs could clarify how Secret Service can split costs between multiple PPAs and help manage funds for presidential candidate protection and other PPAs. Also, the lack of documented policies and procedures increases the risk of reporting incomplete, inaccurate information because Secret Service officials could unknowingly charge expenditures to the wrong PPA. Neither DHS nor Secret Service have documented early warning system benchmarks to use when monitoring Secret Service obligations and expenditures, and therefore these benchmarks may be inconsistently applied. The CAP developed by DHS and Secret Service outlines the actions that, if implemented, will help ensure that Secret Service financial management staff are monitoring obligations and expenditures, and effectively anticipating shortfalls. The plan directs Secret Service to implement an early warning system to track actual obligations against planned and anticipated obligations and to develop benchmarks that would act as “red flags” alerting the Secret Service CFO of potential funding shortfalls. While DHS and Secret Service have identified this as an action item in the CAP, Secret Service does not yet have a documented system of red flags to alert its staff to potential funding shortfalls. 
Similarly, DHS’ Budget Execution Guidance does not provide specific guidance on developing benchmarks, and Secret Service officials have not documented their own internal benchmark for monitoring obligations and expenditures as an early warning system. Internal Control Standards state that internal controls need to be clearly documented and that managers need to compare actual performance to planned or expected results, and activities need to be established to monitor performance measures and indicators. Even though documentation does not exist for such a system, both Secret Service and DHS noted that they take certain actions to identify potential funding shortfalls. For example, DHS officials told us that for annual appropriations, they would expect to see 25 percent of the appropriated amount used each quarter. Any deviations would be communicated during DHS’ quarterly reviews of Secret Service. Secret Service budget staff also said they use this “straight-line” approach to monitor budget execution. Nevertheless, written guidance on how to develop and document appropriate benchmarks for monitoring obligations and expenditures could help ensure a consistent application of red flags and therefore increase the effectiveness of an early warning system to alert officials of potential funding shortfalls. At the time of the fiscal year 2009 shortfall, DHS had written guidance covering communication necessary if a funding shortfall required a reprogramming notification under section 503, or was a potential or actual Antideficiency Act violation. Specifically, DHS’ fiscal year 2009 Budget Execution Guidance outlined the process for components to develop and submit a reprogramming request to DHS to comply with section 503 and required that DHS’ Office of the CFO (OCFO) transmit decisions on the requests to the component. For example, the Budget Execution Guidance requires all reprogrammings to be submitted at least 45 days in advance of anticipated expenditure of funds. 
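The "straight-line" approach officials described—expecting roughly 25 percent of an annual appropriation to be obligated each quarter—could be written down as an explicit red-flag benchmark. The sketch below is hypothetical; in particular, the tolerance band is an assumption, since the report's point is that no benchmark has been documented.

```python
# Hypothetical red-flag benchmark based on the straight-line approach
# DHS and Secret Service officials described (25 percent per quarter).
# The 5-percentage-point tolerance band is an assumption for illustration.
def red_flag(appropriation, obligations_to_date, quarter, tolerance=0.05):
    """Return True if cumulative obligations deviate from the straight-line
    plan by more than `tolerance`, measured as a share of the appropriation."""
    expected_share = 0.25 * quarter               # straight-line plan
    actual_share = obligations_to_date / appropriation
    return abs(actual_share - expected_share) > tolerance

# Illustrative: a $41 million PPA with $31 million obligated by end of Q2,
# against a planned 50 percent, trips the flag.
flag = red_flag(41.0e6, 31.0e6, quarter=2)
```

A documented rule of this shape, applied each month per PPA, would give the "red flags" the CAP calls for a consistent, auditable meaning.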
In July 2008, DHS’ OCFO issued guidance concerning the investigation and reporting of Antideficiency Act violations. This guidance requires that, among other things, employees notify their supervisors if they suspect a potential Antideficiency Act violation, the component and DHS CFO evaluate the circumstances and complete a preliminary review, and—if it is determined a potential violation exists—an independent investigative officer complete a formal investigation and submit a report within 6 months. The Secret Service’s former CFO told us that he did not believe that the actions Secret Service took in January 2009 to address the fiscal year 2009 shortfall required congressional notification under section 503, or constituted an Antideficiency Act violation. Secret Service budget officials reported that, to cover the fiscal year 2009 shortfall in the Presidential Candidate Nominee Protection PPA, which reached $10.7 million, they charged three PPAs. They showed a negative balance of $4.1 million in the Presidential Candidate Nominee Protection PPA, and charged $5.6 million to the NSSE PPA and $1 million to the Protection of Persons and Facilities PPA. The former CFO told us that, at the time, he did not see the need for a reprogramming notification, and therefore the agency did not need to follow DHS’ guidance on communicating reprogramming requests. However, Secret Service budget officials acknowledged that, as discussed later in this report, the reprogramming request Secret Service submitted to DHS—and DHS submitted to the Senate and House Appropriations Committees on June 30, 2009—was for a $5.1 million reprogramming into the Presidential Candidate Nominee Protection PPA and did not mention or include amounts associated with other PPAs. The requested $5.1 million reprogramming exceeded section 503 notification thresholds. Therefore, GAO concluded that DHS and Secret Service violated section 503 and the Antideficiency Act. 
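As the legal discussion reproduced later in this report explains, section 503 notification is triggered by a reprogramming in excess of $5,000,000 or 10 percent, whichever is less. That test is easy to express as a check; the sketch below is an illustration of the stated rule, not legal guidance, and reading "10 percent" as 10 percent of the affected PPA's designated amount is an interpretive assumption here.

```python
# Sketch of the section 503 notification test as described in the report:
# advance notification is required for a reprogramming in excess of
# $5,000,000 or 10 percent, whichever is less. Interpreting "10 percent"
# as 10 percent of the PPA's designated amount is an assumption.
def requires_503_notification(reprogramming_amount, ppa_amount):
    threshold = min(5_000_000, 0.10 * ppa_amount)
    return reprogramming_amount > threshold

# The $5.1 million request against the $41 million PPA exceeded the threshold.
needs_notice = requires_503_notification(5_100_000, 41_000_000)
```

For a $41 million PPA the binding limit is the 10 percent figure ($4.1 million), so even a reprogramming under $5 million could require notification.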
According to Secret Service officials, in January 2009 Secret Service communicated to DHS the fiscal year 2009 shortfall and requested assistance in covering it. According to the former Secret Service CFO, the agency realized it might have a shortfall in the Presidential Candidate Nominee Protection PPA as early as October 2008, but determined it could likely cover the costs using funding from both the Presidential Candidate Nominee Protection and NSSE PPAs. However, with the designation of additional NSSEs related to the Inauguration in December 2008, by January 2009 the agency realized it could not cover the costs from these two PPAs. Secret Service then informed DHS in January 2009 that it would have a funding shortfall in its Presidential Candidate Nominee Protection PPA—of which the agency could cover half. According to the former Secret Service CFO, DHS then agreed to look for funding to help cover the shortfall. DHS did not instruct Secret Service to submit a reprogramming request until May 2009—4 months after the agency’s first communication. Following DHS’ direction, Secret Service submitted a $5.1 million reprogramming request to DHS on June 1, 2009, an amount exceeding the section 503 threshold. DHS then followed its internal guidance in obtaining OMB approval of the request. Following OMB’s initial approval of the reprogramming request on June 19, 2009, Secret Service then modified its request—increasing the amount to include costs associated with the G20 Summit, which had just been designated an NSSE, and extended protection for former Vice President Cheney, of which Secret Service had just become aware—and resubmitted the request to DHS on June 25, 2009. After receiving OMB’s second approval on June 30, 2009, DHS submitted a reprogramming notification to Congress on the same day for $5.1 million to be reprogrammed into the Presidential Candidate Nominee Protection PPA. 
At the time of the fiscal year 2009 shortfall, there was no written guidance outlining the process for communicating within DHS or to the Appropriations Committees information about “internal reprogramming” of funds. For instance, the fiscal year 2009 Budget Execution Guidance does not include direction to components regarding how to report internal reprogrammings under the section 503 threshold in Monthly Execution and Staffing Reports. However, DHS stated that Secret Service was aware that it was permitted to internally reprogram funds between PPAs. According to Secret Service officials, they determined at the time that they could internally reprogram funds, and when doing so that they should report a negative balance in the Monthly Execution and Staffing Report. According to Secret Service officials, DHS communicated to Secret Service after the fact that it should have internally reprogrammed funding from another PPA into the Presidential Candidate Nominee Protection PPA and avoided showing a negative balance in unobligated authority. However, in the past, Secret Service had submitted Monthly Execution and Staffing Reports that had included negative balances in the Unobligated Authority column. According to Secret Service officials in the Office of Administration, DHS had not informed them not to do so. Since the fiscal year 2009 shortfall, Secret Service and DHS developed a CAP to, among other things, improve communication about internal and external reprogrammings. In addition, Secret Service officials told us that their communication with DHS about budget execution has improved, and DHS officials said that they now provide more training and guidance to components, such as guidance on general budget execution and Monthly Execution and Staffing Reports. For instance, DHS’ fiscal year 2010 Budget Execution Guidance now requires that Monthly Execution and Staffing Reports present information on both internal and section 503 reprogrammings. 
The CAP contains measures to improve guidance on what information to communicate during a funding shortfall, and requires that: all internal transfers and reprogrammings be approved by the DHS CFO in writing within 24 hours of submission; all reprogramming proposals be submitted in writing and in the appropriate format with required information included; components first initiate an internal funding review and clearly articulate the negative impact of using internal resources to cover the shortfall; and all above-threshold reprogrammings be submitted to the Appropriations Committees in a timely manner. CFOC A-123 Guidance is widely viewed as a "best practices" methodology for executing the requirements of appendix A of OMB Circular A-123, which requires management to develop corrective action plans for material weaknesses. This guidance provides that agencies construct a corrective action planning framework to facilitate plan preparation, accountability, monitoring, and communication. Key information to be included in corrective actions specified in this guidance includes, among other things, a description of the deficiency in sufficient detail to provide clarity and facilitate a common understanding of what needs to be done. DHS has developed and implemented two of the four communications-related CAP measures in accordance with this guidance. For instance, DHS' fiscal year 2010 Budget Execution Guidance and the draft Section 2.4–Budget Execution of the DHS CFO's Financial Management Policy Manual include updated guidance to components on how to implement two of the CAP measures outlined above, as shown in table 2. However, DHS has not developed and implemented the remaining two communications-related CAP measures in accordance with this guidance.
Specifically, DHS has not provided written guidance describing what needs to be done to implement the CAP measures requiring that (1) components complete internal funding reviews prior to submitting reprogramming requests and articulate the negative impact of using internal resources to cover the shortfall—such as delays in hiring or postponement of training activities, or both—or (2) DHS provide timely submission of reprogramming notifications to the Senate and House Appropriations Committees. Implementing these remaining communication-related measures from the CAP could help ensure that DHS and Secret Service communicate effectively with each other and Congress in the event of future funding shortfalls. Specifically, receiving guidance on the information DHS would like to receive from components regarding their internal funding review and the negative impact of using their internal resources could help improve the effectiveness with which reprogramming requests are approved by DHS. For instance, Secret Service submitted a reprogramming request related to the April 12-13, 2010, Nuclear Security Summit to DHS on February 25, 2010. According to Secret Service officials in the Office of Administration, DHS denied their initial reprogramming request for the Nuclear Security Summit in part because it did not sufficiently describe Secret Service’s internal funding review and the negative impact upon the agency if it used internal resources. Secret Service subsequently revised and resubmitted its request to DHS on March 12, 2010. According to Secret Service officials, if DHS had provided clear guidance on its expectations for what information the reprogramming request should have included in this area, DHS could have approved the request more quickly. 
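Timeliness of this kind can only be assessed against elapsed-day figures. The sketch below uses the Nuclear Security Summit reprogramming milestones discussed in this section to compute such figures; the 15-day service-level target is purely an assumption for illustration, since the report notes DHS has not defined one.

```python
# Hypothetical timeliness scorecard using the Nuclear Security Summit
# reprogramming milestones cited in this report. The 15-day target is an
# assumption; defining such a target is what the report recommends.
from datetime import date

MILESTONES = {
    "component_submission": date(2010, 2, 25),  # Secret Service initial request
    "revised_submission": date(2010, 3, 12),    # resubmitted after DHS denial
    "sent_to_omb": date(2010, 4, 5),
    "omb_approval": date(2010, 4, 8),
    "sent_to_committees": date(2010, 4, 9),
}

def elapsed_days(start_key, end_key, milestones=MILESTONES):
    """Calendar days between two milestones."""
    return (milestones[end_key] - milestones[start_key]).days

total = elapsed_days("component_submission", "sent_to_committees")
late = total > 15  # against the assumed 15-day target
```

End to end, the request took 43 calendar days to reach the committees, landing just before the April 12-13 summit; with a defined target, a scorecard like the one DHS now plans could flag such cases mechanically.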
In addition, clearly defining time frames for its timely submission of reprogramming notifications to the Appropriations Committees, a measure delineated by DHS in the CAP, could help enable DHS and the committees to assess whether DHS effectively provides information about potential funding shortfalls. After receiving the revised Nuclear Security Summit request from Secret Service, DHS submitted the request to OMB for approval on April 5, 2010, more than 5 weeks after Secret Service’s initial submission. OMB approved the request on April 8, 2010—3 days after DHS’ submission. DHS then submitted the reprogramming notification to the Appropriations Committees on April 9, 2010, 6 weeks after Secret Service’s initial submission. Having the notification submitted 3 days—including the weekend—before the Nuclear Security Summit created challenges for Secret Service because, according to Secret Service officials, it was unaware of what funds would be available to cover the costs of the summit. Without clarifying what is meant by timeliness with respect to processing reprogramming requests, DHS is limited in its ability to assess whether its submission of this notification was completed in a timely manner and, consequently, to help Secret Service manage potential funding shortfalls and provide Congress the information it needs when making budgetary decisions. Secret Service performs the important mission of protecting presidential candidates and nominees. Because of the larger crowds and faster pace of the 2008 presidential campaign compared to prior campaigns, Secret Service’s spending exceeded the amount budgeted in its fiscal year 2009 Presidential Candidate Nominee Protection PPA. 
Given the importance of providing the Appropriations Committees with complete and accurate financial data concerning presidential candidate protection activities, it is imperative that Secret Service have the necessary documented internal control procedures in place, including financial management policies and procedures, to help ensure it can effectively manage and report on funds for presidential candidate protection. Relying on an undocumented manual process to pull together information for key reports on presidential candidate protection activities increases the risk that inaccurate information will be reported to Congress and errors could be made in budget management. Similarly, the lack of documented policies and procedures for splitting costs for presidential candidate protection activities across multiple PPAs increases the risk of reporting incomplete, inaccurate information on these activities. Also, lack of guidance on how to develop and document appropriate benchmarks for monitoring presidential candidate protection obligations and expenditures limits the ability of Secret Service financial management officials to identify any future funding shortfalls. Further, recognizing the communication breakdowns that occurred during fiscal year 2009, DHS and Secret Service have taken steps to improve communication, including developing the CAP. However, DHS has not clarified in its guidance all of the CAP measures, including components’ required documentation of internal funding reviews and the negative impacts of using internal resources in reprogrammings; and the time frames associated with its timely submission of reprogramming notifications to the Appropriations Committees. 
Providing this guidance could help DHS ensure it is able to approve components' reprogramming requests more quickly, assess whether its submissions of reprogramming notifications to the Appropriations Committees are timely, and, ultimately, provide Congress the information it needs when making budgetary decisions. To improve financial management controls and communication related to presidential candidate protection budget execution, we recommend that the Secretary of DHS take the following five actions:

- direct the Director of Secret Service to develop documented procedures for preparing and reviewing its Monthly Execution and Staffing Reports and Presidential Campaign Cost Reports;
- direct the Director of Secret Service to develop written policies and procedures for charging costs when protection activities may be funded by multiple PPAs;
- direct the DHS CFO to ensure that DHS' components, including Secret Service, have guidance and training on how to develop and document appropriate benchmarks for monitoring obligations and expenditures;
- direct the DHS CFO to develop and provide written guidance clarifying the elements necessary in a reprogramming request from a component to document internal funding reviews and the negative impact of using internal sources; and
- direct the DHS CFO to define time frames by which DHS could assess timeliness of submissions of reprogramming notifications to the Appropriations Committees.

On June 23, 2010, DHS provided written comments on a draft of this report. DHS concurred with all five of our recommendations, and DHS and Secret Service are taking steps to improve financial management controls and communication related to presidential candidate budget execution.
For instance, Secret Service has developed documented procedures for preparing and reviewing its Monthly Execution and Staffing Reports and Presidential Campaign Cost Reports, and has begun to develop written policies and procedures for charging costs when protection activities may be funded by multiple PPAs. In addition, the DHS CFO plans to develop a scorecard to track all reprogramming notifications and assess the timeliness of submissions. DHS' comments are reproduced in appendix III. We are sending copies of this report to the Secretary of Homeland Security and interested congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact either David Maurer at 202-512-9627 or by e-mail at [email protected] or Susan Ragland at 202-512-9095 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

The Honorable Robert C. Byrd
Chairman, Subcommittee on Homeland Security
Committee on Appropriations
United States Senate

The Honorable George V. Voinovich
Ranking Minority Member, Subcommittee on Homeland Security
Committee on Appropriations
United States Senate

The Honorable David Price
Chairman, Subcommittee on Homeland Security
Committee on Appropriations
U.S. House of Representatives

The Honorable Harold Rogers
Ranking Minority Member, Subcommittee on Homeland Security
Committee on Appropriations
U.S. House of Representatives

Subject: U.S. Secret Service—Statutory Restriction on Availability of Funds Involving Presidential Candidate Nominee Protection

The conference report, H.R. Conf. Rep. No. 111-298, at 92 (2009), accompanying the Department of Homeland Security Appropriations Act, 2010, Pub. L. No. 111-83, 123 Stat. 2142 (Oct. 28, 2009), directed GAO to examine whether the Department of Homeland Security (DHS) and the United States Secret Service (USSS) violated section 503 of the Consolidated Security, Disaster Assistance, and Continuing Appropriations Act, 2009, Pub. L. No. 110-329, div. D, 122 Stat. 3652, 3680 (Sept. 30, 2008), and the Antideficiency Act, 31 U.S.C. § 1341. In addition to this legal opinion, GAO is examining DHS' and USSS' financial management practices, as well as DHS policies and procedures related to communications with its component agencies. See H.R. Conf. Rep. No. 111-298, at 92 ("conferees direct the Comptroller General to . . . identify all actions taken or recommended to be taken to address and correct any violation"). For the reasons set out below, after receiving accounting reports and policy and procedure documents from both agencies, we conclude that DHS and USSS violated both section 503(a) and the Antideficiency Act.

Our practice when rendering opinions is to obtain the views of the relevant agencies to establish a factual record and the agencies' legal positions on the subject matter. GAO, Procedures and Practices for Legal Decisions and Opinions, GAO-06-1064SP (Washington, D.C.: Sept. 2006), available at www.gao.gov/legal/resources.html. In this regard, we conducted meetings with both USSS and DHS officials, requesting answers to our questions and copies of relevant internal correspondence.

Both the Antideficiency Act and section 503(a) restrict the availability of funds for obligation and expenditure. The Antideficiency Act prohibits an officer or employee of the United States Government from making or authorizing an expenditure or obligation in excess of or in advance of available appropriations. 31 U.S.C. § 1341(a)(1). Thus, an appropriation must be available for an agency to incur an obligation, or the Antideficiency Act will be violated. Section 503(a) states: "None of the funds provided by this Act . . . shall be available for obligation or expenditure for programs, projects, or activities through a reprogramming of funds in excess of $5,000,000 or 10 percent, whichever is less, that: (1) augments existing programs, projects, or activities; . . . [or] that would result in a change in existing programs, projects, or activities as approved by the Congress, unless the Committees on Appropriations of the Senate and the House of Representatives are notified 15 days in advance of such reprogramming of funds." Pub. L. No. 110-329, § 503(a).

This section, which applies to amounts greater than $5 million, restricts the availability of funds for obligation (and expenditure) by means of a reprogramming of programs, projects, and activities (PPA) until proper notice is provided. The amounts designated for PPAs are found in the explanatory statement accompanying DHS' fiscal year 2009 appropriations act. 154 Cong. Rec. H9,801 (daily ed. Sept. 28, 2008). Section 503(e) of the appropriations act provides that "such dollar amounts specified in this Act and accompanying explanatory statement shall be subject to the conditions and requirements . . . of this section." Id. § 503(e). USSS falls under the direction of the Secretary of Homeland Security, 18 U.S.C. § 3056(g), and is required, among other things, to protect presidential and vice presidential candidates along with their spouses and children, 18 U.S.C. § 3056(a). For fiscal year 2009, USSS received an appropriation of $1,408,729,000. Pub. L. No. 110-329, 122 Stat. at 3667. The explanatory statement itemizes $41,082,000 for the Presidential Candidate Nominee Protection PPA. 154 Cong. Rec. H9,801. Obligations in connection with presidential candidate nominee protection end with the inauguration of the President and Vice President, in the present case, on January 20, 2009.

On June 30, 2009, months after its presidential candidate nominee protection activities had ended, DHS notified the Subcommittees on Homeland Security of the House and Senate Appropriations Committees that USSS had expended $5,100,000 more than had been designated for the Presidential Candidate Nominee Protection PPA. Letter from Under Secretary of Management, Department of Homeland Security, to the Chairman, Subcommittee on Homeland Security, Committee on Appropriations, United States Senate, June 30, 2009 (Reprogramming Notification). DHS explained that USSS "used balances from another USSS PPA to cover [a] shortfall in funding this fiscal year as a result of the protective efforts for the 2008 Presidential Campaign." Id. At issue here is whether (1) DHS and USSS violated section 503(a), and (2) if so, whether a violation of section 503(a) constitutes a violation of the Antideficiency Act.

Section 503(a)

On June 30, 2009, DHS notified the House and Senate Subcommittees on Homeland Security of a reprogramming of $5.1 million to cover a shortfall in the USSS Presidential Candidate Nominee Protection PPA. Section 503(a) requires the Secretary of Homeland Security to provide 15-day advance notification of proposed PPA reprogrammings in excess of $5 million. As noted above, the 2008 presidential campaign officially ended on January 20, 2009, and all USSS obligations for candidate protection were incurred by that time. Nevertheless, months elapsed between the end of the campaign and notification of the $5.1 million reprogramming for the Presidential Candidate Nominee Protection PPA.

While it is unclear from the documentation provided to us by USSS and DHS when USSS exceeded the section 503(a) $5 million threshold, the threshold had to have been exceeded by the Inauguration on January 20, 2009, when candidate protection ended. According to DHS, USSS used amounts from its National Special Security Event PPA to cover its candidate protection obligations that exceeded the $41 million itemized in the explanatory statement for the presidential candidate protection PPA. Reprogramming Notification. On some financial management issues, USSS does not act independently of its parent agency, DHS. Meeting between DHS Directorate of Management, Budget and Finance, and GAO, Jan. 12, 2010. DHS requires its component agencies, including USSS, to submit written reprogramming requests to the DHS Directorate of Management, Budget and Finance. DHS submits all reprogramming notifications required under section 503(a) to the House and Senate Appropriations Committees. Id.

However, section 503(a) specifically provides that no funds are available through a reprogramming in excess of $5 million unless the House and Senate Appropriations Committees are notified 15 days in advance of the reprogramming. Since DHS failed to notify the appropriations committees 15 days in advance of the obligation of the reprogrammed funds, and USSS incurred obligations in excess of the $5 million threshold more than 15 days prior to congressional notification of the reprogramming, we conclude that DHS and USSS violated section 503(a).

The second question asks whether a violation of section 503(a) constitutes a violation of the Antideficiency Act. If an agency incurs an obligation in excess or in advance of amounts that are legally available to the agency, the agency has violated the act. B-31740, Mar. 23, 2009. The Antideficiency Act extends to all provisions that implicate the availability of agency appropriations, and "agencies must consider the effect of all laws that address the availability of appropriations." Id. Section 503(a) is such a law. Under section 503(a), none of the funds appropriated to DHS for fiscal year 2009 were legally available for obligation through a reprogramming in excess of $5 million "unless the [Appropriations Committees are] notified 15 days in advance of such reprogramming." Pub. L. No. 110-329, § 503(a). USSS covered the shortfall in the Presidential Candidate Nominee Protection PPA with funds from another PPA, yet these funds could not be reprogrammed until DHS notified Congress 15 days in advance of the reprogramming. Thus, USSS and DHS violated the Antideficiency Act.

The Antideficiency Act requires that the agency head "shall report immediately to the President and Congress all relevant facts and a statement of actions taken." 31 U.S.C. § 1351. In addition, the agency must submit a copy of the report to the Comptroller General on the same date it transmits the report to the President and Congress. 31 U.S.C. § 1351, as amended by the Consolidated Appropriations Act, 2005, Pub. L. No. 108-447, div. G, title I, § 1401, 118 Stat. 2809, 3192 (Dec. 8, 2004). See also B-30433, Mar. 8, 2005.

Appendix II: DHS CFO–Secret Service Corrective Action Plan (CAP)

DHS CFO-USSS Corrective Action Plan

Increase Visibility: Implement strategies so both USSS and CFO can more closely monitor obligations and expenditures, effectively anticipate shortfalls, and take the necessary actions before an over-obligation of funds occurs.

Annual Obligation Plan: USSS will submit to CFO an annual obligation plan, with anticipated monthly obligations by PPA, prior to the start of each fiscal year. Updates for the plan will be provided to CFO before the start of each month.

Early Warning System: USSS will track actual obligations against planned and anticipated obligations to develop benchmarks that would act as red flags alerting USSS CFO of potential funding shortfalls.

Improve Funds Controls: Implement strategies to improve the control over funds distribution, including allotment, obligation, and expenditure. USSS will implement fiscal control procedures to ensure that internal and external reprogramming requests are submitted well before over-obligations are anticipated to occur.
DHS CFO has specific procedures in place if the monthly Budget Execution Report shows overspending at the PPA level. Specific training will be implemented to ensure that these procedures are followed.

Increase the Rigor of Internal and External Reprogrammings: Specific processes will be implemented to standardize the process for internal and external reprogrammings, increase the rigor of the process, and ensure that the reprogramming vetting process does not impose burdensome delays.

- All internal transfers and realignments will now require the notification and written approval of DHS CFO. The Department's written response will be sent within 24 hours.
- DHS will implement new procedures to increase the rigor and responsiveness of reprogramming requests.
- All external (above-threshold) reprogramming proposals will be submitted to the Appropriations Committees in a timely manner.
- DHS components will be required to first initiate an internal funding review to identify lower-priority spending within their components before reaching out to the Department to identify sources in other components.
- A reprogramming can only be requested if insufficient internal funds can be identified and the component can clearly articulate the negative impact of using internal resources to cover the shortfall.
- All reprogramming proposals must be submitted in writing and in the appropriate format. A reprogramming will only be considered in the Department after the impact of reducing funding for lower-priority efforts is clearly articulated and communicated to DHS CFO in writing.

In addition to the contacts named above, Susan Poling, Managing Associate General Counsel; Kirk Kiester, Assistant Director; Glenn Slocum, Assistant Director; David Alexander; Thomas Armstrong; Labony Chakraborty; Kathryn Crosby; Jill Evancho; Gabrielle Fagan; Tyrone Hutchins; and Felicia Lopez made key contributions to this report.
| Due to the unprecedented pace and crowds of the 2008 presidential campaign, the U.S. Secret Service (Secret Service), a component of the Department of Homeland Security (DHS), exceeded its budgeted amount for fiscal year 2009 presidential candidate nominee protection funding, but did not notify Congress of this shortfall (fiscal year 2009 shortfall) until June 2009--5 months after the Inauguration. In response to the Conference Report accompanying the 2010 DHS Appropriations Act, this report addresses the extent to which, at the time of the fiscal year 2009 shortfall, (1) Secret Service had the necessary internal controls in place to help ensure it could effectively manage and report on funds for presidential candidate protection; and (2) Secret Service and DHS had policies and procedures in place to help ensure that information related to the fiscal year 2009 shortfall was communicated to DHS and Congress. To conduct the audit work, GAO reviewed appropriation laws and regulations, Secret Service financial reports, and various DHS and Secret Service policy and procedural documents. GAO also interviewed officials from DHS and Secret Service. At the time of the fiscal year 2009 shortfall, Secret Service did not have--and still does not have--all of the necessary internal controls, including policies and procedures, in place to help ensure it can effectively manage and report on funds for presidential candidate protection. For example, the agency relied on undocumented manual processes to prepare and review two key reports--the Monthly Execution and Staffing Report and the Presidential Campaign Cost Report--used to monitor obligations, manage its funds by subaccounts, and report to Congress. Documenting the processes to prepare and review these reports could decrease the risk of future reporting errors and be useful to managers in controlling operations. 
Secret Service also did not have documented procedures for charging costs for certain candidate protection activities that cut across multiple subaccounts. The subaccounts are not discrete, and Secret Service officials stated that they lacked clarity and procedures on which subaccounts to use to cover costs for certain protection activities. Documenting policies and procedures for charging such costs could be useful in controlling operations and monitoring budget execution. Also, neither DHS nor Secret Service had documented benchmarks to serve as an early warning system when monitoring obligations and expenditures for potential future funding shortfalls. Lastly, DHS' budget guidance did not specify how to develop such benchmarks. Developing and implementing guidance on how to document benchmarks could help ensure that any future potential shortfalls in presidential candidate protection funds are identified in a timely manner. DHS and Secret Service lacked sufficient policies and procedures to ensure that information related to the fiscal year 2009 shortfall was communicated to DHS and Congress. At the time of the shortfall, DHS had written guidance on how to communicate a violation of the Antideficiency Act--which prohibits federal officials from obligating or expending funds in excess of appropriations--and notify Congress of a reprogramming, or shifting funds within an appropriation. However, because Secret Service mistakenly determined that the guidance did not apply, it simply informed DHS of the shortfall and requested assistance in covering it. GAO issued a legal opinion determining that DHS and Secret Service violated reprogramming notification requirements and the Antideficiency Act. Further, DHS had no written guidance on communicating a reprogramming that did not require congressional notification. Since the shortfall, DHS and Secret Service developed a Corrective Action Plan (CAP) to address issues related to the shortfall.
DHS implemented two of the four communication-related CAP measures, but has not provided written guidance for implementing the other two, which require that (1) components complete internal funding reviews prior to submitting reprogramming requests and articulate the negative impact of using internal resources to cover shortfalls, and (2) DHS provide timely submission of reprogramming notifications to the Appropriations Committees. Implementing these measures could help ensure better communication among Secret Service, DHS, and Congress in the event of future shortfalls, and help DHS and the committees assess whether DHS effectively provides information about potential shortfalls. GAO recommends that DHS and Secret Service (1) document certain financial management, cost allocation, and benchmark procedures, and (2) provide guidance on remaining communications-related corrective actions. DHS concurred. |
Border Patrol has reported that its primary mission is to prevent terrorists and weapons of terrorism from entering the United States and also to detect, interdict, and apprehend those who attempt to illegally enter or smuggle any person or contraband across the nation’s borders. Geographic responsibility for the southwest border is divided among nine Border Patrol sectors, two of which are in Arizona—Tucson and Yuma. Each sector has a varying number of stations, with agents responsible for patrolling within defined geographic areas. Border Patrol collects and analyzes various data on its enforcement efforts and the number and types of entrants who illegally cross the southwest border between the land ports of entry. These data include apprehensions and seizures of drugs and other contraband. The Border Patrol collects and maintains data on apprehensions and seizures in DHS’s Enforcement Integrated Database (EID). This database also includes an asset assists field in which agents can specify whether an asset, such as SBInet surveillance towers, contributed to apprehensions or seizures. CBP’s OTIA was created to help ensure CBP’s technology efforts are properly focused on the mission and are well integrated, and to strengthen CBP’s expertise and effectiveness in program management and acquisition. OTIA’s mission is to conduct and facilitate effective identification, acquisition, and life-cycle support of products and services while driving innovation to improve CBP’s performance in securing U.S. borders and facilitating lawful movement of goods and people. OTIA manages the implementation of the Plan and is acquiring seven technology programs in the Plan for use by Border Patrol in Arizona. The goal of the Plan is to achieve situational awareness along the Arizona border where the Plan’s technologies are deployed. For fiscal year 2013, OTIA budgeted $297 million in development and deployment funds for the Plan’s seven technology programs. 
Table 1 describes the Plan's programs, and appendix II provides a photograph of each technology program. The overall policy and structure for acquisition management outlined in DHS Acquisition Management Directive 102-01 and its associated Instruction Manual 102-01-001 includes an Acquisition Life-cycle Framework to plan and execute the department's acquisition programs. According to the directive, DHS adopted the Acquisition Life-cycle Framework to ensure consistent and efficient acquisition management, support, review, and approval throughout the department. As shown in figure 1, DHS's Acquisition Life-cycle Framework includes four acquisition phases through which DHS develops, deploys, and operates new capabilities. During the first three phases, the DHS component pursuing the acquisition is required to produce key documents to justify, plan, and execute the acquisition. These phases each culminate in an Acquisition Decision Event where the Acquisition Review Board—a board of senior DHS officials—determines whether a proposed acquisition has met the requirements of the relevant acquisition framework phase and should proceed. The Acquisition Review Board is chaired by the Acquisition Decision Authority—the official responsible for ensuring compliance with Acquisition Management Directive 102-01. DHS classifies acquisitions into three levels that determine whether the Acquisition Decision Authority can be a Component Acquisition Executive or should be DHS's Deputy Secretary or Under Secretary for Management. The IFT program is a Level 2 acquisition, which is overseen by the department, and the DHS Under Secretary for Management serves as the Acquisition Decision Authority. The other six programs in the Plan are Level 3 acquisitions, which are overseen by CBP's Acquisition Review Board, and the Acquisition Decision Authority is a CBP official who serves as both the Assistant Commissioner for OTIA and Component Acquisition Executive.
As of January 2014, CBP has awarded contracts for four of the Plan's seven programs and has initiated or completed deployment of technology to Arizona for three of the four programs under contract, as shown in table 2. OTIA has developed a schedule for each of the Plan's seven programs, and four programs will not meet their originally planned completion dates. OTIA established schedules for each program, serving as the original program plans with the required sequence of events, resource assignments, and dates for deliverables. However, as of March 2013, five of the Plan's programs—IFT, RVSS, MSC, APSS, and UGS/IS—have experienced delays relative to their baseline schedules, as shown in figure 2. OTIA officials attributed program delays to various factors, including higher than expected numbers of proposals from vendors for some of the programs, system performance problems, and limited resources. In particular, OTIA officials stated that they initiated acquisitions for a number of the Plan's programs around the same time, but OTIA did not have a sufficient number of acquisition staff with sufficient experience and skills to review contract proposals or manage the programs, a fact that contributed to program delays. For example, OTIA officials stated that for both the IFT and RVSS programs, the source selection process to decide which vendor would be awarded the contract was extended because of a higher than expected number of proposals received from vendors and a limited acquisition workforce to review and process the proposals, including not having a dedicated contracting officer for each of the programs. In addition, for the MSC program, OTIA officials attributed delays to problems that both vendors who were awarded contracts experienced with their systems, as previously discussed. OTIA took various actions in response to these delays, such as extending the scheduled contract award date for the IFT and RVSS programs and extending scheduled activities for the MSC program from July 2014 to September 2015.
According to best practices, in acquisition programs, agencies may make modifications to program schedules to reflect changes to programs; CBP has consistently updated each program's schedule in response to program delays. However, we assessed OTIA's schedules as of March 2013 for the three highest-cost technology programs—IFT, RVSS, and MSC—and found that these program schedules addressed some, but not all, best practices for scheduling. The Schedule Assessment Guide (GAO-12-120G) identifies 10 best practices associated with effective scheduling, which are summarized into four characteristics of a reliable schedule—comprehensive, well constructed, credible, and controlled. Table 3 summarizes our assessment of the IFT, RVSS, and MSC schedules. Appendix III provides more detailed information on the description of each best practice and on the results of our assessment. (A schedule risk analysis is performed to calculate the amount of contingency time that is needed to complete the program on time.) According to our overall analysis, OTIA at least partially met the four characteristics of reliable schedules for the IFT and RVSS schedules and partially or minimally met the four characteristics for the MSC schedule. For example: Comprehensive: OTIA's schedule for the IFT, RVSS, and MSC programs partially met best practices in terms of being comprehensive. For example, our analysis found that all three program schedules reflected the work that needed to be accomplished for the schedules, and each schedule had duration estimates that at least substantially met best practices. The IFT schedule contained a clear start and a finish milestone, and the RVSS schedule contained at least a clear start milestone. However, the schedules for these programs did not meet other best practices in terms of being comprehensive.
For example, the MSC schedule did not contain fields that map activities to a program work breakdown structure; and the schedules for the IFT and RVSS programs did not fully map all schedule activities to each program’s work breakdown structure in accordance with best practices. Moreover, the IFT and RVSS schedules did not include the level of detail expected to provide oversight of ongoing construction work, as activities associated with the construction work were reflected in the schedules as milestones, limiting OTIA’s ability to monitor the progress of these efforts. Specifically, these activities were reflected in the schedules as a milestone that was a point in time, rather than a range of time, as called for by best practices. In addition, resources were not assigned to some activities in all three schedules. According to best practices, a schedule without resources implies an unlimited number of resources and their unlimited availability. Best practices note that assigning resources to activities across programs can help prevent any future overallocation of resources. Well constructed: OTIA’s schedule for the IFT program substantially met the characteristic of being well constructed; the schedules for RVSS and the MSC programs partially met this characteristic. For example, our analysis found the IFT program schedule had few missing or incorrect logic links and the critical path—the chain of dependent activities with the longest total duration—was found to be a straightforward, continuous path of activities that depicted the effort driving the key milestones. Our analysis of the RVSS and MSC program schedules found that these schedules had no missing or incorrect logic links. However, we could not verify a reliable critical path that was continuous from the status date to contract award for these schedules. 
In addition, our analysis shows that each of the three programs' schedules exhibited unreasonable amounts of total float—that is, the amount of time by which an activity can slip before the delay affects the program's estimated finish date—and thus appeared to overestimate true schedule flexibility. For example, 25 percent of the activities in the IFT schedule appeared to be able to slip at least 10 working months before affecting the final milestone of the program. Credible: OTIA's schedules for the IFT and RVSS programs partially met the characteristic of being credible; the MSC program schedule minimally met this characteristic. For example, our analysis found that the IFT and RVSS schedules responded when significant delays were introduced into the planned activities in the schedules; that is, when we tested the robustness of the schedules by extending activity durations, forecasted dates recalculated appropriately. However, the MSC schedule responded to schedule delays in some instances but not in others, and some forecasted dates did not recalculate to account for changes we made in the duration of activities when testing the MSC schedule. Additionally, OTIA performed a risk analysis for the IFT and RVSS programs; however, the IFT and RVSS analyses did not include the risks most likely to delay the project or how much contingency reserve (that is, time held in reserve for potential delays) was needed for each schedule. For the MSC schedule, OTIA did not conduct a schedule risk analysis because, according to program officials, OTIA did not have a tool for conducting schedule risk assessment at the time the MSC schedule was developed. According to best practices, without this analysis, the program office may not sufficiently understand the level of confidence in meeting the program's completion date and identify any potential reserves for contingencies.
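The critical path and total float concepts assessed above come from the standard critical path method: a forward pass computes each activity's earliest start and finish, a backward pass computes its latest start and finish, and total float is the difference; activities with zero float form the critical path. A minimal sketch with an invented four-activity network (not an actual program schedule):

```python
# Critical path method sketch on a notional network.
# Activity durations (days) and predecessor links are invented for illustration.

durations = {"A": 5, "B": 3, "C": 7, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for act in ["A", "B", "C", "D"]:  # topological order
    es[act] = max((ef[p] for p in preds[act]), default=0)
    ef[act] = es[act] + durations[act]

# Backward pass: latest finish (lf) and latest start (ls).
project_end = max(ef.values())
ls, lf = {}, {}
for act in ["D", "C", "B", "A"]:  # reverse topological order
    succs = [s for s, ps in preds.items() if act in ps]
    lf[act] = min((ls[s] for s in succs), default=project_end)
    ls[act] = lf[act] - durations[act]

# Total float: how far an activity can slip without delaying the finish date.
total_float = {a: ls[a] - es[a] for a in durations}
critical_path = [a for a in ["A", "B", "C", "D"] if total_float[a] == 0]
print(total_float)     # {'A': 0, 'B': 4, 'C': 0, 'D': 0}
print(critical_path)   # ['A', 'C', 'D']
```

Here only activity B has float (4 days); A, C, and D form the critical path, so any slip in them delays the 14-day finish. The "unreasonable total float" finding above corresponds to networks where missing or incorrect logic links inflate these float values far beyond true flexibility.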
Controlled: OTIA’s schedules for the IFT and the RVSS programs partially met the characteristic of being controlled; the MSC program schedule minimally met this characteristic. For example, our analyses determined all three schedules were well maintained, updated periodically by a trained scheduler, and contained no out-of-sequence activities. We also found that the IFT and the RVSS schedules contained no date anomalies, but the MSC schedule did have anomalies. For example, the MSC schedule contained 13 activities in the past with no actual start or finish dates. Further, our analysis showed that none of the schedules had valid baseline dates for activities or milestones by which management could track current performance. The IFT baseline schedule was originally approved in July 2011, and the baseline for the RVSS was approved in September 2012; however, both of these programs have been delayed. Rebaselining resets the estimated schedule that is used to determine how the program will be held accountable. Once a program is rebaselined, OTIA officials stated that the office plans to report on the performance of the program based on the revised schedule. However, none of the schedules we assessed contained valid baseline dates that could be used to track on-time, delayed, or accelerated effort. For example, a baseline schedule was not established for the MSC program and both the IFT and RVSS schedules were missing some baseline dates for activities and milestones. In addition, according to our analyses, none of the three schedules were supported by a schedule baseline document, which is a single document that defines the organization of a schedule, describes the logic of the network, describes the basic approach to managing resources, and provides a basis for all parameters used to calculate dates. 
OTIA officials stated that the Acquisition Program Baseline for both the IFT and RVSS serves as the baseline schedule document, which defines the cost, schedule, and performance baselines; however, the Acquisition Program Baseline and related guidance present an overview of OTIA schedule policy rather than assumptions specific to individual program schedules. OTIA officials stated that they believe the schedules for the IFT, RVSS, and MSC programs are generally reliable, but also stated that these schedules may not fully meet all best practices. OTIA officials stated that they plan to rebaseline the IFT and RVSS program schedules after contract award and the MSC program schedule after contract negotiations. Rebaselining these schedules would help OTIA better address some of the best practices, such as to help ensure a more full and consistent allocation of resources, to address gaps in the critical path to program completion, and to address schedule risk assessments. However, OTIA’s plans to rebaseline the schedules would not position OTIA to meet all best practices, which are designed to ensure reliable schedules. According to best practices, to be considered reliable, a schedule must substantially or fully meet all four schedule characteristics. As our analysis indicates, OTIA does not have the information it needs in the schedules to effectively use them in managing and overseeing the IFT, RVSS, and MSC programs. While OTIA’s plans to rebaseline the schedules are positive steps, ensuring that all schedule best practices are applied to the IFT, RVSS, and MSC schedules when updating them could help OTIA better ensure the reliability of the three programs’ schedules and could help better position OTIA to identify and address any potential further delays in the programs’ commitment dates. OTIA has not developed an Integrated Master Schedule for scheduling, executing, and tracking the work to implement the Plan and its seven programs. 
Rather, OTIA has used the separate schedules for each individual program (or "project") to manage implementation of the Plan. The use of an Integrated Master Schedule is a well-established practice in program and project management and is a necessary tool for coordination of independently managed projects that have dependencies—including resource dependencies—on one another. According to schedule best practices, an Integrated Master Schedule shows the effect of delayed or accelerated government activities on contractor activities, as well as the opposite effect, for multiple programs. In addition, an Integrated Master Schedule that allows managers to monitor all work activities, how long the activities will take, and how the activities are related to one another is a critical management tool for complex systems that involve the incorporation of a number of different projects, such as the Plan. OTIA officials stated that an Integrated Master Schedule for the overarching Plan is not needed because the Plan contains individual acquisition programs as opposed to a plan consisting of seven integrated programs. However, collectively, these programs are intended to provide Border Patrol with a combination of surveillance capabilities to assist in achieving situational awareness along the Arizona border with Mexico, as referenced in CBP's planning documents. As a document that integrates the planned work, the resources necessary to accomplish that work, and the associated budget, an Integrated Master Schedule provides information and oversight regarding the schedule. According to best practices, an Integrated Master Schedule also helps agencies monitor progress against overall completion dates. However, OTIA has not established a target completion date for an Integrated Master Schedule for the overall Plan.
(See Department of Homeland Security, Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology (BSFIT) for Fiscal Years 2014-2017 (Washington, D.C.: Apr. 17, 2013).) Developing and maintaining an Integrated Master Schedule for the Plan could allow OTIA insight into current or programmed allocation of resources for all programs as opposed to attempting to resolve any resource constraints for each program individually. Because OTIA does not have an Integrated Master Schedule for the Plan, it is not well positioned to understand how schedule changes in each individual program could affect implementation of the overall Plan. An Integrated Master Schedule could also help provide CBP a comprehensive view of the Plan and help CBP to reliably commit to when the Plan will be fully implemented, as well as help CBP to better predict whether estimated completion dates are realistic to manage programs' performance. OTIA has developed a rough order of magnitude estimate for the Plan and individual Life-cycle Cost Estimates for the IFT and RVSS programs that meet some but not all best practices for such estimates. Best practices for cost estimating and Office of Management and Budget guidance emphasize that reliable cost estimates are important for program approval and continued receipt of annual funding. DHS policy similarly provides that Life-cycle Cost Estimates are essential to an effective budget process and form the basis for annual budget decisions. Reliable Life-cycle Cost Estimates reflect four characteristics—they are (1) well documented, (2) comprehensive, (3) accurate, and (4) credible—which encompass 12 best practices. For example, a best practice for a credible cost estimate is independently verifying a program's Life-cycle Cost Estimate with an independent cost estimate and reconciling any differences.
In August 2010, OTIA developed a rough order of magnitude cost estimate for the Plan—a high-level estimate without much detail—which was about $1.54 billion, including approximately $750 million in acquisition costs and approximately $800 million in operations and maintenance costs. In June 2013, OTIA revised this cost estimate for the Plan, estimating the cost at $1.39 billion, including about $480 million in acquisition costs and about $910 million in operations and maintenance costs. According to OTIA officials, some of the differences in costs between the August 2010 and June 2013 estimates are attributable to using more current information for the June 2013 estimate. Table 4 provides the June 2013 estimated cost and number of units to be procured and deployed for each of the Plan’s seven programs. In November 2011, we reported on the results of our analysis of the Plan’s August 2010 estimate. Specifically, we found that the August 2010 estimate substantially met best practices in terms of being comprehensive and accurate, and partially met best practices in terms of being well documented. For example, we reported that, in terms of being comprehensive, the estimate included documented technical data. In terms of accuracy, we reported that the cost estimate was continually updated and refined as more information became known. However, we also found that the August 2010 estimate minimally met best practices for being credible. For example, CBP officials had not conducted a sensitivity analysis and a cost-risk and uncertainty analysis to determine a level of confidence in the estimate, nor did CBP compare it with an independent estimate. At that time, OTIA officials stated that CBP’s approach was to develop and report an initial rough order of magnitude cost estimate for the programs in the Plan, not necessarily a Life-cycle Cost Estimate that met all best practices. 
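The sensitivity and cost-risk and uncertainty analyses the report faults CBP for omitting are typically done by simulating the uncertain cost elements of an estimate and reading off where the point estimate falls on the resulting distribution, which yields a confidence level and a risk-adjusted budget. A hedged sketch using invented triangular distributions (the cost elements and figures are assumptions for illustration, not CBP data):

```python
# Monte Carlo cost-risk sketch: simulate uncertain cost elements and
# determine the confidence level of a point estimate built from the
# most-likely values. All distributions and figures are invented.

import random

random.seed(1)  # reproducible illustration

# (low, most likely, high) in $ millions for three notional cost elements.
elements = [(40, 50, 70), (20, 25, 40), (10, 12, 20)]
point_estimate = sum(ml for _, ml, _ in elements)  # 87

# random.triangular takes (low, high, mode).
trials = [sum(random.triangular(lo, hi, ml) for lo, ml, hi in elements)
          for _ in range(10_000)]
trials.sort()

# Fraction of simulated outcomes at or below the point estimate.
confidence = sum(t <= point_estimate for t in trials) / len(trials)
p80 = trials[int(0.8 * len(trials))]  # 80th-percentile (risk-adjusted) cost
print(f"Point estimate ${point_estimate}M falls at ~{confidence:.0%} confidence")
print(f"Budgeting to the 80th percentile implies ~${p80:.0f}M")
```

Because the distributions are right-skewed, the sum of most-likely values lands well below the 50 percent confidence level, which is why the best practices call for quantifying risk rather than budgeting to the point estimate.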
In our November 2011 report, we recommended that CBP update its August 2010 cost estimate for the Plan using best practices, so that the estimate would be comprehensive, accurate, well documented, and credible. CBP concurred with the recommendation. In November 2012, OTIA officials told us that CBP no longer intends to develop a Life-cycle Cost Estimate for the Plan that meets all best practices. OTIA officials also stated that they used a risk-based approach to improve cost-estimating certainty and confidence by focusing on the Life-cycle Cost Estimates for the IFT and RVSS programs, which compose 90 percent of the Plan's estimated cost. According to the officials, developing a Life-cycle Cost Estimate for the Plan that followed all best practices at this point in the acquisition cycle would not contribute much cost management benefit because a number of programs are under contract and units were being deployed to the field. However, as we recommended in November 2011, we continue to believe that a Life-cycle Cost Estimate for the Plan, developed using best practices, is needed to ensure that the estimate is comprehensive, accurate, well documented, and credible to help the agency and Congress fully understand the impacts of the Plan's various programs. Moreover, CBP's June 2013 revised cost estimate for the Plan does not address the concerns we identified in November 2011 with CBP's original cost estimate. For example, the IFT and RVSS programs compose 90 percent of the Plan's cost in the June 2013 Life-cycle Cost Estimate; however, OTIA has not independently verified its Life-cycle Cost Estimates for the IFT and RVSS programs with independent cost estimates and reconciled any differences with each program's respective Life-cycle Cost Estimate, consistent with best practices. Furthermore, the remainder of the June 2013 Life-cycle Cost Estimate is not fully documented.
The costs for programs other than the IFT and RVSS are provided as a summary program cost without a detailed description provided. In contrast, the IFT and RVSS Life-cycle Cost Estimates provided backup documentation, including labor hours and methodology. After CBP developed the initial cost estimate for the Plan in August 2010, CBP developed separate Life-cycle Cost Estimates for the IFT and RVSS programs in January and March 2012, respectively. The estimates for the IFT and RVSS programs met some but not all best practices for cost estimates. Specifically, our analysis shows that, in developing these estimates, CBP partially documented the data used in the cost model for the IFT’s Life-cycle Cost Estimate and fully documented the cost model for the RVSS’s Life-cycle Cost Estimate. CBP also conducted a sensitivity analysis and risk and uncertainty analysis to determine the level of confidence in both Life-cycle Cost Estimates so that contingency funding could be established relative to quantified risk. However, our analysis showed that CBP did not independently verify its draft Life-cycle Cost Estimates for the IFT and RVSS programs with independent cost estimates and reconcile any differences with each program’s respective Life-cycle Cost Estimate, consistent with best practices. According to OTIA officials, the IFT program’s Life-cycle Cost Estimate will be updated after the contract is awarded, the cost model for the updated Life-cycle Cost Estimate will be fully documented in accordance with best practices for cost estimating, and DHS’s Office of Program Accountability and Risk Management is expected to review the updated IFT Life-cycle Cost Estimate. Also, OTIA officials stated that they expect to update the RVSS Life-cycle Cost Estimate and receive approval for it in February 2014. 
However, OTIA is uncertain as to whether the updated IFT and RVSS Life-cycle Cost Estimates will be verified with independent cost estimates and any differences reconciled with the respective updated Life-cycle Cost Estimates. Specifically, OTIA officials stated that the IFT contract award will drive changes to the scope, schedule, and cost/budget baseline for the IFT program; CBP plans to update the Life-cycle Cost Estimate with programming and cost assumptions; and CBP plans to provide the updated cost estimate to the department as part of a revised submission of the Acquisition Program Baseline document. For the RVSS program, OTIA officials stated that the contract award resulted in changes that required updates to and reconciliation between the Cost Estimating Baseline Document and the Life-cycle Cost Estimate for the program’s scope, schedule, and cost/budget baseline. CBP intends to update the RVSS program’s Life-cycle Cost Estimate with programming and cost assumptions during the second quarter of fiscal year 2014 and provide the updated cost estimate to DHS for review. However, according to OTIA officials, as of November 2013, the agency had not yet determined whether to independently verify or validate the IFT and RVSS Life-cycle Cost Estimates. As CBP no longer intends to develop a Life-cycle Cost Estimate for the entire Plan, when updating the IFT and RVSS Life-cycle Cost Estimates, independently verifying the cost estimates and reconciling any differences, in accordance with cost-estimating best practices, could help better ensure the reliability of each estimate. Consistent with DHS acquisition guidance, CBP tailored the DHS Acquisition Life-cycle Framework for the IFT, RVSS, and MSC programs, primarily because the agency’s strategy for the three programs includes acquiring nondevelopmental technologies, preferably commercial-off-the- shelf systems, as opposed to developing technologies. 
As a result, rather than entering the DHS acquisition framework at Acquisition Decision Event 1, when a system includes technology development, the IFT program entered at combined Acquisition Decision Events 2B/3, and the RVSS and MSC programs entered at Acquisition Decision Event 2B. In pursuing its strategy to acquire nondevelopmental systems for the Plan's three highest-cost programs, OTIA identified requirements and capabilities for each program, consistent with DHS acquisition guidance. Specifically, OTIA identified requirements for the IFT and RVSS programs that were approved in 2012, and capabilities for the MSC program that were developed in 2009. As part of the strategy to acquire commercial-off-the-shelf systems, CBP traded off, that is, reduced, some requirements for the RVSS and expects to trade off some requirements for the IFT for cost-effectiveness or schedule reasons. For example, with regard to the RVSS, OTIA traded off two requirements because, according to OTIA officials, they were not offered with the selected RVSS, which presented the best value to the government while providing as many requirements as possible. According to DHS Acquisition Management Directive 102-01 guidance, as part of the acquisition process, a program office may make trade-offs among performance, life-cycle cost, schedule, and risk. For example, the guidance states that a small reduction in performance that does not impair the mission might result in a large cost reduction. For the Plan's three highest-cost programs, DHS and CBP did not consistently approve key acquisition documents before or at the Acquisition Decision Events, in accordance with DHS's acquisition guidance. An important aspect of an Acquisition Decision Event is the review and approval of key acquisition documents critical to establishing the need for a program, its operational requirements, an acquisition baseline, and test and support plans, according to DHS guidance.
DHS Acquisition Management Directive 102-01—and the associated DHS Instruction Manual 102-01-001 and appendixes—requires program offices to develop documents demonstrating critical knowledge that would help leaders make better-informed investment decisions when managing individual programs. The DHS guidance provides information for preparing acquisition documents, which require department- or component-level approval before a program moves to the next acquisition phase. In a September 2012 report, we found that while DHS had initiated efforts to validate required acquisition documents in a timely manner at major milestones, DHS leadership had authorized and continued to invest in major acquisition programs even though the vast majority of those programs lacked foundational documents demonstrating the knowledge needed to help manage risks and measure performance. We concluded in September 2012 that this limited DHS's ability to proactively identify and address the challenges facing individual programs. We recommended, among other things, that DHS ensure all major acquisition programs fully comply with DHS acquisition policy by obtaining department-level approval for key acquisition documents before approving their movement through the acquisition life cycle. DHS concurred and since the time of our September 2012 report has approved additional acquisition documents. However, DHS has not yet demonstrated progress in obtaining department-level approval for most of its major acquisition programs' key acquisition documents. On the basis of our analysis for IFT, RVSS, and MSC programs under the Plan, the DHS Acquisition Decision Authority approved the IFT program and the CBP Acquisition Decision Authority approved the RVSS and MSC programs to proceed to subsequent phases in the Acquisition Life-cycle Framework without approving all six required acquisition documents for each program. 
We also found that one document for the IFT program, five documents for the RVSS program, and two documents for the MSC program were subsequently approved after the programs received authority to proceed to the next phase. Table 5 provides a comparison of when key acquisition documents were required to be approved and when they were approved for the IFT, RVSS, and MSC programs. We discuss the status of key acquisition documents for the three highest-cost programs below. IFT program. Our analyses found that the DHS Acquisition Decision Authority approved four of the six documents required at Acquisition Decision Event 2B/3—the Acquisition Plan, Acquisition Program Baseline, Integrated Logistics Support Plan, and Operational Requirements Document—but did not approve two others—the Life-cycle Cost Estimate and Test and Evaluation Master Plan. At the time of the Acquisition Decision Event, CBP had a Life-cycle Cost Estimate for the IFT, but the cost estimate had not yet been approved by DHS. According to OTIA officials, the Life-cycle Cost Estimate for the IFT was discussed at the Acquisition Decision Event 3 meeting and approved by the DHS Under Secretary for Management and DHS's Office of Program Accountability and Risk Management. However, CBP did not provide documentation showing that the estimate was approved by DHS. The DHS Director of Operational Test and Evaluation approved the revised IFT Test and Evaluation Master Plan on November 27, 2013, over 18 months after it was required to be approved. DHS and CBP officials attributed the delay in approving the Test and Evaluation Master Plan, in part, to discussions within CBP about the type and level of testing to be conducted on the IFTs. 
Specifically, CBP officials stated that a June 2012 version of the draft Test and Evaluation Master Plan did not include robust operational test and evaluation because of the IFT program’s strategy to acquire a nondevelopmental system (sometimes referred to as a commercial-off-the-shelf system). As a result, Border Patrol requested that rigorous, disciplined testing be included in the Test and Evaluation Master Plan to obtain familiarization with, and confidence in, the system and establish baseline performance information. According to DHS’s acquisition guidance, the Test and Evaluation Master Plan is important because it describes the strategy for conducting developmental and operational testing to evaluate a system’s technical performance, including its operational effectiveness and suitability. However, the IFT Test and Evaluation Master Plan approved by DHS in November 2013 does not describe testing to evaluate the operational effectiveness and suitability of the system. Rather, the Test and Evaluation Master Plan describes CBP’s plans to conduct a limited user test of the IFT. According to the Test and Evaluation Master Plan, the limited user test will be designed to determine the IFT’s mission contribution. According to OTIA and the Test and Evaluation Master Plan, this testing is planned to occur during 30 days in environmental conditions present at one site—the Nogales station. CBP plans to conduct limited user testing for the IFT under the same process that is typically performed in any operational test and evaluation, according to the Test and Evaluation Master Plan. The November 2013 IFT Test and Evaluation Master Plan notes that, because the IFT acquisition strategy is to acquire nondevelopmental IFT systems from the marketplace, a limited user test will provide Border Patrol with the information it needs to determine the mission contributions from the IFTs, and thus CBP does not plan to conduct more robust testing. 
However, this approach is not consistent with DHS’s acquisition guidance, which states that even for commercial-off-the-shelf systems, operational test and evaluation should occur in the environmental conditions in which a system will be used before a full production decision for the system is made and the system is subsequently deployed. This guidance also states that for commercial-off-the-shelf systems, operational tests should be conducted to ensure that the systems satisfy user-defined requirements. In addition, DHS guidance states that the primary purpose of test and evaluation is to provide timely and accurate information to managers, decision makers, and other stakeholders to support research, development, and acquisition, in a manner that reduces programmatic financial, schedule, and performance risk. We recognize the need to balance the cost and time to conduct testing to determine the IFT’s operational effectiveness and suitability with the benefits to be gained from such testing. However, revising the Test and Evaluation Master Plan to include more robust testing to determine operational effectiveness and suitability that more fully accounts for the various environmental conditions under which the IFTs will operate could better position CBP to evaluate IFT capabilities before moving to full production for the systems, help provide CBP with information on the extent to which the towers satisfy the Border Patrol’s user requirements, and help reduce potential program risks. In particular, although the limited user test should help provide CBP with information on the IFTs’ mission contribution and how Border Patrol can use the system in its operations, the limited user test does not position CBP to obtain information on how the IFTs may perform under the various environmental conditions the system could face once deployed. 
For example, in November 2013, the DHS Director of Test and Evaluation stated that testing the IFT at only one location during a clear, warm day without much wind would not produce representative results for days when it would be, for example, rainy, windy, freezing, or snowy, or when there was lightning. Likewise, he said testing in one location, such as Nogales, would not necessarily produce the same results as testing in Tucson because of the different terrains for the two locations. Conducting limited user testing in one area in Arizona—the Nogales station—for 30 days could limit the information available to CBP on how the IFT may perform in other conditions and locations along the Arizona border with Mexico. As of November 2013, CBP intends to deploy IFTs to 50 locations in southern Arizona, which can include different terrain and differences in climate throughout the year. Although the IFT program is not the same as SBInet, according to the Plan, the IFTs are to be deployed to locations with similar environmental and terrain conditions as SBInet towers, and IFT and SBInet systems may have similar types of technologies, such as cameras and radar. CBP previously encountered testing issues with SBInet. For example, in a January 2010 report, we found that while DHS’s approach to SBInet testing appropriately consisted of a series of progressively expansive developmental and operational events, the test plans and procedures for some test events were not defined in accordance with guidance. In January 2010, we concluded that effective testing was integral to successfully acquiring and deploying a large-scale, complex system, like SBInet. We further concluded that to do less unnecessarily increased the risk of problems going undetected until late in the system’s life cycle, such as when it was being accepted for use. In addition, in a November 2011 report, we found that the U.S. 
Army Test and Evaluation Command (ATEC) operationally tested SBInet at Tucson and that testing revealed challenges regarding the effectiveness and suitability of the technology for border surveillance. Among other things, this testing found that the rugged, restrictive terrain and weather conditions prevalent where SBInet is deployed affected the performance of the system’s radar, which affected success in detecting, identifying, and classifying items of interest. Revising the Test and Evaluation Master Plan to more fully test the IFT in the various environmental conditions in which it will be used to determine operational effectiveness and suitability before IFTs move to full production, in accordance with DHS acquisition guidance, could help provide CBP with more complete information on how the IFTs will operate under a variety of conditions before beginning full production. It could also help better position CBP to understand how the IFTs will meet Border Patrol’s operational requirements for the towers in contributing to Border Patrol’s border security mission. Without conducting operational testing in accordance with DHS guidance, the IFT program may be at risk of not meeting Border Patrol operational needs. RVSS program. The CBP Acquisition Decision Authority approved the program at Acquisition Decision Event 2B; however, the official had not approved any of the six required documents as required by DHS acquisition guidance at the time of that event. According to OTIA officials, the Acquisition Decision Authority approved the program for this Acquisition Decision Event because all of the necessary programmatic information was sufficiently developed and coordinated to support this decision. However, the Acquisition Decision Authority did not approve five of the documents until months after this event, and a sixth document, a Life-cycle Cost Estimate, was in draft form in November 2013—2 years after its required approval date. 
According to OTIA officials, the RVSS Life-cycle Cost Estimate is expected to be completed and approved in the second quarter of fiscal year 2014 and provided to DHS for review. MSC program. The CBP Acquisition Decision Authority approved two of the required six documents by Acquisition Decision Event 2B—the Acquisition Plan and Operational Requirements Document. However, the Integrated Logistics Support Plan was not approved until about 21 months after Acquisition Decision Event 2B. Also, the Acquisition Program Baseline was not expected to be approved until the second quarter of fiscal year 2014, more than 3 years after it was required to be approved for Acquisition Decision Event 2B and at least 16 months after it was required to be approved for Acquisition Decision Event 3. Furthermore, a Life-cycle Cost Estimate for the MSC's operations and maintenance costs was expected to be completed in late 2013, more than 3 years after it was required to be approved for Acquisition Decision Event 2B. Since we last reported on CBP's efforts to assess the performance of its SBInet surveillance systems in November 2011, CBP has taken steps to assess the performance of these technologies. In November 2011, we found that CBP had not conducted a post-implementation review and developed a plan to address SBInet operational test outcomes. Specifically, we found that CBP had not addressed the findings of ATEC's March 2011 operational test results for the SBInet system at Tucson, which revealed challenges regarding the effectiveness and suitability of the technology for border surveillance and made nine recommendations to address performance issues. At that time, CBP officials stated that the agency did not conduct a post-implementation review or develop a plan to address the ATEC test results because the Secretary of Homeland Security canceled SBInet in January 2011. 
In November 2011, we recommended that CBP, in accordance with DHS guidance, conduct a post-implementation review and operational assessment of its SBInet system, and assess costs and benefits of taking action on the results of ATEC's operational test. In making this recommendation, we concluded that conducting such a review, and weighing the costs and benefits of taking action on recommendations resulting from ATEC's test of the SBInet system, could inform CBP's decisions about future deployments of similar technologies, such as the IFTs. In response to our November 2011 recommendation, OTIA tasked the Johns Hopkins University Applied Physics Laboratory with conducting an independent post-implementation review of its SBInet Block 1 system. In January 2013, CBP released the results of the SBInet Block 1 Post Implementation Review (PIR), an assessment of the performance of its two SBInet surveillance system locations at Tucson and Ajo. The PIR concluded that CBP's SBInet surveillance system has enhanced overall situational awareness within system viewsheds, improved agent safety, and been operationally available and effective with costs consistent with those anticipated for the system. For instance, the PIR concluded that the system broadened the agents' situational awareness beyond the tactical, agent-on-the-ground sphere of awareness, and increased their ability to monitor incursions. The PIR also made five recommendations for CBP to improve future operational assessments of its SBInet surveillance system and to plan for new acquisition sensor deployments, such as for CBP to conduct a more detailed assessment of the impacts of Block 1 systems and develop more on-the-job agent training. According to OTIA and Border Patrol officials, as of May 2013, CBP is in the process of documenting and reviewing each recommendation outlined in the PIR, and intends to document its plans to address those recommendations that OTIA and the Office of Border Patrol determine need corrective action. 
However, these officials stated that some of the findings and recommendations outlined in the PIR will not be explicitly addressed or applied to future deployment efforts. For instance, according to officials, because the technologies planned for deployment under the Plan are commercial-off-the-shelf products, the PIR finding about recording the documentation of environmental factors, such as weather and terrain, that impede system performance will not apply to the technologies to be deployed under the Plan, as those technologies include requirements on documentation of environmental factors. Border Patrol officials further stated that the contractor and Border Patrol will have a process to enable them to determine where the best deployment locations, given the variable terrain, will be for the technologies to be deployed under the Plan. Moreover, Border Patrol officials stated that Tucson sector officials have been assigned responsibility to determine the extent to which corrective actions are needed to address each recommendation outlined in the PIR because these sector officials have a better understanding of the environment in which the SBInet system is operating. According to OTIA officials, the agency plans to conduct annual operational assessments of its SBInet system. As additional surveillance technologies are deployed, we will continue to monitor Border Patrol's efforts to address issues identified by the PIR as part of our recommendation follow-up process. In addition, the PIR concluded that as of January 2013, six of the nine recommendations outlined in ATEC's operational test have either been addressed or are in the process of being addressed. The ATEC recommendations that remain to be addressed include, for example, addressing software reliability, improving sustainability cost, and reducing maintenance issues. 
OTIA officials stated that the agency plans to take actions to address the remaining three recommendations by, for example, pursuing alternative technical solutions to extend the life-cycle of the SBInet system and improving sustainability costs by reducing the contractor’s responsibility for field maintenance and other functions by transitioning to government support in 2014. CBP is not capturing complete asset assist data on the contributions of its surveillance technologies to apprehensions and seizures, and these data are not being consistently recorded by Border Patrol agents and across locations. Although CBP has a field within the EID for maintaining data on whether technological assets, such as SBInet surveillance towers, and nontechnological assets, such as canine teams, assisted or contributed to the apprehension of illegal entrants, and seizure of drugs and other contraband, according to CBP officials, Border Patrol agents are not required to record these data. This limits CBP’s ability to collect, track, and analyze available data on asset assists to help monitor the contribution of surveillance technologies, including its SBInet system, to Border Patrol apprehensions and seizures and inform resource allocation decisions. Our analysis of EID asset assist data for apprehensions and seizures in the Tucson and Yuma sectors from fiscal year 2010 through June 2013 shows that information on asset assists was generally not recorded for all apprehension and seizure events. For instance, for the 166,976 apprehension events reported by the Border Patrol across the Tucson sector during fiscal year 2010 through June 2013, an asset assist was not recorded for 115,517 (or about 69 percent) of these apprehension events. In the Yuma sector, of the 8,237 apprehension events reported by Border Patrol agents during the specified time period, an asset assist was not recorded for 7,150 (or about 87 percent) of these apprehension events. 
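The recording-gap percentages cited above are simple ratios of unrecorded events to total events. The following minimal sketch reproduces that arithmetic; the Tucson and Yuma counts are hard-coded from the figures in this report purely for illustration, and the function names are our own:

```python
# Share of apprehension events lacking an asset assist record,
# fiscal year 2010 through June 2013 (counts taken from the report).

def unrecorded_share(events_without_assist, total_events):
    """Return the percentage of events with no asset assist recorded."""
    return 100.0 * events_without_assist / total_events

tucson = unrecorded_share(115_517, 166_976)  # Tucson sector
yuma = unrecorded_share(7_150, 8_237)        # Yuma sector

print(f"Tucson: about {tucson:.0f} percent")  # about 69 percent
print(f"Yuma: about {yuma:.0f} percent")      # about 87 percent
```

The same ratio applied to the seizure-event counts would yield the approximately 32 percent (Tucson) and 67 percent (Yuma) figures discussed below.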
Similarly, data on seizure events reported across the Tucson and Yuma sectors show that for some seizure events, asset assists were not reported from fiscal year 2010 through June 2013 (about 32 percent and about 67 percent, respectively). According to Border Patrol officials, in the absence of requirements for Border Patrol agents to record data on asset assists, differences in the reporting of these data at the station level are likely attributable to the emphasis placed on the recording of these data by supervisory agents. Appendix IV contains summary statistics on the extent to which data on asset assists are recorded for apprehensions and seizures across the Tucson and Yuma sectors from fiscal year 2010 through June 2013. Since data on asset assists are not required to be reported, it is unclear whether the data were not reported because an asset was not a contributing factor in the apprehension or seizure or whether an asset was a contributing factor but was not recorded by agents. As a result, CBP is not positioned to determine the contribution of surveillance technologies in the apprehension of illegal entrants and seizure of drugs and other contraband during the specified time frame. As shown in figures 3 and 4, while the recording of asset assists increased from fiscal year 2010 through June 2013 from about 18 percent to about 47 percent in the Tucson sector and from about 8 percent to about 21 percent in the Yuma sector, for more than one-half of the apprehension event records for the Tucson sector and four-fifths for the Yuma sector, asset assists were not reported for the first three quarters of fiscal year 2013. Border Patrol officials did not specify why the agency does not require the recording and tracking of data on asset assists. However, Border Patrol officials stated that agents are encouraged to select the appropriate asset assist code when assets contributed to an apprehension or seizure. 
Border Patrol officials also stated that although they do not regularly track and analyze data on asset assists, including those from surveillance technologies, these data are tracked and analyzed on an ad hoc basis to help determine Border Patrol’s resource allocation and operational needs, and more specifically, what resources are available at the strategic level to help mitigate the threat of illegal entrants, drugs, and other contraband. Moreover, an Associate Chief at Border Patrol told us that while data on asset assists are not systematically recorded and tracked, Border Patrol recognizes the benefits of assessments of asset assists data, including those from surveillance technologies, such as the SBInet system, as these data in combination with other data, such as numbers of apprehensions and seizures, are used on a limited basis to help the agency make adjustments to its acquisition plans prior to deploying resources, thereby enabling the agency to make more informed deployment decisions. Border Patrol also uses these other data, such as numbers of apprehensions and seizures, to help inform assessment of its efforts to secure the border. Border Patrol officials cautioned that while asset assists data are the only available data directly linking apprehensions and seizures to the agency’s surveillance technologies, these data do not enable direct attributions of the SBInet system’s contribution to border security strategic goals because of several factors, such as changes in the flows of illegal entrants across sectors or in economic conditions in the United States and Mexico. Moreover, the officials said that surveillance technologies such as SBInet and RVSS towers enable the detection of apprehensions and seizures and accordingly, it is the agents who identify and track the illegal activity and ultimately apprehend illegal entrants and seize contraband. 
Despite the absence of complete data on the contribution of CBP's surveillance technologies to apprehensions and seizures, our analysis of Border Patrol's data on the location of apprehensions and seizures provides some insights into where Border Patrol apprehensions and seizures occurred in relation to the locations of its two highest-cost surveillance technologies—SBInet towers and RVSS. For example, our analysis of apprehension events data, as determined by Geographic Information System data entered by Border Patrol agents when recording apprehensions and seizures, shows that across the Tucson sector from fiscal year 2010 through June 2013, of the 166,976 apprehension events, 71,397 (or about 43 percent) occurred within the camera and radar range of SBInet and RVSS towers. As shown in figure 5, the percentage of apprehension events occurring within the range of both SBInet and RVSS surveillance technologies has changed little, if at all, over time. Apprehension events occurring within the radar and camera range of SBInet towers have remained relatively unchanged, while apprehension events occurring within the range of RVSS towers increased by about 1 percent during our specified time frame. Moreover, of those 115,517 apprehension events in the Tucson sector that do not have data on asset assists, 8,751 (or about 8 percent) occurred within the camera range, and 9,818 (or about 9 percent) occurred within the radar range of SBInet towers. In addition, data on asset assists were not recorded in 35,147 (or about 30 percent) of apprehension events within the range of RVSS towers. Table 6 shows the reporting of asset assists for apprehension and seizure events occurring across the Tucson sector within the range of SBInet and RVSS towers during our specified time period. 
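The proximity analysis described above can be approximated in code: an apprehension event counts as within a tower's range if the great-circle distance from the event's GIS coordinates to the tower is no more than the tower's camera or radar range. The tower coordinates, 12-kilometer range, and event points below are hypothetical values, not actual CBP data; this is only a sketch of the geospatial classification step, not the agency's method:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def within_any_tower(event, towers):
    """True if the event point falls inside any tower's surveillance range."""
    lat, lon = event
    return any(
        haversine_km(lat, lon, t["lat"], t["lon"]) <= t["range_km"]
        for t in towers
    )

# Hypothetical tower and event coordinates, for illustration only.
towers = [{"lat": 31.34, "lon": -110.93, "range_km": 12.0}]
events = [(31.36, -110.95), (31.80, -111.50)]
covered = sum(within_any_tower(e, towers) for e in events)
print(f"{covered} of {len(events)} events within tower range")
```

Dividing the covered count by the total event count yields the kind of within-range percentage (about 43 percent for the Tucson sector) reported above.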
Border Patrol officials stated that while analyzing data on the contributions of Border Patrol's surveillance technologies is a relevant measure of the agency's ability to meet its border security goals, conclusions regarding the contributions and impacts of its surveillance technologies on Border Patrol's enforcement efforts cannot be formed solely on the basis of the proximity of apprehension or seizure events to the locations of its surveillance technologies. These officials stated that there are instances in which illegal entrants were detected by some combination of cameras or radar closer to the border; however, to gain a better tactical advantage, Border Patrol agents made the apprehensions farther from the border. As we reported in December 2012, Border Patrol officials stated that apprehensions occur in areas farther from the border because several factors preclude greater border presence, including terrain that is inaccessible or creates a tactical disadvantage, the distance from Border Patrol stations to the border, and access to ranches and lands that are federally protected and environmentally sensitive. Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1) calls for agencies to ensure that ongoing monitoring occurs during the course of normal operations to help evaluate program effectiveness. These standards also state that agencies should promptly and accurately record transactions to maintain their relevance and value for management decision making and that this information should be readily available for use by agency management and others so that they can carry out their duties with the goal of achieving all of their objectives, including making operating decisions and allocating resources. These standards further state that to be effective, agencies need to clearly document all transactions in a timely manner to ensure that they are making appropriately informed decisions. Moreover, the standards call for clear documentation of transactions and procedures that is readily available for examination. In addition, these standards call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. Because DHS's EID database already includes the asset assists data field and these data are used by Border Patrol on a limited basis to make decisions about resources, requiring agents to record and track asset assists data could help ensure that these data are complete and, if analyzed, could help better inform CBP's resource allocation decisions. Moreover, we acknowledge that conclusions regarding the contributions of surveillance technologies based on location and proximity data alone may not be sufficient to examine the contribution of CBP's surveillance technologies in achieving their strategic goals. However, analyzing data on apprehensions, seizures, and asset assists in combination with other relevant performance metrics or indicators as appropriate could provide more robust analysis of the contributions of surveillance technologies, and accordingly could better position CBP to determine the extent to which its technology investments have contributed to the agency's border security efforts. 
In response to our November 2011 recommendation regarding the identification of mission benefits and development of key attributes for performance metrics for the surveillance technologies to be deployed as part of the Plan, CBP has identified mission benefits expected from the implementation of the surveillance technologies to be acquired or deployed as part of the Plan, but has not fully developed key attributes for performance metrics for these technologies. In November 2011, we reported that agency officials had not yet defined the mission benefits expected or quantified metrics to assess the contribution of the selected approaches in achieving their goal of situational awareness and detection of border activity using surveillance technology. We recommended that CBP determine the mission benefits to be derived from implementation of the Plan and develop and apply key attributes for metrics to assess program implementation. CBP concurred with our recommendation. In April 2013, CBP issued its Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017, which identifies specific mission benefits to be achieved by the deployment of each of the seven technologies under the Plan. According to CBP officials, the majority of these surveillance technologies will provide the mission benefits of improved situational awareness and agent safety. Furthermore, CBP officials stated that each of the seven technologies deployed or planned for deployment as part of the Plan will help enhance the ability of Border Patrol agents to detect, identify, deter, and respond to threats along the border. A summary of the mission benefits of each surveillance technology deployed or planned for deployment under the Plan is presented in appendix V. 
While CBP has defined mission benefits for the technology programs under the Plan, the agency has not yet developed key attributes for performance metrics for all surveillance technologies to be deployed as part of the Plan. The Clinger-Cohen Act of 1996 and Office of Management and Budget (OMB) guidance emphasize the need to ensure that information technology investments, such as IFT systems, produce tangible, observable improvements in mission performance. In our April 2013 update on the progress made by the agencies to address our findings on duplication and cost savings across the federal government, CBP officials stated that operations of its two SBInet surveillance systems identified examples of key attributes for metrics that can be useful in assessing the Plan's implementation for technologies. For example, according to CBP officials, to help measure whether illegal activity has decreased, examples of key attributes include decreases in the number of arrests, complaints by ranchers and other citizens, and destruction of public and private lands and property. While the development of key attributes for metrics for the two SBInet surveillance systems is a positive step, as of April 2013, CBP had not yet identified attributes for metrics for all technologies to be acquired and deployed as part of the Plan. In addition to these efforts, CBP officials stated that in response to our prior recommendations regarding the establishment of a performance goal or goals and associated performance metrics that define how border security is to be measured, Border Patrol, as of December 2013, was in the process of developing and implementing performance goals and measures to assess Border Patrol's efforts to secure the border. However, CBP officials stated that none of the current measures directly address the operational impact of technology. 
The officials further stated that the Tucson sector has submitted an issue paper that identifies potential data that can attribute a certain level of effectiveness to its SBInet system, but it is still under review by CBP. While these are positive steps, to fully address the intent of our recommendation, CBP would need to develop and apply key attributes for performance metrics for each of the technologies to be deployed under the Plan to assess its progress in implementing the Plan and determine when mission benefits have been fully realized. CBP has established schedules for the Plan and the IFT, RVSS, and MSC programs that meet some but not all best practices for scheduling, hindering CBP's ability to reliably commit to when it will deliver all of the Plan's technologies to Arizona. Ensuring that all schedule best practices are applied to the IFT, RVSS, and MSC schedules when updating them could help OTIA better ensure the schedules' reliability and could help better position OTIA to identify and address any potential further delays in the programs' milestone commitment dates. Further, developing and maintaining an Integrated Master Schedule for the Plan, in accordance with best practices, could allow insight into current or programmed allocation of resources for the Plan and help CBP to reliably commit to when the Plan will be fully implemented. Also, CBP has developed Life-cycle Cost Estimates for the IFT and RVSS programs. Although OTIA officials stated that DHS's Office of Program Accountability and Risk Management conducted an assessment of the IFT Life-cycle Cost Estimate, an assessment is not equivalent to verifying the estimate with an independent cost estimate. When updating the Life-cycle Cost Estimates for the IFT and RVSS programs, verifying the estimates with independent cost estimates and reconciling any differences, consistent with best practices, could help to better ensure the credibility of CBP's cost estimates for these programs. 
DHS and CBP have approved some key acquisition documents as directed by DHS and CBP Acquisition Review Boards, but work remains to approve all key acquisition documents in accordance with DHS acquisition guidance. Specifically, revising the Test and Evaluation Master Plan to more fully test the IFTs in the various environmental conditions in which they will be used to determine operational effectiveness and suitability, in accordance with DHS acquisition guidance, could help provide CBP with more complete information on how the IFTs will operate under a variety of conditions before beginning full production. Requiring the collection of data on the extent to which technology assets assisted in apprehensions and seizures could better position Border Patrol to assess the contribution of surveillance technologies to its enforcement efforts and its goals of achieving and maintaining operational control and situational awareness along the southwest border. Conducting analysis of such data, once collected, in combination with other relevant performance metrics or indicators as appropriate, could help better position CBP to be able to determine the extent to which its technology investments have contributed to border security efforts. To improve the acquisition management of the Plan and the reliability of its cost estimates and schedules, assess the effectiveness of deployed technologies, and better inform CBP’s deployment decisions, we recommend that the Commissioner of CBP take the following six actions: When updating the schedules for the IFT, RVSS, and MSC programs, ensure that scheduling best practices, as outlined in our schedule assessment guide, are applied to the three programs’ schedules. Develop and maintain an Integrated Master Schedule for the Plan that is consistent with scheduling best practices. 
When updating Life-cycle Cost Estimates for the IFT and RVSS programs, verify the Life-cycle Cost Estimates with independent cost estimates and reconcile any differences. Revise the IFT Test and Evaluation Master Plan to more fully test the IFT program, before beginning full production, in the various environmental conditions in which IFTs will be used to determine operational effectiveness and suitability, in accordance with DHS acquisition guidance. Require data on asset assists to be recorded and tracked within the Enforcement Integrated Database, which contains data on apprehensions and seizures. Once data on asset assists are required to be recorded and tracked, analyze available data on apprehensions and seizures and technological assists, in combination with other relevant performance metrics or indicators, as appropriate, to determine the contribution of surveillance technologies to CBP’s border security efforts. We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are summarized below and reproduced in full in appendix VI, and technical comments, which we incorporated as appropriate. DHS concurred with four of the recommendations in the report. DHS did not concur with the other two recommendations in the report. With regard to the first recommendation, that CBP ensure that scheduling best practices are applied to the IFT, RVSS, and MSC programs’ schedules when they are updated, DHS concurred and stated that OTIA plans to ensure that scheduling best practices are applied as far as practical when updating the three programs’ schedules. DHS plans to update the programs’ schedules by July 2015. With regard to the second recommendation, that CBP develop and maintain an Integrated Master Schedule for the Plan, DHS did not concur with this recommendation. 
DHS stated that maintaining an Integrated Master Schedule for the Plan undermines the DHS-approved implementation strategy for the individual programs making up the Plan and that a key element of the Plan has been the disaggregation of technology procurements. According to DHS, the implementation of this recommendation would essentially create a large, aggregated program, similar to SBInet, and effectively create an aggregate “system of systems.” DHS stated that CBP believes its strategy of disaggregation has been effective and has reduced overall risk and cost. DHS also stated that each program within the Plan has its own schedule and that forcing linkages among the Plan’s programs into a single Integrated Master Schedule contradicts lessons learned and the approved implementation strategy for the Plan. We continue to believe that developing and maintaining an Integrated Master Schedule for the Plan, consistent with best practices for scheduling, is needed. As noted in the report, the use of an Integrated Master Schedule is a well-established practice in program and project management and is a necessary tool to coordinate independently managed projects that have dependencies—including resource dependencies—on one another. The programs under the Plan are intended to provide Border Patrol with a combination of surveillance capabilities to assist in achieving situational awareness along the Arizona border with Mexico; and while the programs themselves may be independent of one another, the Plan’s resources are being shared among the programs. Furthermore, this recommendation is not intended to imply that DHS needs to re-aggregate the Plan’s seven programs into a "system of systems" or change its procurement strategy in any form. 
Rather, the intent of our recommendation is for DHS to insert the individual schedules for each of the Plan’s programs into a single electronic Integrated Master Schedule file in order to identify any resource allocation issues among the programs’ schedules. Developing and maintaining an Integrated Master Schedule for the Plan could allow OTIA insight into current or programmed allocation of resources for all programs as opposed to attempting to resolve any resource constraints for each program individually. In addition to helping identify resource constraints, an Integrated Master Schedule can be a useful tool for consolidating multiple projects or program files into a single master file, even if those projects or programs have no direct links among activities. For example, aggregating individual files into a master schedule is useful for reporting purposes, particularly if the projects or programs are under the purview of a single management organization or a single customer. In this case, the master schedule would allow for a concise view of all projects or programs for which the stakeholder is responsible or has an interest. A master schedule of this nature is often referred to as a consolidated schedule, although the terms “consolidated schedule” and “Integrated Master Schedule” are often synonymous. We continue to believe that developing and maintaining an Integrated Master Schedule for the Plan could help provide CBP a comprehensive view of the Plan and help CBP to reliably commit to when the Plan will be fully implemented and better predict whether estimated completion dates are realistic to manage programs’ performance, as noted in the report. 
With regard to the third recommendation, that CBP verify the Life-cycle Cost Estimates for the IFT and RVSS programs with independent cost estimates and reconcile any differences, DHS concurred, although its planned actions will not fully address the intent of the recommendation unless assumptions underlying the cost estimates change. DHS stated that while OTIA did not obtain a traditional independent cost estimate for the programs, the Life-cycle Cost Estimates were meant to be conservative in managing program risk and that the estimated life-cycle costs to date are less than originally projected. DHS further stated that at this point it does not believe that there is a benefit in expending funds to obtain independent cost estimates and that if the costs realized to date continue to hold, there may be no requirement or value added in conducting full-blown updates with independent cost estimates. DHS noted, though, that if this assumption changes, OTIA will complete updates and consider preparing independent cost estimates, as appropriate. We recognize the need to balance the cost and time to verify the Life-cycle Cost Estimates with the benefits to be gained from verification with independent cost estimates. However, as noted in this report, independently verifying the cost estimates is consistent with best practices and could help provide CBP with more insights into program costs. An independent cost estimate provides an independent view of expected program costs that tests the program office’s estimate for reasonableness. Independent cost estimates frequently use different methods and are less burdened with organizational bias than a program office’s estimate, helping to provide decision makers with insight into a program’s potential costs. 
Thus, we continue to believe that independently verifying the Life-cycle Cost Estimates for the IFT and RVSS programs and reconciling any differences, consistent with best practices, could help CBP better ensure the reliability of the estimates. With regard to the fourth recommendation, that CBP revise the IFT Test and Evaluation Master Plan to more fully test the IFT program in the various environmental conditions in which IFTs will be used to determine operational effectiveness and suitability, DHS did not concur with the recommendation. Specifically, DHS stated that the Test and Evaluation Master Plan includes tailored testing and user assessments that will provide much, if not all, of the insight contemplated by the intent of the recommendation. According to DHS, the approved non-developmental item acquisition strategy for the IFT program was based on market surveys and observations during field use by other customers and the incorporation of system demonstrations conducted during source selection. DHS also stated that there is no requirement for expansive, formal operational test and evaluation and to re-write the Test and Evaluation Master Plan to incorporate operational testing undermines and removes the benefits of the non-developmental item strategy. Moreover, DHS stated that the user test currently outlined in the Test and Evaluation Master Plan will provide the operational user the information needed to validate system requirements and operational characteristics. DHS also noted that Acquisition Decision Event 3 has been approved for IFT production, and after the initial IFT system undergoes testing in accordance with the Test and Evaluation Master Plan, the Office of Border Patrol will make the determination regarding operational readiness prior to deploying additional systems. 
We continue to believe that DHS should revise the Test and Evaluation Master Plan to more fully test the IFT program, before beginning full production, in the various environmental conditions in which the IFT will be used to determine operational effectiveness and suitability. DHS’s acquisition guidance states that the Test and Evaluation Master Plan is important because it describes the strategy for conducting developmental and operational testing to evaluate a system’s technical performance, including its operational effectiveness and suitability. The guidance states that, even for commercial-off-the-shelf systems, such as the IFT program, operational test and evaluation should occur in the environmental conditions in which a system will be used before a full production decision for the system is made and the system is subsequently deployed. In addition, DHS guidance states that the primary purpose of test and evaluation is to provide timely and accurate information to managers, decision makers, and other stakeholders to support research, development, and acquisition in a manner that reduces programmatic financial, schedule, and performance risks. The current Test and Evaluation Master Plan describes CBP’s plans to conduct a limited user test of the IFT, which will be designed to determine the IFT’s mission contribution. However, determining mission contribution is not equivalent to determining operational effectiveness and suitability, which specifically identifies how effective and reliable a system is in meeting its operational requirements in its intended environment. DHS plans to conduct limited user testing during a 30-day period in environmental conditions present at one site—the Nogales station. However, as of November 2013, CBP intended to deploy IFTs to 50 locations in southern Arizona, which can include different terrain and differences in climate throughout the year. 
As we noted in the report, conducting limited user testing in one area in Arizona for 30 days could limit the information available to CBP on how the IFTs may perform in other conditions and locations along the Arizona border. Therefore, CBP’s approach of using limited user testing will not specifically identify how effective and reliable a system is in meeting its operational requirements in its intended environment. Moreover, while DHS has approved the IFT program for production at Acquisition Decision Event 3, testing for the IFTs has not yet begun. As noted in the report, revising the Test and Evaluation Master Plan to include more robust testing to determine operational effectiveness and suitability could better position CBP to evaluate IFT capabilities before moving to full production for the system, help provide CBP with information on the extent to which the towers satisfy the Border Patrol’s user requirements, and help reduce potential program risks. Furthermore, although the IFT program is not the same as SBInet, according to the Plan, the IFTs are to be deployed to locations with similar environmental and terrain conditions as SBInet towers, and IFT and SBInet systems may have similar types of technologies, such as cameras and radar. As noted in the report, we previously identified testing issues CBP encountered with SBInet, such as DHS’s test plans and procedures for some SBInet test events not being defined in accordance with guidance, and operational tests of SBInet at Tucson revealed challenges regarding the effectiveness and suitability of the technology for border surveillance. Thus, we continue to believe that revising the Test and Evaluation Master Plan to more fully test the IFT in the various environmental conditions in which it will be used to determine operational effectiveness and suitability, before beginning full production, could help provide CBP with more complete information on how the IFTs will operate under a variety of conditions.
Without conducting operational testing in accordance with DHS guidance, the IFT program may be at increased risk of not meeting Border Patrol operational needs. With regard to the fifth recommendation, that CBP require data on asset assists to be recorded and tracked within the Enforcement Integrated Database, DHS concurred and stated that Border Patrol is changing its data collection process to allow for improved reporting on asset assists for apprehensions and seizures and intends to make it mandatory to record whether an asset assisted in an apprehension or seizure. DHS plans to change its process by December 31, 2014. With regard to the sixth recommendation, that CBP analyze available data on apprehensions and seizures and technology assists to determine the contribution of surveillance technologies to its border security efforts, DHS concurred and stated that Border Patrol intends to create a plan of action with milestones to explore and develop a process to answer how different classes of technology, within a certain environment, contribute to Border Patrol’s mission. DHS stated that Border Patrol plans to develop an initial set of quantitative and qualitative technology-related measures by September 30, 2014, as an interim milestone; gather baseline data for the measures in fiscal year 2015 and begin to use these data to evaluate the contributions of specific technology assets by the end of that fiscal year; and by the end of fiscal year 2016, use measures associated with technology to assist in determining levels of situational awareness in different areas of the border. These planned actions, if implemented effectively, should address the intent of the recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies to the Secretary of Homeland Security, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. Our objectives were to determine the extent to which U.S. Customs and Border Protection (CBP) has (1) developed schedules and Life-cycle Cost Estimates for the Arizona Border Surveillance Technologies Plan (the Plan) in accordance with best practices; (2) followed key aspects of the Department of Homeland Security’s (DHS) acquisition management framework in managing the Plan’s three highest cost programs; and (3) assessed the performance of technologies deployed under the Secure Border Initiative Network (SBInet), identified mission benefits, and developed performance metrics for surveillance technologies to be deployed under the Plan. To determine the extent to which CBP has followed best practices in developing schedules and Life-cycle Cost Estimates for the Plan’s three highest-cost programs—the Integrated Fixed Tower (IFT), Remote Video Surveillance System (RVSS), and Mobile Surveillance Capability (MSC)—we obtained CBP’s Office of Technology Innovation and Acquisition’s (OTIA) program schedules as of March 2013, which were current at the time of our review, for these programs and compared them against best practices for developing schedules. Specifically, we assessed the extent to which the schedules for these three programs met each of the 10 best practices identified in the schedule assessment guide.
We characterized whether the schedules met each of the 10 best practices based on the following scale: Not met—the program provided no evidence that satisfies any of the criterion. Minimally met—the program provided evidence that satisfies a small portion of the criterion. Partially met—the program provided evidence that satisfies about half of the criterion. Substantially met—the program provided evidence that satisfies a large portion of the criterion. Met—the program provided complete evidence that satisfies the entire criterion. In conducting our analysis, we focused, for example, on whether the schedules reflect best practices for a reliable schedule, such as whether the schedules define the work necessary to accomplish a program’s objectives. More details on our assessment and methodology are presented in appendix III. By assessing the schedules against best practices, we also identified schedule challenges that CBP was experiencing in testing, procuring, deploying, and operating technologies in the Plan and interviewed CBP officials to determine the reasons for the schedule challenges and steps that CBP had taken or was taking to address them. In addition, we obtained and analyzed the August 2010 and June 2013 Life-cycle Cost Estimates for the Plan. We also analyzed the IFT and RVSS January 2012 and March 2012 Life-cycle Cost Estimates, respectively, which were current at the time of our review, and compared them against best practices for cost estimating. We analyzed DHS and CBP documents and interviewed officials regarding their efforts to implement our November 2011 recommendations to update the Life-cycle Cost Estimate for the Plan in accordance with best practices. To assess the reliability of cost estimate data that we used, we reviewed relevant program documentation, such as cost estimation spreadsheets, as available, to substantiate evidence obtained from interviews with knowledgeable agency officials.
We found the data to be sufficiently reliable for the purposes of our report. To determine the extent to which CBP followed key aspects of DHS’s acquisition management framework in managing the Plan’s three highest-cost programs, we analyzed DHS and CBP documents, including DHS Acquisition Management Directive 102-01 and its associated DHS Instruction Manual 102-01-001, program briefing slides, budget documents, Acquisition Decision Memorandums, schedules, and program risk sheets. We focused on the IFT, RVSS, and MSC programs for more in-depth analyses because they are the Plan’s three highest-cost programs and represent 97 percent of the estimated cost of the Plan. Specifically, to assess the acquisition strategy for the Plan, we focused on the IFT, RVSS, and MSC programs and analyzed their acquisition plans and discussed the approaches with CBP officials. To assess system requirements and capabilities for the IFT, RVSS, and MSC programs, we obtained and analyzed requirements and capabilities documents and worked with CBP officials to identify any changes to requirements and capabilities since they were initially approved and whether any requirements or capabilities had been traded off for cost, schedule, or other purposes. To assess the extent to which CBP followed DHS acquisition guidance, we selected aspects of Acquisition Management Directive 102-01 that were relevant to where these programs were in the acquisition process during fiscal year 2013. Specifically, we determined whether acquisition documents had been approved by the time of the applicable Acquisition Decision Events as required by DHS acquisition guidance.
To determine the extent to which CBP has assessed the performance of technologies deployed under SBInet and developed performance metrics to assess the performance of surveillance technologies planned for deployment under the Plan, we analyzed performance assessment documentation and interviewed CBP officials responsible for performance measurement activities regarding the establishment of performance metrics used by CBP to determine the effectiveness and the contributions of its surveillance technologies toward the agency’s stated border security goals. With respect to CBP’s assessment of the performance of technologies deployed under SBInet, we analyzed the results of the January 2013 post implementation review, which was conducted by the Johns Hopkins University Applied Physics Laboratory to determine the effectiveness of SBInet technologies in achieving their intended results. We reviewed our November 2011 report to determine the extent to which CBP’s post implementation review aligned with DHS guidance and the Office of Management and Budget’s (OMB) Capital Programming Guide, a supplement to OMB Circular A-11, which identifies a post implementation review as a tool to evaluate an investment’s efficiency and effectiveness. We also analyzed CBP and DHS documents, such as CBP’s July 2013 SBInet Block 1 After Action Report, and interviewed officials to assess corrective actions taken to improve SBInet performance issues. Specifically, we analyzed CBP documentation and interviewed agency officials within OTIA and the Office of Border Patrol to determine the progress the agency has made in addressing findings and recommendations outlined in CBP’s post implementation review and prior performance assessments, including the Army Test and Evaluation Command’s assessment of its SBInet technologies. On the basis of interviews with agency officials regarding the methodology and implementation of the review, we found the review to be sufficiently reliable for our report. 
In addition, we analyzed CBP data on apprehensions of illegal entrants and seizures of drugs and other contraband for the Tucson and Yuma sectors maintained in the Enforcement Integrated Database (EID), a DHS-shared common database repository for several DHS law enforcement and homeland security applications, as well as policy, planning, and budget documents provided by Border Patrol to determine whether such data could be used to determine the contributions of the SBInet technologies to apprehensions and seizures. We analyzed apprehension and seizure data for the Tucson and Yuma sectors within Arizona, because these are the Border Patrol sectors contained within Arizona and covered by the Plan. For the purposes of this report, we analyzed apprehension and seizure events recorded in the EID for fiscal years 2010 through June of fiscal year 2013. An apprehension or seizure event is defined as an occasion on which Border Patrol agents apprehend an illegal entrant or seize drugs and other contraband. Each reported apprehension or seizure event is assigned a unique identifier in the EID, and Border Patrol agents assign an additional identifier to each individual illegal entrant or type of seized item associated with the event. As a result, a single apprehension event may involve the apprehension of multiple illegal entrants, and a single seizure event may result in the seizure of multiple items. Appendix IV contains the results of our analysis of all recorded apprehensions and seizures occurring across the Tucson and Yuma sectors during the specified time frame. For our analysis, we also obtained data on asset assists recorded in the EID for apprehensions and seizures. According to Border Patrol officials, the asset assist data field was added to the EID in May of 2009. Agents may select from a drop-down menu to identify whether a technological or nontechnological asset assisted in the apprehension or seizure. 
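The distinction above between an apprehension or seizure event and the individual entrants or items associated with it can be illustrated with a minimal sketch. The field names and records below are hypothetical, for illustration only, and are not the actual EID schema:

```python
from collections import defaultdict

# Illustrative only: each row carries the event's unique identifier plus an
# additional identifier for each individual apprehended in that event, so a
# single event may correspond to multiple apprehensions.
records = [
    {"event_id": "E1", "subject_id": "S1"},
    {"event_id": "E1", "subject_id": "S2"},  # one event, two apprehensions
    {"event_id": "E2", "subject_id": "S3"},
]

events = defaultdict(set)
for r in records:
    events[r["event_id"]].add(r["subject_id"])

num_events = len(events)                                  # distinct events
num_apprehensions = sum(len(s) for s in events.values())  # individuals
print(num_events, num_apprehensions)  # 2 events, 3 apprehensions
```

Counting at both levels, as in the tables in appendix IV, keeps event totals and individual totals from being conflated.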
Multiple assets can be selected for a single event, if relevant. For the purposes of this report, technological assets identified within the Border Patrol’s asset assists data field drop-down menu are those assets for which Border Patrol continues to make significant funding investments and are included as part of the Plan, and include Cameras, Mobile Surveillance Systems, Scope Trucks, and Unattended Ground Sensors. According to Border Patrol headquarters officials, agents identifying “Cameras” are most likely attributing the asset assist to either SBInet towers or Remote Video Surveillance Systems. In addition, for our analysis, we obtained Geographic Information Systems data for apprehensions, seizures, and Border Patrol’s two highest-cost surveillance systems—SBInet and RVSS towers—to show the latitude and longitude coordinates of apprehensions and seizures in relation to the location of SBInet towers and RVSS towers. We used Geographic Information Systems data to determine the percentage of apprehensions and seizures that occurred within the proximity of the radar and camera range of SBInet and camera range of RVSS towers, and the extent to which asset assists were reported for apprehensions and seizures occurring within the proximity of the surveillance systems. For the purposes of this report, the ranges of the SBInet and RVSS towers are “buffer ranges” that, according to Border Patrol headquarters officials, do not account for obstructions due to terrain, land features, and vegetation. To perform these analyses, we compared Border Patrol data on the longitude and latitude of apprehensions, seizures, SBInet towers, and RVSS towers with agency mapping data, which allowed us to determine the extent to which apprehensions and seizures occurred within the proximity of SBInet and RVSS towers. We interviewed Border Patrol headquarters officials regarding data collection and analysis procedures, and performance assessment activities. 
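The proximity analysis described above can be sketched as a point-in-buffer test. The tower coordinates, buffer range, and event locations below are made up for illustration; as noted, actual buffer ranges do not account for terrain, land features, or vegetation:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two lat/long points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical tower location with a flat "buffer range" in km.
towers = [(31.35, -110.93, 12.0)]  # (lat, lon, buffer_km)
events = [(31.40, -110.95), (32.50, -111.90)]  # apprehension coordinates

within = [
    any(haversine_km(la, lo, tla, tlo) <= rng for tla, tlo, rng in towers)
    for la, lo in events
]
pct_within = 100.0 * sum(within) / len(events)
print(f"{pct_within:.1f}% of events within a tower's buffer range")
```

A Geographic Information System performs the same kind of spatial join at scale; this sketch only shows the underlying distance logic.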
We analyzed apprehensions and seizures data from fiscal year 2010 through June 2013 because fiscal year 2010 was the first fiscal year for which data on asset assists were available following Border Patrol’s deployment of its SBInet technologies, and the collection of data on Geographic Information Systems coordinates for apprehensions and seizures was required. To assess the reliability of apprehensions and seizures data, including the asset assist and Geographic Information Systems data, we interviewed Border Patrol headquarters officials who oversee the maintenance and analyses of the data about agency guidance and processes for collecting and reporting the data. We determined that the apprehensions, seizures, asset assists, and Geographic Information Systems data were sufficiently reliable for the purposes of this report. However, as we reported in December 2012, because of potential inconsistencies in how the data are collected, these data cannot be compared across sectors but can be compared within a sector over time. We determined that the recorded data on asset assists were sufficiently reliable for the purposes of this report, but found limitations with the consistency with which these data are recorded for all apprehensions and seizures, a fact that we discuss in the report. Although we determined that the latitude and longitude coordinates for some apprehensions and seizures were invalid—e.g., they were identified as occurring outside U.S. national boundaries—the numbers were not significant, and we determined that the Geographic Information Systems data were sufficiently reliable for the purposes of this report. Location data that were determined to be invalid were not included in our analysis.

We compared CBP’s reporting requirements and use of asset assists data against criteria in Standards for Internal Control in the Federal Government, which, among other things, call for ensuring effectiveness and efficiency of management operations, including the use of the entity’s resources. In addition, we visited Border Patrol’s Tucson sector in Arizona to observe Border Patrol agents operating SBInet technologies and other selected technologies, such as RVSS towers, and discussed agents’ experiences in using these technologies. While visiting the Tucson sector, we interviewed officials regarding the deployment and contributions of surveillance technologies within the sector. We visited the Tucson sector because of the presence of surveillance technologies, such as SBInet and RVSS towers, in that sector and because, under the Plan, the Tucson sector has locations for which additional technology deployments, such as IFTs, are planned. While the information we obtained from our visit cannot be generalized to all Border Patrol sectors, it did provide us with insights about the use of the deployed surveillance technologies. Finally, we analyzed documents, including CBP’s Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017, and interviewed CBP officials responsible for overseeing the progress CBP and DHS have made in implementing our November 2011 recommendations to identify the mission benefits to be derived from technologies in the Plan and metrics to measure the extent to which border security is expected to improve by using these technologies. We conducted this performance audit from September 2012 through March 2014 in accordance with generally accepted government auditing standards.
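The exclusion of invalid location data mentioned above amounts to a coordinate-validity filter. A minimal sketch follows; the bounding box is an illustrative approximation of Arizona, not the actual screening rule we applied:

```python
# Approximate Arizona bounding box (illustrative values only). Records with
# coordinates outside the box--e.g., (0.0, 0.0) or points outside U.S.
# national boundaries--would be excluded from the proximity analysis.
AZ_BOUNDS = {"lat": (31.3, 37.0), "lon": (-114.9, -109.0)}

def is_valid(lat, lon, bounds=AZ_BOUNDS):
    """Return True if the coordinate pair falls inside the bounding box."""
    return (bounds["lat"][0] <= lat <= bounds["lat"][1]
            and bounds["lon"][0] <= lon <= bounds["lon"][1])

points = [(31.5, -110.9), (0.0, 0.0)]  # second point is clearly invalid
valid = [p for p in points if is_valid(*p)]
print(len(valid), "of", len(points), "points retained")
```

In practice a polygon of the actual sector boundaries, rather than a rectangle, would be used, but the filtering principle is the same.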
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Best practices for cost estimating and scheduling identify 10 practices associated with effective scheduling. These are (1) capturing all activities, (2) sequencing all activities, (3) assigning resources to all activities, (4) establishing the duration of all activities, (5) verifying that the schedule is traceable horizontally and vertically, (6) confirming that the critical path is valid, (7) ensuring reasonable total float, (8) conducting a schedule risk analysis, (9) updating the schedule with actual progress and logic, and (10) maintaining a baseline schedule. These practices are summarized into four characteristics of a reliable schedule—comprehensive, well constructed, credible, and controlled. We assessed the extent to which the March 2013 schedules for CBP’s three highest-cost technology programs under the Arizona Border Surveillance Technology Plan—IFT, RVSS, and MSC—met each of the 10 best practices. We characterized whether the schedules met each of the 10 best practices as follows: Not met—the program provided no evidence that satisfies any of the criterion. Minimally met—the program provided evidence that satisfies a small portion of the criterion. Partially met—the program provided evidence that satisfies about half of the criterion. Substantially met—the program provided evidence that satisfies a large portion of the criterion. Met—the program provided complete evidence that satisfies the entire criterion. We determined the overall assessment rating by assigning each individual rating a number: Not met = 1, minimally met = 2, partially met = 3, substantially met = 4, and met = 5.
Then, we took the average of the individual assessment ratings to determine the overall rating for each of the four characteristics. The resulting average becomes the Overall Assessment as follows: Not met = 1.0 to 1.4, minimally met = 1.5 to 2.4, partially met = 2.5 to 3.4, substantially met = 3.5 to 4.4, and met = 4.5 to 5.0. We developed this rating scale in consultation with cost-estimating experts who helped develop the Cost Estimating and Assessment Guide. Table 7 provides the results of our analysis of the IFT, RVSS, and MSC schedules as of March 2013. DHS’s Enforcement Integrated Database includes a field that enables Border Patrol agents to identify whether a technological or nontechnological asset assisted in the apprehension of illegal entrants or the seizure of drugs or other contraband. This appendix provides summary statistics on the reporting of asset assists by Border Patrol agents in the apprehension of illegal entrants and seizure of drugs and other contraband across the Tucson and Yuma sectors from fiscal year 2010 through June 2013. For the purposes of this report, “unreported asset assists” could include instances for which an asset was not a contributing factor in apprehensions and seizures. Because of the extent of unreported asset assists in both sectors, conclusions about the differences in reported asset assists across sectors and differences within sectors over time cannot be made. As shown in tables 8 and 9, the 166,976 apprehension events that occurred in the Tucson sector from fiscal year 2010 through June 2013 resulted in the apprehension of 549,357 illegal entrants, and 20,322 seizure events that occurred in the Tucson sector over the same period resulted in the seizure of 21,973 items. Apprehension events and 6,828 seizure events occurring in the Yuma sector during the same time period resulted in the apprehension of 17,580 illegal entrants and 7,892 seized items.
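The overall assessment roll-up described in the methodology above (individual practice ratings scored 1 through 5, averaged, then mapped back to a rating using the stated cut points) can be sketched in a few lines of Python; the function and dictionary names below are our own illustration, not part of GAO's methodology:

```python
# Map each individual practice rating to its numeric score, as in the
# methodology: not met = 1 through met = 5.
RATING_SCORES = {
    "not met": 1,
    "minimally met": 2,
    "partially met": 3,
    "substantially met": 4,
    "met": 5,
}

def overall_assessment(practice_ratings):
    """Average the individual scores, then bucket the average using the
    report's cut points (1.0-1.4 not met, ..., 4.5-5.0 met)."""
    avg = sum(RATING_SCORES[r] for r in practice_ratings) / len(practice_ratings)
    if avg < 1.5:
        return "not met"            # 1.0 to 1.4
    elif avg < 2.5:
        return "minimally met"      # 1.5 to 2.4
    elif avg < 3.5:
        return "partially met"      # 2.5 to 3.4
    elif avg < 4.5:
        return "substantially met"  # 3.5 to 4.4
    return "met"                    # 4.5 to 5.0
```

For example, ratings of partially met, substantially met, substantially met, and met average to 4.0, which falls in the substantially met band.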
The two tables also show that the percentages of reported asset assists for apprehension events and seizure events and the resulting apprehensions and seizures differed across the Tucson and Yuma sectors and, within each sector, over time. Figure 13 shows that the percentages of apprehension events and apprehensions in the Tucson sector for which asset assists were not reported decreased from fiscal year 2010 through June 2013. Figure 14 shows that in the Yuma sector, the percentages of apprehension events and apprehensions for which asset assists were not reported were higher than in the Tucson sector, but similarly decreased over that time period. For the first three quarters of fiscal year 2013, asset assist information was not reported for more than one-half of the apprehension events in the Tucson sector and nearly four-fifths of the apprehension events in the Yuma sector. Figures 13 and 14 also show that the percentages of apprehension events and apprehensions for which technological asset assists and other asset assists were reported increased during that period in both sectors. Because it is difficult to determine whether unreported asset assists do not involve asset assists, or do involve asset assists that were not recorded, it is difficult to determine whether the increases in the percentages of apprehension events and apprehensions involving technology asset assists and other assists reflect real increases, or increases resulting from fewer asset assists going unreported. The higher percentage of apprehension events and apprehensions involving technology asset assists in the Tucson sector relative to the Yuma sector may be partly due to differences between the two sectors in unreported asset assists, as well as to differences in the number of technology assets in the two sectors.
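The percentages in the figures discussed above are shares of events by assist category (unreported, technology, or other). A minimal sketch of that tabulation, using a handful of hypothetical event records rather than the Enforcement Integrated Database (the field and category names are our own):

```python
from collections import Counter

# Hypothetical apprehension-event records; a value of None for "assist"
# stands in for an event with no asset assist reported.
events = [
    {"id": 1, "assist": "technology"},
    {"id": 2, "assist": None},
    {"id": 3, "assist": "other"},
    {"id": 4, "assist": None},
]

def assist_shares(events):
    """Return each assist category's share of all events."""
    counts = Counter(
        e["assist"] if e["assist"] is not None else "unreported"
        for e in events
    )
    return {category: n / len(events) for category, n in counts.items()}

shares = assist_shares(events)
# Half of these hypothetical events have no assist reported, so
# shares["unreported"] is 0.5.
```

As the report notes, a high unreported share makes trends in the reported categories hard to interpret, since a rise in reported assists may reflect either more assists or better reporting.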
Figure 15 shows that for seizure events and seizures in the Tucson sector from fiscal year 2010 through June 2013, as was the case for apprehension events and apprehensions, the percentages for which asset assists were unreported declined, while the percentages for which technology asset assists and other asset assists were reported increased. Unreported asset assists were lower for seizures than for apprehensions in the Tucson sector, and the changes with respect to the percentages of seizures involving unreported asset assists, technology asset assists, and other asset assists in the Tucson sector were not as pronounced as the changes with respect to apprehensions. Figure 16 shows that in the Yuma sector over the same period, the percentages for which asset assists were unreported declined, and the percentage for which other (nontechnology) assets were reported increased. In the Yuma sector, the percentage of technology asset assists was small (less than 1 percent) in each of the fiscal years and there was no discernible trend in the percentage of technology asset assists. As with apprehensions, it is difficult to determine how much of the increase in technology asset assists in the Tucson sector and other asset assists in both sectors involves the increased use of technology and other assets, or changes in reporting of asset assists. Table 10 summarizes the mission benefits to be derived from each of the technologies to be deployed as part of the Arizona Border Surveillance Technology Plan as outlined in CBP’s Multi-Year Investment and Management Plan for Border Security Fencing, Infrastructure, and Technology for Fiscal Years 2014-2017. According to CBP officials, each of the seven technologies deployed or planned for deployment as part of the Plan will increase situational awareness and enhance the ability of Border Patrol agents to detect, identify, deter, and respond to threats along the border. 
In addition to the contact named above, Jeanette Espinola (Assistant Director), David Alexander, Charles Bausell, Frances Cook, Katherine Davis, Joseph E. Dewechter, Jennifer Echard, Shannon Grabich, Yvette Gutierrez, Eric Hauswirth, Richard Hung, Jason Lee, Grant Mallie, Linda Miller, John Mingus, Anna Maria Ortiz, Karen Richey, Doug Sloane, Karl Seifert, Nate Tranquilli, Katherine Trimble, Jim Ungvarsky, and Michelle Woods made key contributions to this report. | The Department of Homeland Security's (DHS) U.S. Customs and Border Protection's (CBP) schedules and Life-cycle Cost Estimates for the Arizona Border Surveillance Technology Plan (the Plan) reflect some, but not all, best practices. Scheduling best practices are summarized into four characteristics of reliable schedules—comprehensive, well constructed, credible, and controlled (i.e., schedules are periodically updated and progress is monitored). GAO assessed CBP's schedules as of March 2013 for the three highest-cost programs that represent 97 percent of the Plan's estimated cost. GAO found that schedules for two of the programs at least partially met each characteristic (i.e., satisfied about half of the criterion), and the schedule for the other program at least minimally met each characteristic (i.e., satisfied a small portion of the criterion), as shown in the table below. For example, the schedule for one of the Plan's programs partially met the characteristic of being credible in that CBP had performed a schedule risk analysis for the program, but the risk analysis was not based on any connection between risks and specific activities. For another program, the schedule minimally met the characteristic of being controlled in that it did not have valid baseline dates for activities or milestones by which CBP could track progress. Source: GAO analysis of CBP data. Note: Not met—CBP provided no evidence that satisfies any of the criterion. 
Minimally met—CBP provided evidence that satisfies a small portion of the criterion. Partially met—CBP provided evidence that satisfies about half of the criterion. Substantially met—CBP provided evidence that satisfies a large portion of the criterion. Met—CBP provided complete evidence that satisfies the entire criterion. Further, CBP has not developed an Integrated Master Schedule for the Plan in accordance with best practices. Rather, CBP has used the separate schedules for each program to manage implementation of the Plan, as CBP officials stated that the Plan contains individual acquisition programs rather than integrated programs. However, collectively these programs are intended to provide CBP with a combination of surveillance capabilities to be used along the Arizona border with Mexico, and resources are shared among the programs. According to scheduling best practices, an Integrated Master Schedule is a critical management tool for complex systems that involve a number of different projects, such as the Plan, to allow managers to monitor all work activities, how long activities will take, and how the activities are related to one another. Developing and maintaining an Integrated Master Schedule for the Plan could help provide CBP a comprehensive view of the Plan and help CBP better understand how schedule changes in each individual program could affect implementation of the overall Plan. Moreover, cost-estimating best practices are summarized into four characteristics—well documented, comprehensive, accurate, and credible. GAO's analysis of CBP's estimate for the Plan and estimates completed at the time of GAO's review for the two highest-cost programs showed that these estimates at least partially met three of these characteristics: well documented, comprehensive, and accurate. In terms of being credible, these estimates had not been verified with independent cost estimates in accordance with best practices. 
Ensuring that scheduling best practices are applied to the three programs’ schedules and verifying Life-cycle Cost Estimates with independent estimates could help better ensure the reliability of the schedules and estimates. CBP did not fully follow key aspects of DHS’s acquisition management guidance for the Plan’s three highest-cost programs. For example, CBP plans to conduct limited testing of the highest-cost program—the Integrated Fixed Tower (IFT: towers with cameras and radars)—to determine its mission contributions, but not its effectiveness and suitability for the various environmental conditions, such as weather, in which it will be deployed. This testing, as outlined in CBP’s test plan, is not consistent with DHS’s guidance, which states that testing should occur to determine effectiveness and suitability in the environmental conditions in which a system will be used. Revising the test plan to more fully test the program in the conditions in which it will be used could help provide CBP with more complete information on how the towers will operate once they are fully deployed. CBP has identified mission benefits for technologies under the Plan, but has not yet developed performance metrics. CBP has identified such mission benefits as improved situational awareness and agent safety. Further, a DHS database enables CBP to collect data on asset assists (defined as instances in which a technology, such as a camera, or other asset, such as a canine team, contributed to an apprehension or seizure) that, in combination with other relevant performance metrics or indicators, could be used to better determine the contributions of CBP’s surveillance technologies and inform resource allocation decisions. However, CBP is not capturing complete data on asset assists, as Border Patrol agents are not required to record and track such data.
For example, from fiscal year 2010 through June 2013, Border Patrol did not record whether an asset assist contributed to an apprehension event for 69 percent of such events in the Tucson sector. Requiring the reporting and tracking of asset assist data could help CBP determine the extent to which its surveillance technologies are contributing to CBP’s border security efforts. This is a public version of a For Official Use Only—Law Enforcement Sensitive report that GAO issued in February 2014. Information DHS deemed as For Official Use Only—Law Enforcement Sensitive has been redacted. GAO recommends that CBP, among other things, apply scheduling best practices, develop an integrated schedule, verify Life-cycle Cost Estimates, revise the IFT test plan, and require tracking of asset assist data. DHS concurred with four of six GAO recommendations. It did not concur with the need for an integrated schedule or a revised IFT test plan. As discussed in this report, GAO continues to believe in the need for an integrated schedule and a revised test plan.
The FAIR Act requires executive agencies to submit each year to the Office of Management and Budget (OMB) inventories of activities that, in the judgment of the head of the agency, are not inherently governmental functions. The first FAIR Act inventories were due to OMB by June 30, 1999. According to an OMB official, most agencies met this requirement. The FAIR Act also requires these inventories to include information about (1) the fiscal year the activity first appeared on the FAIR Act list; (2) the number of full-time-equivalent (FTE) staff years necessary to perform the activity by a federal government source; and (3) the name of a federal government employee responsible for the activity from whom additional information about the activity may be obtained. It is important to note that the FAIR Act does not require an agency to list activities that the agency determines are inherently governmental and therefore not commercial. OMB published draft guidance in March 1999 and issued final guidance on the implementation of the FAIR Act on June 24—about a week before the first inventories were due. OMB implemented the FAIR Act by revising its Circular A-76, “Performance of Commercial Activities,” and the A-76 Supplemental Handbook. Under Circular A-76, executive agencies are to conduct cost comparison studies of commercial activities performed by government personnel to determine whether it would be more cost efficient to maintain them in-house or contract with the private sector for their performance. Under OMB’s revised guidance, agencies were expected to list the activities the agency determined are not inherently governmental using specific codes established for A-76. These include both “reason” and “function” codes.
The “reason codes” are used to show whether the agency believes that an activity determined to be commercial should be subject to an A-76 cost comparison or not, including identifying those commercial activities that cannot be competed because of a legislative or other exemption. The function codes are to characterize the types of activities that the agency performs. The function codes range from fairly broad categories, such as “family services,” to much more specific (and defense-related) activities, such as “Intermediate, Direct, or General Repair and Maintenance of Equipment—Missiles.” OMB decided that it would group a set of inventories for release together, rather than releasing them on a rolling, agency-by-agency schedule. In a September 30, 1999, Federal Register announcement, OMB listed the first group of FAIR Act inventories—from 52 agencies—that were made available to the public. Of these 52 inventories, 10 were from CFO Act agencies. Five of these were from cabinet agencies (Agriculture, Commerce, Education, Health and Human Services, and Housing and Urban Development) and the other five were from EPA, GSA, the National Aeronautics and Space Administration, the Social Security Administration, and the Agency for International Development. The remaining 42 inventories released in September 1999 were from smaller executive agencies such as the Marine Mammal Commission and the Office of National Drug Control Policy. The next step in implementing the FAIR Act includes potential challenges to the lists. According to the FAIR Act, within 30 days after publication of the notice of the public availability of the list, an interested party may challenge the omission of a particular activity from, or an inclusion of a particular activity on, the FAIR Act inventory. Within 28 days after an executive agency receives a challenge, it must decide the challenge and provide written notification, including a discussion of the rationale for the decision, to the challenger.
This decision can be appealed to the head of the agency within 10 days after the challenger receives written notification of the decision. Clearly, executive agencies and OMB still have plenty of work ahead to implement even the first step of the FAIR Act—the public release of inventories. Nevertheless, our initial review of selected inventories that have been released raises a number of important questions about the efforts thus far. On behalf of the Subcommittee, we will be seeking answers to these and related questions over the coming months in order to assess agencies’ efforts and to develop a body of best practices, as efforts under the FAIR Act move forward. A major area of interest during the initial implementation of the FAIR Act concerns the decisions agencies made about whether or not activities were eligible for competition and the reasons for those decisions. The FAIR Act provides that when an agency considers contracting with a private sector source for a commercial activity on its list, the agency shall use a competitive process to select the source unless it is exempted from doing so. A commercial activity in an agency can be exempted from competition for a variety of reasons. These reasons include legislative restrictions, other actions by Congress, Executive Orders, OMB decisions, or separate decisions by the relevant agency. Our initial review of the selected inventories suggests that questions can be raised about how agencies decided whether or not a commercial activity could be subject to competition, particularly when an agency reports that relatively few of its commercial activities could be considered for competition. Out of a total of 829 FTEs performing commercial activities listed in EPA’s FAIR Act inventory, about 30 FTEs (about 3.6 percent) were listed in commercial activities that could be considered for competition.
These activities were listed under six function codes, including (1) nonmanufacturing operations (such as mapping and charting or printing and reproduction activities); (2) maintenance, repair, alteration, and minor construction of real property; (3) regulatory management and support services; (4) installation services; (5) administrative support for environmental activities; and (6) other selected functions. EPA listed about 24 FTEs, or about 3 percent of the total of the commercial activities listed, as performing activities that are exempt from competition because of actions by Congress, Executive Order, or OMB. Most of these FTEs provide support for two function codes—research, development, testing, and evaluation; or administrative support for environmental activities. EPA officials told us that the agency needs to retain in-house expertise to effectively apply and enforce the nation’s environmental laws in fulfilling its mission and meeting emergency requirements. For example, EPA’s Deputy CFO told us that the agency exempted selected positions requiring scientific expertise in its research and development office in order to oversee the work produced by laboratories run by contractors. Out of a total of 7,249 FTEs GSA determined were providing commercial activities, it listed 4,556 FTEs (63 percent) who perform commercial activities that could be subject to competition. Almost half of these FTEs were involved in the maintenance, repair, or minor construction of real property. GSA also listed 874 FTEs (12 percent of the total commercial activities identified) as exempt from competition—more than half of these FTEs also perform activities involved with the maintenance, repair, or minor construction of real property. According to GSA’s FAIR Act inventory, 1,819 FTEs (25 percent of its FTEs performing commercial activities) should be retained in-house because the activities are being “reinvented.” GSA plans to reassess the activities for possible recategorization once reinvention efforts are completed.
The FTEs are devoted to various activities, including financial and payment services, information and telecommunication program management, and security and protection. Agencies used a variety of approaches to develop their FAIR Act inventories. For example, a number of agencies used their “Raines inventories” as a basis for their FAIR Act inventories. The Raines inventories were developed as part of a 1998 effort led by OMB under which agencies were to identify commercial and other activities and provide that information to OMB. Specifically, agencies were asked to list agency functions and positions supporting activities that were inherently governmental; commercial, but specifically exempt from the cost comparison requirements of OMB Circular A-76; commercial and should be competed; and commercial, but must be retained in-house (including the reason why). Officials from the Department of Commerce said that Commerce based its FAIR Act inventory almost entirely on the information from its Raines inventory. The Department asked its component organizations to update the information that previously had been prepared for OMB as part of its Raines inventory. According to Commerce officials, these organizations made only minor changes for the FAIR Act inventory. GSA described its approach as starting from the top and working down, with agency management forming a team to develop its FAIR Act inventory. GSA’s team was composed of one or two staff members from each of GSA’s service divisions and regional offices. GSA officials said that this team held lengthy discussions about GSA’s core mission and about which of its functions should be considered inherently governmental. In addition, a contractor was hired to train staff and to facilitate discussions on the topic of inherently governmental activities. GSA officials said that making the training as inclusive as possible was important to address the staff’s apprehensions about privatization. 
EPA delegated the responsibility for developing its inventory to its 10 regional offices because it decided that the regional officials closest to the work should make determinations about specific activities. EPA headquarters reviewed and compared the submissions from its regions and offices and worked to resolve any discrepancies. EPA’s Deputy CFO said that he does not expect the percentage of activities EPA identifies as commercial to remain static. He predicted that it would increase in the future, although he also emphasized that EPA is already very reliant on contractor support to fulfill its mission. The inventories now being released represent the first time that agencies have produced inventories under the FAIR Act. Thus, it is not surprising that a variety of different reporting formats are being used. It will likely take several reporting cycles before a documented set of best practices emerges that meets the needs of Congress and other interested parties. Also, these inventories will likely become more useful as they become clearer and more complete. For example, Commerce assigned a number of inventory entries a code indicating that the activities could be subject to competition. However, Commerce also assigned these same entries a “reason code” indicating that these activities are “prohibited from conversion to contract because of legislation.” Thus, the information reported does not appear to be consistent. In addition, Commerce did not assign any “reason codes” for a substantial number of FTEs listed throughout its FAIR Act inventory, so it is not clear how Commerce is characterizing these commercial activities. Officials in agencies we spoke to generally found that the A-76 codes needed additional refinement. Officials from the Department of Commerce noted that the function codes were oriented toward military activities and needed to be augmented to more fully capture the range of activities undertaken by civilian agencies. In response to concerns such as Commerce’s, OMB allows agencies to develop new function codes to better meet their needs.
Commerce, EPA, and GSA are among the agencies that are using additional function codes. While such flexibility is important to accurately reflect the diversity of the types of specific activities that individual agencies perform, it also needs to be balanced against the need for comparisons of the types of activities that are common across agencies. Beyond the requirements of the FAIR Act, some agencies are including information with their inventories that can provide additional perspective on the contracting and management issues confronting agencies. In the inventories that we have examined, we found that, in some cases, the agencies included supplemental information that was helpful, such as listing inherently governmental activities, describing the scope of activities currently under contract, and discussing how listed activities contribute to agencies’ strategic and annual performance. Including information about an agency’s inherently governmental activities (such as was provided to OMB as part of the Raines inventories) helps provide a fuller perspective about all of an agency’s activities, not just those the agency considers commercial. For example, although not required to do so, GSA’s FAIR Act inventory included inherently governmental activities. Such information can help provide Congress and other interested parties with a more complete picture of GSA’s activities and allows for more informed judgments about whether an activity currently characterized as inherently governmental should be considered commercial. Similarly, describing the scope of activities that an agency has already outsourced can provide an important perspective on and context for the agency’s operations. In their letters or other documents submitting their FAIR Act inventories to OMB, for example, GSA, EPA, and Commerce all describe their current levels of contracting. Commerce’s letter said that its service contracting outlays increased by 36 percent from 1996 through 1998. 
GSA stated that nearly 94 percent of its budget is spent on contractors. EPA’s letter estimates that the amount of resources currently contracted outside of EPA would translate into 11,000 to 15,000 FTEs had it retained the work inside the agency. Finally, it is important to recognize how an agency’s strategies, including any plans to contract for services, contribute to the achievement of the agency’s mission and its programmatic goals. In its introduction to its FAIR Act inventory, GSA states that its strategic plan provides the road map for achieving its mission and the context within which it developed this inventory, citing four goals, such as one to “create loyal customers by providing excellence in customer service.” EPA’s FAIR Act inventory links each commercial activity with 1 or more of EPA’s 10 strategic goals—such as linking the administrative support activities in the Office of Water with EPA’s strategic goal of ensuring clean and safe water. The FAIR Act inventories, then, can provide valuable information about the role of contracting in an agency’s efforts to provide cost-effective products and services. OMB has encouraged agencies to understand and use a variety of tools and strategies to make sound business decisions and enhance federal performance through competition and choice. Efforts under the FAIR Act can best be understood within the context of other initiatives, such as the Government Performance and Results Act, performance-based organizations, and franchise funds, as part of a package of ways agencies can improve services and reduce costs. FAIR Act inventories that provide information and perspective on how various initiatives are being used together can be helpful to congressional and other decisionmakers in assessing the economy, efficiency, and effectiveness of an agency. As noted earlier, our initial review of selected inventories raises a number of questions about the efforts thus far, which we will be reviewing for the Subcommittee.
These questions include the following: What decisions did agencies make about whether or not activities were eligible for competition and what were the reasons for those decisions? What processes did agencies use to develop their FAIR Act inventories? How useful are the FAIR Act inventories? What supplemental information can be included to increase the usefulness of inventories? By enacting the FAIR Act, Congress has increased the visibility of agencies’ commercial activities. Continuing congressional interest in the FAIR Act process is needed in order to maintain serious agency attention to developing and using the FAIR Act inventories. Oversight hearings, such as today’s hearing, send clear messages to agencies that Congress is serious about improving the efficiency and effectiveness of government operations and the effective implementation of the FAIR Act. We look forward to continuing to work with you and other Members of Congress as your oversight efforts continue. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For further contacts regarding this testimony, please contact J. Christopher Mihm at (202) 512-8676. Individuals making key contributions to this testimony included Steven G. Lozano, Thomas M. Beall, Susan Michal-Smith, Susan Ragland, and Jerome T. Sandau. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. 
General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists. | Pursuant to a congressional request, GAO discussed its observations on the initial implementation of the Federal Activities Inventory Reform (FAIR) Act of 1998, focusing on: (1) the progress to date in developing and releasing agencies' FAIR Act inventories; (2) the status of the initial steps taken to implement the FAIR Act; and (3) issues related to the Department of Commerce, the Environmental Protection Agency, and the General Services Administration FAIR Act inventories. GAO noted that: (1) most agencies' FAIR Act inventories have been submitted to the Office of Management and Budget (OMB) for review and consultation, and the first group of inventories is now publicly available; (2) clearly, executive agencies and OMB still have plenty of work ahead to implement the FAIR Act, including the public release of more inventories and the resolution of any challenges; (3) nevertheless, GAO's initial review of selected inventories raises some questions about the efforts thus far, which GAO will be reviewing for the House Subcommittee on Government Management, Information and Technology; (4) these questions concern: (a) the decisions agencies made about whether or not activities were eligible for competition and what the reasons for those decisions were; (b) the processes agencies use to develop their FAIR Act inventories; (c) how useful the FAIR Act inventories are; and (d) what supplemental information can be included to increase the usefulness of inventories; (5) by enacting the FAIR Act, Congress has increased the visibility of agencies'
commercial activities; (6) continuing congressional interest in the FAIR Act process is needed in order to maintain serious agency attention to developing and using the FAIR Act inventories; and (7) oversight hearings send clear messages to agencies that Congress is serious about improving the efficiency and effectiveness of government operations and the effective implementation of the FAIR Act. |
Our prior work has shown how leading companies use strategic sourcing—a process that moves an organization away from making numerous individual procurements to purchasing through a broader aggregate approach—to manage up to 90 percent of their procurements and achieve savings of 10 to 20 percent on the goods and services they buy. A strategic sourcing effort begins after an opportunity is identified, usually through a spend analysis. Spend analyses provide knowledge about how much is being spent for given products and services, who the buyers and suppliers are, and where opportunities exist for leveraged buying and other tactics to save money and improve performance. Based on such analysis, organizations evaluate and prioritize commodities for strategic sourcing. In 2013, we identified five foundational principles critical to carrying out an effective strategic sourcing approach: maintaining spend visibility, centralizing procurement, developing category strategies, focusing on total cost of ownership, and regularly reviewing strategies and tactics. Within those principles, leading companies highlighted the importance of identifying the most cost effective sourcing vehicles, clearly defining and communicating policies in order to eliminate unapproved purchases, or “rogue buying,” and ensuring that spending goes through approved contracts. Taken together, these principles enable companies to identify market trends, share knowledge about suppliers, make more informed contracting decisions, and take advantage of opportunities to save money and buy more efficiently. See appendix III for a full list of leading companies’ foundational principles for strategic sourcing. Since 2005, OMB has issued several memorandums to establish a framework, standards, and governance for government-wide strategic sourcing efforts. 
In its May 2005 memorandum, OMB defined strategic sourcing as the “collaborative and structured process of critically analyzing an organization’s spending and using this information to make business decisions about acquiring commodities and services more effectively and efficiently,” and directed agencies to take action to leverage and control government spending through strategic sourcing. In response to OMB direction, the Department of the Treasury and GSA, with support from OFPP, partnered to launch the FSSI program in November 2005 to strategically source commonly purchased products and services. The FSSI program was chartered under the purview of the Chief Acquisition Officer’s Council and the Strategic Sourcing Working Group, with OFPP ultimately responsible for providing oversight and guidance as well as ensuring the overall effectiveness of the program. The Working Group, comprised of representatives of various agencies, was responsible for vetting and approving initiatives and sourcing strategies, and establishing standards, processes, and policies. The FSSI Program Management Office within GSA was established to support the Working Group and coordinate the efforts of the agencies designated as executive agents to implement individual FSSI initiatives; provide guidance and oversight; review information and recommendations; and make strategic program decisions. In December 2012, OMB issued guidance that formalized a governance structure, provided additional requirements, and identified key characteristics of federal strategic sourcing efforts. For example, key characteristics included the use of tiered pricing or other appropriate strategies to reduce prices as cumulative sales volume increases, and contractual requirements with vendors to provide sufficient pricing, usage, and performance data to enable the government to improve commodity management practices on an ongoing basis. 
Noting that the majority of federal spending is driven by a small number of large agencies, OMB established the Strategic Sourcing Leadership Council (Leadership Council) and called on the seven largest and highest spending agencies and the Small Business Administration to take a leadership role on strategic sourcing. The Leadership Council is chaired by the Administrator of OFPP and comprised of representatives from the Departments of Defense (DOD), Energy, Health and Human Services, Homeland Security, and Veterans Affairs; GSA; the National Aeronautics and Space Administration (NASA); and the Small Business Administration. See appendix IV for key provisions from the 2012 OMB strategic sourcing memorandum. The Leadership Council is expected to propose plans and management strategies to maximize the use of strategic sourcing efforts. For example, in 2013, the Leadership Council established a three-step key decision point process for developing, approving, and overseeing the FSSIs, which is described in GSA’s FSSI guidance. The Leadership Council must provide approval at each step in order for a prospective FSSI to progress through the strategic sourcing process and obtain the requisite designation. Figure 1 summarizes the key decision point process. For the first key decision point, any interested agency can present a high-level opportunity analysis for a commodity that it believes may be an FSSI candidate. If the Leadership Council approves the candidate at the first key decision point, a commodity team is formed to develop and refine the commodity strategy and develop a solution strategy. If the Leadership Council approves the second key decision point, the commodity team is allowed to execute the strategy. After execution, the commodity team is to summarize the solution and assess success. 
If the new solution proves valuable, it is approved at the third decision point, given the “FSSI” designation, and is to become mandatory to the maximum extent practicable. Because the key decision point process is relatively new, it has not been fully applied to all of the current FSSIs in our review. For example, Janitorial and Sanitation Supplies and Maintenance, Repair, and Operations Supplies are listed on GSA’s FSSI website although they have not been formally approved as FSSIs at the third key decision point. After determining the scope and the total government-wide spending on the commodities covered by the proposed solution, the commodity team is to refine the scope by establishing a baseline for the amount of spending that can potentially be addressed by the solution. This baseline, referred to as addressable spend, is to be used to measure FSSI adoption by comparing the amount of actual spending through the FSSI to the total addressable spend, and is required for approval at the second key decision point. Addressable spend may exclude some spending as non-addressable to accommodate agency needs or unique circumstances, such as existing agency contractual arrangements where termination costs would be prohibitive and legislative or other authorities unique to an agency. If an agency seeks to exclude a portion of spending from the addressable spend total, the agency must provide the basis for the exclusion to the commodity team. If resolution is not reached at the commodity team level, the request for exclusions will be presented to the Leadership Council for discussion and resolution. Using the approved addressable spend as a baseline, Leadership Council agencies are required to provide the lead agency with non-binding commitment letters stating an agency’s intended volume of purchases through the proposed solution, for purposes of negotiation and pricing, prior to award. 
These letters do not obligate the agency to use a solution if, for example, the pricing, terms, and conditions are not aligned with expectations. Nonetheless, the lead agency is to describe how each Leadership Council agency will transition from existing vehicles to the new solution. The lead agency is also responsible for ongoing management of the FSSI, including keeping prices competitive, monitoring vendor performance, tracking agency adoption, and managing to performance metrics, including savings and small business achievement against benchmarks, among other things. According to OMB’s addressable spend guidance, identifying addressable spend, determining conditional commitment levels, and measuring adoption rates are critical to the success of strategic sourcing. Further, the Leadership Council is to continuously monitor information on performance and promote agency adoption, among other things. To assist this effort, GSA established the FSSI Program Management Office to monitor overall FSSI program usage for all commodities, to collect and analyze performance data, and to provide an assessment to the Leadership Council. The office is also tasked with disseminating best practices, providing guidance on performance measures and data collection, and recommending improvements to the FSSIs. In addition to formalizing the governance structure for government-wide strategic sourcing efforts, OMB also identified an interim CAP goal for strategic sourcing in February 2012. CAP goals were introduced in the fiscal year 2013 federal budget and focused on 14 major issues, including strategic sourcing. The strategic sourcing CAP goal statement directed agencies to increase their use of FSSI vehicles by at least 10 percent in both fiscal years 2013 and 2014. 
For fiscal years 2014-2015, OMB established new measures for the strategic sourcing CAP goal to measure Leadership Council agency savings, adoption, small business use, and reduction in duplication. Federal agencies have also initiated strategic sourcing efforts that do not fall within the purview of the FSSI program. For example, the Department of Veterans Affairs reported that its strategic sourcing efforts generated $1.4 billion in cost avoidances in fiscal year 2015, including savings from pharmaceutical purchases and medical supplies. The Department of Homeland Security reported that it saved $466 million in fiscal year 2015 through a range of agency- and government-wide strategic sourcing vehicles, including FSSIs. In December 2014, OFPP issued a memorandum that directs agencies to take specific actions to implement category management, an approach that is intended to manage entire categories of spending across government for commonly purchased goods and services. The memorandum notes that despite some progress in implementing strategic sourcing efforts, agencies continue to duplicate procurement efforts and award contracts for similar services to the same vendors, which imposes significant costs on contractors and agencies. In May 2015, the Leadership Council approved government-wide category management guidance which describes category management as a fundamental shift from the practice of handling purchasing, pricing analysis, and vendor relationship management in thousands of procurement units across government. According to the guidance, the federal government will “buy as one” under category management by creating common categories of products and services across agencies and managing each category as a mini-business with its own set of strategies, led by a category manager and a supporting senior team with expertise in their assigned category. 
This approach includes not only strategic sourcing but also a broader set of strategies, such as developing common standards in practices and contracts and improving data analysis and information sharing, to better leverage the government’s buying power and reduce unnecessary contract duplication. In December 2014, the Leadership Council approved organizing federal procurement spending into 10 common categories such as IT, travel, and construction. According to OFPP, these 10 categories collectively accounted for $275 billion in fiscal year 2014 federal spending. Figure 2 identifies the 10 common categories. In December 2014, the Leadership Council was given responsibility for approving government-wide categories of spend, prioritizing categories for management, and establishing guiding principles, among other duties. Category managers—government-wide leaders who are to develop and oversee category-specific strategies and encourage and drive category management principles and practices—are approved by OFPP and the Leadership Council. The effort is supported by GSA’s Category Management Program Management Office and the Acquisition Gateway, an IT portal that supports category management by sharing contract information such as terms and conditions, transactional pricing data, and contracting best practices. Table 1 identifies the key roles and responsibilities for category management governance. In February 2016, OFPP announced the Category Managers who will be overseeing the 10 categories of federal procurement spending. Category Managers’ first responsibility has been to prepare category strategic plans for Leadership Council approval. The category strategic plan is to identify category strategies, the reasons for selecting those strategies, how the category team plans to execute the strategies, and the anticipated results (benefits, costs, and risks) associated with the strategies. 
The category strategic plan is then to be reviewed and approved by the Leadership Council before the category team assembles resources and teams as required to execute the strategies. The Leadership Council approved strategic plans for all 10 categories in June 2016. According to Leadership Council category management guidance, performance reviews are to be conducted for each category at the beginning of each year. This review is to assess performance over the previous year and establish goals and targets for the upcoming year. Category reviews are to be briefed to the Leadership Council to share strategies, successes, and progress towards established goals and targets. OMB also established a CAP goal for category management with goal elements focused on savings, small business goals, reduction in contract duplication, and “spend under management.” Spend under management is a model designed to assess agency- and government-wide category management maturity, and to highlight successes as well as development areas across all categories and federal agencies. For example, agency-level maturity can be characterized by the use of agency-level solutions and the implementation of policies to drive behavior change, among other characteristics. Government-wide maturity is characterized by the adherence to Leadership Council approved strategies, the collection of prices paid data, and analysis of outstanding opportunity spend relative to actual spend. In September 2012, we reported that in fiscal year 2011 the FSSI program managed $339 million out of roughly $537 billion of total federal spending—or less than 1 percent—but reported achieving $60 million in savings. We also reported that the program faced key challenges in obtaining agency commitments to use new FSSIs and in increasing the level of agency spending directed through FSSI vehicles. 
Further, we found that the FSSI program had not yet targeted any of the government’s 50 highest-spend products and services for strategic sourcing. As such, we concluded that the focus only on low-risk, low-return efforts diminished the government’s ability to fully leverage its enormous buying power and achieve other efficiencies. To help ensure that government-wide strategic sourcing efforts further reflect leading practices, we recommended that OMB and OFPP issue an updated memorandum or other directive to federal agencies on calculating savings and establish metrics to measure progress toward goals; and direct the FSSI program to assess whether each top spend product and service government-wide is suitable for an FSSI, with a plan to address those products or services that were suitable for strategic sourcing. OMB and OFPP implemented our recommendation in part by establishing the Leadership Council to lead efforts to increase the government-wide management and sourcing of goods and services. The Leadership Council subsequently approved general principles for calculating savings for federal strategic sourcing initiatives in February 2014, and has begun to implement category management. In January 2014, we reported on the extent to which data and performance measures are available on the inclusion of small businesses in government-wide strategic sourcing initiatives. We found that GSA generally considered small businesses and small disadvantaged businesses, but lacked data and performance measures. For example, although GSA collected baseline data on proposed FSSIs, it had not developed a performance measure to determine changes in small business participation going forward. Consistent with OMB guidance and to track the effect of strategic sourcing on small businesses, we recommended that the Administrator of GSA establish performance measures on the inclusion of small businesses in strategic sourcing initiatives. 
In response to this recommendation, GSA issued guidance in April 2015 that provided information on how to determine baseline data for small business participation in strategic sourcing initiatives and annual requirements for assessing small business participation relative to that baseline. Moreover, the guidance requires a corrective action plan if a strategic sourcing initiative falls below the baseline for two consecutive quarters. GSA also created a strategic sourcing template to track baseline small business participation and monitor the change in small business spending for each individual strategic sourcing initiative, as required by OMB. To help ensure that agencies are tracking the effect of strategic sourcing on small businesses, we recommended that OFPP monitor agencies’ compliance with the requirement to maintain baseline data and performance measures on small business participation in strategic sourcing initiatives. As of July 2016, OFPP staff stated that they are in the process of addressing this recommendation. In September 2015, we found that the efforts of DOD, the Department of Homeland Security, and NASA to strategically manage spending for IT services, such as software design and development, have improved in recent years but still missed opportunities to leverage their buying power. Each of the agencies we reviewed designated officials responsible for strategic sourcing and created offices to identify and implement strategic sourcing opportunities, including those specific to IT services. Most of these agencies’ IT services spending, however, continued to be obligated through hundreds of potentially duplicative contracts that diminish the government’s buying power. These agencies managed between 10 and 44 percent of their IT services spending—which collectively accounted for about $11.1 billion in fiscal year 2013—through preferred strategic sourcing contracts in fiscal year 2013. 
Further, most of these agencies’ efforts to strategically source IT services had not followed leading commercial practices, such as clearly defining the roles and responsibilities of the offices responsible for strategic sourcing; conducting an enterprise-wide spend analysis; monitoring the spending going through the agencies’ strategic sourcing contract vehicles; or establishing savings goals and metrics. As a result, the agencies were missing opportunities to leverage their buying power and more effectively acquire IT services. We made a series of recommendations to each agency to improve their efforts to strategically source IT services. Each agency concurred with the recommendations addressed to it and has actions underway to implement them. Over the last 5 years, GSA officials responsible for the FSSI program reported that federal agencies spent almost $2 billion through seven FSSIs and achieved an estimated $470 million in savings, an overall savings rate of about 25 percent, comparable to savings reported by leading commercial companies. Overall agency adoption of the FSSIs, however, has remained low, resulting in reduced potential savings. For example, in fiscal year 2015, the first year for which all seven FSSIs had performance data, only $462 million of the $4.5 billion—or about 10 percent—in addressable spending targeted by the seven FSSIs we reviewed went through the FSSIs. In contrast, leading commercial companies historically manage 90 percent of their procurement spending through strategic sourcing approaches. Low adoption of the FSSIs by the large agencies that make up the Leadership Council—as well as government-wide adoption more generally—was due to a variety of reasons, including weaknesses in FSSI oversight and execution. 
The FSSIs generally incorporated the minimum characteristics of strategic sourcing vehicles identified by OMB guidance, such as collecting vendor transactional data, but not all FSSIs fully complied with OMB direction and maximized potential savings. From fiscal year 2011 to fiscal year 2015, GSA reported that agencies spent almost $2 billion through the FSSIs and achieved an estimated total of $470 million in savings, an overall savings rate of about 25 percent. In our prior work, we found that leading commercial companies achieved sustained savings rates of 10 to 20 percent using strategic sourcing approaches. As shown in table 2, reported annual spending through the FSSI program increased from $308 million in fiscal year 2011, when two FSSIs were in place, to $462 million in fiscal year 2015, when seven were in place. Four FSSIs experienced significant growth between fiscal years 2014 and 2015, though the Office Supplies FSSI experienced a decline of nearly 30 percent. For example, the Wireless FSSI grew from about $4 million in fiscal year 2014 to over $26 million in fiscal year 2015. Average estimated savings rates for individual FSSIs over the period ranged from 11 to 55 percent, which met or exceeded savings achieved by leading commercial companies. Moreover, FSSIs such as Domestic Delivery Services and Print Management achieved savings through demand management, which involves working with federal buyers and policy makers to identify and standardize requirements and specifications and eliminate unnecessary purchases and inefficient purchasing behaviors. For example, GSA procurement officials explained that through use of the Print Management FSSI, GSA significantly reduced its spending on print-related products and services by reducing its staff-to-printer ratio from 2 to 1 to 14 to 1, successfully reducing overall printing costs from an estimated $1.8 million in fiscal year 2011 to $0.6 million in fiscal year 2015. 
Domestic Delivery Services program officials reported that use of data from the FSSI helped agencies identify and reduce the number of express shipments and increase the use of more affordable ground services, resulting in cost savings. While the FSSIs generated savings and other benefits, federal agency adoption rates for the FSSIs remained far lower than the 90 percent achieved by leading commercial companies, reducing potential savings. For example, in fiscal year 2015, GSA officials estimated government-wide spending on the commodities covered by the FSSIs in our review to be $6.9 billion, of which about $4.5 billion was identified as addressable, a fraction of the $439 billion in fiscal year 2015 federal procurement spending. Furthermore, only about $462 million of the $4.5 billion in addressable spend—or slightly more than 10 percent—went through the FSSIs (see table 3). In fiscal year 2015, GSA reported that agencies spent $462 million through the FSSIs and saved $129 million, a savings rate of 28 percent. Had agencies spent the entire $4.5 billion of addressable spending through the FSSIs and achieved a similar savings rate of 28 percent, we estimate that up to $1.3 billion in fiscal year 2015 savings could have been achieved (see figure 3). We identified several factors that contributed to low utilization, including weaknesses in OFPP and Leadership Council oversight and various factors unique to the individual FSSIs. For example, in fiscal year 2015, the seven large procurement agencies within the Leadership Council reported spending $268 million through the FSSIs, less than 10 percent of the $2.8 billion that GSA estimated was the combined addressable spending for those agencies during the same period. 
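The adoption-rate and potential-savings arithmetic above can be sketched directly. This is a minimal illustration; the dollar figures are the rounded fiscal year 2015 amounts reported by GSA and cited in this section, stated in millions of dollars.

```python
# Fiscal year 2015 FSSI figures as reported by GSA, in millions of dollars (rounded).
addressable_spend = 4_500  # estimated addressable spend covered by the seven FSSIs
fssi_spend = 462           # actual spending that went through the FSSIs
fssi_savings = 129         # savings GSA reported on that spending

adoption_rate = fssi_spend / addressable_spend  # about 0.10, i.e., roughly 10 percent
savings_rate = fssi_savings / fssi_spend        # about 0.28, i.e., roughly 28 percent

# Had all addressable spend gone through the FSSIs at the same savings rate,
# estimated savings would have been roughly $1.3 billion:
potential_savings = addressable_spend * savings_rate  # about 1,260 (in millions)
```

The estimate simply extrapolates the observed savings rate to the full addressable base, which is the same logic the report applies in arriving at the "up to $1.3 billion" figure.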
In 2012, OMB directed Leadership Council agencies to promote, to the maximum extent practicable, strategic sourcing practices within their agencies, including issuing and enforcing mandatory use policies for government-wide solutions such as the FSSIs. FSSI guidance on the key decision point process requires information from each Leadership Council agency on how it will transition from existing vehicles to the FSSIs. While some Leadership Council agencies provided commitment letters and issued mandatory use policies for FSSIs, most of those that did used the FSSIs far less than their letters suggested, and none of the FSSIs included the required individual agency transition plans from Leadership Council agencies to increase FSSI adoption. In addition, according to OFPP staff and GSA officials, neither OFPP nor the Leadership Council revisited those commitments, held agencies accountable for meeting them, or monitored whether transition plans from existing agency vehicles to the FSSIs were provided. Standards for internal control in the government highlight the need to enforce accountability by evaluating performance and holding organizations accountable. Similarly, for fiscal years 2014 and 2015, OMB established new measures for the strategic sourcing CAP goal to include Leadership Council agency adoption of the FSSIs, but did not establish targets and performance measures either at the aggregate or agency level, as discussed later in the report. OFPP staff reported that the Leadership Council agencies provided input into spending projections for fiscal years 2015 and 2016, which are being used for internal management purposes, although the Leadership Council agencies did not, at an aggregate level, meet their spend targets in fiscal year 2015. As of the first quarter of fiscal year 2016, FSSI CAP goal measures are no longer tracked, although existing FSSIs will not expire for years to come. 
In addition to accountability, federal internal control standards call for agencies to monitor and evaluate results. However, OFPP, in coordination with the Leadership Council, had not set targets and measures for individual Leadership Council agencies to gauge progress over time and hold individual agencies accountable for results. Until the Leadership Council and OFPP create a means to incentivize Leadership Council agencies to use the FSSIs that they help create and approve, and measure results against individual agency targets, the FSSIs are at risk of continuing to experience low use and, by extension, missed opportunities for savings. Additionally, several of the individual FSSIs experienced challenges that affected, to varying degrees, their efforts. For example: The Office Supplies FSSI estimated Leadership Council agencies’ addressable spend to be $410 million, but these agencies spent only $55 million through the FSSI in fiscal year 2015. Office Supplies officials attributed the low spending in fiscal year 2015 to delays during the acquisition process, which compressed the amount of overlap between the second and third generations of the FSSI. As a result, when the FSSI contracts were awarded and unsuccessful offerors filed bid protests, the FSSI experienced a 6-month lapse in service. For example, the Air Force, with an estimated $36 million in addressable spending in fiscal year 2015, suspended its mandatory use policy due to the protests and did not reinstate it until March 2015. Office Supplies officials also attributed low spending through the FSSI to an overall decline in the government-wide market for office supplies, which according to officials has shrunk from $1.5 billion in fiscal year 2012 to $1.3 billion in fiscal year 2015 due to factors such as increased telework and reductions in agency procurement budgets. 
The Wireless FSSI estimated the combined addressable spend of Leadership Council agencies to be nearly $700 million, but those agencies spent only $12 million through the vehicle in fiscal year 2015. The Wireless program reported that although six Leadership Council agencies were buying off the vehicle, there had been few large enterprise buys due to the limited ability of agency acquisition teams to centralize the funding that pays for these service plans and devices. Wireless program officials also noted that they did not fully take into account when existing agency contracts would expire. For example, the program noted that several agencies needed up to 3 years to migrate from existing agency contracts to the FSSI. In 2012, OMB identified the minimum characteristics of strategic sourcing vehicles to increase savings and enable the government to improve its commodity management practices. These characteristics include the collection and use of transactional data to support continuous government analysis of pricing, usage, and performance data, and the use of tiered pricing to reduce prices as cumulative sales volume increases. Transactional data refers to the information generated when the government purchases goods or services from a vendor, including specific details such as descriptions, part numbers, quantities, and prices paid for the items purchased. The collection and use of transactional data is foundational to strategic sourcing, as it allows the government to perform active commodity management, monitor pricing changes to ensure that the benefits of strategic sourcing are maintained, and calculate savings based on changes in price. The six GSA FSSIs generally incorporated these minimum characteristics, whereas the Library of Congress’s Information Retrieval FSSI did so to a limited extent. 
Each of the GSA FSSIs currently collects vendor-reported transactional data to report total spending and to help calculate adoption rates and savings based on changes between a baseline unit price and the FSSI price. Prior to the creation of the Leadership Council in 2012, legacy FSSIs, including the first and second generations of Office Supplies and Domestic Delivery Services, as well as Print Management, calculated savings based on methods approved by their respective commodity teams in accordance with guidelines approved by the Strategic Sourcing Working Group, the governance body which preceded the Leadership Council. These approaches generally consisted of comparing prices offered under the FSSI program to the prices offered under GSA’s Federal Supply Schedule program. Office Supplies officials acknowledged that this approach may have overstated savings by four to five percent because the schedule price represents a ceiling price that GSA negotiates with vendors and not the prices paid, which can include additional discounts. Further, as we have recently reported, published schedule rates may not represent the actual prices paid under the schedule, which can include additional discounts and better pricing due to competition at the task order level. In 2012, we recommended that OFPP issue direction to federal agencies that includes guidance on calculating savings. In 2014, the Leadership Council approved savings principles to include savings based on price, cost avoidance, and administrative savings. According to the guidance, the baseline unit price used to calculate price savings should be either the current schedule lowest quartile price, the lowest price on any contract for similar quantity, or a lower price available from an existing vehicle or data source identified by a commodity team member and agreed to by the Leadership Council. 
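The first baseline option in the guidance above, the lowest-quartile schedule price, can be sketched as a simple calculation. This is an illustrative sketch only, not the FSSIs' actual methodology; the schedule prices, the FSSI price, and the quantity below are all hypothetical.

```python
import statistics

def price_savings(schedule_prices, fssi_price, quantity):
    """Estimate price savings against a lowest-quartile schedule baseline.

    The baseline unit price is taken as the first quartile of the prices
    offered on the schedule for the same item; savings are the difference
    between that baseline and the negotiated FSSI price, times the quantity.
    """
    baseline = statistics.quantiles(schedule_prices, n=4)[0]  # lowest quartile
    return (baseline - fssi_price) * quantity

# Hypothetical: five schedule prices for an identical item, a negotiated
# FSSI price of $9.00, and 1,000 units purchased.
savings = price_savings([12.0, 10.0, 14.0, 11.0, 13.0], fssi_price=9.0, quantity=1_000)
```

Using a lowest-quartile baseline rather than the full range of schedule prices is a conservative choice, consistent with the concern noted above that ceiling prices overstate savings.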
Further, savings methods are to be proposed as part of the second key decision point and approved by the Leadership Council. Office Supplies, now in its third generation, has used the transactional data it has collected over time to refine its savings methodology and reduce price variation. Office Supplies program officials told us that starting in fiscal year 2014, the Office Supplies FSSI began to use the lowest quartile price from the schedule as a baseline when four or more price points are established. Office Supplies also uses transactional data to reduce price variance for identical goods. Referred to as the dynamic pricing model, the program requires vendors to offer prices that fall within 10 percent of the lowest price offered. As an example, an FSSI official stated that an identical toner cartridge might be listed for anywhere from $100 to $300. Dynamic pricing reduces this variability by capping the price for goods offered at no more than 10 percent greater than the lowest price. GSA officials emphasized that it took 3 years for the Office Supplies FSSI to collect and standardize sales data to include part numbers, manufacturer name, and quantity, which has allowed them to implement a more precise methodology. A senior GSA official reported that the Janitorial and Sanitation Supplies FSSI and the Maintenance, Repair, and Operations Supplies FSSI will both adopt the lowest quartile method of calculating savings and dynamic pricing once they have the data to do so. Wireless FSSI officials told us that they collect and analyze transactional data from vendors and can share the average discount available through the FSSI, but that contractual terms prohibit the FSSI from sharing the actual prices that ordering agencies pay, which are often lower, unless a federal agency requests such information from the FSSI. 
The officials told us that this restriction inhibits the FSSI's ability to demonstrate to agencies savings that could be achieved through use of the FSSI. Until this issue is addressed by clarifying the contract terms, the program will remain limited in its ability to make a business case to agencies on the potential cost savings from using the Wireless FSSI. Information Retrieval officials reported that they collect transactional data from a limited number of vendors, but do not use those data to report spending or as the basis to calculate savings based on changes in price. Information Retrieval officials reported that after obtaining Leadership Council approval for FSSI designation, the program negotiated transactional data reporting requirements with 5 of its 69 vendors, those with aggregate sales above $3 million as of fiscal year 2014. Because Information Retrieval does not collect transactional data from all of its vendors, it lacks the data needed to calculate savings based on price. Rather, Information Retrieval calculates and reports administrative savings based on a methodology it developed that estimates savings from assumptions about the number of hours a typical agency would spend on similar procurements and agency labor rates for contracting staff. While the Leadership Council recognizes administrative savings in its 2014 savings principles, we excluded Information Retrieval's savings figures from our report in part because of inconsistencies and errors in its spending data that impeded our ability to independently verify the savings data. The GSA FSSI Program Management Office is responsible for ensuring oversight and support of the FSSIs, including monitoring compliance with FSSI standards. Under the 2012 OMB memorandum and the 2014 Leadership Council guidance for calculating savings, FSSIs are expected to collect transactional data and use that information to calculate savings based on price differences.
Officials from the GSA FSSI Program Management Office, however, indicated that they previously had not collected or reviewed data from the Information Retrieval FSSI to ensure compliance with FSSI standards, but began to engage with Library of Congress staff during the course of our review to gather more information on implementation of the FSSI. Until the GSA FSSI Program Management Office takes steps to ensure that the Information Retrieval FSSI meets these requirements, GSA and OFPP will not have the data or insight necessary to monitor and assess whether savings and other benefits are being achieved through the Information Retrieval FSSI. According to OMB and GSA guidance, a tenet of strategic sourcing is that higher volume generally translates to lower prices. Tiered pricing is a mechanism to capture volume-based savings in contracts where the volume is unknown; it allows customers to obtain percentage discounts that increase as aggregate purchasing tiers are reached. As a result of low FSSI adoption, however, mechanisms intended to drive further savings, such as the tiered price discounts negotiated with vendors, largely went unused because the spending thresholds were not reached. Table 4 illustrates an example of a tiered pricing model. While six of the seven FSSIs we reviewed established tiered pricing agreements with at least some vendors, Office Supplies is the only FSSI with an active contract where purchases were sufficient to meet a tiered pricing threshold. Office Supplies officials reported that spending with one of the FSSI's 24 vendors reached the $25 million tier, triggering a two percent discount on all subsequent purchases. Officials expect spending with six to eight other vendors to reach the $10 million tier during fiscal year 2016, triggering a one percent discount. Print Management does not include tiered discounts, and no changes are planned to the program since it is scheduled to end in September 2016.
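The tier mechanics can be illustrated with a short sketch; the thresholds and rates mirror the Office Supplies figures above, but the function itself is hypothetical and not drawn from any FSSI contract.

```python
def tiered_discount(cumulative_spend, tiers):
    """Return the discount rate applied to subsequent purchases once
    aggregate spending with a vendor crosses a tier threshold.
    `tiers` is a list of (threshold, rate) pairs sorted ascending."""
    rate = 0.0
    for threshold, tier_rate in tiers:
        if cumulative_spend >= threshold:
            rate = tier_rate  # keep the highest tier reached
    return rate

# Tiers echoing the Office Supplies example: $10M triggers 1%, $25M triggers 2%
tiers = [(10_000_000, 0.01), (25_000_000, 0.02)]
no_discount = tiered_discount(8_000_000, tiers)    # 0.0, no tier reached
one_percent = tiered_discount(12_000_000, tiers)   # 0.01
two_percent = tiered_discount(26_000_000, tiers)   # 0.02
```

Because a discount applies only after aggregate purchases cross a threshold, low adoption leaves spending below the first tier and the negotiated discounts never take effect.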
Information Retrieval negotiated tiered discounts with 5 of its 69 vendors, those with aggregate sales over $3 million. Based on our review of seven current FSSIs and interviews with OFPP staff and GSA officials, we identified four key lessons that can be generally applied to category management. These are the need for (1) stronger enforcement mechanisms to drive category management success; (2) targets and measures to hold agencies accountable for results; (3) the collection and use of transactional data to ensure that the benefits of strategic sourcing are achieved; and (4) strategies to increase small business participation. While the category management initiative incorporates many of these key lessons in its guidance and memorandums, it does not establish expectations or a process to set specific targets and measures for Leadership Council agencies to use approved vehicles. Our work found that the FSSIs achieved limited adoption and savings because individual agencies were not held accountable for results, in part because OMB and the Leadership Council did not exercise mechanisms to monitor agency use of the FSSIs, or drive or enforce agency compliance with commitment letters, transition plans, or the subsequent establishment of mandatory use or consideration policies. OFPP staff agreed with our assessment that a key lesson learned from the FSSIs is that stronger enforcement mechanisms are needed to increase agency compliance with category management plans and goals. A senior OFPP staff member noted that the early premise of strategic sourcing was that agencies would readily use new strategic sourcing vehicles, but that level of use has been inconsistent. OFPP staff stated that individual category management memorandums include stronger compliance requirements and mechanisms than were present under the FSSI program to drive compliance.
For example, OMB's October 2015 policy for workstations, June 2016 policy for software licenses, and August 2016 policy for mobile devices all direct agency Chief Information Officers to take specific actions within their agencies, using new authorities and responsibilities provided to them under the Federal Information Technology Acquisition Reform Act, to improve their agencies' IT management policies and practices. In particular, OMB's October 2015 memorandum on workstations includes several provisions intended to encourage agency compliance, such as prohibiting agencies from issuing new solicitations for laptops and desktops and directing them to leverage three existing solutions. It also directs agency Chief Acquisition Officers and Chief Information Officers to work together to develop transition and implementation plans for both the technical and acquisition aspects of the policy. Specifically, Chief Acquisition Officers were directed to provide baseline spend data for purchases made through the approved vehicles and identify when the agency will phase out existing contracts for workstations and transition to the preferred vehicles. Chief Information Officers were directed to develop implementation instructions for the use of standard configurations, the prohibition on new awards, oversight, compliance, and other management measures. Table 5 describes some of the key provisions of OMB's category management policy for workstations. OMB's June 2016 memorandum on enterprise software and August 2016 memorandum on mobile devices and services follow a similar approach to the workstation policy in that they specify key responsibilities and requirements. For example, the Enterprise Software Category Team was established under category management and is co-managed by GSA, DOD, and OMB to guide the development of government-wide software license agreements for mandatory agency use.
Under the policy, OMB is to encourage or direct use of existing best in class software licensing agreements. The memorandum further requires agencies to develop implementation plans, in accordance with guidance, to address how agencies will move from their existing agreements to those mandated under category management. Agencies must also justify and obtain approval to pursue new agreements that overlap or conflict with the mandated agreements. Similarly, the mobile devices and services memorandum directs agencies to baseline their usage for devices and services; reduce the number of contracts for mobile devices and services and transition to a government-wide solution or solutions; and modify demand management practices to optimize plan pricing and device refresh schedules. To help drive category management success, OFPP staff told us that they anticipate requiring the use of specific vehicles or agreements and requiring agencies to develop transition and implementation plans in subsequent category management memorandums. OFPP staff acknowledged that the IT category is unique in that it leverages efforts under the PortfolioStat initiative as well as the authorities provided to Chief Information Officers under the Federal Information Technology Acquisition Reform Act. However, OFPP staff indicated that OFPP can exercise the authority provided to the Administrator under the Office of Federal Procurement Policy Act to direct agencies to take certain actions. For example, under the act, the Administrator is to provide overall direction of procurement policy and promote economy and efficiency in federal procurements. OFPP is also reviewing and updating its business case guidance for new interagency and agency-specific acquisitions to ensure awareness and appropriate coordination with the Leadership Council.
This policy outlines required elements of a business case analysis as well as a process for developing, reviewing, and approving business cases to support the establishment and renewal of government-wide acquisition contracts and certain multi-agency contracts, multi-agency blanket purchase agreements under the federal supply schedules program, agency-specific contracts, and agency-specific blanket purchase agreements over a certain threshold. The purpose of the policy is to ensure that the expected return from investment in a contract or agreement is worth the effort and cost associated with planning, awarding, and managing a new vehicle, and to address unjustified duplication among contracts. Under the revised policy, category managers will be responsible for reviewing new agency business cases and advising the Leadership Council of potential duplication or opportunities for new or expanded strategic sourcing initiatives. While OFPP's October 2015 workstation memorandum established aggregate goals for adoption, it did not establish adoption goals or targets for individual agencies. Specifically, the memorandum calls for civilian agencies collectively to increase their spending through Leadership Council-approved vehicles to 75 percent by the end of fiscal year 2018, but it did not provide specific targets for each Leadership Council agency to achieve. Further, the overarching guidance to implement category management—the Leadership Council charter as updated in April 2016 and the Leadership Council's May 2015 category management guidance—does not specify the extent to which specific Leadership Council agencies should adopt category management solutions. For example, the April 2016 Leadership Council charter asks agencies to adopt approved strategies, but does not set an expectation to develop agency-specific targets for expected levels of Leadership Council agency use of the approved solutions.
Moreover, the May 2015 guidance explains that Leadership Council agencies should advocate for advancing category management initiatives and increasing adoption of solutions, but does not include a process to specify a minimum level of use or other targets and performance measures by agency. Standards for internal control in the federal government, meanwhile, highlight the need to evaluate performance and hold organizations accountable. Given the low use of FSSIs by the Leadership Council agencies, OFPP may be at risk of repeating that outcome unless it clarifies expectations, establishes a process in guidance regarding agency-specific targets and measures for Leadership Council agency adoption of category management initiatives and FSSIs, and ensures that these targets and measures are set. Our work found that the FSSI program lacked agency-specific targets and measures to increase agency accountability. OMB directed Leadership Council agencies to use and promote federal strategic sourcing efforts to the maximum extent possible and established CAP goals to encourage agency adoption, but did not establish agency-specific targets and measures by which to monitor and hold agencies accountable for using solutions that are strategically sourced or identified as best in class under category management. OFPP staff identified several strategies under category management that resulted from lessons learned under the FSSI program which they expect will help increase agency accountability and results. For example, OFPP staff recognized the importance of establishing specific targets as a basis for holding agencies accountable for results. Specifically, the category management CAP goal aims to increase civilian agency spending on workstations through Leadership Council-approved vehicles from a baseline of 39 percent in 2015 to 75 percent by the end of calendar year 2019 and to reduce the number of new or renewed contracts for workstations by 30 percent.
OFPP reports CAP goal progress quarterly during the fiscal year. As of July 2016, OMB reported that agencies had spent $171 million on laptops and desktops and that 58 percent of this spending had gone through the approved vehicles, but acknowledged that most of the spending in this category was expected to occur in the fourth quarter of the fiscal year and that it was uncertain whether this level of adoption would continue. OMB's category management CAP goal, however, does not report agency-specific targets and measures to monitor whether agencies adopted specific FSSI and category management vehicles. Given the low agency usage of the FSSIs, reporting agency-specific usage of category management-approved vehicles is important to understand whether the category management effort is achieving results. Standards for internal control in the government highlight the need to enforce accountability by evaluating performance and holding organizations accountable. Without reporting on agency-specific targets and measures in the CAP goal, OMB will continue to lack the means to monitor progress and hold agencies accountable for using best in class solutions or adopting category management principles. On a more general level, OFPP staff also noted that the "spend under management" model tracks attributes such as leadership and strategy based on a tiered maturity model to measure agency- and government-wide progress toward meeting category management goals. OFPP staff reported that they are using spend under management dashboards in management meetings to provide greater visibility into agency-level data by category. Data calls will be completed at least annually, and agencies will be tracked and monitored on their progress toward agency- and government-wide maturity, according to OFPP staff.
Our work found that another key lesson learned from the FSSI program was the importance of collecting and using transactional data to perform active commodity management, to monitor pricing changes so that the benefits of strategic sourcing are maintained, and to calculate savings based on changes in price. Under category management, OFPP, GSA, and the Leadership Council have taken a number of steps to institutionalize the collection and use of transactional data:

- Category management guidance emphasizes the importance of collecting transactional data to determine prices actually paid, to support comparative analytics (i.e., normalizing for quantity or delivery term variances) and usage, business intelligence, and performance data, and to enable agencies to improve their commodity management practices on an ongoing basis.

- The Leadership Council charter establishes the expectation that member agencies share agency prices offered, transactional prices-paid data, and contract terms and conditions, as requested.

- GSA launched an online portal called the Acquisition Gateway to house contract and pricing information for each of the categories in one central location. Content gathered from across government, and validated by the category manager, will provide information and expertise on data, acquisition vehicles, market intelligence, prices-paid information, sustainability-related information, and analysis.

- In June 2016, GSA published a rule on transactional data reporting. The rule creates new contract clauses requiring vendors to report transactional data such as part numbers, quantities, and prices paid. The new clauses will be implemented initially on a pilot basis for federal supply schedule contracts and will apply to all new GSA government-wide acquisition contracts and GSA government-wide indefinite-delivery, indefinite-quantity contracts.
OMB's policy memorandum for workstations was informed by an interagency Workstation Category Team, established by the Leadership Council, led by NASA and comprised of subject matter experts and managers of large government-wide and agency-wide hardware contracts. The Workstation Category Team performed research into pricing, terms, and conditions. OMB's 2015 workstation memorandum directs agencies to consolidate workstation acquisitions through three government-wide solutions to reduce administrative costs and drive greater transparency into pricing by simplifying the collection and comparison of this data. OMB determined that the three government-wide solutions were generally awarded and are managed according to category management principles, including the monitoring of prices paid, usage, and performance data. According to the policy, as well as category management guidance, the Leadership Council will evaluate the performance and value of these approved contracts on an annual basis and revise as necessary. In June 2016, OMB reported that ceiling catalogue prices for personal computers had dropped by up to 50 percent since the release of the workstation policy. To gain better visibility into prices for software agreements, OMB's June 2016 software policy directs executive agents of government-wide software agreements to post and maintain standard pricing and terms and conditions for the agreements on the Acquisition Gateway. This information will be used by the Enterprise Software Category Team—co-managed by GSA, DOD, and OMB—to identify existing agreements for approval and endorsement as best in class agreements for government-wide use until new government-wide software agreements can be established.
According to the software policy, these efforts will provide increased visibility into government-wide spending on software licenses, which will be posted on the Acquisition Gateway to further assist in the creation of new software agreements and the development of other tools. As we have previously reported, because strategic sourcing can reduce the number of available contracting opportunities, some members of the small business community have expressed concern about the impact of federal strategic sourcing initiatives on small businesses. Consequently, ensuring that small business concerns are appropriately addressed under the category management initiative is the fourth key lesson learned. Category management guidance and policy emphasize the goal of maintaining or increasing small business participation and require all proposed strategic sourcing vehicles and category management strategies to baseline small business use and set goals to meet or exceed that baseline. For example, the May 2015 category management guidance reiterates OMB's 2012 policy to increase participation by small businesses to the maximum extent practicable by baselining small business use under current strategies and setting goals to meet or exceed that baseline participation under the new strategic sourcing vehicles. The Leadership Council also approved draft guidance pertaining to best in class criteria to include having a small business plan that baselines current participation rates and seeks to maintain or increase them. In its October 2015 workstation memorandum, OMB established a small business baseline and goal to increase small business participation.
According to OMB's policy on workstations, the percentage of workstation work (in dollars) awarded to small businesses in fiscal year 2014 under the three vehicles identified as best in class was 64 percent, or nearly 10 percentage points greater than the small business participation rate for these commodities overall, and nearly 85 percent of the vendors on these solutions are small businesses. To maintain and increase this participation, the workstation category team, in consultation with the Small Business Administration and the Leadership Council, will review small business participation rates and work with the managers of the three vehicles to evaluate opportunities to increase participation. OMB's June 2016 software policy differs from the workstation policy in that it does not identify best in class vehicles, but rather directs the Enterprise Software Category Team to guide the development of government-wide software license agreements for mandatory agency use, and states that OMB will encourage or direct use of best in class existing software licensing agreements. OMB's draft criteria for best in class vehicles state that specific criteria for determining best in class contracts will vary depending on the category and commodity, but that such solutions should generally include a small business plan that baselines current participation rates and seeks to maintain or increase them. GSA officials noted that each of the 10 government-wide categories will have a small business goal and that the category management program as a whole will consider the needs of small business when formulating procurement strategies. Further, category managers are expected to actively engage the small business community for their commodity area in order to address these businesses' concerns. GSA officials also noted that they may consider new vendor management strategies such as small business on-ramping.
Although not specifically addressed in category management guidance, officials noted that they consider small business on-ramping to be a best practice that will likely be featured as a strategy under category management. As an example, GSA's One Acquisition Solution for Integrated Services contract vehicle includes on-ramping, which GSA describes as a competitive process that can be conducted as necessary to address competition at the task order level, mergers and acquisitions that shrink the number of vendors, customer-driven requests for a more focused sub-pool, and/or small businesses outgrowing their small business size. The purpose of the process is to ensure that there remain an adequate number of contractors eligible to compete for task orders to meet the government's requirements. The FSSI program has led to procurement savings of nearly $500 million over the last 5 years and achieved a savings rate comparable to that achieved by leading commercial companies. But unlike leading companies, which use their strategic sourcing vehicles 90 percent of the time, federal agencies directed less than 10 percent of their spending on the goods and services offered under the FSSIs through those vehicles, a missed opportunity that potentially cost billions of dollars in savings over the last 5 years. In fiscal year 2015 alone, GSA reported that agencies saved $129 million out of the $462 million spent through the FSSIs, representing a savings rate of 28 percent. Had agencies directed the entire amount of addressable spending through the FSSIs and achieved a similar rate of 28 percent, we estimate that up to $1.3 billion in fiscal year 2015 savings could have been achieved. The low usage rates for the FSSI program are not unique. For example, our prior work found that agencies managed only 10 to 44 percent of their IT services spending through their preferred strategic sourcing contracts.
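The arithmetic behind the fiscal year 2015 estimate can be reproduced in a few lines. The spending and savings figures come from the report; the addressable-spending value is our assumption, back-calculated from the $1.3 billion estimate rather than stated in the source.

```python
# Fiscal year 2015 figures reported by GSA (cited above)
fssi_spend = 462e6       # dollars spent through the FSSIs
fssi_savings = 129e6     # reported savings

savings_rate = fssi_savings / fssi_spend   # roughly 0.28

# ASSUMPTION: addressable spending of about $4.6 billion, implied by
# the report's $1.3 billion estimate ($1.3B / 0.28); not stated directly.
addressable_spend = 4.6e9
potential_savings = savings_rate * addressable_spend  # on the order of $1.3 billion
```

The gap between `fssi_spend` and `addressable_spend` is the adoption shortfall the report describes: most addressable spending never flowed through the FSSIs, so the 28 percent savings rate applied to only a small slice of it.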
OFPP's category management initiative dwarfs the FSSI program in size and scope, targeting two-thirds of federal spending. Given the scale of category management, it is imperative that the lessons of implementing the FSSIs are learned and addressed. Chief among those lessons is that the large procurement agencies that make up the Leadership Council and govern the FSSI and category management initiatives must themselves be more accountable for achieving results. These agencies fell short in using the very same FSSIs that they approved, including in providing transition plans for how agencies would migrate to use of FSSI solutions as required under FSSI guidance. While many of the lessons learned during the course of the FSSI program have been reflected in the initial category management efforts, neither the April 2016 Leadership Council charter nor the Leadership Council's May 2015 category management guidance establishes expectations and a process for setting agency-specific targets and measures to assess adoption of solutions and performance. Moreover, since the category management CAP goal provides regular updates on progress, agency accountability for results would be enhanced by including agency-specific progress against targets and performance measures. Given the low agency usage of the FSSIs, without such actions, and without ensuring these targets and measures are set, OMB, and specifically the Office of Federal Procurement Policy, will lack the means to monitor progress and hold large procurement agencies accountable for using existing FSSIs or best in class solutions identified under subsequent category management efforts. Considering the magnitude of spending targeted by category management, taking these actions will increase the likelihood that category management will deliver on its promise to substantially change the federal procurement landscape and generate substantial savings and other benefits for federal customers.
At a tactical level, the GSA FSSI Program Management Office is responsible for ensuring oversight of the FSSIs, including monitoring compliance with FSSI standards. In two cases, more engagement by the office may be beneficial. For example, the Library of Congress’s Information Retrieval FSSI collects transactional data to only a limited extent and does not use that data to calculate savings. GSA FSSI Program Management Office officials, however, indicated that they had not, until recently, engaged with the Information Retrieval FSSI to ensure compliance with FSSI standards. Similarly, the Wireless FSSI negotiated contractual terms that limit its ability to share actual prices paid with other federal agencies. Collecting and using transactional data and sharing prices paid information across federal agencies are key provisions of strategic sourcing and are identified in current strategic sourcing guidance. The FSSI Program Management Office, in collaboration with the Information Retrieval and Wireless FSSIs, could enhance the performance of these FSSIs by making sure their practices are fully aligned with current guidance, to the maximum extent practicable. 
To better promote federal agency accountability for implementing the FSSI and category management initiatives, we recommend that the Administrator of Federal Procurement Policy take the following four actions:

- Ensure that transition plans are submitted and monitored as required by FSSI guidance and guidance governing specific category management initiatives;

- Update the Leadership Council charter to establish an expectation that Leadership Council agencies develop agency-specific targets for use of the solutions approved;

- Revise the 2015 category management guidance to establish a process for setting targets and performance measures for each Leadership Council agency's adoption of proposed FSSIs and category management solutions, and ensure agency-specific targets and measures are set; and

- Report on agency-specific targets and metrics as part of the category management CAP goal.

To improve the management of current FSSIs, we recommend that the GSA FSSI Program Management Office take the following two actions:

- Provide oversight and support to the Information Retrieval FSSI to better align its practices with current strategic sourcing guidance related to collecting and using transactional data to calculate savings; and

- In collaboration with the Wireless FSSI, determine whether the initiative should modify its contract terms to enable the FSSI to share prices-paid data with other federal agencies.

We provided a draft of our report to OMB, GSA, and the Library of Congress. OMB and GSA concurred with our recommendations to improve oversight and accountability of FSSI and category management efforts. The agencies' comments are summarized below, and written comments from GSA and the Library of Congress are reproduced in appendixes V and VI, respectively. We also received technical comments from OMB and GSA, which we incorporated as appropriate.
OMB did not provide written comments on the draft report, but in oral comments, OMB staff generally agreed with our recommendations and identified several actions to address them. OMB's actions include, in part, the October 2016 issuance of a draft circular for public comment to implement category management practices. Regarding our recommendation that OFPP ensure that transition plans are submitted and monitored as required by guidance, OMB staff agreed that transition plans should comply with guidance. OMB staff indicated, however, that retroactively requiring agencies to submit FSSI transition plans is not needed because all of the FSSIs are currently being evaluated against category management best in class criteria as part of the migration to a category management approach. OMB staff stated that, for example, the Office Supplies FSSI has been designated as a best in class solution, which will require agencies to submit transition plans. The OMB draft circular on category management also provides that OMB will issue policy on the agency migration process to best in class solutions. We believe these actions, if implemented, meet the intent of our recommendation. Given that transition plans were also required under FSSI guidance but were not submitted, it will be important for OMB to ensure that agencies follow through on submitting required plans going forward. Regarding our second and third recommendations that OFPP establish an expectation that Leadership Council agencies develop agency-specific targets for use of approved solutions and revise guidance to establish a process for setting targets and performance measures, OMB staff agreed with the need for agency-specific targets for use of best in class solutions. OMB staff noted that they plan to establish targets for large spend agencies for best in class solutions and update category management governance and reporting procedures and processes as needed.
OMB staff also agreed that Leadership Council agency progress toward implementing category management should be tracked and measured. Both OMB staff and the draft circular on category management indicate that spend under management will be used as the principal measure by which OMB will assess adoption of category management. As noted earlier in our report, spend under management tracks progress in areas such as data and metrics to monitor adoption of category management practices. OMB staff indicated that they plan to evaluate agencies' spend under management results, which include agency adoption of best in class solutions, at least annually and then review with agency leaders their progress toward meeting goals. Regarding our fourth recommendation to report on agency-specific targets and metrics as part of the category management CAP goal, OMB staff indicated that results achieved relative to CAP goal targets will be reported on a quarterly basis on Performance.gov. In addition, OMB will track agency spend through best in class contracts, and these data will likely be used as an internal category metric and shared with the agencies. Taken together, these actions are responsive to our recommendations; however, given the low use of the FSSIs, OMB should continue to carefully monitor category management implementation as it moves forward and ensure that OFPP uses the planned targets and measures noted above to hold agencies accountable for individual results. In short, greater accountability can lead to increased savings. In its written comments, GSA agreed with our recommendations to provide oversight and support to the Information Retrieval FSSI and to determine whether the Wireless FSSI should modify contract terms to better share prices-paid data.
GSA plans to conduct a gap analysis of the Information Retrieval FSSI and its compliance with FSSI standards, to include determining unmet practices required for collecting and using transactional data for the FSSI program management office’s government-wide oversight and reporting, as well as providing the Library of Congress with FSSI best practice tools and resources related to collecting transactional data and calculating savings. With respect to our recommendation regarding the Wireless FSSI, GSA told us it would conduct an assessment to determine the best approach to share Wireless FSSI prices paid data with other federal agencies. In written comments, the Library of Congress concurred with the report’s findings and noted that initial progress has been made to ensure that its partnership with GSA results in enhanced analysis and transparency of the Information Retrieval FSSI. We are sending copies of this report to the appropriate congressional committees; the Administrator of General Services; the Inspector General, Library of Congress; and the Director, Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII. We were asked to examine the Federal Strategic Sourcing Initiative (FSSI) program and lessons learned. This report addresses (1) the extent to which savings and other benefits have been achieved by the FSSI program, and (2) lessons, if any, from the Office of Federal Procurement Policy (OFPP) and General Services Administration (GSA) implementation of the FSSI program and the extent to which those lessons have been incorporated into OFPP’s category management initiative. 
We focused our review on seven FSSIs that were active between fiscal years 2011 and 2015: (1) Office Supplies; (2) Domestic Delivery Services; (3) Print Management; (4) Wireless; (5) Maintenance, Repair, and Operations Supplies; (6) Janitorial and Sanitation Supplies; and (7) Information Retrieval. This covers the period since we last assessed FSSI implementation through the last full year for which FSSI spending and savings data were available. GSA is the executive agent for all the FSSIs except Information Retrieval, which is administered by the Library of Congress. We excluded the Telecommunications Expense Management FSSI, which ceased operations in the third quarter of fiscal year 2014, because limited data on the program were available. FSSIs establish multiple award blanket purchase agreements, basic ordering agreements, and/or indefinite delivery, indefinite quantity contracts, through which federal agencies may obtain the specific goods and services they need. To determine the extent to which savings and other benefits have been achieved through the seven FSSIs, we collected and reviewed agency-reported data on spending, savings, and adoption. We also reviewed FSSI guidance on the key decision point process and program documents, and interviewed officials responsible for each of the FSSIs under our review, as well as the FSSI Program Management Office within GSA, which is responsible for monitoring overall FSSI program performance and usage regardless of the lead agency managing the initiatives. For government-wide and addressable spending data, we reviewed agency-reported data, acquisition plans, business case analyses, and key decision point documents prepared for Leadership Council review. We also reviewed relevant Office of Management and Budget (OMB) and GSA guidance and interviewed FSSI program officials to clarify our understanding of how each program developed its government-wide spending figures. 
These figures were typically based on a variety of data sources, including data from the Federal Procurement Data System-Next Generation and purchase card data. Since government-wide spending figures are estimates and not actual performance, we reviewed the methods used to formulate them and found them to be reasonable and the data sufficiently reliable for providing appropriate context for the actual spending that went through the FSSI vehicles. We also reviewed program documents and interviewed FSSI program officials to better understand the basis and rationale for spending excluded from addressable spending. This baseline, referred to as addressable spend, is to be used to measure FSSI adoption by calculating the amount of actual spending through the FSSIs as a percentage of the total addressable spending through the FSSIs, and is required for approval at the second key decision point. For FSSI spending and savings data, we took a number of steps to assess the reliability of the data reported by each FSSI. For the GSA FSSIs, which report spending based on vendor-reported transactional data, we obtained documentary and testimonial evidence on the internal controls used by vendors, the FSSI teams, and the FSSI Program Management Office to ensure the accuracy of the spending data reported. For three of the six GSA FSSIs, we also collected and reviewed a non-generalizable sample of transactional data reports. The Library of Congress does not report spending through the Information Retrieval FSSI based on vendor-reported data, although it collects a limited amount of such data, and we took similar steps to assess their reliability. We determined that the spending data for the GSA FSSIs were sufficiently reliable for our purposes but that the spending data for Information Retrieval lacked internal controls, were inconsistent, and contained errors. As a result, the Information Retrieval spending data were not sufficiently reliable, and we excluded them from our analyses. 
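The adoption measure described above is a simple ratio of actual to addressable spending. As an illustrative sketch (not an agency system), the calculation can be expressed in Python using the fiscal year 2015 government-wide figures reported elsewhere in this review: roughly $4.5 billion in addressable spending and $462 million in actual spending through the FSSIs.

```python
def adoption_rate(actual_spend, addressable_spend):
    """FSSI adoption: actual spending through the FSSI vehicles
    as a percentage of total addressable spending."""
    return 100.0 * actual_spend / addressable_spend

# Fiscal year 2015 government-wide figures, in millions of dollars
addressable = 4_500   # spending that could have gone through the FSSIs
actual = 462          # spending that actually did

print(f"FY2015 adoption rate: {adoption_rate(actual, addressable):.0f}%")
```

With these figures the ratio comes out to roughly 10 percent, consistent with the low use of the FSSIs discussed in this report.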
For savings data, we reviewed the methodologies used by each FSSI to calculate savings and compared them with the savings principles approved by the Leadership Council in 2014, which include price savings, cost avoidance, and administrative savings. Each of the GSA FSSIs uses transactional data to calculate the difference between a baseline unit price and the FSSI price. While the FSSIs varied in the precision of their methods, we verified that they generally complied with Leadership Council guidance for calculating savings and confirmed with GSA officials that these methods were approved by their respective commodity teams or the Leadership Council. Information Retrieval does not calculate savings based on price, but rather reports administrative savings. While guidance allows FSSIs to report administrative savings, due in part to the inconsistencies and errors in the Information Retrieval FSSI’s spending data, we could not independently verify the savings data reported and did not include those figures in our report. For adoption rates reported by the FSSIs, we focused our analysis on fiscal year 2015. We verified that the adoption rates reported by GSA were calculated correctly based on the addressable and actual spending reported. We also performed our own calculations to determine the adoption rates for Leadership Council agencies based on the addressable and actual spend data reported by GSA. To better understand the factors that explain agency adoption of the FSSIs, we reviewed guidance on the key decision point process, which was established by the Leadership Council in 2013 as a framework for the development, approval, and oversight of the FSSIs. 
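The price-savings method described above multiplies the difference between a baseline unit price and the FSSI unit price by quantities drawn from vendor-reported transactional data. A minimal sketch follows; the line items, prices, and quantities are invented for illustration and are not actual FSSI data.

```python
# Illustrative vendor-reported transactional data:
# (baseline unit price, FSSI unit price, quantity purchased)
transactions = [
    (4.00, 3.10, 10_000),
    (1.25, 0.95, 50_000),
]

# Price savings: (baseline price - FSSI price) x quantity, summed over line items
total_savings = sum((base - fssi) * qty for base, fssi, qty in transactions)

# Actual spending through the FSSI vehicle at the FSSI prices
fssi_spend = sum(fssi * qty for _, fssi, qty in transactions)

print(f"Savings: ${total_savings:,.0f}")
print(f"Savings rate vs. FSSI spend: {100 * total_savings / fssi_spend:.0f}%")
```

For reference, the 28 percent fiscal year 2015 savings rate cited in this report corresponds to reported savings divided by actual spending through the FSSIs ($129 million of $462 million).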
We identified requirements for Leadership Council agencies to provide the FSSIs with commitment letters based on their addressable spending and to issue mandatory use policies as appropriate, and assessed the extent to which those agencies actually used the FSSIs in accordance with the commitments they provided and the mandatory use policies they implemented. We also interviewed GSA procurement officials about the factors affecting their use of the FSSIs and obtained documents from the FSSIs in which agencies explained their rationale for not using specific FSSIs, but we did not interview officials from each agency within the Leadership Council regarding the factors affecting their respective agencies’ use of the seven FSSIs we reviewed. In addition, we interviewed FSSI program officials and senior leadership officials within GSA and OMB about Leadership Council agency adoption of the FSSIs, as well as government-wide adoption more generally. We also assessed the extent to which the seven FSSIs incorporated key characteristics identified by OMB, to include the collection and use of transactional data, the calculation of savings based on changes in price, and the use of tiered pricing to reduce prices as cumulative sales volume increases. For each FSSI, we interviewed FSSI program officials and collected program documents, such as acquisition plans and contract documents specifying contractual terms requiring vendors to provide certain data, and information on the use of tiered discounts. To determine what lessons, if any, emerged from OFPP and GSA implementation of the FSSI program, and the extent to which those lessons have been incorporated into OFPP’s category management initiative, we reviewed the seven current FSSIs and conducted interviews with GSA and Library of Congress officials responsible for FSSI implementation, as well as GSA officials and OFPP staff responsible for oversight. 
Based on this review, we identified lessons learned which can be generally applied to category management and corroborated our findings with GSA officials and OFPP staff responsible for the implementation and oversight of the FSSI program to determine which lessons were key. We also reviewed category management policy and guidance and independently assessed the extent to which these lessons had been incorporated. We also reviewed the CAP goal quarterly progress updates for fiscal years 2012 through 2016 as posted on Performance.Gov for both strategic sourcing and category management. We conducted this performance audit from November 2015 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
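One of the key characteristics assessed in our methodology is tiered pricing, under which the unit price falls as cumulative sales volume across agencies crosses contract thresholds. The mechanism can be sketched as follows; the tier thresholds and prices are invented for illustration, since actual FSSI tier schedules are set in the underlying contracts.

```python
# Hypothetical tier schedule: (cumulative sales threshold in dollars,
# unit price in effect once cumulative sales reach that threshold)
TIERS = [
    (0,         4.00),
    (1_000_000, 3.75),
    (5_000_000, 3.50),
]

def current_unit_price(cumulative_sales):
    """Return the unit price in effect at a given cumulative sales volume."""
    price = TIERS[0][1]
    for threshold, tier_price in TIERS:
        if cumulative_sales >= threshold:
            price = tier_price
    return price

print(current_unit_price(500_000))    # base tier price
print(current_unit_price(6_000_000))  # deepest discount tier
```

The design point illustrated here is why the 2012 OMB memorandum required that the government get credit for all sales regardless of payment method: volume-based discounts only trigger if every purchase counts toward the cumulative threshold.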
Appendix III: Leading Companies’ Foundational Approaches for Strategic Sourcing

Principle 1: Maintain Spend Visibility
- Automate and integrate procurement and financial systems across the organization
- Establish a catalogue of defined services and related terminology to be applied consistently across invoice line items to allow for more efficient spend analysis

Principle 2: Centralize Procurement
- Centralize procurement knowledge and decisions by aligning, prioritizing, and integrating procurement functions within the organization
- Ensure that spending goes through approved contracts, which is the key to an effective centralized process
- Clearly define and communicate policies to eliminate unapproved purchases, or “rogue buying”

Appendix IV: Key Provisions from the Office of Management and Budget 2012 Strategic Sourcing Memorandum

The memorandum directed the Leadership Council to:
- identify at least five products and/or services for which new government-wide acquisition vehicles or management approaches should be developed and made mandatory, to the maximum extent practicable, for the Leadership Council agencies;
- for these identified commodities, provide supporting spend analysis, estimate savings opportunities, and define metrics for tracking progress;
- develop transition strategies to the new solutions;
- identify agencies to serve as “executive agents” to lead the development of these new solutions;
- propose plans and management strategies to maximize the use of each strategic sourcing effort; and
- propose vendor management or other strategies to reduce the variability in the prices paid for similar goods and services, where the development of new government-wide vehicles may not be immediately feasible. 
The memorandum also directed GSA to:
- in consultation with the Leadership Council, implement at least five new government-wide strategic sourcing solutions in fiscal years 2013 and 2014;
- increase the transparency of prices paid for common goods and services for use by agency officials in market research and negotiations; and
- promulgate requirements, regulations, and best practices.

It further provided that new government-wide vehicles:
- reflect input from a large number of potential agency users regarding demand for the goods and services being considered, the acquisition strategy (including contract pricing, delivery and other terms and conditions, and performance requirements), and the commodity management approach;
- ensure that the federal government gets credit for all sales, regardless of payment method, so that volume-based pricing discounts can be applied;
- include tiered pricing, or other appropriate strategies, to reduce prices as cumulative sales volume increases;
- require vendors to provide sufficient pricing, usage, and performance data to enable the government to improve commodity management practices on an ongoing basis; and
- are supported by a contract administration plan that demonstrates commitment by the executive agent to perform active commodity management and monitor vendor performance and pricing changes throughout the life of the contract.

Finally, all proposed strategic sourcing agreements must baseline small business use under current strategies and set goals to meet or exceed that baseline participation.

In addition to the individual named above, W. William Russell (Assistant Director), Emily Bond, Peter Haderlein, Kristine Hassinger, Julia Kennon, Angie Nichols-Friedman, Max Sawicky, Roxanna Sun, and Holly Williams made key contributions to this report.

Each year, federal agencies obligate over $400 billion on goods and services, but they miss out on savings when they do not leverage their collective buying power. In 2005, the Office of Management and Budget (OMB) directed agencies to leverage spending through strategic sourcing. 
In 2014, OFPP, an office in OMB, announced its category management initiative, which is intended to further streamline and manage entire categories of spending across the government more like a single enterprise. GAO was asked to examine the current status of the FSSI program and the extent to which OFPP has incorporated lessons learned from the program into its category management initiative. This report addresses (1) savings and other benefits the FSSI program has achieved, and (2) lessons identified and incorporated into OFPP's category management initiative. GAO analyzed FSSI spending, savings, and adoption data for all seven active FSSIs for fiscal years 2011 through 2015; reviewed OMB, OFPP, Leadership Council, and GSA strategic sourcing and category management guidance; and interviewed GSA and FSSI program officials and OFPP staff. From fiscal year 2011 through 2015, federal agencies reported spending almost $2 billion through the Federal Strategic Sourcing Initiatives (FSSI) GAO reviewed, and reported an estimated total of $470 million in savings. Federal agencies' low use of the FSSIs, however, diminished the potential savings that could have been achieved. For example, in fiscal year 2015, federal agencies spent an estimated $6.9 billion on the types of goods and services available through these FSSIs. Of this amount, $4.5 billion was considered “addressable” and could have been spent through the FSSIs, but just $462 million was. While total savings reported for fiscal year 2015 came in at $129 million—a savings rate of 28 percent—had all of the agencies directed their addressable spending through FSSIs, up to $1.3 billion in savings could have been achieved, assuming the same savings rate. See figure. GAO found that FSSI use has been low, in part, because Leadership Council agencies, a cohort of large federal agencies responsible for FSSI governance, directed only 10 percent of their collective spending to the FSSIs. 
FSSI guidance requires agencies to develop plans to transition from existing agency vehicles to FSSIs, but Office of Federal Procurement Policy (OFPP) staff and General Services Administration (GSA) officials stated such plans were not collected or used to monitor FSSI use. Ensuring agencies submit these plans and monitoring them is consistent with internal control standards to evaluate and hold agencies accountable for performance. OFPP's category management initiative largely incorporates key lessons learned from the FSSIs into guidance, such as addressing small business concerns and obtaining data on prices paid. OFPP, however, has not yet ensured that agency-specific targets and performance measures for adoption of FSSI and category management solutions are set. Until OFPP takes action to do so, it is at risk of agencies underutilizing existing FSSI and category management solutions and, in turn, of diminished cost savings. To increase potential savings, GAO is making six recommendations, including that OFPP ensure agencies submit transition plans, monitor their use, and ensure agency-specific targets and performance metrics to measure adoption of FSSI and category management solutions are set. OMB and GSA concurred with the recommendations.
To a large degree, spectrum management policies flow from the technical characteristics of radio spectrum. Although the radio spectrum spans nearly 300 billion frequencies, 90 percent of its use is concentrated in the 1 percent of frequencies that are below 3.1 gigahertz. The crowding in this region has occurred because these frequencies have properties that are well suited for many important wireless technologies, such as mobile phones, radio and television broadcasting, and numerous satellite communication systems. The process known as spectrum allocation has been adopted, both domestically and internationally, as a means of apportioning frequencies among the various types of uses and users of wireless services and preventing radio congestion, which can lead to interference. Interference occurs when radio signals of two or more users interact in a manner that disrupts the transmission and reception of messages. Spectrum allocation involves segmenting the radio spectrum into bands of frequencies that are designated for use by particular types of radio services or classes of users, such as broadcast television and satellites. Over the years, the United States has designated hundreds of frequency bands for numerous types of wireless services. Within these bands, government, commercial, scientific, and amateur users receive specific frequency assignments or licenses for their wireless operations. The equipment they use is designed to operate on these frequencies. During the last 50 years, developments in wireless technology have opened up additional usable frequencies, reduced the potential for interference, and improved the efficiency of transmission through various techniques, such as reducing the amount of spectrum needed to send information. While this has helped limit congestion within the radio spectrum, competition for additional spectrum remains high. 
Wireless services have become critically important to federal, state, and local governments for national security, public safety, and other functions. At the same time, the consumer market for wireless services has seen extraordinary growth. For example, mobile phone service in the United States greatly exceeded the industry’s original growth predictions, as it jumped from 16 million subscribers in 1994 to an estimated 110 million in 2001. The legal framework for allocating radio spectrum among federal and non-federal users emerged from a compromise over two fundamental policy questions: (1) whether spectrum decisions should be made by a single government official or a body of decision-makers; and (2) whether all non-federal users should be able to operate radio services without qualification, or if a standard should be used to license these operators. The resulting regulatory framework—dividing spectrum management between the President and an independent regulatory body—is rooted both in the President’s responsibility for national defense and the fulfillment of federal agencies’ missions, and in the encouragement and recognition by the federal government of the investment made by private enterprise in radio and other communications services. The first federal statute to establish a structure for spectrum management—the Radio Act of 1912—consolidated licensing authority with the Secretary of Commerce. However, the act proved to be deficient in addressing the burgeoning growth of radio communications and ensuing interference that occurred in the late 1910s and 1920s. Specifically, the Secretary of Commerce lacked the authority to use licensing as a means of controlling radio station operations, or to take actions to control interference, such as designating frequencies for uses or issuing licenses of limited duration. In recognition of such limitations, deliberations began in the 1920s to devise a new framework for radio spectrum management. 
Although there was general agreement that licensing should entail more than a registration process, there was debate about designation of the licensing authority and the standard that should govern the issuance of licenses. The Radio Act of 1927, reflecting a compromise on a new spectrum management framework, reserved the authority to assign frequencies for all federal government radio operators to the President and created the Federal Radio Commission (FRC) to license non-federal government operators. Composed of five members from five different regions of the country, FRC could assign frequencies, establish coverage areas, and establish the power and location of transmitters under its licensing authority. Further, the act delineated that a radio operation proposed by a non-federal license applicant must meet a standard of “the public interest, convenience and necessity,” and that a license conveyed no ownership in radio channels nor created any right beyond the terms of the license. FRC’s authorities were subsequently transferred to the Federal Communications Commission (FCC), and the FRC was abolished upon enactment of the Communications Act of 1934, which brought together the regulation of telephone, telegraph, and radio services under one independent regulatory agency. The 1934 act also retained the authority of the President to assign spectrum to and manage federal government radio operations. The need for cooperative action in solving problems arising from the federal government’s interest in radio use was recognized in 1922 with the formation of the Interdepartment Radio Advisory Committee (IRAC), comprised of representatives from the federal agencies that use the most spectrum. IRAC, whose existence and actions were affirmed by the President in 1927, has continued to advise whoever has been responsible for exercising the authority of the President to assign frequencies to the federal government. 
In 1978, the President’s authority for spectrum management of federal government users was delegated to NTIA, an agency of the Department of Commerce. IRAC assists NTIA in assigning frequencies to federal agencies and developing policies, programs, procedures, and technical criteria for the allocation, management, and use of the spectrum. Over the past 75 years, since the 1927 act formed our divided structure of spectrum management, there has been historical evidence of cooperation and coordination in managing federal and non-federal users to ensure the effective use of spectrum. For example, FCC and IRAC agreed in 1940 to give each other notice of proposed actions that might cause interference or other problems for their respective constituencies. Further, FCC has always participated in IRAC meetings, and NTIA frequently provides comments in FCC proceedings that impact federal radio operations. And, as I will discuss later, FCC and NTIA also work together with the Department of State to formulate a unified U.S. position on issues at international meetings that coordinate spectrum use regionally and globally. However, as demand for this limited resource increases, particularly with the continuing emergence of new commercial wireless technologies, NTIA and FCC face serious challenges in trying to meet the growth in the needs of their respective incumbent users while accommodating the needs of new users. The current shared U.S. spectrum management structure has methods for allocating spectrum for new uses and users of wireless services, but these methods have occasionally resulted in lengthy negotiations between FCC and NTIA over how to resolve some allocation issues. Since nearly all of the usable radio spectrum has been allocated already, accommodating more services and users often involves redefining spectrum allocations. One method, spectrum “sharing,” enables more than one user to transmit radio signals on the same frequency band. 
In a shared allocation, a distinction is made as to which user has “primary” or priority use of a frequency and which user has “secondary” status, meaning it must defer to the primary user. Users may also be designated as “co-primary,” in which case the first operator to obtain authority to use the spectrum has priority to use the frequency over another primary operator. In instances where spectrum is shared between federal and non-federal users—currently constituting 56 percent of the spectrum in the 0-3.1 GHz range—FCC and NTIA must ensure that the status assigned to users (primary/secondary or co-primary) meets users’ radio needs, and that users abide by rules applicable to their designated status. Another method to accommodate new users and technologies is “band-clearing,” or re-classifying a band of spectrum from one set of radio services and users to another, which requires moving previously authorized users to a different band. Band-clearing decisions affecting either only non-federal or only federal users are managed within FCC or NTIA respectively, albeit sometimes with difficulty. However, band-clearing decisions that involve radio services of both types of users pose a greater challenge. Specifically, they require coordination between FCC and NTIA to ensure that moving existing users to a new frequency band is feasible and not otherwise disruptive to their radio operation needs. While many such band-clearing decisions have been made throughout radio history, these negotiations can become protracted. For example, a hotly debated issue is how to accommodate third-generation wireless services. FCC also told us that the relationship between FCC and NTIA on spectrum management became more structured following the enactment of legislative provisions mandating the reallocation of spectrum from federal to non-federal government use. 
To address the protracted nature of some spectrum band-clearing efforts, some officials we interviewed have suggested establishing a third party—such as an outside panel or commission, an office within the Executive branch, or an inter-agency group—to arbitrate or resolve differences between FCC and NTIA. In some other countries, decisions are made within one agency or within interagency mechanisms that exist for resolving contentious band-clearing issues. For example, the United Kingdom differs from the U.S. spectrum management structure in that a formal standing committee, co-chaired by officials from the Radiocommunications Agency and the Ministry of Defense, has the authority to resolve contentious spectrum issues. Another proposed mechanism is the preparation of a national spectrum plan to better manage the allocation process. The Omnibus Budget Reconciliation Act of 1993 required NTIA and FCC to conduct joint spectrum planning sessions. The National Defense Authorization Act of 2000 included a requirement for FCC and NTIA to review and assess the progress toward implementing a national spectrum plan. Top officials from FCC and NTIA said that neither requirement has been fully implemented. However, they indicated their intention to implement these directives. A central challenge for the United States in preparing for WRCs, at which international spectrum allocation decisions are made, is completing the preparatory actions to ensure that the U.S. is able to effectively negotiate for international allocations that best serve the interests of domestic federal and non-federal spectrum users. The management of our domestic spectrum is closely tied to international agreements on spectrum use at regional and global levels. Domestic spectrum allocations are generally consistent with international allocations negotiated and agreed to by members of the International Telecommunication Union (ITU). 
The spectrum allocation decisions reached at these international conferences can affect the direction and growth of various wireless communications services and have far-reaching implications for the multi-billion dollar wireless communications industry in this country and abroad. While the first international radio conferences were aimed at interference avoidance for early radio uses, such as maritime safety, meeting this same objective has become increasingly challenging throughout the last century with the proliferation of services and the number of nations adopting communications that utilize the radio frequency spectrum. For example, the emergence of new radio applications with international ramifications, such as broadcasting, radio navigation, and satellite-based services, has increased the need to reach agreements to prevent cross border signal interference and maximize the benefits of spectrum in meeting global needs, such as air traffic control. At the same time, the number of participating nations in these negotiations has risen dramatically—from 9 nations in the first conference held in 1903, to 65 nations in 1932, to 148 at the conference held in 2000—along with the frequency of conferences (now held every 2 to 3 years), and the number of agenda items negotiated at a conference (e.g., 11 in 1979; 34 in 2000). There has also been a movement toward regional cooperation at WRCs. Because decisions on WRC agenda items are made by vote of the participating countries—with one vote per country—uniform or block voting of nations in regional alignment has emerged to more effectively advance regional positions. The State Department coordinates and mediates the U.S. position for the WRC and leads the U.S. delegation to the conference through an ambassador appointed by the President. 
We found strong agreement among those we interviewed that it is important for the United States to develop its position in advance of the conference in order to have time to meet with other nations to gain international support for our positions. However, we heard differences of opinion about the United States’ preparatory process for the conferences. U.S. positions on WRC agenda items are developed largely through separate processes by FCC and NTIA with the involvement of their respective constituencies. To obtain input from non-federal users, FCC convenes a federal advisory committee comprised of representatives of various radio interests (e.g., commercial, broadcast, private, and public safety users), and solicits comment through a public notice in the Federal Register. NTIA and federal government users can and do participate in the FCC process. To obtain the views of federal spectrum users, IRAC meets to provide NTIA with input on WRC agenda items. Although IRAC’s WRC preparatory meetings are closed to the private sector due to national security concerns, non-federal government users may make presentations to IRAC to convey their views on WRC agenda items. Any differences of opinion between FCC and NTIA on the U.S. position must ultimately be reconciled into a unified U.S. position on each WRC agenda item. In cases where differences persist, the ambassador acts as a mediator to achieve consensus to form a position. State Department and FCC officials told us that the work of FCC and NTIA with their respective constituencies and with each other in preparation for a conference leads to U.S. positions on WRC agenda items that are thoroughly scrutinized, well reasoned, and generally supported among federal and non-federal parties. In contrast, some non-federal officials told us that the NTIA process does not allow the private sector adequate involvement in the development of U.S. positions for the WRC. 
Also, some federal and non-federal officials said that since each agency develops its positions through separate processes, it takes too long to meld the two toward the end of the preparatory period. For example, to speed up our preparatory process, the former U.S. Ambassador to the 2000 WRC recommended merging the separate FCC and NTIA preparatory groups to get an earlier start at working with industry and government users to reach a consensus on U.S. positions regarding WRC agenda items. Differing views also have been expressed on how we appoint an individual to head the U.S. delegation. Since the early 1980s, the President has appointed an ambassador to head the U.S. delegation to WRCs for a time period not exceeding 6 months. The former U.S. Ambassador to the 2000 WRC said that ambassador status is generally believed to confer a high level of support from the administration, and it is viewed as helping to achieve consensus in finalizing U.S. positions and enhancing our negotiating posture. However, the former ambassador also said that the brief tenure of the appointment leaves little time for the ambassador to get up to speed on the issues, solidify U.S. positions, form a delegation, and undertake pre-conference meetings with heads of other delegations to promote U.S. positions. In addition, the ambassador said there is concern about the lack of continuity in leadership from one conference to the next, in contrast to other nations whose delegations are led by high-level government officials who serve longer terms and may represent their nations through multiple conferences. Leaders of national delegations with longer terms are perceived as being more able to develop relationships with their counterparts from other nations, helping them to negotiate and build regional and international support for their positions. On the other hand, NTIA officials expressed the view that the ambassador’s negotiating skill was of equal importance to the duration of the appointment. 
NTIA has several activities to encourage efficient spectrum use by the federal government, but does not have assurance that these activities are effective. NTIA is required to promote the efficient and cost-effective use of the federal spectrum that it manages—over 270,000 federal frequency assignments at the end of 2000—“to the maximum extent feasible.” NTIA has directed agencies to use only as much spectrum as they need. NTIA’s process for assigning and reviewing spectrum places primary responsibility for promoting efficiency in the hands of the individual agencies because the determination of agencies’ spectrum needs depends on an understanding of their varied missions. Moreover, the large number of frequency assignments that require attention (NTIA processes between 7,000 and 10,000 assignment action requests—applications, modifications, or deletions—from agencies every month on average) makes it necessary to depend heavily on the agencies to justify and review their assignment needs. NTIA authorizes federal agency use of the spectrum through its frequency assignment process. As part of this process, NTIA requires an agency to justify on its application that it will use the frequency assignment to fulfill an established mission and that other means of communication, such as commercial services, are not appropriate or available. In turn, agencies generally rely on mission staff to identify and justify the need for a frequency assignment and complete the engineering and technical specifications for the application. NTIA and IRAC review the application to ensure, among other things, that the assignment will not interfere with other users. Once NTIA has authorized spectrum use by agencies, it requires that the agencies review their frequency assignments every 5 years to determine that the assignments are still needed and meet technical specifications. NTIA said that it may delete assignments that have not been reviewed for more than 10 years. 
Officials from the seven federal agencies in our review told us that they attempt to use spectrum as efficiently as possible, but five of them are not completing the required five-year reviews in a timely or meaningful way. According to agency officials, this is due to shortages of staff available to complete the reviews or because the reviews are a low agency priority. For example, a spectrum manager for a major agency division has over 1,000 frequency assignments that have not been reviewed in 10 years or more. A spectrum manager in another agency said that the agency has eliminated all field staff responsible for assisting with the five-year reviews, which has impaired the timeliness and quality of the reviews. The spectrum manager for a third federal agency said that he was sure that the agency was not using all of its frequency assignments, but he added that conducting a comprehensive review would be cost prohibitive and generate limited benefits to the agency. However, we note that although the agencies may not reap benefits from conducting these reviews, if these reviews result in the release of unused or underutilized spectrum, other federal and non-federal users could benefit. Although NTIA’s rules and procedures also include NTIA monitoring programs designed to verify how spectrum is used by federal agencies, NTIA no longer conducts these programs as described. For example, at one time, the Spectrum Management Survey Program included NTIA site visits to verify if agency transmitters were being used as authorized. NTIA said that although this program helped correct frequency assignment information and educate field staff on NTIA requirements, it is not currently active due to NTIA staff shortages. In addition, the Spectrum Measurement Program made use of van-mounted monitoring equipment to verify that federal agencies were utilizing assigned frequencies in accordance with the assignment’s requirements. 
NTIA said that although this program provided useful information, the van-mounted verification has been discontinued due to lack of resources. As a result of the limited nature of the assignment and review programs and decreased monitoring, NTIA lacks assurance that agencies are only using as much spectrum as they need. NTIA also seeks to promote efficiency by advocating spectrum conservation through research and technical initiatives, but some of these activities face implementation problems. Two examples illustrate the potential and the limitations of these types of efforts. First, NTIA, with the approval of IRAC, has required all federal agencies to upgrade land-based mobile radios by setting deadlines for halving the spectrum bandwidth used per channel (in essence, freeing up half of each band currently in use) for radios in certain highly congested bands—a process called narrowbanding. This requirement has the potential to greatly expand the spectrum available for land mobile telecommunications, but some agencies said that they are struggling to meet the deadline due to a lack of sufficient staff and funding. Several agencies in our review said they will not complete the upgrades before the deadline. For example, the Chief Information Officer for one agency that is a member of IRAC compared the requirement to an unfunded mandate, and indicated that his office did not have the financial resources needed to upgrade the tens of thousands of radios that fall under the requirement. A second example of a technological initiative is an NTIA-sponsored pilot program for federal agencies in six cities in the early 1990s to establish a spectrum sharing method for voice radio communications, called trunking, which conserves spectrum by putting more users on each radio channel. According to NTIA, some agencies resisted the program because it was more costly for agencies to participate in trunking than it was for them to use their own channels. 
In addition, some agencies said the trunking systems did not meet their mission needs. NTIA added that the program was only completely successful in Washington, DC, where agency demand for frequency assignments, and therefore spectrum congestion, is extremely high. We found efforts to encourage this technology in other countries as well. In the United Kingdom, providers of emergency services are being encouraged to join a trunking system. Once the new system has proved to be capable of meeting their needs, certain public safety users will incur financial penalties if they do not use this system. Additionally, in one province in Canada, a variety of public safety users have voluntarily begun developing a trunking system in order to use their assigned spectrum more efficiently in light of the fees they must pay for this resource. NTIA also told us that the congressionally mandated spectrum management fees agencies must pay help to promote the efficient use of spectrum. These fees are designed to recover part of the costs of NTIA’s spectrum management function. The fees began in 1996 and amounted to about $50 per frequency assignment in 2001. NTIA decided to base the fee on the number of assignments authorized per agency instead of the amount of spectrum used per agency because the number of assignments better reflects the amount of work NTIA must do for each agency. Moreover, NTIA stated that this fee structure provides a wider distribution of cost to the agencies. Although NTIA officials said that spectrum fees provide an incentive for agencies to relinquish assignments, it is not clear that this promotes efficient use of spectrum, in part because agencies may be able to reduce assignments without returning spectrum. 
For example, a spectrum manager for a federal agency said that the spectrum fee has caused the agency to reduce redundant assignments, but that it has not impacted the efficiency of the agency’s spectrum use because the agency did not return any spectrum to NTIA as a result of reducing its assignments. We have learned that other countries are moving toward using payment mechanisms for government spectrum users that are specifically designed to encourage government users to conserve their use of spectrum, rather than to recover the cost of managing the spectrum. Both Canada and the United Kingdom are reviewing their administrative fee structures at this time with the intent of encouraging spectrum efficiency. We are conducting additional work on the management of the radio spectrum to determine how the current rules and regulations governing spectrum holders affect the rollout of new technologies and services and the level of competition in markets that utilize spectrum. To address these and other related issues, we are building on the information presented here today concerning U.S. rules and regulations governing spectrum management. We are interviewing an array of sources, including providers of mobile telephone, satellite, and paging services; broadcasters; NTIA; other federal agencies; and public safety representatives. Tomorrow we are hosting a panel with experts from several of these sources to elicit additional input on these and other issues. 
For fiscal year 2005, the District’s Office of Contracting and Procurement—its lead contracting office—reported conducting over 20,000 transactions valued at $1.2 billion on behalf of 55 District entities, five of which accounted for $596 million (see table 1 for the departments, agencies, and other entities reporting procurements through this office). Over two-thirds of the District’s procurement dollars managed through the lead contracting office were spent on professional and public safety services, human care, and road and highway construction. In addition, some District entities, including the Board of Education for District of Columbia Public Schools and the Department of Mental Health, procure independently of the lead contracting office. According to information available from District sources, these entities spent over $600 million in fiscal year 2005. The District also has special requirements related to being the seat of the federal government. The fiscal relationship between the federal government and the District as well as city governance have been perennial questions for Congress, and the District’s local autonomy has evolved significantly in the last 30 years. In 1973, Congress enacted the District of Columbia Self-Government and Governmental Reorganization Act or Home Rule Act, which established the structural framework of the current District government. The Home Rule Act allowed for an elected Mayor and a council with certain delegated legislative powers. However, Congress explicitly reserved legislative authority over the District. The Home Rule Act generally provides a framework and processes for Congress to enact, amend, or repeal any act with respect to the District. Congress used this authority in the 1990s to enact laws intended to restore the city to financial solvency and improve its management in response to a serious financial and management crisis. 
Since the 1870s, the federal government has made financial contributions to the District’s operations. In fiscal year 2006, federal government appropriations included $603 million in special federal payments to the District, including $75 million for elementary, secondary, and post-secondary education initiatives. In 1997, the council, with the Mayor’s approval, amended the District’s procurement law to centralize procurement under one contracting office, which would be the exclusive contracting authority for all procurements covered under the act. The amendment also authorized the Office of Contracting and Procurement to be headed by a CPO who would be appointed by the Mayor for a 5-year term, with the advice and consent of the council, and could only be removed from office for cause. The CPO was required to have no fewer than 7 years of experience in federal, state, or local procurement. The CPO, by delegation of the Mayor, was given the exclusive contracting authority for all procurements covered under the law. The amendment was enacted around the same time that various procurement studies were published, with one describing procurement in the District as “in crisis”—as evidenced by over 600 contracts expiring in 90 days and a rushed response to ensure that vital services were not interrupted. The studies reported that procurement processing was inconsistent and responsibilities were widely distributed across the District; training for procurement personnel was insufficient and few were professionally certified; agencies maintained separate databases; and there was no acquisition planning process to define needs. Centralization under the CPO’s office was expected to improve the quality of the District’s procurement operations by promoting accountability, decreasing procurement costs, eliminating duplication of effort, and increasing financial control and performance. 
In particular, it was reported that centralization of the acquisition function could allow the District to spend money more effectively by promoting more competition and through bulk purchases of goods and services used by multiple agencies. Despite the expected benefits, the District’s inspector general’s and auditor’s offices continued to identify deficiencies across the District’s procurement system that frequently produce negative impacts on the integrity and operations of the District. Moreover, for the past 5 years, the inspector general’s annual reports have cited procurement as a significant area of concern due to lapses in contracting operations resulting in costly inefficiencies, fraud, waste, and abuse. Some of the persistent problems reported by District auditors and inspectors—many of which are similar to those that prompted the 1997 law—include the following:

Outdated procurement law and regulations that fail to effectively address long-standing procurement deficiencies, policies, and procedures for all aspects of the process, specifically in the areas of solicitation, awarding, and monitoring of contracts.

Lack of continuity in procurement law, policies, and procedures as applied to some agencies.

Noncompliance with procurement law and regulations, and lax accountability over individuals for not complying with the District’s guidelines.

Ineffective competition and overuse and misuse of sole-source contract awards.

Unauthorized commitments and purchases by District personnel from vendors without valid written contracts.

Failure to conduct advance planning for known projects and procurement requirements, which leads to costly sole-source acquisitions often based on faulty justifications.

Insufficient independent oversight of agencies that expend significant resources for information technology, construction, and communication projects. 
Managers not ensuring a sufficient number of experienced procurement personnel, proper training, and certification of the procurement workforce.

The objective of a public procurement system is to deliver on a timely basis the best value product or service to the customer, while maintaining the public’s trust and fulfilling public policy goals. The federal government achieves this through guiding principles established in the FAR. NASPO and the ABA model procurement code have also established key guiding principles and practices that are generally accepted and should be incorporated into an effective procurement system. In addition, our work has identified best practices and other accepted elements that are essential for an efficient and accountable acquisition function. Key characteristics of a successful procurement system include:

Transparency—Comprehensive procurement law with clear and written policies and procedures that are understood by all sources.

Accountability—Clear lines of procurement responsibility, authority, and oversight. State and local governments recommend the CPO have full-time, sole, and direct responsibility for the procurement program.

Integrity—Public confidence earned by avoiding any conflict of interest, maintaining impartiality, avoiding preferential treatment for any group or individual, and dealing fairly and in good faith with all parties.

Competition—Specifications that do not favor a single source and solicitations widely publicized to benefit from the efficiencies of the commercial marketplace.

Organizational Alignment and Leadership—Appropriate placement of the acquisition function in the organization to cut across traditional organizational boundaries with stakeholders having clearly defined roles and responsibilities. For state and local governments to operate effectively, recommended practice is central leadership in the executive branch. 
Human capital management—Competent workforce responsive to mission requirements, with continued review and training to improve individual and system performance.

Knowledge and information management—Technologies and tools that help managers and staff make well-informed acquisition decisions.

The District lacks a uniform procurement law that applies to all District entities and that provides the CPO with adequate authority and responsibility for the entire acquisition function—an essential component to promoting transparency, accountability, and competition. In addition, the law has been amended to exempt certain District entities and procurements from following the law’s competition and other requirements. According to current and former District procurement officials, District entities are seeking to expand independent procurement authority—a move that would undermine attempts to establish a central authority. Finally, the law limits competition by broadening the exceptions under which sole-source contracts can be awarded; authorizing dollar thresholds for small purchases that are higher than those provided for in other city and federal government procurement regulations, including the FAR; requiring the use of a local supply schedule with limited vendors for a variety of goods and services; and encouraging agencies under certain circumstances to bypass contracting rules to directly pay vendors without valid written contracts. In contrast, other cities’ procurement laws emphasize the competitive process and having a strong centralized authority for their CPOs in order to safeguard the integrity of their procurement systems. Contrary to sound procurement principles and practices as identified by a variety of sources, the District lacks a procurement law that applies uniformly to all District entities and provides clear authority to the CPO. 
To promote transparency and accountability and to maintain the integrity of public procurement, NASPO and the ABA model procurement code for state and local governments describe concepts for creating a uniform procurement law that provides for central management of the entire procurement system and broad discretion and authority to a CPO to implement policies. Similarly, in the federal procurement system, the FAR establishes uniform policies and procedures for acquisition by most executive agencies under the President. Without such a foundation, the District’s procurement system is vulnerable to poor acquisition outcomes and less capable of maintaining public trust. Twelve District entities, including the Water and Sewer Authority and the Housing Authority, fall outside the authority of both the District’s procurement law and the Office of Contracting and Procurement, and are allowed to follow their own procurement rules and regulations. In many cases, the procurement law specifically exempts these entities from following the law, which is contrary to the central statutory purpose of the District’s procurement law to (1) eliminate overlapping or duplication of procurement activities; (2) improve the understanding of procurement laws and policies by organizations and individuals doing business with the District government; and (3) promote the development of uniform procurement procedures governmentwide. As a result, the District’s law has created a procurement environment where some entities follow different rules and practices, undermining the District’s ability to capture an overall view of its procurements as well as placing an added burden on vendors to understand how to do business with the District. According to NASPO, it is essential to have one uniform law that applies to all agencies and their procurements and exclude blanket exemptions for any executive agency or department. 
If exclusions are necessary, the law should define them narrowly by types of goods and services procured. NASPO state procurement leaders we spoke with said that they would be unable to effectively run their own procurement systems without one governing law. Without it, vendors are discouraged from competing since they do not know what rules apply, which increases the risk that taxpayers pay more for goods and services. According to several former and current CPOs in the District, not having a uniform procurement law that governs all entities has been problematic in ensuring transparency, accountability, and oversight. Officials from other cities we reviewed agreed that having a common procurement framework is critical for ensuring transparency and integrity in the procurement system. Atlanta, for example, has one procurement law that governs all agencies, which allows agencies, vendors, and contracting employees to have a clear and consistent view of how procurements should take place. The law also fails to provide for a service agency that would be the exclusive contracting agency for all District procurements under the Mayor’s direction. NASPO calls for a centralized procurement official with the authority and responsibility to, at a minimum, develop standardized policy and procedure, delegate procurement authority to executive agencies, provide expert assistance and guidance on procurement issues, and oversee the acquisition process. While the statutory purpose of the 1996 amendment to the procurement law was to centralize procurement in the Office of Contracting and Procurement headed by a CPO, the law does not give the CPO sole authority over the full spectrum of procurement activities in the District. 
For example, although the law allows the CPO to delegate procurement authority to employees of District entities covered under the law and to the CPO’s own staff in the Office of Contracting and Procurement, the council, with the Mayor’s approval, has used its authority to pass emergency laws exempting entities and procurement actions from the CPO’s authority. The council’s use of its emergency act authority has been problematic in certain cases where it exempted District entities from conducting their procurements through the CPO’s office. For example, in October 2006, the council amended the procurement law to provide the District’s Board of Library Trustees procurement authority independent of the Office of Contracting and Procurement and the District’s procurement law—contingent upon the board issuing its own procurement regulations—except for provisions pertaining to contract protests, appeals, and claims. A senior official in the Office of Contracting and Procurement said that circumventing the CPO’s authority in this case was not a solution largely because the library board trustees do not have the contracting experience or staff to exercise the new authority. NASPO recognizes that to ensure the appropriate level of transparency and accountability and to preserve the integrity of the procurement system, it is critical that the CPO have sole responsibility for delegating procurement authority. According to the District’s current and former CPOs, agencies and the council are pushing to expand independent procurement authority through exemptions. These efforts, if successful, could further undermine efforts to establish a central authority—a key objective of the procurement law amendment more than a decade ago. 
NASPO state procurement leaders as well as current and former CPOs in the District told us that this is a move in the wrong direction and that amendments to the procurement law should only be made to introduce more effective procurement methods or when current laws no longer make sense. In addition to authorizing agencies to award their contracts independently of the CPO, the council has eliminated the CPO’s sole authority to debar or suspend contractors from future contracts for various reasons, such as conviction of certain offenses. In 2003, the council eliminated this authority after the then-CPO debarred one vendor who pleaded guilty in federal court to conspiracy in giving cash bribes to District public works officials in return for falsified orders for asphalt delivery. Prior to this time, the procurement law gave the CPO sole authority for suspensions and debarments. According to both a former CPO and a current senior procurement official who were involved in this case, the procurement law was amended to establish an interagency suspension and debarment panel that reconsidered the CPO’s decision in this case as well as made final decisions in all future cases. After the panel’s reconsideration, the vendor was allowed to resume doing business with the District. To ensure a strong, central procurement system, NASPO recommends that CPOs have sole authority to implement a range of remedies for poor vendor performance, including suspension and debarment. The council, with approval from the Mayor, has further amended the law to exempt temporarily or permanently certain agencies from following the procurement law’s requirements for competition or conducting their contracts through the CPO. 
For example, in June 2006, the council exempted the Director of the Department of Health from following the competition and other requirements of the procurement law and allowed the Director to select and contract with a vendor for an air quality study of the Lamond-Riggs park within 30 days. In another case, in June 2006, the council, with the Mayor’s approval, exempted the Office of Contracting and Procurement from following its procurement law for awarding a construction contract on behalf of the Department of Youth Rehabilitation Services for a youth center at Oak Hill. A senior District procurement official told us that despite this exemption, the office intends to award competitively. According to senior procurement officials in the CPO’s office, entities seek exemptions believing that working through the CPO or the competitive process required by the law takes too much time. Current and former District officials noted that in giving some entities their own temporary procurement authority through exemptions in the law, the council and Mayor have, in effect, created a culture of resistance to centralized management and oversight of the acquisition function. One senior District procurement official told us that such exemptions also create inequities among agencies; explicitly discourage competition—contrary to the statutory purpose of the law; and occasionally show preferences for certain agencies and vendors. A former District executive and former CPO told us that such exemptions have over time distorted the procurement law and made it difficult for any vendor interested in doing business with the District to understand how and to whom the procurement law applies. Further, it is questionable why the council would use emergency act authority to make noncompetitive awards given that the procurement law and implementing regulation already establish procedures for these types of procurements. 
NASPO state procurement officials we spoke with voiced concerns over exemptions that would give certain agencies the authority to operate under their own rules or no rules at all and jeopardize the integrity of their public procurement system. Moreover, they said that such exemptions further undermine the CPO’s authority over the District’s procurement system and ability to develop consistent procurement policy. Other cities we reviewed have faced similar challenges with what they called “political influence” in the procurement process. New York’s CPO told us the city council plays no role in making procurement policy and under no circumstances would the council be allowed to pass exemptions to the city’s procurement law similar to those passed in the District. Long-standing procurement principles, policies, and procedures implemented in the FAR and recommended by NASPO and the ABA model procurement code recognize that maximizing the use of competition ensures governments receive the best value in terms of price and quality. According to a procurement law expert who participated in a GAO forum on federal acquisition challenges and opportunities, contractor motivation to excel is greatest when private companies, driven by a profit motive, compete head to head in seeking to obtain work. Consistent with this fundamental principle, the District’s procurement law mandates that full and open competition is the preferred acquisition method. However, certain provisions in the District’s procurement law have resulted in a public procurement system that emphasizes flexibility and speed over competition. 
Specifically, the law (1) authorizes sole-source contracting under broad provisions, (2) establishes higher dollar thresholds for limited competition small purchases than are allowed in other cities or the FAR, and (3) mandates the use of a local supply schedule with a limited number of vendors—each of which permits use of streamlined acquisition methods for high dollar procurements that result in limited or no competition. Both NASPO and the FAR recognize that circumstances sometimes make it difficult or impossible to conduct formal competitive procurements and that in such cases, the use of sole-source procurements is warranted. However, NASPO and the FAR also recognize that such procurements should only be permitted under narrowly defined conditions and should always be properly justified. They state that to ensure transparency in these types of procurements, the law should also require legal notice of intent to initiate a sole-source procurement over a determined dollar value. While recognizing there are situations in which competition must and should be limited, NASPO states that artificially restricting competition when competition is possible defeats a central tenet of public procurement. Rather than restrict the conditions under which sole-source procurements can occur, the District’s procurement law has been amended—as recently as 2002—to expand exceptions to full and open competition. Although complete data District-wide on sole-source contracting are unavailable, over 14 percent—or $173 million—of the fiscal year 2005 reported procurement spending through the Office of Contracting and Procurement was on a sole-source basis. Of the District’s various sole-source provisions, three account for the majority of sole-source contracts and spending (see table 2). Of the three provisions, one is similar to an equivalent provision in the FAR, while the remaining two provisions have no equivalent.
Senior procurement officials and former CPOs pointed out that these provisions in the procurement law establish a wide range of circumstances to bypass competition. Over 40 percent of the District’s fiscal year 2005 sole-source contracts were awarded under provision (a)(1), which, similar to an equivalent FAR provision, requires agencies to justify that there is only one available source for a good or service. Of the 296 contract awards under this provision, 45 percent were made by the Office of the Chief Technology Officer (OCTO) for a variety of information technology and telecommunication services. According to NASPO officials we spoke with, typically more than one vendor in the commercial marketplace provides these services and the services would normally be competed. In 2005, the District’s inspector general reported on questionable single available source justifications involving information technology services. According to the inspector general, there were numerous competing firms that could have satisfied the District’s needs for the eight selected single available source contracts it reviewed. For three sole-source contracts for general purpose commercial information technology equipment, software, and service, the inspector general found that there were 700 vendors eligible to compete through the District’s supplier database and another 113 vendors located in the District eligible to compete through the federal supply schedules. Overall, the inspector general concluded that the District could have potentially saved at least $589,000—over 24 percent—of the $2.5 million for the sole-source contracts awarded. More than half of the fiscal year 2005 sole-source contracts were awarded through the (a)(3) and (a)(3A) provisions, which permit agencies to award sole-source contracts to any vendor who agrees to charge according to a schedule of prices for federal agencies.
Unlike the District’s single available source provision, these provisions have no equivalent in the FAR or NASPO and ABA procurement guidance for state and local governments. According to a senior District procurement official, these two procurement law provisions were intended to save time in the District’s procurement process by piggybacking off the prices previously set as a result of the prior competition—primarily contracts awarded to District and other vendors under the General Services Administration’s (GSA) multiple award schedule (MAS) program. The use of sole-source provisions as a time-saving measure appears to conflict with the District’s own procurement regulations, which call for contracting officers to avoid sole-source procurements except where necessary. GAO’s work has also found that while MAS has provided the federal government with a more flexible way to buy commercial items and services, contract negotiators do not always use the full range of tools to ensure the government effectively negotiated prices. As a result, the federal government has missed opportunities to save millions of dollars in procuring goods and services. By eliminating competition altogether and awarding sole-source contracts to vendors based on MAS pricing, the District may be similarly missing significant cost-saving opportunities. Moreover, the District may be at greater risk because its sole-source use of the federal supply schedule is not subject to the FAR, and the District’s implementing procurement regulation does not provide specific guidance on the use of the (a)(3) and (a)(3A) provisions. A senior procurement official we spoke with noted that the CPO’s office, after growing concerned about the large number of sole-source contracts being awarded, recently started requiring District contracting officers to additionally justify their use of these methods.
To ensure they get the best value for the taxpayer dollar, other cities we reviewed have taken steps to emphasize competition over sole source. Procurement officials in these cities recommended that a procurement law—similar to statutes implemented in the FAR—narrowly define sole-source contracting and require that such actions be properly justified and documented. For example, in Atlanta, sole-source contracts may only be awarded when the CPO determines after conducting a good-faith, due diligence review of available sources that there is only one available source for the required good or service. Even for emergencies, Atlanta’s procurement law requires the CPO to use competition to the maximum extent practicable, and sole source may only be considered in the case of a threat to public health, welfare, or safety. According to Atlanta’s CPO, in fiscal year 2005, only five sole-source contracts were awarded. Similarly, New York’s procurement rules specify only one condition or circumstance in which sole-source contracting is permitted for purchases above $5,000: there is only a single available source and competition is not possible. For purchases under a certain dollar threshold, the administrative costs to formally compete may outweigh the benefits of competition. In such cases, procurement systems may permit streamlined acquisition procedures with limited competition for purchases not exceeding a specified dollar threshold. In the District, small purchase procedures streamline the process by limiting competition to oral or written price quotes from only a few vendors, or eliminating competition altogether (see table 3). For the District, a series of legislative changes since 1985—when the small dollar threshold for small purchases was $10,000—have increasingly raised the threshold for some entities, expanding the opportunities to limit competition.
Currently, the District’s small purchase threshold is $500,000 for OCTO and the Metropolitan Police Department and $100,000 for all other entities. The District’s small purchase authority allows for somewhat larger limited competitive purchases than those authorized in the FAR. Under the FAR’s micro-purchase authority, competition is not required for purchases up to $3,000 when the contracting officer determines that the price is reasonable. For small purchases between $3,001 and $100,000, the FAR’s simplified acquisition procedures require that the contracting officer promote competition to the maximum extent practicable. Generally, the contracting officer should consider obtaining at least three price quotes or offers from sources within the local area and evaluating them to determine the most advantageous to the government. Under the District’s small purchase authority, competition is not required for purchases up to $10,000 when the contracting officer determines that the purchase is in the best interest of the District. Moreover, contracting officers in the District are allowed to waive the competitive small purchase procedures under broad circumstances—such as time constraints and lack of available sources—when it is impractical to obtain the required number of quotes. In fiscal year 2005, over 75 percent of the District’s procurements through the Office of Contracting and Procurement were for small purchases totaling $163 million. However, small purchase procurements could increase in the future. According to one senior District procurement official, there is a move to increase the small purchase threshold from $100,000 to $500,000 for all agencies—a limit five times as high as that prescribed in the FAR. State and city procurement officials voiced concern that the District would consider this change in an effort to expedite procurements by allowing limited competition methods.
NASPO state procurement officials we interviewed were surprised at how high the District’s small purchase thresholds were set, and viewed this as one of the procurement law’s major barriers to competition. Each of these officials said that they consider such amounts to be large purchases, particularly at the $500,000 level. As one senior procurement official in the District put it, “just about anything can be considered a small purchase in the District.” Other cities we reviewed see the economic and quality benefits of competition when larger procurements are involved, such as those the District considers small purchases. In Atlanta, for example, the small purchase threshold is $20,000, and New York, which spends over $11 billion per year on procurement, only recently increased its small purchase threshold to $100,000. According to the Atlanta CPO, raising small purchase limits across the board ultimately compromises the integrity of the procurement system by reducing transparency over procurement decisions and source selection. One District official remarked that, if these types of changes continue in their current direction, the District will no longer have a recognizable procurement system. The District of Columbia Supply Schedule (DCSS) program also limits competition by restricting the pool of vendors for a variety of goods and services to local companies; requiring entities to use the schedule as a first source for all procurements $100,000 and below; and allowing limited competition for purchases over $100,000—to a ceiling as high as $10 million for certain services. At the same time, there is no mechanism in place to ensure that the incumbent vendor does not receive all DCSS contracts for a particular schedule. NASPO has recognized that balancing the need to promote socioeconomic goals with the need to ensure maximum competition is an ongoing challenge.
However, NASPO recommends caution in the use of supply schedule programs, such as the DCSS, because while there is the presumption of best value, competition among vendors is often limited with no incentive to offer best price. The DCSS program was established in 2002 to help achieve the District’s local, small, and disadvantaged business requirement established in its procurement law and expand the District’s tax base. According to a former District executive, the DCSS program was also intended to expedite agencies’ small purchases of common and routine items for which competition would not be practical, such as office and janitorial supplies. The current program is the primary vehicle for supporting the District’s local, small, and disadvantaged business enterprises (LSDBE) and requires that District entities use DCSS small business entities to make purchases of $100,000 and below. This mandatory use of the DCSS ultimately limits the pool of vendors for a number of goods and services, which for some of the schedules is fewer than three vendors. Though it may appear similar to GSA’s MAS program of federal supply schedule contracts, the DCSS serves a different purpose. Under the FAR, the purpose of the GSA supply schedules program is to provide federal agencies with a simplified process for obtaining commercial supplies and services at prices associated with volume buying. The FAR provides extensive guidance on the use of the schedules to achieve that purpose. In contrast, the DCSS is designed to promote LSDBEs and lacks the type of comprehensive guidance provided to the federal supply schedules by the FAR. According to NASPO, unlimited use of supply schedules limits competition and can increase costs because vendors have no incentive to meet the best price of their competitors.
Further, open-ended contracts for the same goods or services are awarded to many more vendors than needs appear to demand, removing any consideration of need and price from the purchasing decision. In fiscal year 2006, reported contract awards off of the DCSS—which contains 19 categories of goods and services with nearly 200 local vendors—totaled almost $22 million (see table 4). Some DCSS contracts are valued much higher than $100,000, including some fiscal year 2006 awards to DCSS vendors valued at $1 million and one award for $5 million. Moreover, in 2006, the CPO’s office raised the contract ceilings for individual DCSS vendors on several of these schedules including the information technology services schedule, which is now set at $10 million. As a result, one DCSS information technology vendor could in 1 year potentially receive a single limited competition order worth up to $10 million. NASPO officials we spoke with voiced concern about the ease with which the District makes what they would consider large limited competition purchases off a supply schedule originally intended to limit competition only for small purchases. In addition, District procurement officials told us that the DCSS program has limited guidance and no procedure in place to ensure that each vendor is provided a fair opportunity to be considered for orders. Under DCSS terms and conditions, contracting officers must follow small purchase procedures as described in table 3 when buying a good or service off DCSS. However, these officials said that it is up to the contracting officer to arbitrarily select three vendors from each schedule to obtain price quotes; according to District procurement officials, this typically includes the incumbent. For the 14 schedules that have more than three vendors, this discretion could prove unfair to certain vendors. The FAR, in contrast, advises contracting officers to request quotations or offers from two sources not included in the previous solicitation. 
According to District procurement officials, there is currently no requirement to monitor the use of the schedule to determine whether it is promoting small businesses overall or if a pattern of sole-source contracts to the same businesses is occurring. They told us this type of information would be beneficial to evaluating the effectiveness of the program and that an overall assessment of the current program may be needed to determine if it is meeting its original intent. To safeguard the obligation of taxpayer dollars and protect the integrity of a public procurement system, a government’s procurement law should grant exclusive authority to contracting officers for establishing contracts and restrict employees from making unauthorized commitments for goods and services. It should also grant the CPO the authority to ratify contracts and authorize payments for goods and services received without a valid written contract if certain conditions are met. Until recently, the District’s procurement law appeared to emphasize these standards. Under September 1996 CFO guidance, direct voucher payments—payments made without first being obligated in the District’s financial management system—could only be made in 21 specific non-procurement related circumstances, all of which were reasonable and included situations where the payees could not be determined in advance, such as court ordered fines, workers’ compensation, jury duty fees, and medical payments for assault crime victims. However, in 2006, the council, with the Mayor’s approval, amended the procurement law to increase the circumstances under which such payments may be made. Changing the policy may have had the unintended consequence of focusing agency personnel on the process of paying for unauthorized commitments rather than focusing management attention on preventing employees from entering into unauthorized commitments.
According to financial management officials, in 2005, the District’s CFO office reviewed over 21,000 direct voucher payments totaling $556 million made in fiscal year 2004. They stated that the purpose of the review was in part to determine to what extent these direct voucher payments resulted from unauthorized commitments by District agencies for goods and services. The analysis confirmed that of the vouchers reviewed, over 11,000 totaling $217 million were not in compliance with the 21 allowed uses under the 1996 CFO policy. Rather than take steps to hold agencies accountable for these violations, the CFO’s policy was changed without consulting the CPO’s office on the merits of the change. CFO officials told us their office determined the change was necessary to accommodate agencies that bypass the procurement process in order to more promptly obtain goods and services needed for critical operations. Under Financial Management and Control Order No. 05-002, issued July 22, 2005, and revised October 17, 2005, the CFO added 7 new circumstances for direct voucher payments to the 21 already included in the 1996 financial guidance. Five of the seven added circumstances were for new non-procurement related transactions, such as temporary welfare payments to families and certain lawsuit settlement payments. The remaining two are for procurement-related transactions, however, and are problematic. The first circumstance—which allows direct voucher payments for goods and services needed for an unanticipated and nonrecurring extraordinary emergency—duplicates provisions in the District’s procurement law that establish procedures for handling such circumstances under emergency contracting procedures. A senior District procurement official said that direct voucher payments should not be made for emergency procurements.
The second circumstance allows agencies to make direct voucher payments for liabilities incurred through unauthorized commitments to vendors for goods and services without valid contracts after payment has been ratified—a practice that could further encourage employees to bypass established contracting procedures. The District’s inspector general has voiced a similar concern with this change and in December 2005 testimony called for a reexamination of the CFO’s 2005 policy for allowing direct voucher payments for unauthorized vendor commitments that bypass contracting rules. More recently, the inspector general reported that in fiscal year 2005, District agencies greatly increased payment ratification requests for unauthorized vendor commitments and the procurement office ratified $34 million in payments. In the federal procurement system, FAR Part 1.6 provides procedures for ratification actions to approve unauthorized commitments, but also states that these procedures may not be used in a manner that encourages such commitments to be made by government personnel. Moreover, the FAR provides a ratification procedure that not only discourages unauthorized commitments, but allows for their approval if certain conditions are met. Specifically, under the FAR, the chief of a contracting office may ratify an unauthorized commitment only when the goods or services have been accepted; the ratifying official has the authority; the contract would have been proper if done by approved personnel; the price is reasonable; the contracting officer recommends payment; the funds were and are available; and the ratification complies with any additional agency regulations. In addition, the FAR states that cases of nonratifiable commitments may be subject to further referral and resolution under government claim procedures.
Allowing government agency personnel to circumvent the normal procurement process and enter into unauthorized commitments with vendors to perform services or deliver goods eliminates the opportunity for competition. After reviewing a draft of this report, CFO officials acknowledged the need to work with the Office of Contracting and Procurement to strengthen the District’s ratification policy. They indicated that unauthorized commitments that cannot be ratified should be referred for possible Anti-Deficiency Act violations. Accordingly, we revised our recommendations to the Mayor and the CFO concerning the use of direct vouchers and the ratification process. Other cities we reviewed have taken steps to curb the use of unauthorized commitments. For example, New York’s CPO described the city’s stringent controls and regular monitoring to detect and publicize agencies’ unauthorized commitments with vendors as well as its discipline of employees for bypassing contracting rules—steps that have greatly decreased the number of unauthorized commitments in that city’s procurement system. In addition to generally lacking a uniform procurement law that applies to all entities, promotes competition, and provides the CPO the authority to ensure sound procurement outcomes, the District’s management and oversight of its procurements have lacked the rigor needed to protect against fraud, waste, and abuse. Specifically, the Office of Contracting and Procurement is positioned too low within the District’s executive governmental structure to enforce agency compliance with policies and procedures, effectively coordinate procurement activities and acquisition planning, and sustain leadership.
At the same time, the District’s contracting managers and staff, agency heads and program personnel, and other key procurement stakeholders do not have the basic tools for ensuring sound acquisition outcomes, including written guidance on the District’s procurement policies and procedures, a professional development program and certification requirements for contracting staff, and an integrated procurement data system. Although the District and Congress have taken actions to address management and oversight challenges, many remain largely unaddressed. The low-level placement of the Office of Contracting and Procurement undermines the office’s ability to effectively manage and oversee the District’s procurements across dozens of agencies and departments. NASPO and GAO have stated that the central procurement office’s effectiveness is clearly linked to its location in the government structure and that placing the office at a high level is critical to ensuring effective direction, coordination, and control over a government’s procurement spending. Procurement is viewed as a strategic, service function within the executive branch, with the central procurement authority being a key policy and management resource for the chief executive. The low-level placement of the District’s procurement office has led to high CPO turnover and a lack of sustained leadership, significantly impeding progress expected from the 1996 law. Within the District’s government structure, the Office of Contracting and Procurement is placed under the Deputy Mayor for Operations—essentially relegating procurement to an administrative and operations support function—as further evidenced by its position in relation to those agencies that procure through this office (see fig. 1). According to former CPOs and current procurement officials, the low-level position denies the CPO direct access to the city administrator, agency heads, and deputy mayors other than the Deputy Mayor for Operations.
As a result, the CPO has limited ability to affect budget, program, and financial management decisions. A former District official told us that to improve management and oversight of the procurement system, the CPO needs to be at all executive meetings to raise procurement issues that cut across agency lines. This official told us that it would be helpful to elevate the CPO’s office to a high level similar to other centralized cross-government functions, such as the Office of the Chief Technology Officer, which is responsible for meeting all of the District’s information technology needs. The low-level position of the CPO’s office in the District’s governmental structure has also undercut the CPO’s ability to influence day-to-day procurements across the District. According to several senior District procurement officials, agencies often bypass the procurement office and do not consult the CPO’s designated contracting officer when initiating procurements—a practice that has led to unfavorable acquisition outcomes. For example, the District’s auditor reported in 2005 that the offices of the Mayor and city administrator failed to involve the CPO’s office and violated contracting rules by entering into unauthorized commitments with a vendor for international trade mission services without a valid written contract, making the commitment invalid. Ultimately, the CPO’s office was left to ratify a transaction that did not conform to the procurement law or regulations. One impact of the CPO’s low-level placement is the inability to ensure effective acquisition planning—a critical process for anticipating future needs, devising contracting programs to meet these needs, and arranging for the acquisition to promote competition and use of necessary resources.
CPOs from the other cities we reviewed consider acquisition planning as critical to managing the procurement system and maximizing competition, and have put in place mechanisms and tools to regularly address planning. In Atlanta, for example, the CPO requires his contracting staff to meet bi-weekly with agency officials to plan for expiring contracts and new requirements. Agencies are also required to submit a quarterly report to the CPO detailing their procurement needs. In New York, agencies awarding contracts must submit a draft plan detailing anticipated procurement actions. They are also required to hold public hearings on their plan within 20 days of its issuance and provide notice of the hearings 10 days in advance. While the District has a process in place to facilitate acquisition planning across agencies, the CPO lacks the ability to hold agencies accountable for submitting accurate and timely plans. According to former CPOs and current senior procurement officials, District entities in general do not understand the importance of acquisition planning or involving the CPO’s office in planning efforts. Consequently, agencies largely view the required annual plans as a paper drill. In recent years, the CPO’s office has tried to improve acquisition planning across the procurement system without much success. For example, in 2000 the then-CPO implemented a new acquisition planning tool that was aimed at guaranteeing short turnaround for small and simple buys and sharing workload with partner agencies on larger, more complex buys. Though this was the original intent, CPO contracting officers we spoke with do not use the plans to schedule procurement support activities for their agencies. Our analysis of selected contracts conducted by the CPO’s office in 2005 for three agencies against procurements listed in their 2005 acquisition plans found none of the contracts were recorded in the planning tool. 
The failure to “conduct advanced planning for known projects, services, and procurement requirements ultimately manifests in costly internally generated emergency contracts and purchases.” A senior District procurement official agreed and stated that a lack of planning does not constitute an emergency, but all too often it occurs and forces emergency-type procurement actions. Finally, sustaining procurement leadership has been difficult due to the low-level position of the CPO’s office. Former CPOs agreed that in a complex and large-scale procurement system such as the District’s, it is essential to have sustained leadership and a CPO with executive-level procurement experience and qualifications. However, over the past 10 years, the District has had five CPOs—three appointed for 5-year terms and two interim—and none served more than 3 years. According to each of the three CPOs appointed to 5-year terms, the inability to effectively coordinate acquisition activities across all agencies and manage and oversee the District’s procurement function undermined their efforts at reform and ultimately discouraged them from completing their tenures. The lack of sustained leadership is underscored by the 2-year vacancy in the District’s CPO position since September 2004, at which time the Deputy Mayor for Operations became the interim CPO. With no procurement experience—contrary to the District’s law requiring at least 7 years of procurement experience—this official acknowledged that it has been challenging to assume the extra responsibilities of the CPO position. The cities we reviewed have recognized the importance of elevating the central procurement office in the governmental structure as necessary for sound procurement management and oversight.
For example, in 2003, Atlanta recognized that the centralized acquisition function headed by a senior procurement director was buried in the structure and took steps to elevate the office, with a newly appointed CPO reporting through the chief operating officer to the Mayor. According to Atlanta’s CPO, the office now has a seat at the table with the necessary authority to control and direct procurement across all agencies, and to have the Mayor reinforce the CPO’s role in managing the city’s council and agencies. The District lacks other basic tools to effectively manage and oversee its procurement system. Specifically, the city lacks (1) a procurement manual with clear standardized policies and procedures to guide procurement and agency staff; (2) certification requirements for procurement staff and training for agency staff so that both workforces have the necessary skills and knowledge to fulfill their responsibilities; and (3) an integrated procurement data system that can provide complete, accurate, and timely information to inform acquisition decisions and management. Other cities we reviewed recognize the benefit of having these tools as a way to effectively manage and oversee their procurement systems. Despite repeated recommendations since 1997 to develop a procurement policy and procedures manual, the District has yet to do so. Procurement is a complex process guided by numerous policies, documentation requirements, and procedures. A comprehensive manual—one that lays out in one place these policies and rules and standardized procedures and practices—is critical to ensuring procurement and agency staff have a clear and consistent understanding of contracting rules and processes. An internal study by the CPO’s office in 2004 found that in the absence of such guidance, there was a lack of consistency in how the District’s procurement work is done.
This inconsistency creates frustration within and outside the government as well as an impression that the District’s procurement actions are unfair. Each of the other cities we reviewed has developed and implemented a basic procurement manual for strengthening management, accountability, and transparency in its procurement system. In Atlanta, for example, when the new CPO was appointed in 2003, he found that a comprehensive procurement manual was key and immediately took steps to update the manual, which had not been revised in 7 years. According to former CPOs and current senior procurement officials, the District has not committed to developing a professional acquisition workforce. For example, the CPO’s office has not fully developed professional certification requirements. Although the CPO is not required to develop such requirements, doing so would ensure that staff have the qualifications and skills to carry out the responsibilities commensurate with their delegated contracting authorities. A former District executive told us that the CPO’s office should deliver regular training to agency managers and staff on procurement rules and procedures as well as develop metrics to ensure that agency staff participate in the training and obtain the necessary knowledge for fulfilling their responsibilities in the procurement process. One former CPO referred to his staff as an “accidental” procurement workforce because some had previously been administrative staff and few had any contracting background. In 2005, the CPO’s office conducted a skills and training assessment and determined that the current procurement and contracting staff required training on fundamental processes, such as source selection, contract negotiation, and contract administration. The CPO’s fiscal year 2006 budget added $668,400 earmarked for procurement training, and the interim CPO developed a program to train the District procurement staff on basic contracting concepts.
While the 2006 training program appears to have addressed some of the immediate contracting skill gaps identified in the 2005 assessment, this one-time effort, in our view, does not address the CPO office’s need for longer-term investments in training. Unlike in the federal government, this program is not linked to a certification process or continuing education necessary for maintaining individual employees’ contracting authorities. In the absence of a comprehensive training and certification program, the CPO delegates contracting authority to procurement staff based on his perceptions of individual skill and experience. NASPO emphasizes the importance of professional development and not only recommends that executive branch officials and the central procurement office encourage professional competence by providing funding for training, but also endorses professional certification of staff. Several public procurement organizations, including the National Institute for Government Purchasing, have developed certification programs to ensure procurement staff have attained a prescribed level of qualification. Procurement officials in other cities we reviewed also view training and certification of procurement staff as critical to the success of their procurement systems. For example, New York’s CPO office established a Procurement Training Institute in 2000 and requirements for staff training, including certifications and continuing education minimums. The District also lacks an integrated procurement data system to centrally manage and oversee agency and headquarters procurement activities, despite the procurement law requiring such a system over 20 years ago and investment in the Procurement Automated Support System (PASS), which was intended to provide these capabilities.
Capturing and reporting complete, accurate, and timely procurement data would increase transparency and support the development of meaningful performance measures to promote competition and to discourage excessive use of sole-source contracts and unauthorized vendor commitments without valid contracts. Although the CPO’s office recognizes these benefits, officials have lacked the high-level support from District leaders and OCTO needed to follow through on their plans for improvement. To make strategic, mission-focused acquisition decisions, organizations need knowledge and information management processes and systems that produce credible, reliable, and timely data about the goods and services acquired and the methods used to acquire them. Our prior work has shown that leading companies use procurement and financial management systems to gather and analyze data to identify opportunities to reduce costs, improve service levels, measure compliance and performance, and manage service providers. After numerous discussions with procurement, financial management, and auditing officials, we found there is no visibility over total procurement actions and spending in the District. We found it difficult to obtain even basic data, such as the number and dollar value of the hundreds of millions of dollars in procurements for agencies not supported by the CPO’s office, such as the public schools and the Department of Mental Health. Data for the $1.2 billion in fiscal year 2005 procurement spending reported by the District’s CPO office are captured by several standalone systems. As a result, the CPO’s office cannot readily generate regular reports from these systems to track information on what agencies are buying, how they are buying, and from whom they are buying. When we initiated this review, we requested from the CPO’s office procurement data on such basics as the number of sole-source contracts awarded in fiscal years 2005 and 2006.
The information was provided to us piecemeal. According to a District procurement official, to obtain these data, the CPO’s office must ask its contracting officers and specialists to manually compile the information, sometimes from memory—a workaround that is not only time-consuming but also at significant risk of error. Because of this, we were unable to obtain reliable fiscal year 2006 data on sole-source awards. In an effort to obtain complete, accurate, and timely procurement data and to automate and streamline the procurement process, the District has invested almost $13 million in PASS. Yet, almost 4 years since its inception in 2003, the system is only partially in operation. According to District procurement officials, PASS does not provide full information on completed or ongoing procurements across all agencies, nor does it provide the CPO and District agency and financial managers the reports and other information they need to manage and oversee the procurement system. In August 2006, the inspector general reported concerns over the delays in fully implementing PASS, noting that a conflict between the CPO’s office and OCTO has hindered the installation and full implementation of PASS. According to senior procurement officials, the CPO’s office has not consented to the extra $2 million that OCTO is requesting to fully implement PASS because, in its view, all upgrades and installation were included in the 2003 purchase of PASS. The inspector general has recommended the CPO’s office seek assistance from the Mayor’s office in expediting the installation and implementation of PASS’s contracting and sourcing modules. CPOs in the other cities we reviewed told us that a procurement data system is critical to managing and overseeing the procurement system, but some are facing challenges similar to the District’s in developing an integrated tool.
New York’s CPO, for example, told us that the city clearly recognizes the importance of an integrated procurement data system and, as a result, is engaged in a major undertaking to fully implement a data system sometime in 2007. In the interim, she relies on information contained in the city’s financial management system in compiling various procurement performance indicators. Since 2004, the District has taken several actions to improve the management and oversight of its procurement system. These efforts include an internal study for innovation and reform in the CPO’s office and procurement system; changes in staff assignments and review processes in the CPO’s office; and establishment of an expert task force to review CPO, procurement workforce, and competition matters and submit recommendations to the Mayor and council. However, information we obtained from former CPOs and current senior procurement and other officials involved with these efforts indicates that most recommended actions remain under study or are partially implemented at best. Most of these officials voiced skepticism or concern about the merits and benefits of these efforts as well as the absence of high-level and sustained attention from District leaders to address systemic problems that hamper management and oversight of the procurement system and undermine transparency, accountability, and competition. Following the early resignation of the District’s last full-time CPO in September 2004, the Mayor and city administrator directed the District’s Center for Innovation and Reform to work with the interim CPO’s staff to lead a 6-week internal initiative to create a credible, transparent procurement process that incorporates best practices and innovation. This internal group’s final report made several recommendations to the CPO’s office aimed at streamlining the process, providing tools such as a procurement manual, and leveraging technology.
However, 2 years after these recommendations were made, many remained open. Further, none were aimed at the type of legal and organizational changes necessary for effective reform. More recently, the interim CPO took steps to provide better customer support from the Office of Contracting and Procurement to the District’s agencies and vendors. Specifically, the interim CPO announced in April 2006 the establishment of sole-source contract reviews and implementation of a central tracking data system to ensure that contract ceilings are not exceeded and to capture vendor performance data for consideration in future source selections affecting those vendors. The CPO also announced a new staffing alignment to assign a lead contracting officer for groups of agencies and several commodity buying groups for certain services that are centrally managed, such as construction and information technology equipment and services. According to senior procurement officials and the interim CPO, they expect that assigning lead contracting officers will improve communication and efficiency across the District, as agencies will have a single point of contact for managing and troubleshooting contracting issues. While these are positive steps aimed at improving internal procurement operations, they are not far-reaching enough to address the more fundamental problems impeding overall effectiveness in the District’s procurement system. The third effort to improve District procurement has been ongoing since December 2005, when the Mayor and council passed legislation to establish a task force of local experts in contracting and procurement. The task force is composed of 10 members appointed by the Mayor and council and represents a range of professional, legal, and business expertise in District and public procurement operations and policy.
Since March 2006, the task force has met to obtain testimony and review other information from District procurement, financial management, auditing, and agency officials. At the time of our review, the task force chairman expected to report final recommendations to the Mayor and council before the end of 2006. In addition to these actions the District has taken to address procurement system challenges, in December 2005, the Mayor, interim CPO, and CFO separately provided information to the Chairman of the House Government Reform Committee, who requested the information in light of press allegations about possible violations of the city’s procurement laws and procedures and unauthorized payments to vendors. The Chairman noted that it was essential for the Committee to conduct an assessment of the District’s procurement system and the possible shortcomings in its laws, policies, enforcement, and practices. In their separate responses, the Mayor, interim CPO, and CFO provided copies of the law, policies, and procedures in place in the District for procurement and contracting, including sole-source and small purchase actions, exemptions for various agencies such as the public schools and Department of Mental Health, approval of voucher payments to vendors, and procurement and contracting oversight mechanisms through the District’s inspector general and auditor’s offices. In addition, the interim CPO provided information on recent actions taken by the Office of Contracting and Procurement to improve customer service and streamline the procurement process. However, the information provided did not address the range of concerns and shortfalls in the procurement law and in management and oversight that we subsequently identified during the course of our review. NASPO state government and city procurement officials we spoke with said they have confronted similar management and oversight challenges.
They recognized that overcoming these challenges and achieving meaningful procurement reform can take several years and requires sustained executive support from elected leaders and legislatures. To better ensure every dollar of the District’s more than $1.8 billion procurement investment is well spent, it is critical that the District have an effective procurement system that follows generally accepted key principles and is grounded in a law that promotes transparency, accountability, and competition, and helps to ensure effective management and oversight and sustained leadership. Currently, the District’s procurement system is mired in a culture that thrives on streamlined acquisition processes, broad authority for sole-source contracts, and unauthorized payments to vendors that are eventually papered over through ratifications. Given this culture, it is not surprising that public confidence in the District’s ability to judiciously spend taxpayer dollars is guarded at best. To effectively address the District’s long-standing procurement deficiencies, it is clear that high-level attention and commitment from multiple stakeholders—including Congress—are needed. Until the law provides for the right structure and authority, the District’s procurement reforms will likely continue to fail. To address needed structural and fundamental revision in the District’s procurement law and to strengthen management and oversight practices as well as facilitate congressional oversight, we recommend that the Mayor of the District of Columbia submit a comprehensive plan and time frame to Congress detailing proposed changes in line with our recommendations. 
This comprehensive plan, to be submitted to Congress, should include the following recommendations for revising the procurement law:
- Apply the law, at a minimum, to all District entities funded through the District’s appropriated budget and specify that if exclusions from its authority are necessary, they be defined narrowly by the types of goods and services procured.
- Provide the CPO sole authority and responsibility as head of the District’s Office of Contracting and Procurement to manage and oversee the entire acquisition function for all entities, and if exclusions from the CPO’s authority are necessary, define them narrowly by the types of goods and services procured.
- Consider reestablishing the CPO as the sole authority for suspension and debarment decisions.
- Eliminate sections 2-303.05(a)(3) and (a)(3A) of the District Official Code, which allow noncompetitive procurements with a vendor who (a) maintains a price agreement or schedule with any federal agency or (b) agrees to adopt the same pricing schedule as that of another vendor who maintains a price agreement or schedule with any federal agency.
- Reconsider the appropriateness of high dollar thresholds for small purchases to maximize competition.
- Revise the DCSS program to (a) cap purchase ceilings at an appropriate threshold; (b) eliminate any schedule that contains fewer than three vendors or combine it with another schedule; (c) establish procedures to ensure all eligible vendors are provided an opportunity to be considered for orders; and (d) require the CPO to monitor and report on patterns of contracting with a limited number of the same vendors.
- Require that specific guidance on the use of the DCSS program be incorporated into the District’s regulations.
- Eliminate the procurement-related circumstance that allows direct voucher payments for emergency procurements.
To further discourage the use of unauthorized commitments to vendors, we recommend that the Mayor of the District of Columbia, in coordination with the CFO and other stakeholders, take the following actions:
- Revise Directive 1800.04 to be consistent with FAR part 1.6 and clearly state, consistent with the policy of FAR section 1.602-3(b), that these ratification procedures are not to be used in a manner that encourages unauthorized commitments by government personnel.
- Refer unauthorized commitments that are not ratified for further resolution under government claim procedures, to include, in appropriate cases, possible referrals for Anti-Deficiency Act violations.
- Upon revision of the ratification directive, track and evaluate the use of direct voucher payments and ratifications to improve management attention and oversight of agencies’ unauthorized commitments with vendors.

To strengthen management and oversight practices in the District’s procurement system, we recommend that the Mayor take the following actions:
- Recruit and appoint a CPO with the requisite skills and procurement experience as required in the law.
- Elevate the CPO’s position and office so that it is in line with, or higher than, other critical cross-government functions, such as OCTO, and allows participation in cross-cutting executive management, budgeting, planning, and review processes.
- Direct the CPO to develop a process and tools for frequent and regular interactions with agency heads and program managers to support acquisition planning.
- Direct the CPO to develop a procurement manual concurrent with revision of the procurement law.
- Direct the CPO to establish a plan and schedule for professional development and certification programs for contracting staff and to track personnel trained.
- Direct OCTO to work with the CPO to expeditiously complete installation of an integrated procurement data system.
To help ensure the District makes adequate progress in revising its procurement law and improving procurement management and oversight, we recommend that the Mayor submit periodic reports to congressional oversight and appropriations committees on such elements as (a) competitive actions by agency; (b) the number, value, and type of sole-source procurements; (c) the number of procurement personnel trained and the type of training received; and (d) other indicators as appropriate. In addition, to further discourage the use of unauthorized commitments to vendors, we recommend that the Chief Financial Officer (CFO) of the District of Columbia take the following actions:
- Revise Financial Management and Control Order No. 05-002 to eliminate the use of direct voucher payments for emergency procurements.
- Work with the CPO and other stakeholders to (a) revise Directive 1800.04 to be consistent with FAR part 1.6 and clearly state, consistent with the policy of FAR section 1.602-3(b), that these ratification procedures are not to be used in a manner that encourages unauthorized commitments by government personnel; (b) refer unauthorized commitments that are not ratified for further resolution under government claim procedures, to include, in appropriate cases, possible referrals for Anti-Deficiency Act violations; and (c) upon revision of the ratification directive, track and evaluate the use of direct voucher payments and ratifications to improve management attention and oversight of agencies’ unauthorized commitments with vendors.

We provided a draft of our report to the former Mayor’s office and the office of the CFO. The primary focus of our report deals with procurement reform needed in the District that falls under the responsibility of the Mayor. Therefore, most of our recommendations are made to the Mayor’s office. Given that the comment period coincided with the final month of the administration, the outgoing Mayor chose not to comment.
However, the new administration contacted our office and indicated concurrence with most of the findings and recommendations and, as the principal office responsible for ensuring action is taken, plans to provide formal comments and an action plan within 60 days of the report’s public release. Though most of our recommendations are made to the Mayor’s office, there is a role for the CFO to play in helping curb unauthorized commitments. Therefore, we also made recommendations to the CFO. In that context, the CFO provided written comments, which were limited to our discussion on the use of direct vouchers. Our response focuses only on those comments. In general, the CFO questions our understanding of the direct voucher process and the CFO’s authority. We recognize the limitations in the CFO’s authority for holding personnel accountable for unauthorized commitments and the CFO’s obligation to pay for accepted goods and services. However, focusing on limited authority and payment obligation does not address the larger issue. Specifically, our report raises a concern about the effect of the lack of management attention on prohibiting unauthorized commitments that may be ratified and ultimately paid through direct vouchers—a process CFO staff acknowledge is broken and in need of more stringent controls. Accordingly, we revised our recommendations to the Mayor and the CFO concerning the use of direct vouchers and the ratification process. Strengthening this process is a small part of a larger procurement reform effort that must be headed by the Mayor and implemented by the CPO, CFO, and other stakeholders in the District. The CFO’s comments state that the office intends to review and clarify Financial Management and Control Order No. 05-002. We encourage them to implement our recommendations as well as work with the Mayor’s office and other stakeholders in coordinating procurement reform actions as applicable. 
The CFO’s comments are included in appendix III along with our comments on specific points he raised. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies to other interested congressional committees and the Mayor and Chief Financial Officer of the District of Columbia. We will make copies available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix IV for a list of key contributors to this report. We conducted our work at the District of Columbia’s Office of Contracting and Procurement, Office of the CFO, Office of the Inspector General, Auditor’s Office, and Center for Innovation and Reform. We did not conduct detailed audit work at the various agencies that procure independently of the Office of Contracting and Procurement, since this is the central office that was established under the 1996 reform legislation and it procures for 61 District organizations—a majority in the District. We also visited representatives of the National Association of State Procurement Officials (NASPO) in Springfield, Illinois, and city procurement officials in Atlanta, Baltimore, and New York. In selecting cities to visit, we considered those that have faced challenges similar to the District’s as well as those that took various approaches to structuring their public procurement systems and implementing reform. We did not assess the effectiveness of their approaches or reform efforts, and our report is not intended to suggest that we evaluated or endorsed any particular approach from these cities, but only to draw comparisons to the District where applicable.
In developing our criteria for generally accepted key principles for an effective public procurement system, we relied on a variety of sources. NASPO is a nationally recognized nonprofit association composed of the directors of the central purchasing offices in each of the 50 states and other member jurisdictions. NASPO has published a series of volumes related to state and local government purchasing, with the most recent edition describing principles and suggested practices. We also spoke with state procurement officials representing NASPO to obtain their perspectives on our analysis as well as their own states’ guiding principles and practices for an effective public procurement system. In addition to NASPO, the American Bar Association’s (ABA) model procurement code for state and local governments outlines principles for public procurement and provides a variety of options and strategies applicable to all public bodies. The Federal Acquisition Regulation (FAR) also describes guiding principles of public procurement, and though these are aimed at the federal government, many are not unique to the federal acquisition system and are equally applicable to state and local governments. Finally, we leveraged our own work since 2001 on effective procurement and acquisition management practices. To assess whether the District’s primary procurement law reflects fundamental principles that promote transparency, accountability, integrity, and competition, we conducted a detailed legal review and analysis of the Procurement Practices Act of 1985, as amended. We did not do a similar review or analysis of laws, policies, or regulations governing the various independent agencies or procurement authorities.
In comparing the District’s primary procurement law to generally accepted key principles and assessing the impact of any shortfalls, we focused on several key elements that are recognized by a variety of sources for promoting transparency, accountability, integrity, and competition: (1) uniform application of the law across all District organizations; (2) adequacy of the authority granted to the CPO for the full spectrum of acquisition functions; (3) exemptions in the law through various temporary, emergency, or permanent legislative amendments; and (4) provisions in the law that limit or restrict competition, such as authority for sole-source contracting, simplified acquisition procedures, and use of supply schedules. Our review also examined recent legislation that was passed in response to various procurement challenges that had been identified, including changes in law and policy resulting from the CFO’s review of direct voucher payments for unauthorized commitments with vendors for goods and services without valid contracts. To further understand the rationale and impact of these various provisions and related procurement issues, we interviewed current and former procurement, executive, financial management, and auditing officials in the District. We also spoke to a D.C. Council committee representative regarding legislative actions to address reported procurement problems and related issues. In addition, we interviewed state government procurement leaders of NASPO about sound principles and practices regarding public procurement statutory coverage and their views on issues we raised about the District’s procurement law. We also interviewed city procurement officials in Atlanta, Baltimore, and New York to obtain their views on issues we raised concerning the District’s procurement law and to learn about related challenges they have faced and their responses to these challenges.
To assess the extent to which the District’s management and oversight of the procurement process reflect generally accepted practices, we examined several key elements. First, we examined the organizational alignment and leadership for managing the acquisition function across all District organizations. Second, we assessed management’s commitment to competence, including elements required for a professional procurement workforce. Third, we reviewed the District’s development of procurement management and oversight tools, including a procurement manual and automated data systems for recording procurement information. To gain insights on the challenges of procurement management and oversight in the District, we interviewed current and former city procurement and District executive officials to obtain their perspectives. To obtain a historical perspective on the management and oversight challenges in the District that drove legislative reform in 1996, we reviewed various studies done at that time and their recommendations. To understand how the District has addressed those challenges, we reviewed selected District inspector general and auditor reports since 2004 and the resulting recommendations, as well as those from the internal study of the Center for Innovation and Reform. We interviewed responsible city procurement officials on the status of addressing those recommendations. We also interviewed the chairman of the Contracting and Procurement Reform Task Force, which was established in 2006 to review the District’s procurement system, and attended several public meetings to observe its discussions. In the course of our review, we relied on various management and other procurement data reports provided by the Office of Contracting and Procurement. Specifically, information on procurement spending in dollars and contracting and competition methods was generated from various procurement data systems or compiled from manual inputs.
Though we did not conduct detailed tests of procurement transactions, our very limited testing indicated that the reliability of the data in these various reports was suspect, and independent auditors have also raised questions about the data. To fully test data reliability for all the various reports we received would have required resources outside the scope of this review. Moreover, an independent public accounting firm audits the District’s financial statements annually and reports on internal control and compliance over financial reporting. Compliance with procurement regulations was part of the fiscal year 2005 audit, in which the District received an unqualified, or clean, opinion. Despite the limitations, we found the data to be reasonable and sufficiently reliable for our purposes. Further, we have attributed, where applicable and appropriate, this information to the Office of Contracting and Procurement and responsible officials. This work was done between February 2006 and October 2006 in accordance with generally accepted government auditing standards. In 1973, Congress enacted the District of Columbia Self-Government and Governmental Reorganization Act, or Home Rule Act, which set forth the structural framework of the current District government in the District Charter. The District Charter established the Office of the Mayor and vested the Mayor with the executive power. It also established the D.C. Council and delegated certain legislative powers to it. Despite the powers delegated to the Council, Congress retained the ultimate legislative authority over the District under the Constitution. Generally, the Constitution authorizes Congress to enact legislation on any topic for the District and to amend or repeal any District act. With regard to the powers delegated to the Council, the Home Rule Act authorized it to pass permanent and emergency acts.
A permanent act starts as a bill, which usually gets introduced by a Council member and then gets assigned to and considered by the proper committee. The committee then reports the bill to the Committee of the Whole (the entire Council), which reviews it before it is put on the agenda for a regular session. Hearings are required for permanent legislation before it is adopted. The Council votes on a bill twice, during its first and second readings. In addition, at least 15 days before the Council adopts a bill, it must be published in the D.C. Register. The Mayor then can either (1) sign the bill or take no action, and it becomes an act, or (2) veto the bill, in which case the Council can override the veto by a two-thirds majority. The act must then be published in the D.C. Register. The Council chair transmits the act to both houses of Congress, which have 30 calendar days (or 60 calendar days for criminal acts) to review the act, and if they take no action, the act becomes law. Congress may disapprove the act by adopting a joint resolution of disapproval, which must be signed by the President. Unless the President vetoes the act, it becomes law within 30 days. Emergency acts are quicker to pass than permanent acts, since they are not required to go through (1) committee, (2) a second reading, (3) a public hearing, (4) congressional approval, and (5) publication in the D.C. Register before becoming effective, though they must be published afterward. For an emergency act, the Council must decide by a two-thirds vote of its members that emergency circumstances make it necessary that an act be passed. Emergency acts are effective for 90 days. With regard to the executive power, the Home Rule Act vested in the Mayor, who is the chief executive officer of the District government, the power to properly execute all laws relating to the District.
The Mayor may delegate any function to (1) any officer, employee, or agency of the executive office of the Mayor or (2) any director of an executive department, who may, with the Mayor’s approval, further delegate all or part of the functions to subordinates under the Mayor’s jurisdiction. In addition to establishing these branches of government in the District, the Home Rule Act also established five independent agencies existing outside the control of the executive or legislative branches of the District government. The independent agencies were the (1) Board of Education; (2) Armory Board; (3) Public Service Commission; (4) Zoning Commission; and (5) Board of Elections. In 1986, the Council enacted the D.C. Procurement Practices Act of 1985, pursuant to the Council’s authority to pass acts under the Home Rule Act. One of the primary underlying statutory policies of the act was to provide for a uniform procurement law and procedures for the District of Columbia government. To achieve this policy, the Procurement Practices Act applied to all agencies and employees of the District government that were subordinate to the Mayor. The Procurement Practices Act excluded from its application a separate branch of government or an independent agency (as defined in the D.C. Administrative Procedures Act) that had authority to enter into contracts or to issue rules and regulations for awarding contracts pursuant to existing law. The Procurement Practices Act applied to every contract, interagency agreement, or intergovernmental agreement for procurement or disposal of goods and services by covered agencies and employees. The Procurement Practices Act also created in the executive branch of the District government the Contract Appeals Board.
The appeals board was the exclusive hearing tribunal for and had jurisdiction to review and determine de novo throughout the District government the following: (1) protests of a solicitation or contract award and (2) appeals from a final decision of the Director of Administrative Services. The act allowed disappointed contractors to appeal board decisions to the D.C. Court of Appeals. It also established bid protest procedures for protests of the solicitation or award of a contract. The Procurement Practices Act was amended by the Procurement Reform Amendment Act of 1996 (reform act), whose primary statutory purpose was to centralize procurement in the Office of Contracting and Procurement. The law required this office to be headed by a Chief Procurement Officer (CPO). By delegation of the Mayor, the CPO has the exclusive contracting authority for all procurements covered by the Procurement Practices Act. The reform act further centralized procurement in the CPO by requiring the CPO, rather than the Mayor, to delegate contracting authority to employees of District entities subject to the act and to employees of the Office of Contracting and Procurement who are contracting officers and specialists in procurement. All delegations must be subject to limitations specified in writing. The reform act also changed some of the requirements for sole-source emergency procurements, which the Procurement Practices Act authorized the executive branch to use. Specifically, the reform act allowed contracting officers to make and justify sole-source emergency procurements when there was an imminent threat to the public health, welfare, property, or safety under emergency conditions. This requirement is implemented in the District’s regulations, which define an “emergency condition” as a situation, such as a flood, epidemic, riot, or equipment failure, that creates the imminent threat.
The reform act expanded the Procurement Practices Act’s application to include independent agencies, which were previously excluded from its application. Specifically, the act applied to all departments, agencies, instrumentalities, and employees of the District government, including agencies that are subordinate to the Mayor, independent agencies, boards, and commissions. It applies to any contract for the procurement of goods and services, including construction and legal services. Despite the reform act’s primary statutory purpose of centralizing the District’s procurement authority in the Office of Contracting and Procurement, it excluded many entities from the authority of both the Office of Contracting and Procurement and the Procurement Practices Act. Specifically, it excluded the D.C. Council; the D.C. courts; the D.C. Financial Responsibility and Management Assistance Authority (Control Board), as Congress previously statutorily excluded the Procurement Practices Act’s application to the Control Board and vested the Board’s contracting authority in its Executive Director; and the Office of the Chief Financial Officer (CFO). During a control year, the Office of the CFO was required to adopt the Control Board’s procurement rules and regulations; during years other than control years, the Office of the CFO is bound by the provisions of this act. Further, the reform act added a new section to the Procurement Practices Act, exempting the following entities from the authority of the Procurement Practices Act and the Office of Contracting and Procurement: the Redevelopment Land Agency with regard to real property or interests; the Administrator of Homestead Program Administration under the Homestead Housing Preservation Act of 1986 with regard to disposal or transfer of real property; the Mayor, to sell real property in D.C. for nonpayment of taxes or assessments of any kind; the Mayor and D.C. Council pursuant to the D.C. Public Space Rental Act; the Convention Center Board of Directors pursuant to the Washington Convention Center Management Act of 1979; the Sports Commission pursuant to the Omnibus Sports Consolidation Act; the D.C. Housing Finance Agency; the D.C. Retirement Board pursuant to the D.C. Retirement Reform Act; and the Metropolitan Police Department’s authority to make procurements of $500,000 or less, as provided in the D.C. Appropriations Act, approved April 6, 1996 (Pub. L. No. 104-134). Since enactment of the 1996 reform act, the Council has amended the Procurement Practices Act many times to exempt additional entities from the authority of the Office of Contracting and Procurement, the Procurement Practices Act, or both, despite the Procurement Practices Act’s statutory purposes of creating uniform procurement laws in the District and centralizing the District’s procurement authority in the Office of Contracting and Procurement. To date, in addition to those entities mentioned above, the Council has excluded the following entities from the authority of both the Office of Contracting and Procurement and the Procurement Practices Act: the D.C. Water and Sewer Authority; the D.C. Public Service Commission; the D.C. Housing Authority, except for the provisions regarding contract protests, appeals, and claims arising from procurements of the Housing Authority; and the D.C. Advisory Neighborhood Commissions. Further, the Council amended the Procurement Practices Act to exclude the following entities from the authority of the Office of Contracting and Procurement, while leaving them subject to the Procurement Practices Act: the Director of the Child and Family Services Agency; the Criminal Justice Coordinating Council; the Director of the Department of Mental Health; and the Board of Education’s authority to solicit, award, and execute contracts, except for contracts for security for the District’s public schools to begin on or after June 30, 2005.
Also, the Council exempted delivery of electrical power and ancillary services for the District from certain requirements of the Procurement Practices Act, subject to Council approval. In addition to these exemptions, the Council continues to use its emergency act authority under the Home Rule Act to exempt the application of all or certain provisions of the Procurement Practices Act or the authority of the Office of Contracting and Procurement for certain District entities or projects. These exemptions can last no more than 90 days or can become permanent if the emergency bill is accompanied by a temporary bill bridging the gap between expiration of the 90-day emergency bill and congressionally-approved permanent legislation on the same matter. The following are GAO’s comments on the CFO’s letter dated January 5, 2007. 1. As we state in the report, the CFO’s analysis of fiscal year 2004 direct voucher payments showed that $217 million fell outside a 1996 financial management and control order. It was only after the CFO, in 2005, added 7 more acceptable uses of direct vouchers to the original order, that these payments were found to be acceptable. The $4 million in payments referred to in the CFO’s comments are those that fell outside this updated policy. 2. We recognize that the CPO’s office is not directly responsible for developing financial management policies. However, we believe that in order to effect meaningful procurement reform, the CPO should be consulted on any policy changes that affect procurement—particularly as such changes have been amended into the procurement law. Elevating the CPO within the District government, as we recommend, would facilitate needed coordination. 3. 
Because the District’s procurement law already establishes emergency contracting procedures, we stand by our finding and recommendation that including emergency procurements as an acceptable use of direct vouchers duplicates the provision in the law and allows agencies to bypass established contracting procedures. 4. As we state in the agency comments section, we recognize the obligation to pay for accepted goods and services, but we are concerned that the current policy, now codified in the law, is a symptom of the lack of necessary management focus to minimize the number of unauthorized commitments that may be ratified and ultimately paid through direct vouchers. In meetings with CFO staff, they acknowledged that the ratification process needs strengthening to include, in appropriate cases, possible referrals for Anti-Deficiency Act violations. 5. Our review focused on the District’s procurement system as a whole, not on the direct voucher process. As part of this review, we examined and discussed with chief procurement officers reform efforts in other cities. Through these discussions, we learned that other cities have consistently taken steps to curb the use of direct vouchers where at all possible and to ensure strict controls are in place to hold employees accountable when their actions result in an unauthorized commitment to vendors. In addition to the individual named above, Carolyn Kirby, Assistant Director; Barry DeWeese; Cynthia Auburn; Rachel Girschick; Kevin Heinz; Bill Petrick; Sylvia Schatz; and Karen Sloan made key contributions to this report.

To improve acquisition outcomes, in 1997 the District established the Office of Contracting and Procurement under the direction of a newly created chief procurement officer (CPO). Since then, the District's inspector general and auditor have identified improper contracting practices.
This report examines whether the District's procurement system is based on procurement law and management and oversight practices that incorporate generally accepted key principles to protect against fraud, waste, and abuse. GAO's work is based on a review of generally accepted key principles identified in federal, state, and local procurement laws, regulations, and guidance. GAO also reviewed District audit reports and discussed issues with current and former District officials as well as select state and local officials. The District's procurement law generally does not apply to all District entities, nor does it provide authority to the CPO to effectively carry out and oversee the full scope of procurement responsibilities across all agencies. The lack of uniformity in the procurement law and the CPO's limited authority not only undermine transparency, accountability, and competition but also increase the risk of preferential treatment for certain vendors and ultimately drive up costs. The current law exempts certain entities and procurements from following the law's competition and other requirements, and according to current and former District procurement officials, there is a push to expand independent procurement authority--a move that would reverse action taken by the District a decade ago. Other provisions of current law further erode competition. Notably, the law provides broad authority for sole-source contracting and establishes high-dollar thresholds for small purchases, which are generally not subject to full and open competition. Also, in implementing the law, sufficient management oversight is lacking to ensure employees do not make unauthorized commitments.
The District has been challenged to effectively manage and oversee its procurement function, due in large part to the low-level position of the procurement office in the governmental structure, the rapid turnover of CPOs, and multiple players having authority to award contracts and affect contract decisions. At the same time, the District does not have the basic tools that contracting and agency staff and financial managers need to effectively manage and oversee procurements--including a procurement manual, a professional development program, and an integrated procurement data system. In summary, the District's procurement system does not incorporate a number of generally accepted key principles and practices for protecting taxpayer resources from fraud, waste, and abuse. Specifically, the District lacks a comprehensive procurement law that applies to all District entities over which the CPO has sole procurement authority and promotes competition; an organizational alignment that empowers its procurement leadership; an adequately trained acquisition and contracting workforce; and the technology and tools to help managers and staff make well-informed acquisition decisions. To better ensure every dollar of its more than $1.8 billion procurement investment is well spent, it is critical that the District have a procurement system grounded in a law that promotes transparency, accountability, and competition, and helps to ensure effective management and oversight and sustained leadership. High-level attention and commitment from multiple stakeholders--including Congress--are needed if the District's procurement law is to provide the right structure and authority and if procurement reforms are to succeed.
A consensus does not exist on a definition of small business, including which specific attributes or thresholds distinguish small businesses from other firms. Estimates of the small business population are driven by the purpose, concepts, and data that are used to produce the estimates. As we have previously reported, various thresholds such as number of employees, gross receipts, and number of shareholders may be used when determining which provisions of the tax code apply to a small business. In this report, we rely on studies that use taxpayer data for individuals and entities that generate business income. Businesses (including small businesses) file specific tax forms based on certain attributes of the business, such as the ownership structure and how the business income is taxed. Below are different types of businesses and the required forms and schedules. Nonfarm sole proprietorships (Form 1040, Schedule C) are unincorporated and owned by a single individual. Net business income or loss is included in the owner’s individual adjusted gross income. Landlords (Form 1040, Schedule E-Part I) are individuals who report rental real estate activity on Part I of Schedule E. Farmers (Form 1040, Schedule F or Form 4835) are individuals who report farm income or landowners who report farm rental income. C corporations (Form 1120) are owned by shareholders. Corporate income is taxed at the corporate level on taxable income and at the shareholder level on distributed profits. S corporations (Form 1120-S) cannot have more than 100 shareholders, among other requirements. Gross income is distributed to shareholders and taxed at the shareholder level. Partnerships (Form 1065) are unincorporated businesses that have two or more owners. Profits and losses are distributed to owners who are taxed at the partner level. 
IRS has separate operating divisions that focus on different types of taxpayers—individuals, small businesses and self-employed, large businesses, and tax exempt organizations. The Small Business and Self-Employed division oversees taxpayers filing tax returns as individuals with business income and as businesses with less than $10 million in total assets. However, not all of these tax returns are for business entities. This is because the principal purpose of some entities that file tax returns reporting business income may not be to generate revenue or to engage in substantive business activity. For example, some C corporations can serve as investment vehicles that engage in little or no business activity. Further, partnerships may be created to redistribute profits generated by another partnership and may not generate income themselves. Filers of Form 1040, Schedule C, may be independent contractors who may more closely resemble employees rather than small businesses. Additionally, rental income for some individuals may be incidental and not represent business activities. We define tax compliance burden as the time and money spent by the taxpayer to meet tax obligations. This would include federal, state, and local obligations. This does not include tax liability. For the purposes of this report, we are only examining compliance burden as a result of federal tax obligations. Time spent on tax activities can include working with a paid professional, tax planning, keeping records, completing forms, submitting forms, learning tax laws, and working with IRS on tax issues. Monetary burden can include expenses for hiring a paid professional to file taxes, investing in a tax software system, paying for payroll services, and legal fees. When measuring tax compliance burden, researchers may separate burden into both time and money, or they may place a value on the time spent by taxpayers and add it to monetary burden to create a single measure of tax compliance burden.
A key concept in tax administration is minimizing burden, including eliminating unnecessary burden. As shown in figure 1, using data from researchers at Treasury’s Office of Tax Analysis (OTA), most small businesses (approximately 69 percent, or 16 million) are individual taxpayers who report business income on their Form 1040, using Schedule C (sole proprietor), Schedule E-Part I (landlords), or Schedule F (farmers). The remaining 31 percent of small businesses (or roughly 7.3 million) are partnerships, S corporations, or C corporations. OTA researchers also provide a total income measure, generally defined as the sum of all business income reported on tax returns, including gross receipts, rents, dividends, capital gains, royalties, and interest. Individual small businesses generated only 23 percent (or $1.4 trillion) of the total income of all small businesses, whereas small business partnerships, S corporations, and C corporations accounted for the majority—77 percent (or about $4.5 trillion)—of total small business income. When looking at the average total income for small businesses (total income divided by number of filers), partnerships, S corporations, and C corporations each generated more than $450,000 on average, while sole proprietors, farmers, and landlords reported income of about $100,000 or less on average. Figure 2 shows the estimated average total income by small business type. Small businesses (defined as those reporting total income and deductions of less than $10 million) make up 99 percent of the taxpayers identified as being engaged in substantial and substantive business activity. For each type of filer, small businesses account for at least 95 percent of businesses. Among individual filers reporting business income, small businesses account for most of the reported income.
However, among S corporations, C corporations, and partnerships, larger businesses account for most of the reported income, even though they are far outnumbered by small businesses, as shown in figure 3. The estimated average total income across all types of small businesses is $250,000, while the average total income for larger businesses is estimated to be $121 million. Small businesses with at least one employee (which we will refer to as employers) generated most of the reported total income for small businesses (or about 71 percent). Employers account for about 86 percent of total income for small business C corporations and S corporations combined and about 55 percent for small business sole proprietors, farmers, and partnerships. Employers make up about 20 percent of all small businesses. Employers make up 16 percent of the combined group of small business Schedule C sole proprietors, Schedule F farmers, and partnerships and 51 percent of the combined group of small business C corporations and S corporations. Figure 4 shows the estimated number of small business filers and total income separated by employers and non-employers. As shown in figure 5, employer small businesses, on average, generate more income than non-employer small businesses. Small businesses undertake a number of tax compliance-related activities that create burden. These activities can be grouped into general categories: income tax filing activities, employer-related tax activities, and third-party information reporting and industry-specific tax activities. The tax compliance burden associated with these activities varies by characteristics of the small business. Some of these characteristics include the business’s asset size, filing entity type, number of employees, and industry type. Tax compliance activities are not limited to the annual filing of a tax return, but rather occur throughout the year. For example, sole proprietors are generally required to file income tax returns every April.
Some small businesses need to pay estimated income taxes four times a year. Moreover, small businesses with employees are required to deposit employment taxes either monthly or semiweekly, and to report summary information of these activities on a quarterly basis. Additionally, depending on specific business operations, other tax compliance activities such as reporting excise tax, tax planning, and recordkeeping happen throughout the tax year. Figure 6 provides an overview of some of these tax compliance activities for sole proprietors and when they occur. Appendix III, table 8 provides a more detailed description of tax activities. Every year, small businesses need to file income tax returns and may pay estimated income taxes quarterly. The type of small business dictates the type of income tax returns and related schedules that need to be filed. Some of the returns include a set of schedules embedded in the form— found within the income tax return—while some small businesses and individuals with business income must attach a mandatory schedule to their return. For example, the primary corporate income tax return, Form 1120, U.S. Corporation Income Tax Return, contains eight embedded schedules, while sole proprietorships file Form 1040, U.S. Individual Income Tax Return, and attach Form Schedule C, Profit or Loss from Business. Small businesses with employees are responsible for reporting, withholding, and depositing employment and unemployment taxes. While these requirements may impose a cost on employers, withholding is widely believed to improve compliance and may reduce compliance burdens for employees. The number of employment tax reports and deposits depends on the number of employees and the resulting employment tax liability owed at a particular time (see table 1). In general, businesses with an employment tax liability greater than $50,000 need to make deposits more frequently than businesses with a lower liability. 
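The $50,000 deposit-frequency threshold described above can be expressed as a simple check. The sketch below is illustrative only, not IRS guidance: the function name is our own, and the actual rule keys off employment taxes reported during an IRS-defined lookback period, with further special cases (such as next-day deposits for very large liabilities) omitted.

```python
def deposit_schedule(lookback_liability: float) -> str:
    """Illustrative sketch of the rule described in the report:
    employers whose employment tax liability exceeds $50,000 must
    deposit more frequently (semiweekly) than those at or below
    the threshold (monthly). Special cases are ignored."""
    return "semiweekly" if lookback_liability > 50_000 else "monthly"

print(deposit_schedule(42_000))  # monthly
print(deposit_schedule(75_000))  # semiweekly
```

For example, the restaurant owner discussed below, with a liability under $50,000, would fall on the monthly schedule.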
Additionally, each year, the employer must furnish a copy of Form W-2, Wage and Tax Statement, to each employee. Since the characteristics of employers vary, responsibilities for withholding, depositing, and reporting employment taxes can differ. For example, consider a small business restaurant owner who has 20 employees and has an employment tax liability of less than $50,000. She files a Form 941 quarterly, which details the income tax withholdings for each of her 20 employees. Since her liability is less than $50,000, she deposits these withholdings monthly. At the end of the year, she must complete 20 Forms W-2 to report wages, tips, and other compensation paid to each employee. Small businesses also report health care and retirement information. The information reported for these areas depends on a business’s number of employees. The entity type also plays a role in the information reported about health care. Under the Patient Protection and Affordable Care Act, employers report the cost of coverage under an employer-sponsored group health plan on Form W-2. Beginning in January 2016, employers with 50 or more full-time employees will need to provide employees with a Form 1095-C, Employer-Provided Health Insurance Offer and Coverage. Some employers decide to offer pension plans and are responsible for reporting this information. While businesses must maintain records about these plans, most pension plans do not have any separate filing or reporting requirements with IRS. However, certain retirement plans offer small employers and self-employed individuals a deduction for contributions and allow them to defer tax on income paid into the plan. To receive these deductions, the small businesses must report this information to IRS using certain forms. Many businesses, including small businesses, are required to report on certain transactions they enter into with other entities. This is a form of third-party reporting.
IRS uses this information to verify compliance by comparing the income or expenses reported by third parties to the income or expenses reported by small businesses on tax returns. Using Form 1099-MISC, small businesses report items such as rent payments and payments to nonemployees for services of at least $600, subject to certain exceptions. The burden created by this requirement grows with the size of the business because larger businesses would need to file more 1099-MISC forms. However, while a larger business may have more transactions, it may also have an accounting system designed to identify transactions of more than $600 that a smaller business might not have. Another characteristic that affects third-party reporting requirements is entity type. For example, partnership entities are required to report the distributive shares of their partners on Schedule K-1. However, other entity types such as sole proprietorships do not have similar requirements. Additionally, a small business may have many industry-specific requirements related to excise taxes. IRS administers several broad categories of excise taxes, including environmental taxes, communications taxes, fuel taxes, retail sale of heavy trucks and trailers, luxury taxes on passenger cars, and manufacturers’ taxes on a variety of different products. For example, a small business in the trucking industry that makes deliveries over public highways is required to file Form 2290, Heavy Highway Vehicle Use Tax Return. IRS has developed several models to provide information for assessing the impact of the tax code and IRS programs on taxpayers. These models also help IRS assess the role of compliance burden and comply with requirements by the Office of Management and Budget for information on burden under the Paperwork Reduction Act. In the past 15 years, IRS has developed a number of burden models for individual and business taxpayers—both small and large. 
Estimates of business compliance burdens that IRS’s models have produced over the years indicate that burdens increase with the size of businesses, whether measured in terms of assets, receipts, or employment; however, burden per dollar of assets or receipts or per employee declines with size due to economies of scale. For example, a small business owner who does his own taxes may create a spreadsheet to compute the business’s taxes and keep track of the employment taxes he owes for each employee. The effort the small business owner makes to build that spreadsheet is a fixed cost—a cost that does not change with an increase or decrease in the amount of goods or services that are produced. As the small business owner’s sales grow and as he hires more employees, he doesn’t have to repeat that effort; he just has the small additional cost of adding new data on income and employees to the spreadsheet. As this business grows, its total compliance costs decline both as a proportion of sales and on a per-employee basis. For these reasons, the costs per dollar of receipts or per employee are larger for small businesses than for larger ones. IRS measured money and time burden as a portion of total business receipts, total assets, and burden per employee. Across all three measures, IRS results are consistent with the assumption that small businesses face significant fixed compliance costs combined with decreasing marginal costs as the business grows (see appendix III, tables 9 through 11). When looking at total receipts and asset size across all businesses, estimated total monetized business compliance costs by business entity type varied depending on the type of entity and the entity’s gross receipts. This variation is one reason why compliance burden on small businesses is a concern (see appendix III, table 12). Figure 7 shows IRS’s estimates of compliance costs per employee for S corporations, C corporations, and partnerships.
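The spreadsheet example above amounts to a fixed-plus-marginal cost model of compliance burden. The sketch below is hypothetical: the dollar amounts are our own illustrative assumptions, not IRS estimates, chosen only to show how per-employee cost falls as headcount grows.

```python
def compliance_cost_per_employee(employees: int,
                                 fixed_cost: float = 4_000.0,
                                 marginal_cost: float = 150.0) -> float:
    """Hypothetical illustration of economies of scale: a one-time
    fixed cost (e.g., building the tax spreadsheet) is spread over a
    growing workforce, plus a small per-employee marginal cost."""
    return (fixed_cost + marginal_cost * employees) / employees

for n in (1, 5, 50):
    print(n, compliance_cost_per_employee(n))  # per-employee cost falls as n grows
```

Under these assumed figures, the per-employee cost drops from $4,150 at one employee to $230 at fifty, mirroring the qualitative pattern in IRS's estimates.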
According to the estimates, costs for corporations and partnerships with 1 to 5 employees range from $4,308 to $4,746, compared to $182 to $191 per employee for businesses with 50 or more employees. IRS conducted this research using 2002 taxpayer data. Estimates using more recent data have not been produced. A number of factors would likely affect these estimates if they were produced using current data, including inflation, accounting software improvements, and tax law changes. Estimates from IRS’s compliance burden models also show that burdens vary by industry. According to IRS, the retail trade industry incurs the largest pre-filing and filing time burden—businesses in this industry spent an average of between 325 and 331 hours per year on such activities. Manufacturing incurred the largest pre-filing and filing monetary burden, with businesses spending an average of $2,740 to $2,813 per year on these activities. Agriculture, forestry, and fisheries incurred the smallest average time spent on tax compliance activities (180 to 184 hours) and second smallest average compliance costs ($1,489 to $1,590). Some industries have higher time and monetary compliance costs because the nature of those businesses may affect the complexity of tax activities. For additional information on industry burden, see table 13 in appendix III. IRS and Treasury researchers have used both the business and individual taxpayer burden models to estimate the influence of specific business characteristics on compliance burdens. Their estimates suggest that recordkeeping and filing burdens increase as the volume of complex compliance activities undertaken by businesses increases, regardless of the size or other characteristics of those businesses. The results for the full population of individual taxpayers were similar. 
Compliance activities were categorized into varying levels of complexity based on the overall complexity of extracting information from the entity’s financial books, items that may require a separate recordkeeping system or a process with potentially separate rules for each item, and tracking records across years. These results are presented in Rosemary Marcuss et al., “Income Taxes and Compliance Costs: How Are They Related?” National Tax Journal, December 2013, 66 (4), pp. 833-854, and George Contos et al., “Taxpayer Compliance Costs for Small Businesses: Evidence from Corporations, Partnerships, and Sole Proprietorships,” Proceedings of the One Hundred Second Annual Conference on Taxation (2009), pp. 50-59, National Tax Association, Washington, D.C. Per-form charges quoted by filing service providers decline with volume, falling to about $2 per form for 100 forms, with one of them charging about $0.80 per form for 100,000 forms. IRS has not conducted research to estimate the compliance costs of audits and other post-filing compliance contacts for small businesses. However, IRS conducted preliminary research on compliance costs for individual filers that can provide some insights into the sources of burden that would affect some small businesses that report business income on individual tax returns. From the taxpayer perspective, post-filing begins when the taxpayer receives notice of an issue with an already filed tax return and concludes when the issue has been resolved. Post-filing compliance costs include any time spent on resolving an issue or money spent on things ranging from postage to paying a tax professional. IRS’s preliminary data on individual post-filing compliance costs provide information on the time and money spent on post-filing activities such as an audit—a review of accounts and financial information to ensure information is being reported correctly—or collections—receiving a bill for not paying taxes in full when a tax return was filed.
For individual filers, IRS research indicates that the level of compliance costs is highly dependent on the approach IRS takes in contacting the taxpayer to address potential underreporting or underpayment of tax obligations. IRS’s preliminary estimates, based on survey data from 2011, indicate that average post-filing compliance costs were highest for a field exam—an audit conducted at an individual’s home or place of business—at $4,800, followed by an office exam—an audit conducted at an IRS office—at $2,165. A notice informing the taxpayer that they did not report all of their earnings had the lowest estimated average post-filing compliance costs, at $230. IRS’s research on the magnitude of audit costs for individual filers likely includes individual filers who are small business owners. Those businesses are likely to have more complicated returns and, as a consequence, their burden is likely to be at least as great as the averages show for individual filers. For more details concerning post-filing compliance costs, see figure 13 in appendix III. According to IRS, the audit rate for small business taxpayers is higher than the rate across all individual taxpayers because small businesses historically have higher noncompliance than other taxpayers. Table 2 provides detailed information on the audit rates across small business types. While we did not examine post-filing costs, in a past report on correspondence audits we found a number of issues that contribute to taxpayer compliance burden. These issues included IRS backlogs in responding to taxpayers who provide documentation in response to IRS’s audit notices and unrealistic audit time frames set by IRS. One of IRS’s goals in its strategic plan is to deliver high quality and timely service to reduce taxpayer burden and encourage voluntary compliance. Under this goal, IRS has identified seven objectives that further define how it intends to achieve the goal.
One objective is to reduce taxpayer burden and increase return accuracy at filing through timely and efficient tax administration processing. IRS outlined performance measures for each strategic goal and objective in a supplement to its financial statement for fiscal years 2013 and 2014. In this supplement, IRS describes some of the initiatives launched or continued and progress made in achieving performance goals. IRS also includes a discussion of goals missed. Several of these goals, if achieved, could have a positive impact on reducing small business compliance burden. For example, responding more quickly to telephone calls, correspondence, and requests for in-person service, as well as enhancing the online experience for customers, could benefit small businesses by requiring them to expend less time and fewer resources for IRS outreach. In addition to the goals and objectives that focus on burden reduction in the strategic plan, IRS listed general guiding principles for reducing burden in Internal Revenue Manual 22.24.1, IRS Servicewide Burden Reduction Activities. The guiding principles are intended to support the consideration of compliance burden as part of tax administration. According to the manual, IRS carries out its mission to achieve significant reduction in unnecessary burden by considering taxpayer burden when implementing and reviewing policies and procedures. See table 3 for a list of the guiding principles. According to the Internal Revenue Manual, the mission to reduce taxpayer burden and improve service is embedded in the IRS culture and is a responsibility of all divisions. Although staffed within the Small Business and Self-Employed division, a senior advisor serves as the single point of contact for taxpayer burden reduction initiatives across all divisions. The manual states that this arrangement is intended to provide a link across the agency to ensure burden reduction is incorporated within decision-making frameworks.
The advisor also acts as a liaison with external stakeholders. IRS officials provided examples of efforts made to engage with internal and external stakeholders to reduce small business tax compliance burden. To engage internal stakeholders, employees can suggest ways to reduce burden by using Form 13285, Taxpayer Burden Reduction Referral. This form allows employees to note an issue causing taxpayer burden, describe the affected population, and propose a solution. Employees can also explain who needs to be involved in making the change, the resources needed, taxpayer benefits, compliance risks, and suggestions for how to measure burden reduction savings (e.g., reduced costs to the taxpayer or reduced costs to IRS). One notable example of a burden reduction initiative at IRS was developing a simplified method for determining the Office in the Home tax deduction.

Simplified Office in the Home Deduction Illustrates How IRS Considers Burden When Implementing Initiatives

IRS officials offered an example of how the agency considered compliance burden principles when implementing new or changed tax laws or administrative procedures with the introduction of a simplified method for small businesses to calculate their Office in the Home Deduction. This method was introduced in 2013 and generally allows filers to receive a deduction of $5 per square foot of office space, up to a maximum area of 300 square feet. The alternative method involves a more complex calculation of property depreciation. Although the Department of the Treasury (Treasury) and IRS officials reported considering this proposal as early as 2006, in July 2012, Treasury and IRS redoubled their efforts in response to an Office of Management and Budget request to identify initiatives that would eliminate at least 2 million hours in annual burden.
To meet this request, IRS reached out to employees and Senior Executive staff, and also reviewed prior submissions, form burden statistics, and other suggestions that had been considered in the past. The group reviewed the proposals and made a final determination that this initiative should be implemented. IRS officials told us that they considered burden and compliance risk within this decision-making process and discussed tradeoffs of their decisions. IRS received external stakeholder input from representatives of the small business community, such as the U.S. Chamber of Commerce and the National Federation of Independent Business, who have recognized this as a positive development. IRS said the process of working collaboratively across the organization, with external parties, and with Treasury allowed them to consider the interests and concerns of all parties. This helped IRS weigh tradeoffs of decisions that could affect both compliance and compliance burden. Other internal activities include providing employees with an online burden risk estimator tool designed to aid employees in determining whether certain decisions about the design of tax forms for individuals (Form 1040 and associated schedules and forms) could impose significant burdens on taxpayers. This tool is an Excel spreadsheet that uses some of the data used in the more elaborate burden estimation models discussed previously in this report. The tool provides staff with an estimate of the number of taxpayers who would be affected by a specific potential tax form change, as well as a rough indication of whether the effect on compliance burden would be significant. Divisions can use this tool to identify decisions that merit more in-depth evaluations, potentially involving the full burden estimation model. IRS undertakes a number of activities to engage external stakeholders such as providing information on its website and holding forums with small business representatives. 
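The simplified method described above replaces a depreciation calculation with a flat-rate formula. A minimal sketch of that arithmetic follows; the function name is ours, while the $5 rate and 300-square-foot cap come from the method as described.

```python
def simplified_home_office_deduction(square_feet):
    """Simplified Office in the Home method: $5 per square foot of
    office space, with the deductible area capped at 300 square feet."""
    RATE_PER_SQFT = 5
    MAX_SQFT = 300
    return min(square_feet, MAX_SQFT) * RATE_PER_SQFT
```

Under this formula, a 150-square-foot office yields a $750 deduction, and any office of 300 square feet or more is capped at $1,500—illustrating why the method trades some precision for a large reduction in recordkeeping burden.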
IRS has a website page that defines taxpayer burden, provides links to submit ideas for burden reduction, and outlines how IRS selects burden reduction initiatives. Another example of IRS outreach to the small business community is the quarterly Small Business Forum. IRS officials told us that they use the information from these forums to inform their decision-making process for practices and policies that affect small businesses. For example, IRS used feedback from forum participants to refine the language used in burden surveys it administers to the business community, and used what was learned to inform its current burden models. Similar to internal stakeholders, external stakeholders can make burden reduction suggestions using Form 13285-A, Reducing Tax Burden on America’s Taxpayers (Referral Form for Use by the Public), which allows them to describe the issue causing taxpayer burden, the affected population, and the proposed solution. We also interviewed small business representatives (external stakeholders) who acknowledged that IRS’s external stakeholder outreach efforts have been effective in identifying opportunities to reduce compliance burden. However, they also described a number of areas where small business compliance burden could be further reduced. These areas include issues related to IRS customer service, filing requirements, lack of or delayed official guidance, and compliance contacts. According to these representatives, when they call IRS, they can have long wait times, be disconnected, or be directed to IRS staff who are unable to provide the needed assistance. We have recently reported on these issues as well. Further, several representatives shared the perspective that complex filing requirements contribute to compliance burden.
While small businesses sometimes anticipate significant tax relief through tax credits and deductions such as the small employer health care credit and mileage and vehicle deductions, some small businesses may not be claiming these credits due to the time, cost, and complexity associated with claiming them. One concern we heard from small business representatives was that a tax practitioner may expend resources to compile the necessary documentation and calculate the credit, only to find that their client (the small business) is ineligible to claim it. This could result in additional taxpayer burden if tax preparers bill their clients for calculating the credit when it is not claimed. In addition to facing burdens due to new and complex tax provisions, representatives we spoke with also expressed concern over the compliance burden associated with delayed or missing official guidance, particularly for the Patient Protection and Affordable Care Act employer mandate. Representatives also noted that deadlines for responding to certain IRS notices can be difficult for small businesses when the requested information is not readily available. We recognize that IRS is aware of many of these concerns and, through various initiatives, has made efforts to address these issues. However, continued attention to these areas will be key to effectively reducing burden. We routinely issue reports on aspects of IRS’s enforcement and administrative operations, some of which may impact small business tax compliance burden. In many cases we have made recommendations that, if implemented, could help to reduce these burdens. Selected recommendations that have yet to be implemented are listed in appendix IV. Beginning in tax year 2011, payment settlement entities were required to send IRS Forms 1099-K to report gross merchant payments in which a payment card or a third-party payment network was used as the form of payment.
Payment settlement entities report the gross amount of all reportable transactions a merchant made through them, for the calendar year, without regard to adjustments for credits, cash equivalents, discounts, fees, refunds, or other deductions. A copy of the 1099-K is also sent to the taxpayer. The reporting of this information to both IRS and the taxpayer can encourage voluntary compliance by small businesses in at least two ways. First, since taxpayers know IRS is also receiving this income information, they are more likely to include it on their tax return. Second, taxpayers have another source of information they can use to help calculate or verify business income. Payment card reporting also provides IRS with an information source it can use to compare against the income reported by small business taxpayers on their tax returns. As such, it can serve as a tool for identifying noncompliant taxpayers, including those who failed to file a tax return at all and those who underreported their income. This type of comparison is a common IRS enforcement technique. For example, IRS can directly compare information it receives on a taxpayer’s Form W-2, Wage and Tax Statement, against a tax return to determine if the taxpayer reported earnings and withheld taxes correctly. However, matching is more complicated for Forms 1099-K than Forms W-2 because IRS cannot directly match the line items on 1099-Ks to line items on tax returns. The Form 1099-K reports the gross amount of payment card and third party network transactions made through a payment settlement entity. This does not match the gross receipts line on tax returns because Form 1099-K transactions may include items like sales tax, gratuities, and cash back, all of which are not income. Furthermore, tax return gross receipts can include cash and check revenue, which is not captured on Form 1099-K.
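The mismatch described above can be made concrete with hypothetical figures for a single merchant; all amounts here are invented for illustration and are not drawn from IRS data.

```python
# Hypothetical merchant, one calendar year (illustrative figures only)
card_sales       = 80_000  # card revenue that is actually income
card_sales_tax   = 6_000   # sales tax collected through the card terminal
card_gratuities  = 4_000   # tips passed through to employees
cash_check_sales = 20_000  # revenue that never touches a payment card

# Form 1099-K reports gross card transactions, with no adjustments
form_1099k_gross = card_sales + card_sales_tax + card_gratuities

# The tax return's gross receipts line reflects income from all sources
return_gross_receipts = card_sales + cash_check_sales

# The two figures legitimately differ ($90,000 vs. $100,000), so a
# naive line-item match would misclassify a fully compliant merchant
assert form_1099k_gross != return_gross_receipts
```

Because both figures are correct under their respective reporting rules, a discrepancy between them is not by itself evidence of noncompliance, which is why IRS needed the testing approach described next.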
To leverage Form 1099-K data, IRS researched and tested ways in which the new data can be used to most effectively and efficiently improve voluntary compliance, detect noncompliance, and identify those who did not file returns. The Payment Card Pilot includes six activities to test three methodologies for selecting cases, as described in table 4. In all of the pilot activities, IRS uses taxpayer identification numbers to first match Forms 1099-K with the correct tax returns. IRS then compares Form 1099-K information with business income reported on individual and business tax returns. This process is detailed in figure 8. In the two underreporter pilot activities, IRS compares line by line the gross dollar amount of payments listed on Form 1099-K to gross receipts reported on the tax return to identify potential underreporting of payment card and third-party network revenue. The payment mix methodology aims to identify potential underreporting of gross receipts from both card and cash sources. For this methodology, IRS first calculates a payment mix—the relative ratio of cash and card revenues of similar businesses. IRS determines this ratio by dividing the gross payment amount on Form 1099-K by gross receipts on the tax return. IRS then computes the amount of potential underreporting by comparing this payment mix to that of similar businesses based on variables including industry type and size, population density, per capita income, and average transaction sizes. As part of the test and learn process, IRS has expanded the number of variables to refine identification of possible underreporting taxpayers. One example of this is illustrated in figure 9. If implemented successfully and properly evaluated, the payment card pilot could allow IRS to determine which, if any, pilot activities are effective enough to justify broader expansion, including integration with or replacement of other compliance enforcement efforts.
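The payment mix comparison described above can be sketched in a few lines. This is a simplification: the function names, peer benchmark, and tolerance threshold are invented for illustration, and IRS's actual model conditions the benchmark on variables such as industry type and size, population density, per capita income, and average transaction size.

```python
def payment_mix(form_1099k_gross, return_gross_receipts):
    """Share of reported gross receipts attributable to card and
    third-party network payments, per Form 1099-K."""
    return form_1099k_gross / return_gross_receipts

def potential_cash_underreporter(taxpayer_mix, peer_mix, tolerance=0.10):
    """Flag a return when its card share exceeds the benchmark for
    similar businesses by more than the tolerance, suggesting that
    cash receipts may be missing from reported gross receipts."""
    return taxpayer_mix > peer_mix + tolerance
```

For example, a merchant reporting $100,000 in gross receipts against $95,000 of Form 1099-K payments has a 0.95 card share; if similar businesses average a 0.60 card share, the return would be flagged for follow-up, whereas a 0.65 share would not.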
To assess IRS’s plan for evaluating the payment card pilot, we used our previously developed guidance to identify key elements for designing quality evaluations. Addressing each element at the overall pilot and pilot activity levels can provide program managers with objective information to iteratively assess program performance. The five key elements we identified for quality evaluation design are described in table 5. IRS’s evaluation plan for pilot activities integrated many characteristics of a well-designed evaluation. As a result, IRS was able to make rapid, ongoing assessments of pilot activities and continually incorporate changes based on what was learned. This approach allowed IRS to test many hypotheses simultaneously while limiting the number of small business taxpayers affected by the pilot. However, the overall evaluation plan for the pilot lacked characteristics of each element that are necessary to ensure a quality evaluation. If IRS does not address these gaps, it risks not having the evidence needed to effectively decide whether, how, and when to integrate pilot activities into broader small business compliance improvement efforts. IRS clearly defined the overall pilot goal, which is to use Form 1099-K data to identify and reduce underreported and unreported income. IRS outlined and detailed the specific program activities it tested and documented pilot planning and results in a strategic planning document and several other executive-level updates. These documents detailed various actions, including internal meetings, assessments, outreach, training, and information technology activities, during the early stages of the pilot. IRS’s strategic planning document also provided a conceptual representation of the different stages of the pilot and the growth of compliance case volume at each stage, as seen in figure 10. IRS has generally documented the expected short-, medium-, and long-term impacts of the pilot.
One important short-term impact included learning about the small business population to improve identification of noncompliant taxpayers. For example, IRS realized an issue was arising because some small businesses—such as high-end restaurants—have lower cash revenue than other similar businesses. To address this issue, IRS added a line to Form 1099-K to collect data about the number of payment transactions (see figure 11). IRS uses this information to determine the average payment card transaction amount. In the medium term, IRS sees the potential for improved taxpayer voluntary compliance. After the first year of the pilot, IRS tested compliance levels of taxpayers before and after the introduction of the pilot and found that almost half of taxpayers increased their reported gross receipts, and about 60 percent of those contacted reported their income more accurately the following year. In the long term, IRS sees these activities helping to reduce the tax gap. While IRS has defined high-level pilot goals, such as improving voluntary compliance and reducing the tax gap, it did not establish performance measures for these goals and has not decided on a time frame for developing them. IRS has defined broad stages for pilot implementation, but has not clearly identified measures or indicators to determine when the pilot has moved or will move from one stage to the next. IRS identified pilot staffing needs. In June 2012, IRS estimated the number of full-time equivalents (FTE) it would need to conduct field exams. IRS officials also said they track resources for some of the pilot activities, including the implementation of the payment mix methodology pilots. However, IRS’s evaluation plan has not fully identified and tracked resource needs or use, including the actual numbers of FTEs hired or management resources to design and monitor test and learn pilots. IRS identified external factors that could affect the progress or effectiveness of the overall pilot. 
It identified potential hurdles, including possible litigation and access to necessary technology solutions. However, IRS has not articulated how these factors affect the future of the pilot and what decisions it will make to address them under different scenarios. IRS evaluated pilot activity results, but there is no clear documentation of its evaluation questions or analysis plan. However, these can be inferred based on the evaluation results. According to IRS officials, one of the evaluation goals was to learn why some compliant taxpayers were identified as potential underreporters of income. IRS examined the results of closed cases to learn how to better identify compliant and noncompliant taxpayers. An example of this analysis and resulting change is described in more detail in the text box.

IRS Test and Learn Approach to Improve Identification of Noncompliant Taxpayers

One IRS pilot learning goal is to test a new case selection methodology called the payment mix methodology. IRS is learning how to improve this methodology to better identify noncompliant taxpayers. When analyzing results of the first year of the pilot program, IRS found that a significant percentage of online-only businesses—which do not accept cash—were falsely identified by the payment mix methodology as potential underreporters of cash income. To decrease the likelihood that compliant online-only businesses would be selected in future years, IRS added a line to Form 1099-K that allows the payment settlement entity to specify the aggregate gross amount of all reportable payment transactions during the calendar year where the card was not present at the time of the transaction or the card number was keyed into the terminal. Typically, this relates to online sales, phone sales, or catalogue sales. By reducing the probability that compliant small business taxpayers are identified as potential underreporters of income, IRS reduces the overall burden on compliant taxpayers.
IRS evaluated the results of all pilot activities. IRS compared the average time to complete an audit and the average dollars assessed in additional tax for each case against existing compliance and enforcement efforts. IRS could use this information to decide which pilot activities to implement in a full compliance program. IRS did not have evaluative questions and criteria to assess whether the overall pilot or the pilot activities achieved the intended goals or produced the intended results. Understandably, as IRS tests and adapts different approaches, it has learned and will continue to learn which approaches demonstrate the most promise in efficiently and effectively identifying noncompliant taxpayers. Clearly articulated evaluative questions and related analysis plans would allow IRS to determine whether the overall pilot and pilot activities are achieving results that would signal what next steps should be taken. These may include deciding the pilot can move beyond the learning stage, be expanded, or, ultimately, moved from pilot to full implementation as a compliance program. Conversely, the determination could be that the pilot and pilot activities are not achieving the intended results and should be discontinued or modified. During the early stages of the pilot, part of IRS’s evaluation of pilot activities involved assessing Form 1099-K data quality. IRS monitored potential errors that payment settlement entities could make when filling out the form, including invalid or missing taxpayer identification numbers. Such data entry errors could negatively affect IRS’s efforts to compare the data with information reported on small business taxpayer returns. When errors were identified, IRS contacted the payment settlement entities to make corrections. IRS officials told us that because of this effort, accuracy rates for matching rose from 90.3 percent for tax year 2011 to 95.4 percent for tax year 2013.
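The data quality monitoring described above—screening Forms 1099-K for invalid or missing taxpayer identification numbers before attempting a match—can be sketched as a normalize-and-match step. This is a conceptual simplification: the function names and the bare nine-digit format check are ours, and IRS's actual matching logic is considerably more involved.

```python
def normalize_tin(tin):
    """Strip hyphens from a reported TIN; return None unless exactly
    nine digits remain (catching missing or malformed entries)."""
    digits = str(tin or "").replace("-", "")
    return digits if digits.isdigit() and len(digits) == 9 else None

def match_rate(forms_1099k, return_tins):
    """Share of Forms 1099-K whose payee TIN matches a filed return."""
    matched = sum(1 for form in forms_1099k
                  if normalize_tin(form["tin"]) in return_tins)
    return matched / len(forms_1099k)
```

In this sketch, a form with a blank or eight-digit TIN can never match, so correcting such entries with the payment settlement entity directly raises the match rate—the mechanism behind the improvement from 90.3 to 95.4 percent that IRS officials described.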
IRS officials have told us that analysis of Form 1099-K data is ongoing. Since IRS lacks evaluative questions and an analysis plan for assessing the overall pilot, it does not have complete descriptions of the information or data and sources needed to assess the overall pilot against evaluation criteria, how that information will be gathered, and an assessment of data reliability. Because IRS addressed the relevance and quality of data sources in some of the early evaluations of some pilot activities, this information could feed into the development of the broader evaluation plan. IRS documented certain assumptions of its analysis. For example, in the alternative notice pilot, IRS sent assessments to taxpayers who did not respond to the notice and those who admitted to underreporting. IRS referred cases to IRS field work when taxpayers sent insufficient responses or communicated that a review of books and records was necessary. Furthermore, in the early stages of the pilot, IRS showed evidence that it checked that data were free of errors. In the first year of the pilot, officials took steps to ensure that compliance examiners understood and consistently applied decision rules to determine compliance results. IRS provided evidence that it estimated timelines and relative resource needs to move from the test and learn phase to a full compliance program. IRS outlined three scenarios to achieve a given level of compliance at program implementation. However, these scenarios were developed without a program level evaluation. Until IRS conducts an evaluation, it will not have the information it needs to determine which approach to take. Although IRS showed evidence that data, scope, and methodology limitations were considered and addressed for certain pilot activities, these limitations were not fully addressed for the overall pilot. 
IRS would first need to develop evaluative questions, assessment criteria, and an analysis plan for the overall pilot before it could clearly assess data, scope, and methodology considerations. An assessment of design limitations would include stating any limitations of pilot scope, determining comparisons against which to assess pilot results, and assessing whether the evaluation fits available time and resources. Asking these questions would help clarify the potential impact of any project design limitations when determining whether to move pilot activities toward full program implementation. IRS provided evidence that leadership from multiple offices across the agency—the Small Business and Self-Employed Division, Office of Compliance Analytics, and the Information Technology Organization—demonstrated commitment to using evaluation data to inform pilot decision making from the beginning of the pilot. Senior officials from each of these offices met weekly during the early stages of the pilot. The leadership actively engaged internal stakeholders and developed strategies to internally communicate information about pilot program activities. These strategies included organizing employee focus groups, training, and leadership updates on pilot progress. For example, in October and November of 2012, IRS provided an update on project communication status to communication directors across all operating divisions. In addition, IRS leadership engaged with external stakeholders before launching the pilot. In October 2011, IRS addressed small business representatives’ concerns about paperwork burdens by announcing that it would not require taxpayers to reconcile gross receipts and merchant card transactions. IRS also worked to address tax practitioner questions about the use of the payment mix methodology for case selection.
As a result of the outreach, for example, IRS developed and tested a tool to help tax practitioners determine if their clients would be at risk for underreporting cash transactions. IRS’s payment card matching program has the potential to enhance the agency’s ability to identify noncompliant small business taxpayers. Better identification of noncompliance would reduce the burden placed on honest taxpayers because the likelihood they would be selected for costly and time-consuming audits or other compliance contacts could be reduced. Further, more effective identification of noncompliant taxpayers means IRS can more efficiently use limited resources. IRS’s Payment Card Pilot shows promise in producing these results. However, IRS has a long road ahead to figure out whether and how the pilot, and its many activities, can be fully implemented. IRS has not clearly defined the stages of the pilot or measurable goals that it can use to determine when the pilot has moved from one stage to the next, or if it should. Without defining the stages and establishing related metrics, IRS will not be able to articulate the pilot’s status at critical points in time. Further, it will not be able to justify the investment of additional resources if it cannot demonstrate progress toward those goals. In addition, IRS has not developed a full evaluation plan that will allow for a systematic assessment of the overall pilot against evidence-based criteria. Such a plan is necessary so IRS can ensure that it is making informed decisions about moving forward. Following key elements of evaluation design will help ensure that the results of the evaluation are valid and reliable. Finally, documenting the plan’s limitations will reduce the risk that IRS will draw conclusions that are beyond what can be supported. 
To improve the evaluation of the payment card pilot, the Commissioner of Internal Revenue should take the following actions:

Clearly define the stages of the payment card pilot and establish measurable goals for determining when the pilot advances from one stage to the next.

Develop an evaluation plan for the overall pilot, building on pilot activities, to inform decisions about whether, how, and when to integrate pilot activities into overall enforcement efforts. This plan should include evaluation questions, evidence-based evaluative criteria, an analysis plan, a complete description of data to be collected, a data reliability assessment, and documentation of evaluation limitations.

We provided a draft of this report to the Commissioner of Internal Revenue and the Secretary of the Treasury for their review and comment. IRS’s Deputy Commissioner for Services and Enforcement provided written comments, which expressed appreciation to GAO for recognizing IRS’s efforts to consider taxpayer burden when implementing processes and procedures. In its response to the draft, IRS agreed to incorporate an evidence-based assessment of the payment card pilot that includes identifying clearly defined pilot stages and implementing an evaluation plan with measurable goals. IRS stated it will provide a more detailed response to our recommendations after this report has been released. These comments are reprinted in appendix V. IRS also provided us with technical comments, which we incorporated into the report as appropriate. Treasury did not provide comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-9110 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The objectives of this report are to: (1) describe the characteristics of the small business population; (2) describe how characteristics of a small business affect compliance burden; (3) describe how the Internal Revenue Service (IRS) integrates small business compliance burden considerations in decision making; and (4) assess IRS’s plan for evaluating its payment card pilot. To describe general characteristics of the small business population such as the number of small businesses and total income, we reviewed taxpayer data from IRS Statistics of Income (SOI) and studies with access to taxpayer data. We reviewed SOI documents about data reliability and sampling methodology, and interviewed officials in SOI. We reviewed reports from researchers at the U.S. Department of Treasury, Office of Tax Analysis (OTA). We interviewed two of the OTA authors about their methodology for identifying the small business population. We performed data reliability tests by comparing OTA estimates for all filers against SOI estimates and by comparing OTA estimates for tax year 2007 and 2010. We found the estimates from researchers at OTA were sufficiently reliable for our purposes of describing general characteristics of the small business population. See appendix II for a more detailed discussion of OTA researchers’ methodology and assumptions. To describe how characteristics of a small business affect compliance burden, we conducted a literature review where we reviewed IRS research papers and conference presentations, academic studies, and our prior work on taxpayer compliance burden. 
We searched relevant databases such as ProQuest, Accounting & Tax, EconLit, ABI/Inform, Nexis.com, and Tax Notes. We identified and reviewed selected IRS studies on tax compliance burden conducted over the last 11 years. We also asked IRS officials from the Research, Analysis, and Statistics (RAS) division to identify any additional IRS research assessing small business tax compliance burden and post-filing burden. To obtain information related to federal small business tax requirements, we reviewed IRS taxpayer guidance found on the IRS website including the 2015 tax calendar. We interviewed relevant IRS officials to clarify our understanding of the research and models, and to verify our analysis. To describe how IRS integrates small business compliance burden considerations in decision making, we examined IRS’s strategic plan and relevant goals and objectives related to taxpayer burden. We also reviewed IRS’s Internal Revenue Manual, which outlines, among other things, guiding principles for considering burden reduction. We interviewed IRS officials in the Small Business and Self-Employed division (SB/SE), RAS, and the Office of Taxpayer Burden Reduction about other activities IRS conducts related to taxpayer burden reduction, tools it uses to manage burden reduction efforts, and initiatives it implemented. In addition, we interviewed tax practitioners, associations, and other liaisons to the small business community to identify areas of burden associated with interactions between IRS and the small business community, and discuss what might alleviate burden. We conducted unstructured interviews with a non-generalizable sample of 12 organizations based on their knowledge of small business tax policy resulting from historical involvement and relationships with the small business community and IRS. We reviewed supporting documentation, where available. 
We selected these organizations to represent a variety of perspectives and groups within the small business community. To assess IRS’s plan for evaluating the payment card pilot, we reviewed and summarized documentation that included IRS’s Information Reporting and Document Matching Strategic Roadmap; Communication, Outreach, and Education Strategic Plan; and IRS internal presentations to the IRS Commissioner. We interviewed IRS officials from the SB/SE and Office of Compliance Analytics divisions about the pilot to link IRS’s test-and-learn approach to defining strategic goals, evaluation questions, an analysis plan, and the ability to track benefits of the pilot efforts. We compared these efforts to our guidance on program evaluation design and applied criteria adapted from the guidance to both the overall pilot and pilot activities. We conducted this performance audit from July 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For our analysis, we use estimates from researchers at the U.S. Department of the Treasury, Office of Tax Analysis (OTA), to describe the characteristics of the small business population, such as the number of small businesses and total income. There are no universally accepted criteria for defining the small business population. OTA estimates address some of the limitations of estimates based solely on the type of tax return filed by excluding certain tax returns that may not represent actual businesses.
Not all tax returns that report business income represent business entities whose principal purpose is to generate revenue or to engage in substantive business activity. For example, some C corporations can serve as investment vehicles that engage in little or no business activity. Further, partnerships may be created to redistribute profits generated by another partnership and may not generate income themselves. Filers of Form 1040, Schedule C, may be independent contractors who may more closely resemble employees than small businesses. Additionally, rental income for some individuals may be incidental and not represent business activities. Uncertainty within OTA estimates can come from: (1) assumptions that were made to distinguish small businesses engaged in substantial and substantive business activities from other entities that file the same tax return; and (2) sampling error, because the estimates are based on sampled taxpayer data. Consequently, results that are slightly higher or lower than those reported in this particular analysis may be equally valid for describing the numbers of businesses in each subgroup and the size of their incomes. We found the estimates from the researchers at OTA were sufficiently reliable for our purposes of describing general characteristics of the small business population. Throughout this report, we refer to the estimated number of small business filers as the number of small businesses. Using these estimates may understate the number of small businesses because individual taxpayers can own multiple small businesses. For example, individual taxpayers can file multiple schedules to report business activity (profit, loss, and supplemental income and loss from rental real estate activity) from different lines of business. The number of schedules filed is greater than the number of Form 1040 tax returns to which these schedules are attached due to some returns having multiple schedules.
The total income estimates are the sum of all business income reported on tax returns, including gross receipts, rents, dividends, capital gains, royalties, and interest. Total income for Schedule E filers is limited to rental real estate activity from Part I of that schedule to avoid attributing income from pass-through entities or from royalties to these businesses. The OTA analysis had access to taxpayer data made available by IRS Statistics of Income (SOI). Available data from SOI samples indicate that the sampling errors for the total number of filers and total business receipts for each type of taxpayer are less than +/- 6 percent at the 95 percent confidence level. Sampling errors of subpopulations may be higher where tax returns have been sampled at lower rates. Data were not available to determine sampling errors for the OTA total income measure; however, business receipts make up such a substantial portion of the OTA total income that we would not expect the sampling error to be significantly greater for the OTA total income estimates than it is for the SOI estimates for business receipts. For C corporations, S corporations, partnerships, and sole proprietorships in 2010, the OTA number of filers is within +/- 2 percent of SOI estimates of the number of filers. OTA total income estimates are higher than SOI total receipts estimates for these filers (by up to 20 percent) because OTA’s total income measure includes types of income other than business receipts. The estimates used in this report are based on thousands of returns from the 2010 SOI Individual, Corporate, and Partnership Studies. Although we do not know the exact number of records used for each estimate, the Individual sample has 50,464 Form 1040 returns with a Schedule C with a sample selection amount of less than $10 million and 5,804 Form 1040 returns with Schedule F but without a Schedule C. The sample selection amount is the greater of indexed negative income and indexed positive income.
The Corporate sample has 20,085 Form 1120 returns with total assets less than $10 million and size of proceeds less than $2.5 million, where proceeds is defined as the larger of the absolute value of net income (deficit) or absolute value of cash flow (which includes net income, depreciation, and depletion). The Corporate sample also has 15,741 Form 1120S returns that have total assets less than $10 million and size of proceeds less than $2.5 million. The Partnership sample has 35,744 returns with total assets or current activity measure less than $10 million. Current activity measure is the maximum of the absolute value of receipts and income/loss. In this situation, receipts is the sum of the net receipts, rental income, gross income, portfolio interest income, dividend interest income, royalty income, and net long-term capital gain/loss. Income/loss is the sum of ordinary income, net income, net income from the balance sheet, portfolio interest income, royalty income, and net long-term capital gain/loss. Table 6 shows SOI estimates for the total number of filers and total income by type of tax return for tax year 2010 (as reported by Prisinzano et al.). The OTA analysis and Prisinzano et al. apply two tests to exclude tax returns filed that do not represent filers that generate substantial income or engage in substantive business activity. The first test is a de minimis activity test that requires taxpayers to report total income or total deductions greater than $10,000, or that their sum be greater than $15,000. The second test requires that total deductions be greater than $5,000, which indicates substantive business activity based on expenses related to employees, inventories, and investment, among other things. The application of these two tests results in the exclusion of 46.7 percent of the 44.6 million total tax returns, but only 0.6 percent of the total $33.3 trillion in total income represented by these tax returns.
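As an illustration, the two screening tests can be sketched as a simple filter. This is our own hedged sketch of the rule as described above, not OTA's actual code; the function name is hypothetical, and details such as how negative amounts are treated are assumptions.

```python
def passes_screening_tests(total_income, total_deductions):
    """Illustrative sketch of the two tests used to exclude tax returns
    without substantial income or substantive business activity.
    Thresholds follow the description in the text; other details are assumed."""
    # Test 1: de minimis activity -- total income or total deductions
    # greater than $10,000, or their sum greater than $15,000.
    de_minimis = (total_income > 10_000
                  or total_deductions > 10_000
                  or total_income + total_deductions > 15_000)
    # Test 2: substantive activity -- total deductions greater than $5,000
    # (reflecting expenses for employees, inventories, and investment).
    substantive = total_deductions > 5_000
    return de_minimis and substantive
```

Under this sketch, a return reporting $20,000 of income but only $3,000 of deductions would be excluded: it passes the de minimis test but fails the substantive-activity test.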
Using this threshold of $10 million or less in both total income and total deductions, small businesses represent 99 percent of tax returns generating substantial income and engaged in substantive business activities (and more than 95 percent of returns for each type of tax return). Small businesses account for 17.8 percent of the total income for these tax returns (and more than 91 percent of total income for each type of individual filer and between 5 and 40 percent of total income for each type of corporation and partnership). Table 7 shows the OTA small business estimates are consistent across the years 2007 and 2010, and are similar to corresponding estimates for all filers. In addition to the measurement errors discussed above, OTA estimates for these tax years are affected by other factors, including changing economic conditions. In spite of these factors, small business estimates for 2010 are within +/- 11 percent of 2007 values for number of filers and within +/- 18 percent of 2007 values for total income. For each type of filer, the percent change from 2007 to 2010 for small businesses is similar to the percent change for all filers (the difference in percent change is within +/- 10 percentage points). We found similar relationships for other subpopulations reported within this report. While we do not have sampling errors for these estimates, we consider estimates of percentages that differ by more than 30 percentage points or totals that differ by more than 100 percent to be different. The figures and tables in this appendix supplement those in the second objective, providing additional information on small business tax-related activities, IRS burden model methodology, and results from IRS burden models. Table 8 provides a detailed description of tax-related activities that may create burden for small businesses.
These activities are grouped by income taxes, employer-related taxes, and third-party information reporting and industry-specific tax activities. Figure 12 provides a simplified depiction of IRS pre-filing and filing burden models. Essentially, IRS combines data from its compliance burden surveys with data it obtains from the tax returns of survey respondents in econometric models. These models estimate the relationship between taxpayer characteristics and reported burden; the models can then be used to estimate total compliance burden. IRS then uses estimates of these relationships in simulation models to predict how potential IRS administrative decisions, such as those relating to tax form design and recordkeeping requirements, may affect taxpayer burden. Although such simulation results may provide useful insights, it is difficult to assess the reliability of those results; consequently, they should be used with caution. We found IRS research estimates were reliable for our purposes of obtaining an overview of small business tax compliance costs. One difficulty is that, to be able to simulate effects on burden, IRS needed to develop a complicated methodology for apportioning aggregate burden across all of the different types of pre-filing and filing activities. There are no formal statistical tests to estimate the margins of error around the ultimate simulation results. The following tables and figure are results from IRS burden models. The data were taken from IRS studies concerning how small business characteristics such as size and industry affect small business compliance costs. Tables 9 through 11 examine the estimated pre-filing and filing monetized burden per employee, as a percentage of total receipts, and as a percentage of total assets. Table 12 provides information on total monetized business compliance costs by business entity type and total gross receipts across all businesses.
Table 13 shows the estimated average pre-filing and filing time and money burden by industry. Figure 13 provides the estimated post-filing compliance costs for individual filers.

Appendix IV: Selected Open GAO Recommendations to IRS That May Affect Small Business Taxpayer Burden

Open recommendations: The Commissioner of Internal Revenue should direct the appropriate officials to take the following three actions: (1) Systematically and periodically compare its telephone service to the best in the business to identify gaps between actual and desired performance. (2) Include specific countermeasures or options in risk management plans that could guide a response when an adverse event occurs. (3) Develop outcomes that are measurable and plans to analyze service changes that allow valid conclusions to be drawn so that information can be conveyed to Congress, IRS management, and others about the effectiveness of IRS’s service changes and their impact on taxpayers.

Status: (1) IRS disagreed with this recommendation, noting in February 2015 that it is difficult to identify comparable organizations with a size or scope similar to that of the IRS to identify performance gaps, and that such efforts would not yield improved results over the benchmarking process currently used by IRS. We disagree that IRS’s telephone operations cannot be compared to others. We believe this recommendation remains valid and should be implemented. Report: Tax Filing Season: 2014 Performance Highlights the Need to Better Manage Taxpayer Service and Future Risks, GAO-15-163 (Washington, D.C.: Dec. 16, 2014). (2) IRS agreed with this recommendation and, in February 2015, reported it has already included specific countermeasures or options in risk management plans for those risks ranked highest in likelihood and impact. IRS also reported it considers such efforts to be ongoing as it develops new risk management plans over time.
(3) IRS agreed with this recommendation and, in February 2015, reported it is developing outcome measures and plans for analysis to identify and report on the effect of service changes during the 2015 filing season. IRS also reported it anticipates completing this analysis by the end of fiscal year 2015.

To reduce the need for taxpayer calls, ensure that IRS is providing taxpayers with more realistic time frames on when IRS will respond, and more efficiently use IRS resources, the Commissioner of the Internal Revenue Service should: (1) Collect data to analyze whether IRS is responding within the time frames cited in the revised audit notices. (2) If IRS delays are continuing, further revise the notices to provide more realistic response times based on the data and take other appropriate actions to ensure efficient use of IRS tax examiner resources.

According to IRS officials: (1) Correspondence audit program officials analyzed fiscal years 2012-2015 data to provide a monthly breakdown of the volumes of taxpayer correspondence worked in less or more than 75 days. Supporting documentation from IRS is pending.

Open recommendations: goals on ensuring compliance in a cost-effective way while minimizing taxpayer burden. To better inform decisions being made about the correspondence audit program, the Commissioner of the Internal Revenue Service should: (6) Document how the decisions are to be made about the correspondence audit program using performance information. (7) Track and use other program data that have not been used to provide more complete performance information, such as taxpayer burden and experience.

Status: their respective inventory levels at the time notices are sent. The notices are expected to be available and implemented in January 2016 after necessary program updates. Supporting documentation from IRS is pending.
To better ensure an effective investment of resources in the Correspondence Examination Assessment Project (CEAP) efforts, the Commissioner of the Internal Revenue Service should: (8) Clearly document the intended benefits of ongoing efforts to address identified problems, and the process for measuring and tracking actual benefits. (9) Develop a plan and timeline for implementing the CEAP contractor’s recommendations on possible ways to improve the (a) selection of correspondence audit workload and (b) allocation of resources between providing telephone assistance and reviewing taxpayer correspondence.

(3) through (5) IRS will review current documentation and ensure there is a clear link establishing the correspondence audit program objectives and measures with the overall IRS goals and objectives. Officials also said they will update official guidance as warranted. Actions on these three recommendations are due by March 2016. Supporting documentation on timeframes for specific actions related to these recommendations is pending. (6) IRS will thoroughly document the original plan development process. Action on this recommendation is due by March 2016. Supporting documentation on timeframes for specific actions related to these recommendations is pending. Report: IRS Correspondence Audits: Better Management Could Improve Tax Compliance and Reduce Taxpayer Burden, GAO-14-479 (Washington, D.C.: June 5, 2014).

Open recommendations: agreement data into accounts. Report: 2013 Tax Filing Season: IRS Needs to Do More to Address the Growing Imbalance between the Demand for Services and Resources, GAO-14-133 (Washington, D.C.: Dec. 18, 2013).

Status: since the agency changed its installment agreement program, it has decided to evaluate those changes before exploring whether adopting our recommendation will yield increased efficiencies and lower costs without adversely impacting tax administration. IRS officials stated they will provide a status update in October 2015.
To increase the effectiveness of IRS’s examinations of individual tax returns, the Commissioner of the Internal Revenue Service should: (1) Transcribe data from paper-filed Form 1040 Schedules C and E that are not currently transcribed and make those data available to Small Business/Self-Employed Division (SB/SE) examiners for classification. If IRS has evidence that the costs related to transcribing all such data on Schedules C and E are prohibitive, IRS could do one or both of the following actions: (a) transcribe less data by transcribing only the missing data for selected line items, such as certain large expense line items; or (b) develop a budget proposal to fund an initiative for transcribing Schedules C and E. As of March 2015, IRS agreed to study: (1) whether to increase data transcription of additional tax return information as GAO recommended in May 2013. The agency also agreed to study whether to use more data from electronically filed returns. IRS's study is scheduled to be completed by November 2015, and is expected to weigh the benefits to the agency and the impacts on taxpayers who file returns electronically. (2) expanding the use of electronic data to enhance return classification while weighing the benefits of increased information against the risks of potential impacts to electronic filing. (2) Make all data collected from electronically submitted Form 1040s available to examiners conducting classification. Report: Tax Administration: IRS Could Improve Examinations by Adopting Certain Research Program Practices, GAO-13-480 (Washington, D.C.: May 24, 2013). The Commissioner of Internal Revenue should: (1) Outline a strategy that defines appropriate levels of telephone and correspondence service and wait time, and lists specific steps to manage service based on an assessment of time frames, demand, capabilities, and resources.
(1) IRS has taken steps to modify services provided to taxpayers, but has not yet developed a strategy outlining IRS’s customer service goals. (2) Tailor appropriate and timely interventions with taxpayers who file balance due returns by pilot testing risk-based approaches that could include (a) implementing the Advanced Consolidated Data Analytics plan, and (b) using more data-driven methods to identify the most appropriate method for contacting a taxpayer. Report: 2012 Tax Filing: IRS Faces Challenges Providing Service to Taxpayers and Could Collect Balances Due More Effectively, GAO-13-156 (Washington, D.C.: Dec. 18, 2012). (2) IRS agreed with our December 2012 recommendation to pilot more risk-based approaches for contacting taxpayers who have a balance due. However, IRS has reported that because that project was not funded, it used an alternative model to conduct analysis. IRS implemented its updated model in late April 2014. In October 2014, IRS reported that better (more productive) cases will be assigned to select work streams. Further, in January 2015, IRS officials said they will be placing revised models into production in fiscal year 2015.

Open recommendations: To help ensure that IRS uses its examination resources efficiently, the Commissioner of the Internal Revenue Service should: (1) Document and analyze the results of examinations involving the Small Employer Health Insurance Tax Credit to identify how much of those results are related to the credit versus other tax issues being examined, what errors are being made in claiming the credit, and when the examinations of the credit are worth the resource investment.

Status: (1) As of October 2014, SB/SE analyzed a statistical sample of 2010 examination results for the Small Employer Health Insurance Tax Credit. As a result of the research, SB/SE concluded that the findings do not justify selecting a specific number of returns for examination with the Credit as the primary issue.
Instead, they will identify issues as part of the normal classification process, and prepare guidelines for classifiers to reference when selecting returns for examination. (2) Related to the above analysis of examination results on the credit, identify the types of errors with the credit that could be addressed with alternative approaches, such as soft notices. Report: Small Employer Health Tax Credit: Factors Contributing to Low Use and Complexity, GAO-12-549 (Washington, D.C.: May 14, 2012). (2) As of October 2014, IRS told us that Math Error Authority, fully implemented in January 2014, addresses many of the common errors in claiming the Credit. Therefore, there is not an immediate need for alternative approaches, such as soft notices. However, IRS will still consider alternatives. The Commissioner of the Internal Revenue Service should: (1) Develop a new refund timeliness measure and goal to more appropriately reflect current capabilities. (1) As of October 2014, IRS has yet to develop a new refund timeliness measure. However, IRS has taken steps to identify the number of days it takes to issue a refund in addition to the percentage of refunds received in daily increments from 5 to 60 days. (2) Complete an Internet strategy that (a) provides a justification for the implementation of online self-service tools, and includes an assessment of providing online self-service tools that allow taxpayers to access and update elements of their account online; (b) acknowledges the costs and benefits to taxpayers of new online services; (c) sets the time frame for when the online service would be created and available for taxpayer use; and (d) includes a plan to update the strategy periodically. (2) IRS has made progress in improving its Internet online services strategy. In September 2012, IRS provided us with an updated version of that strategy. However, IRS still needs to take a number of steps to more fully develop its long-term online strategy.
As of February 2015, IRS officials reported that IRS does not have a separate online services strategy. Report: 2011 Tax Filing: Processing Gains, but Taxpayer Assistance Could Be Enhanced by More Self-Service Tools, GAO-12-176 (Washington, D.C.: Dec. 15, 2011).

Open recommendations: questions, upcoming outreach, and description of the letter ruling process.

Status: coordination. However, IRS officials said they share with external stakeholders the general timeframes for upcoming guidance. Report: Information Reporting: IRS Could Improve Cost Basis and Transaction Settlement Reporting Implementation, GAO-11-557 (Washington, D.C.: May 19, 2011).

To gain efficiencies and improve taxpayer service, the Commissioner of Internal Revenue should direct the appropriate officials to: (1) Determine a customer service telephone standard, and the resources required to achieve this standard based on input from Congress and other stakeholders. (2) Assess business units’ needs for holding Contact Analytics calls beyond 45 days and store calls for this period or document that the costs of doing so exceed the benefits. (3) Establish a performance measure for taxpayer correspondence that includes providing timely service to taxpayers. Report: 2010 Tax Filing Season: IRS’s Performance Improved in Some Key Areas, but Efficiency Gains Are Possible in Others, GAO-11-111 (Washington, D.C.: Dec. 16, 2010).

(1) As of August 2014, IRS’s position remained that its measure of telephone service does not need to be revised and the current process for establishing IRS telephone plans is sufficient. However, we continue to believe that a telephone standard would serve as a means of communicating to Congress and others what IRS believes would constitute good service. (2) IRS disagreed with this recommendation. As of August 2014, IRS officials continue to maintain that increasing the recorded call storage beyond 45 days would not be a low-cost effort.
However, we continue to believe that storing calls for extended periods would allow IRS to better identify trends and taxpayer concerns, thus offsetting the costs. (3) IRS agreed with this recommendation and started using more detailed performance measures that include an overaged/timeliness measure for its correspondence beginning in fiscal year 2011. However, in April 2014, we reported that overaged correspondence increased from 25 to 47 percent; thus, we continue to believe that elevating this measure to IRS’s suite of balanced measures would help provide more visibility and ultimately better service.

Open recommendations: To gauge the extent of 1099-MISC payer noncompliance and its contribution to the tax gap, we recommend that the Commissioner of the Internal Revenue Service, as part of future research studies: (1) Develop an estimate of 1099-MISC payer noncompliance. (2) Determine the nature and characteristics of those payers that do not comply with 1099-MISC reporting requirements so that this information can be factored into an IRS-wide strategy for increasing 1099-MISC payer compliance.

Status: (1) According to IRS, developing such an estimate requires a multi-pronged approach and a large amount of coordinated effort. As of September 2014, IRS estimates results will be available in December 2015. (2) IRS researchers are collecting data on 1099-MISC reporting as part of its National Research Program study on employment taxes, a program that involves examinations of a sample of tax returns expected to culminate in 2015. As of September 2014, IRS estimates results will be available in December 2015. Report: Tax Gap: IRS Could Do More to Promote Compliance by Third Parties with Miscellaneous Income Reporting Requirements, GAO-09-238 (Washington, D.C.: Jan. 28, 2009). In addition to the contact named above, Brian James, Assistant Director; Sonya Phillips, Analyst-in-Charge; Courtney Liesener, Robert MacKay, James R.
White, and Nell Williams made major contributions to the report. Robert Gebhart, Kirsten Lauber, Donna Miller, Edward Nannenhorn, Karen O’Conor, Andrew Stephens, and James Wozny also provided assistance.

A challenge IRS faces is balancing efforts to minimize taxpayer burden with efforts to ensure compliance with the tax code. Small businesses are a vital source of economic growth in the United States. Reducing their costs for complying with the tax code may free up resources to expand, hire new employees, and contribute to the growth of the U.S. economy. GAO was asked to examine small business tax compliance burden and IRS's payment card pilot that addresses taxpayer non-compliance. This report: (1) describes characteristics of the small business population; (2) describes how characteristics of a small business affect compliance burden; (3) describes how IRS integrates small business compliance burden considerations in decision-making; and (4) assesses IRS's plan for evaluating its payment card pilot. To answer these objectives, GAO analyzed Treasury and IRS data, research, and other documentation and interviewed agency officials. GAO used its guidance on program evaluation design to assess IRS's payment card pilot evaluation plan. According to estimates produced by government tax researchers using 2010 taxpayer data, small businesses (defined in the research as individuals or entities with substantive business activity but with less than $10 million in total income and deductions) make up 99 percent of all businesses. Approximately 69 percent of small businesses (about 16 million) are individual taxpayers who report business income, and the remaining 31 percent (or roughly 7.3 million) are partnerships or corporations. Small businesses with at least one employee make up about 20 percent of the small business population, but produce about 71 percent of total small business income.
Small businesses undertake a number of tax compliance-related activities that create burden. These activities can be grouped into general categories such as income tax activities, employer-related tax activities, and third-party information reporting activities. The tax compliance burden associated with these activities varies depending on the businesses' asset size, filing entity type (e.g., sole proprietor, partnership), number of employees, and industry type. According to IRS research, compliance burden increases with the size of businesses, whether measured in terms of assets, receipts, or employment. IRS also measured money and time burden as a portion of total business receipts, total assets, and burden per employee. Across all three measures, IRS results were consistent with the assumption that small businesses face significant fixed compliance costs combined with decreasing marginal costs as the business grows. IRS's decision-making framework for administering the tax system includes consideration of small business compliance burden. For example, IRS's strategic plan identifies reducing taxpayer burden as a strategic goal. IRS provided examples of how it works with internal and external stakeholders to reduce taxpayer burden on small businesses. For example, IRS collaborated with Treasury and external stakeholders to develop a simplified method for some small businesses to calculate a home office deduction, which was introduced in January 2013. Previously, businesses had to complete a complex property depreciation calculation. To improve tax compliance among small businesses, in 2012, IRS began piloting a program that compares payment data from payment settlement entities (such as credit card companies) with income reported by small businesses. IRS is testing ways to use payment data to detect underreporting of taxable income while minimizing small business taxpayer burden. 
While IRS's plans for evaluating the pilot include many key evaluation elements that GAO identified, other elements are missing. For example, IRS has defined high-level pilot goals such as improving voluntary compliance and reducing the tax gap, but has not established measures for determining progress against these goals. Additionally, the plan did not adequately document evaluative questions, data collection needs, or the evaluative criteria necessary to assess whether pilot activities produced the intended results. Without these and other elements, IRS cannot ensure it is making evidence-based decisions about expanding and integrating pilot activities into broader small business compliance improvement efforts. To improve the evaluation of the payment card pilot, GAO recommends that IRS clearly define the stages of the pilot and establish measurable goals for determining when the pilot progresses from one stage to the next, and develop an evaluation plan for the overall pilot that includes evaluation questions, complete descriptions of needed data, and evaluation criteria. IRS agreed to take the recommended actions.
After the attacks of September 11, 2001, Congress and the President enacted several new laws intended to address many of the vulnerabilities exploited by the terrorists by strengthening layers of defense related to aviation and border security. A summary of key legislative efforts follows. To strengthen transportation security, the Aviation and Transportation Security Act (ATSA) was signed into law on November 19, 2001, with the primary goal of strengthening the security of the nation’s aviation system. To this end, ATSA created the Transportation Security Administration (TSA) as an agency within the Department of Transportation (DOT) with responsibility for securing all modes of transportation, including aviation. ATSA included numerous requirements with deadlines for TSA to implement that were designed to strengthen the various aviation layers of defense. For example, ATSA required TSA to create a federal workforce to assume the job of conducting passenger and checked baggage screening from air carriers at commercial airports. The act also gave TSA regulatory authority over all transportation modes. After ATSA was enacted, the Homeland Security Act of 2002 consolidated most federal agencies charged with providing homeland security, including securing our nation’s borders, into the newly formed Department of Homeland Security (DHS), which was created to improve, among other things, coordination, communication, and information sharing among the multiple federal agencies responsible for protecting the homeland. Legislation also was enacted to enhance various aspects of border security. The Homeland Security Act, for example, generally grants DHS exclusive authority to issue regulations on, administer, and enforce the Immigration and Nationality Act and all other immigration and nationality laws relating to the functions of U.S. consular officers in connection with the granting or denial of visas. 
The Homeland Security Act authorized DHS, among other things, to assign employees to U.S. embassies and consulates to provide expert advice and training to consular officers regarding specific threats related to the visa process. New legislation also was enacted that contained provisions affecting a major border security initiative that had begun prior to 9/11—a system for integrating data on the entry and exit of certain foreign nationals into and out of the United States, now known as US-VISIT (U.S. Visitor and Immigrant Status Indicator Technology). In 2001, the USA PATRIOT Act provided that, in developing this integrated entry and exit data system, the Attorney General (now Secretary of Homeland Security) and Secretary of State were to focus particularly on the utilization of biometric technology (such as digital fingerprints) and the development of tamper-resistant documents readable at ports of entry (either a land, air, or sea border crossing associated with inspection and admission of certain foreign nationals). It also required that the system be able to interface with law enforcement databases for use by federal law enforcement to identify and detain individuals who pose a threat to the national security of the United States. In addition, the Enhanced Border Security and Visa Entry Reform Act of 2002 required that, in developing the integrated entry and exit data system for ports of entry, the Attorney General (now Secretary of Homeland Security) and Secretary of State implement, fund, and use the technology standard that was required to be developed under the USA PATRIOT Act at U.S. ports of entry and at consular posts abroad. 
The act also required the establishment of a database containing the arrival and departure data from machine-readable visas, passports, and other travel and entry documents possessed by aliens and the interoperability of all security databases relevant to making determinations of admissibility under section 212 of the Immigration and Nationality Act. (For additional information on legislative requirements related to US-VISIT, see GAO, Border Security: US-VISIT Faces Strategic, Technological, and Operational Challenges at Land Ports of Entry, GAO-07-248 [Washington, D.C.: December 2006]). In December 2004, the Intelligence Reform and Terrorism Prevention Act of 2004 was enacted, containing provisions designed to address many of the transportation and border security vulnerabilities identified by, and recommendations made by, the 9/11 Commission. It included provisions designed to strengthen aviation security, information sharing, visa issuance, border security, and other areas. For example, the act mandated that TSA develop a passenger prescreening system that would compare passenger information for domestic flights to government watch list information, a function that was at the time, and still is, being performed by air carriers. The act also required the development of risk-based priorities across all transportation modes and a strategic plan describing roles and missions related to transportation security for encouraging private sector cooperation and participation in the implementation of such a plan. In addition, the act required DHS to develop and submit to Congress a plan for full implementation of US-VISIT as an automated biometric entry and exit data system and required the collection of biometric exit data for all individuals required to provide biometric entry data. In an effort to increase homeland security following the terrorist attacks on the United States, President Bush issued the National Strategy for Homeland Security in July 2002. 
The strategy sets forth overall objectives to prevent terrorist attacks within the United States, reduce America’s vulnerability to terrorism, and minimize the damage and assist in the recovery from attacks that may occur. The strategy is organized into six critical mission areas, including (for purposes of this report) one on border and transportation security. For this mission area, in particular, the strategy specified several objectives, including ensuring the integrity of our borders and preventing the entry of unwanted persons into our country. To accomplish this, the strategy provides for, among other things, reform of immigration services, large-scale modernization of border crossings, and consolidation of federal watch lists. It also acknowledges that accomplishing these goals will require overhauling the border security process. The President has also issued 16 homeland security presidential directives (HSPD), in addition to the strategy that was issued in 2002, providing additional guidance related to the mission areas outlined in the National Strategy. For example, HSPD-6 sets forth policy related to the consolidation of the government’s approach to terrorism screening and provides for the appropriate and lawful use of terrorist information in screening processes. HSPD-11 builds upon this directive by setting forth the nation’s policy with regard to comprehensive terrorist-related screening procedures through detecting, identifying, tracking, and interdicting people and cargo that pose a threat to homeland security, among other things. Additionally, HSPD-7 establishes a national policy for federal departments and agencies to identify and prioritize critical infrastructure and key resources and to protect them from terrorist attacks. 
(For additional information on the National Strategy for Homeland Security and related presidential directives, see GAO, Homeland Security: Agency Plans, Implementation, and Challenges Regarding the National Strategy for Homeland Security, GAO-05-33). The federal departments with primary security-related responsibilities for aviation and border security after 9/11—the frontline departments providing key layers of defense—which are included in this report are shown in figure 1. The terrorist attacks of September 11, 2001, became the impetus for change in both the way in which airline passengers are screened and the entities responsible for conducting the screening. With the passage of ATSA, TSA assumed responsibility for civil aviation security from the Federal Aviation Administration (FAA), and for passenger and baggage screening from the air carriers. As part of this responsibility, TSA oversees security operations at the nation’s more than 400 commercial airports, including passenger and checked baggage screening operations. One of the most significant changes mandated by ATSA was the shift from the use of private-sector screeners to perform airport screening operations to the use of federal screeners. Prior to ATSA, passenger and checked baggage screening had been performed by private screening companies under contract to airlines. ATSA required TSA to create a federal workforce to assume the job of conducting passenger and checked baggage screening at commercial airports. The federal workforce was in place, as required, by November 2002. While TSA took over responsibility for passenger checkpoint and baggage screening, air carriers have continued to conduct passenger prescreening (the process of checking passengers’ names against federal watch list data after an airline reservation is made). As noted above, the Intelligence Reform and Terrorism Prevention Act requires that TSA take over this responsibility from air carriers. 
In addition to establishing requirements for passenger and checked baggage screening, ATSA charged TSA with the responsibility for ensuring the security of air cargo. TSA’s responsibilities include, among other things, establishing security rules and regulations covering domestic and foreign passenger carriers that transport cargo, domestic and foreign all-cargo carriers, and domestic indirect air carriers—carriers that consolidate air cargo from multiple shippers and deliver it to air carriers to be transported; and overseeing implementation of air cargo security requirements by air carriers and indirect air carriers through compliance inspections. In general, TSA inspections are designed to ensure air carrier compliance with air cargo security requirements, while air carrier inspections focus on ensuring that cargo does not contain weapons, explosives, or stowaways. ATSA also granted TSA the responsibility for overseeing U.S. airport operators’ efforts to maintain and improve the security of airport perimeters, the adequacy of controls restricting unauthorized access to secured areas, and security measures pertaining to individuals who work at airports. While airport operators, not TSA, have direct day-to-day operational responsibilities for these areas of security, ATSA directs TSA to improve the security of airport perimeters and the access controls leading to secured airport areas, as well as take measures to reduce the security risks posed by airport workers. Our nation’s current border security process is intended to control the entry and exit of foreign nationals seeking to enter or remain in the United States as well as prevent hazardous cargo or materials from being transported into the country. The primary federal agencies involved in this effort are the Department of State’s Bureau of Consular Affairs and DHS’s Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE). 
Managing and Administering the Visa Process

The first layer of border security begins at the State Department’s overseas consular posts, where State’s consular officers are to adjudicate visa applications for foreign nationals who wish to enter the United States. In deciding to approve or deny a visa, consular officers are on the front line of defense in protecting the United States against potential terrorists and others whose entry would likely be harmful to U.S. national interests. Consular officers must balance this security responsibility against the need to facilitate legitimate travel. The process for determining who will be issued or refused a visa contains several steps, including documentation reviews, in-person interviews, collection of biometrics (fingerprints), and cross-referencing an applicant’s name against a name-check database in order to identify terrorists and other aliens who are potentially ineligible for visas based on criminal histories or other reasons specified by federal statute. In addition, State provides guidance, in consultation with DHS, to consular officers regarding visa policies and procedures and has the lead role with respect to foreign policy-related visa issues. While State manages the visa process, DHS is responsible for establishing visa policy, reviewing implementation of the policy, and providing additional direction. In addition, DHS had designated ICE to oversee efforts to review applications and provide expert advice and training to consular officers regarding specific threats related to the visa process at certain overseas posts.

Border Screening and Inspection Processes for Ports of Entry

CBP is responsible for conducting immigration and customs inspections for aliens entering the United States at official border crossings (air, land, and sea ports of entry). 
CBP enforces immigration laws by screening and inspecting international travelers who enter the country through ports of entry. As part of this process, CBP officers verify travelers’ identities through inspection of travel documents, screen travelers against terrorist watch lists, and scan or enter passport data into databases to verify travelers’ identities. CBP also is responsible for conducting customs-related inspections of cargo at ports of entry and for ensuring that all goods entering the United States do so legally. In addition, CBP conducts prescreening of passengers on international flights bound for or departing from the United States. Specifically, CBP reviews biographical data and passport numbers provided by air carriers and conducts queries against terrorist watch lists and law enforcement and immigration databases to determine whether any passengers are to be referred to secondary inspection (whereby passengers are selected for more in-depth review of their identity and documentation) prior to the arrival of the aircraft at a U.S. port of entry. The consolidated terrorist watch list is an important tool used by federal agencies to help secure our nation’s borders. This list provides decision makers with information about individuals who are known or suspected terrorists, so that these individuals can either be prevented from entering the country, apprehended while in the country, or apprehended as they attempt to exit the country. After 9/11, various government watch lists were consolidated into one watch list, which is maintained by the FBI’s Terrorist Screening Center (an entity that has been operational since December 2003 under the administration of the FBI). The consolidated watch list maintained by the center is the U.S. government’s master repository for all known and suspected international and domestic terrorist records used for watch list-related screening. 
The consolidated watch list is an important homeland security tool used by federal frontline screening agencies, including the departments of State, Justice, and Homeland Security. Based upon agency-specific policies and criteria, relevant portions of the consolidated watch list can be used in a wide range of security-related screening procedures. For instance, air carriers and CBP use subsets of the consolidated watch list to prescreen passengers; State Department consular officers use the information in the visa application process; CBP officers use watch list data as part of the visitor inspection process at ports of entry, and state and local law enforcement officers use watch list data to screen apprehended individuals during traffic stops and for other purposes. In recent years, we, along with Congress (most recently through the Intelligence Reform and Terrorism Prevention Act of 2004); the executive branch (e.g., in presidential directives); and the 9/11 Commission have required or advocated that federal agencies with homeland security responsibilities utilize a risk management approach to help ensure that finite national resources are dedicated to assets or activities considered to have the highest security priority. We have concluded that without a risk management approach, there is limited assurance that programs designed to combat terrorism are properly prioritized and focused. Thus, risk management, as applied in the homeland security context, can help to more effectively and efficiently prepare defenses against acts of terrorism and other threats. A risk management approach entails a continuous process of managing risk through a series of actions, including setting strategic goals and objectives, performing risk assessments, evaluating alternative actions to reduce identified risks by preventing or mitigating their impact, selecting actions to undertake by management, and implementing and monitoring those actions. 
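The prioritization step of such a risk management approach can be sketched simply. This is an illustrative sketch using the risk-as-product formulation often cited in homeland security risk analysis (risk as a function of threat, vulnerability, and consequence); the asset names and all scores below are hypothetical assumptions, not drawn from any actual agency assessment.

```python
# Illustrative sketch: scoring and ranking assets by estimated risk so that
# finite resources go to the highest-priority items first. Asset names and
# scores are hypothetical, not from any actual risk assessment.

def risk_score(threat, vulnerability, consequence):
    """A common formulation: risk as the product of the likelihood of an
    attack (threat), the likelihood it succeeds (vulnerability), and the
    severity of its impact (consequence)."""
    return threat * vulnerability * consequence

# Hypothetical assets with (threat, vulnerability, consequence) estimates.
assets = {
    "airport_checkpoint": (0.7, 0.4, 9.0),
    "cargo_facility":     (0.5, 0.6, 7.0),
    "perimeter_fence":    (0.3, 0.8, 5.0),
}

# Rank assets so the highest-risk items are addressed first.
ranked = sorted(assets, key=lambda a: risk_score(*assets[a]), reverse=True)
print(ranked)
```

The ranking, rather than the absolute scores, is what drives resource allocation: alternatives that reduce threat, vulnerability, or consequence for the top-ranked assets are evaluated, selected, and then monitored, closing the loop described above.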
TSA and other agencies have taken steps to strengthen the various layers of commercial aviation defense—including passenger prescreening (conducted after a reservation is made), passenger checkpoint screening (conducted once passengers are at the airport and proceeding to the gate with any carry-on bags), and in-flight security—that were exploited by the hijackers on 9/11. Many of the vulnerabilities related to these areas have been addressed through new legislation passed by Congress and policies and procedures adopted by various federal agencies, though opportunities exist for additional improvements. For example, passengers selected for additional screening after they make their airline reservations receive greater scrutiny prior to boarding, but we have reported that more work is needed to help ensure that the selection process accurately identifies such passengers, and TSA has yet to take full responsibility for this process, as mandated. In other areas, passenger checkpoint screening procedures and technologies have been enhanced to aid in detecting prohibited items, and security measures for preparing for or responding to on-board threats in flight, and coordinating responses from the ground, have been strengthened. In addition, other layers of defense in our aviation system have been strengthened, such as checked baggage and air cargo screening, though challenges remain. In baggage screening, for example, while TSA now screens 100 percent of checked baggage using explosive detection systems, enhancing the effectiveness of current baggage screening technologies—and finding the most cost-effective approaches for deploying baggage screening systems to detect explosives—remains challenging. Finally, because we cannot afford to protect everything against all threats in the post-9/11 era, choices must be made about targeting security priorities. 
Thus, great care needs to be taken to assign available resources to address the greatest risks, along with selecting those strategies that make the most efficient and effective use of resources—within aviation as well as among other transportation security modes, such as passenger rail and maritime industries. TSA and other federal agencies have begun focusing on identifying and prioritizing security needs in these and other areas using a risk-based approach to guide security-related decision making. In addition, efforts are under way to enhance cooperation with domestic and international partners on a broad array of security concerns. At the time of the 9/11 attacks, federal and airline industry rules for commercial airline travel reflected a system that sought to balance security concerns with the need to facilitate consumer travel and manage growing demand. The events of that day revealed many ways in which more stringent security measures were needed for a commercial aviation system that was evidently vulnerable to terrorism. In particular, the nation’s layered system of defense for aviation—including passenger prescreening, passenger checkpoint screening, and in-flight security measures—were not designed to stop the terrorist hijackers from boarding and taking control of the aircraft. A review of aviation security conditions in place prior to 9/11, and the many federal actions taken since then to mitigate the known vulnerabilities, suggest that we have come a long way toward making air travel safer. That said, our work, and that of others, has identified additional actions that are needed to resolve strategic and operational barriers to further enhance the layers of defense for the nation’s aviation system. 
The prescreening of passengers—the process of identifying passengers who may pose a security risk before they board an aircraft—is an important first layer of defense that is intended to help officials focus security efforts on those passengers representing the greatest potential threat. At the time of the attacks, the passenger prescreening process was made up of two components performed by air carriers in conjunction with FAA: (1) a process to compare passenger names with names on a government-supplied terrorist watch list (i.e., the identity-matching process); and (2) a computer-assisted prescreening system that was used to select passengers requiring additional scrutiny. With respect to the first of these passenger prescreening components, after passengers made their airline reservations, the air carriers used the information passengers had provided (such as name and address) to check them against a no-fly list—a government watch list of persons who were considered by the FBI to be a direct threat to U.S. civil aviation, and which was distributed to the U.S. air carriers by FAA. None of the 19 hijackers, who purchased their airline tickets for the four 9/11 flights in a short period at the end of August 2001 using credit cards, debit cards, or cash, was on the no-fly list. This list contained the names of just 12 terrorist suspects; the information for the no-fly list came from one source, the FBI. Other government lists in place at the time contained the names of many thousands of known and suspected terrorists—but were not used to prescreen airline passengers. In the aftermath of the terrorist attacks, the federal government recognized that effective prescreening of airline passengers largely depended on obtaining accurate, reliable, and timely information on potential terrorists and gave priority attention to, among other things, developing more comprehensive and consolidated terrorist watch lists. 
In response, in part, to recommendations by us, government watch lists were subsequently consolidated into a terrorist screening database—also known as the consolidated watch list—maintained by the FBI’s Terrorist Screening Center. The consolidated watch list maintained by the center is the U.S. government’s master repository for all known and suspected international and domestic terrorist records used for watch list-related screening. This watch list database contains records from several sources, including the FBI’s list of terrorist organizations and information from the intelligence community on the identity of any known terrorists with international ties. For aviation security purposes, a portion of this consolidated watch list is exported by the Terrorist Screening Center and incorporated into TSA’s no-fly and selectee lists. (While according to TSA, persons on the no-fly list should be precluded from boarding an aircraft bound for, or departing from, the United States, any person on the selectee list is to receive additional screening before being allowed to board.) TSA provides updated lists to air carriers for use in prescreening passengers and provides assistance to air carriers in determining whether passengers are a match with persons on the lists. As of June 2006, the number of records in the consolidated watch list that had been extracted for the no-fly and selectee lists had been increased significantly (up from 12 records available on 9/11). With respect to the second component of passenger prescreening, a computer-assisted prescreening system was in place on 9/11, in which data related to a passenger’s reservation and travel itinerary were compared by the air carriers against behavioral characteristics used to identify passengers who appeared to pose a higher than normal risk, and who therefore would be selected for additional security attention prior to their flights. 
While nine of the 9/11 terrorists were selected for additional scrutiny by the air carriers’ computer-assisted prescreening process, there was little consequence to their selection because, at the time, selection only entailed having one’s checked baggage screened for explosives or held off the airplane until one had boarded; it was not geared toward identifying the weapons and tactics used by the hijackers. The consequences of selection reflected the view that non-suicide bombing was the most substantial risk to domestic aircraft and were designed to identify individuals who might try to bomb a passenger jet using methods similar to those employed in the 1988 bombing of Pan Am Flight 103 over Lockerbie, Scotland, in which a bomb was placed in checked luggage. After the passage of ATSA in November 2001, which created TSA as the agency responsible for ensuring the security of aviation and other transportation modes, TSA took over responsibility for the secondary screening process from the air carriers. TSA subsequently changed the consequences for passengers selected by the prescreening process. Currently, passengers who are selected for secondary screening, either because they are on TSA’s selectee list or because they are selected by an air carrier’s computer-assisted passenger prescreening system, receive more comprehensive secondary screening. Specifically, all these selectees not only receive greater checked baggage screening than nonselectees, as was the case at the time of the terrorist attacks, but also receive additional physical screening, such as a hand-search of their luggage and a more thorough physical inspection of their person at the checkpoint. All of these efforts have helped to transform the prescreening process into a more robust layer of defense than existed prior to 9/11. 
Nevertheless, the federal government still faces challenges related to improving the identity-matching portion of the prescreening process to help ensure that known or suspected terrorists are identified before they can board aircraft. For example, while the process of developing and maintaining terrorist watch lists to be used in the identity-matching process requires continuous effort, and no watch list can ever promise to contain a match for every potential traveler, ensuring the quality of watch list data nevertheless remains a key challenge. Concerns have been raised about the overall quality of the consolidated watch list—in particular, that the quality of data in the watch lists varies, and that the underlying accuracy of the data in the consolidated watch list has not been fully determined. The Department of Justice Inspector General reported in June 2005 that the Terrorist Screening Center could not ensure the information in the consolidated watch list database maintained by the center was complete and accurate. For example, the database did not contain names that should be included in watch lists, according to the Inspector General, and it contained inaccurate information about some persons who were on the lists. According to the Inspector General’s report, the Terrorist Screening Center is working on completing a record-by-record quality assurance review of the watch lists to ensure that each record contains the required data to improve watch list quality. In addition, screening center officials have recently stated that all records on the no-fly list are being re-vetted using newly developed no-fly list inclusion guidance to determine if each individual truly belongs on the list. We have work under way addressing the law enforcement response that agencies take when an individual on the watch list is encountered. 
A second challenge that affects the accuracy of the current identity-matching process relates to the nature of the information available to air carriers and the procedures used to match passenger identities against the no-fly and selectee lists that are part of the consolidated terrorist watch list. Although air carriers are required to compare the information supplied by passengers against the names that appear on the no-fly and selectee lists, there is no uniform identity matching process or common software that all air carriers are required to use to conduct their identity matching procedures. In addition, the technical sophistication of air carrier identity matching techniques also varies. Some identity matching technologies might correctly discriminate between “John Smith” and “John Smythe” when comparing these names against the consolidated terrorist watch list, while others may not. Different identity matching results can lead to a passenger being boarded on one carrier’s flight while being denied boarding on another air carrier’s flight, including a connecting flight. Although we did not assess the relative accuracy of the various name-matching procedures used to prescreen passengers, inconsistency in these procedures can be problematic for passengers and creates security concerns. A third challenge relates to concerns about the disclosure of watch list information outside the federal government. Sharing of watch list data with air carriers, or organizations with whom they contract, creates an opportunity for watch lists to be viewed by parties who may use this information in ways that are detrimental to U.S. interests. For example, if a terrorist group could view the no-fly and selectee lists they would learn which—if any—of their operatives would be able to travel on commercial aircraft to or from the United States unhampered. In addition, the 9/11 Commission stated that there are security concerns with sharing U.S. 
government watch lists with private firms and foreign countries. In an effort to address these security challenges, the commission recommended that TSA take over the domestic watch list identity-matching process from air carriers, and in December 2004, Congress required that the responsibility for the domestic watch list identity-matching process be assumed by TSA. While shifting control over the watch list identity-matching process from the airline industry to the federal government should help address some of the limitations of the current process, for over 3 years, TSA has faced significant challenges in developing and implementing a new and more reliable identity-matching process, and has not yet taken this function over from air carriers. TSA’s Secure Flight program—which is to perform the functions associated with determining whether passengers on domestic flights are on government watch lists—is intended to remedy some of the problems in the current identity-matching process. For example, unlike the current system that operates as part of each air carrier’s reservation system, Secure Flight would be operated by TSA—and TSA, rather than the air carriers, would be responsible for matching passengers’ names against the no-fly and selectee information maintained in the consolidated watch list (this information is currently transmitted to air carriers) as well as information from other watch lists. This approach would, among other benefits, eliminate the need to distribute terrorist watch list information outside the federal government as part of passenger prescreening. In addition, Secure Flight is intended to address the problem related to the lack of standard procedures among air carriers for obtaining passenger-supplied data by defining what type of passenger information is required. Secure Flight also plans, among other things, to use research analysts to resolve discrepancies in the matching of passenger data to data contained in the database. 
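The kind of matching discrepancy at issue here (e.g., "John Smith" versus "John Smythe") can be illustrated with a minimal sketch: an exact string comparison misses a near-spelling that an edit-distance comparison catches. The watch-list entry, the edit threshold, and the helper functions below are hypothetical; actual air carrier and TSA matching algorithms are not public and are not reproduced here.

```python
# Illustrative sketch of why identity-matching results can vary by technique:
# exact string comparison misses near-spellings that an edit-distance match
# catches. Names and thresholds are hypothetical, not actual screening logic.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

watch_list = ["JOHN SMYTHE"]  # hypothetical single-entry watch list

def exact_match(name):
    """A naive matcher: only flags exact spellings."""
    return name in watch_list

def fuzzy_match(name, max_edits=2):
    """A more tolerant matcher: flags names within a few edits of an entry."""
    return any(edit_distance(name, w) <= max_edits for w in watch_list)

print(exact_match("JOHN SMITH"))  # exact comparison misses the variant
print(fuzzy_match("JOHN SMITH"))  # within 2 edits of "JOHN SMYTHE"
```

Two carriers using these two techniques would reach opposite decisions for the same passenger, which is precisely the inconsistency that a single government-operated matcher such as Secure Flight is meant to eliminate.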
However, we have reported that, taken as a whole, the development of Secure Flight has not been effectively managed—indeed, the program has not yet been implemented—and is at risk of failure. We have reported on multiple occasions that the Secure Flight program has not met key milestones or finalized its goals, objectives, and requirements, and we have recommended that TSA take numerous steps to help develop the program. For example, to help manage risk associated with Secure Flight’s continued development and implementation, we recommended in March 2005 that TSA finalize the system requirements and develop detailed test plans to help ensure that all Secure Flight system functionality is properly tested and evaluated. We also recommended that TSA develop a plan for establishing connectivity among the air carriers, CBP, and TSA to help ensure the secure, effective, and timely transmission of data for use in Secure Flight operations. In early 2006, TSA suspended development of Secure Flight and initiated a reassessment, or rebaselining, of the program, to be completed before moving forward.

Because it remains unclear when TSA will take over the passenger identity-matching function through Secure Flight, our work reviewing air carriers’ current processes has identified two air carriers that are enhancing their own identity-matching systems. However, any improvements made to the accuracy of an individual air carrier’s identity-matching system will not apply system-wide and could further exacerbate differences that currently exist among the various air carriers’ systems. These differences may result in varying levels of effectiveness in the matching of passenger names against the terrorist watch list. At Congress’s request, we are continuing to monitor TSA’s progress in developing Secure Flight. (See app. III for a list of GAO products related to domestic passenger prescreening, including Secure Flight.)
The ongoing security concerns about prescreening for domestic flights, including disclosure of watch list information outside the government and the quality of information used for the identity-matching process, also pertain to international flights departing from or traveling to the United States. As with domestic passenger prescreening, air carriers conduct an initial match of passenger names against terrorist watch lists—the no-fly and selectee lists—before international flights depart to or from the United States, using information that passengers supply when they make their reservations. Customs and Border Protection (CBP)—the DHS agency responsible for international passenger prescreening—supplements the identity matching conducted by air carriers by comparing more reliable passenger information collected from passports against the terrorist watch lists and other government databases for international flights. (This information is considered more reliable because passport data are not self-reported.) However, the current process does not require that the U.S. government’s identity-matching procedures be completed prior to the departure of international flights traveling to or from the United States. As a result, passengers thought to be a risk to commercial aviation have successfully boarded flights. For example, in calendar year 2005, a number of passengers previously identified by the U.S. government as direct threats to the security of commercial aviation boarded international flights traveling to or from the United States, according to agency incident reports. In seven cases, the resulting risk was deemed high enough to divert the flight from its intended U.S. destination, resulting in costs to the air carriers, delays for passengers, and government intervention.
While none of the flights resulted in an attempted hijacking or other security incident, these flights nevertheless illustrate a continuing vulnerability: high-risk passengers could potentially board international flights and attempt to blow up the aircraft or take control of them in order to use them as weapons against U.S. interests at home or abroad. To address this vulnerability, as part of the Intelligence Reform and Terrorism Prevention Act of 2004, Congress mandated that DHS issue a proposed plan by February 15, 2005, for completing the U.S. government’s identity-matching process before the departure of international flights. While CBP did not meet this deadline, the agency issued a proposed rule that would eliminate the preliminary screening conducted by air carriers and replace it with a process in which air carriers select one of two options for transmitting passenger information earlier to CBP. One option allows air carriers to transmit passport information as each individual passenger checks in. Under this option, CBP would analyze the information against terrorist watch lists, make an immediate (or “real-time”) decision about whether the passenger can board the aircraft, and convey this information electronically to the air carrier. Under this approach, air carriers could admit passengers for flights up to 15 minutes before departure. The second option allows air carriers to provide all passengers’ passport information (in a bulk data transmission) to CBP for verification at least 60 minutes before a flight’s departure. Under either option, the government would retain control of the watch lists, resolving this additional security concern.

Regardless of which proposed option air carriers choose to pursue, many of CBP’s efforts to improve the international prescreening process are still largely in development, and the agency faces several challenges in implementing its proposed solutions. One challenge, in particular, concerns stakeholder coordination.
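The practical difference between the two proposed transmission options is when passenger data must reach CBP relative to departure. A minimal sketch of that timing rule, using the 15- and 60-minute figures from the proposed rule described above (the function name and flight data are hypothetical, not part of any CBP system):

```python
# Hypothetical sketch of the two proposed data-transmission options:
# "per-passenger" = real-time vetting at check-in, cutoff 15 min before departure;
# "bulk" = one batch transmission, at least 60 min before departure.
from datetime import datetime, timedelta

def data_cutoff(departure: datetime, option: str) -> datetime:
    """Latest time passenger passport data can be accepted under each option."""
    if option == "per-passenger":
        return departure - timedelta(minutes=15)
    if option == "bulk":
        return departure - timedelta(minutes=60)
    raise ValueError(f"unknown option: {option}")

departure = datetime(2007, 1, 1, 12, 0)          # hypothetical noon departure
print(data_cutoff(departure, "per-passenger"))   # 2007-01-01 11:45:00
print(data_cutoff(departure, "bulk"))            # 2007-01-01 11:00:00
```

The trade-off the rule poses for carriers is visible even in this toy form: per-passenger transmission preserves 45 more minutes of check-in flexibility, at the cost of integrating a real-time response from CBP into the boarding process.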
CBP must rely on a variety of stakeholders to provide input or to implement aspects of the prescreening process, including air carriers, industry associations, foreign governments, and other agencies within and outside DHS. One coordination challenge involves aligning international aviation passenger prescreening with TSA’s development of its Secure Flight program for prescreening passengers on domestic flights. Ensuring that this coordination effort aligns with Secure Flight is important to air carriers, since passengers may have both a domestic and an international part to their itinerary. If these prescreening processes are not coordinated, passengers may be found to be high-risk on one flight and not high-risk on another flight, resulting in air carrier confusion and a potential security hazard. We have recently recommended that DHS take additional steps and make key policy and technical decisions (in order to determine, for example, the data and identity-matching technologies that will be used) that are necessary to more fully coordinate CBP’s international prescreening program with TSA’s prospective domestic prescreening program, Secure Flight. (See app. III for a list of GAO products related to domestic and international passenger prescreening.)

While passenger prescreening represents a more secure layer of defense today than it did on 9/11, there is still a need for DHS, TSA, and CBP to follow through on congressional requirements and recommendations we have made to improve the process. Specifically, TSA must still comply with a congressional requirement to transfer responsibility for the passenger identity-matching process from air carriers to TSA for domestic flights. In addition, we made a recommendation in November 2006, which DHS has taken under consideration, aimed at helping the agency enhance coordination between CBP’s international prescreening program and TSA’s prospective domestic prescreening program, Secure Flight.
Such efforts are necessary to help ensure that the prescreening process—as a first layer of aviation defense—is accurate and effective in identifying potential terrorists who should be denied boarding or receive additional screening, and in ensuring that watch list data are not at risk of disclosure to those wishing to do harm to U.S. interests.

The layer of aviation security most visible to the general public, as well as to terrorists, is the physical screening of passengers and their carry-on bags at airport checkpoints, known as passenger checkpoint screening. The passenger checkpoint screening process involves the inspection of passengers and their carry-on bags to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other dangerous item on board an aircraft. Checkpoint screening is a critical component of aviation security—and one that has long been subject to security vulnerabilities. Passenger checkpoint screening consists of three elements: (1) the people responsible for conducting the screening of airline passengers and their carry-on items; (2) the procedures that must be followed to conduct screening; and (3) the technology used in the screening process. TSA has made progress in implementing security-related measures in all these areas, but there are additional opportunities to further enhance aviation security through the people, processes, and technologies involved in passenger checkpoint screening.

Prior to the passage of ATSA, the screening of passengers had been performed by private screening companies under contract to the air carriers. The FAA was responsible for ensuring compliance with screening regulations.
As we reported in 2000, the FAA and the airline industry had, since 1978, continued to face challenges in improving the effectiveness of airport checkpoint screeners; screeners were not detecting dangerous objects, including loaded firearms and, in tests conducted by FAA, simulated explosive devices. We attributed screening detection problems primarily to factors such as high turnover rates among screeners. By the time the terrorist attacks occurred, the FAA was already 2 years behind in issuing a regulation in response to a congressional mandate requiring the companies that employ checkpoint screeners to improve their testing and training through a certification program.

As the 9/11 Commission reported, the terrorist hijackers, having escaped watch-list detection during the prescreening process, had to beat only one layer of security—the security checkpoint process—in order to proceed with their plan. The Commission concluded that at the time of the attacks, while walk-through metal detectors and X-ray machines were in use to stop prohibited items, many potentially deadly and dangerous items—such as the box-cutters carried by the hijackers—did not set off metal detectors or were hard to distinguish in an X-ray machine. Moreover, FAA regulations and guidance did not explicitly prohibit knives with blades under 4 inches long. And the standards for what constituted a deadly or dangerous weapon were “somewhat vague,” the commission found, and were left up to the discretion of air carriers and their screening contractors. Moreover, secondary screening—whereby passengers coming through the checkpoint with carry-on bags are selected for additional screening—took place, by and large, only when passengers triggered metal detectors. Even when such trigger events occurred, passengers often were cleared to board.
For example, of the five hijackers who boarded planes at Washington Dulles International Airport on 9/11, three set off metal detectors; they (and one carry-on bag as well) were hand-wanded, the bag was swiped for explosive trace detection, and then they were cleared to board.

TSA Has Made Progress in Training and Evaluating a Federalized Workforce for Screening Airline Passengers

After 9/11 and as a result of ATSA, TSA assumed responsibility for screeners and screening operations at more than 400 commercial airports, established a basic screener training program, and has conducted annual proficiency reviews and operational testing of screeners, now known as transportation security officers (TSO). TSA has taken numerous steps to develop and evaluate its screening personnel by, among other things, expanding training beyond the basic training requirement through a self-guided online learning center, and by providing additional training on threat information, explosives detection, and new screening approaches. While these efforts and others taken by the agency have helped TSA to develop and evaluate appropriate workforce skills, we have recommended that TSA take additional steps to ensure that this training is delivered. For example, at some airports we have visited, TSOs encountered difficulty accessing and completing recurrent (refresher) training because of technological and staffing constraints. In May 2005, TSA stated that it had a plan for deploying high-speed Internet connections at airports. The President’s fiscal year 2007 budget request reported that approximately 220 of the nation’s 400 commercial airport and field locations have full information technology infrastructure installed. (See app. III for a list of GAO products related to screener workforce issues.)
Passenger Checkpoint Screening Procedures Have Been Enhanced to Improve Security and Are Regularly Modified to Reflect Current Conditions

In addition to TSA’s efforts to train and deploy a federal screener workforce, steps also have been taken to strengthen checkpoint screening policies and procedures to enhance security. One of the most important differences between the current checkpoint screening system and the system in place on 9/11 is the additional physical screening that certain passengers selected by the prescreening process, as discussed earlier, must undergo at the checkpoint. In addition, certain screening procedures performed by TSOs, or other authorized TSA personnel, are now mandatory for all passengers. Prior to entering the sterile area of an airport—the area within the terminal where passengers wait to board departing aircraft—all passengers must be screened by a walk-through metal detector and their carry-on items must be X-rayed. Passengers whose carry-on baggage alarms the X-ray machine, passengers who alarm the walk-through metal detectors, or passengers who are selected by the air carriers’ passenger prescreening system all receive additional screening. These passengers may be screened by hand-wand or pat-down or have their carry-on items screened for explosive traces or physically searched. Figure 2 shows the functions performed as part of passenger checkpoint screening.

Because history has shown that terrorists will adapt their tactics and techniques in an attempt to bypass increased security procedures, and are capable of developing increasingly sophisticated measures in an attempt to avoid detection, TSA leadership has emphasized the need to continually test or implement new screening procedures to further enhance security in response to changing conditions. We have ongoing work on how TSA modifies and implements passenger checkpoint screening procedures and plan to issue a report in February 2007.
Last year, we testified that TSA’s proposed security-related changes to checkpoint screening procedures are based on risk-related factors, including previous terrorist incidents, threat information, and vulnerabilities of the screening system, as well as operational experience and stakeholder concerns. Recommended modifications to passenger checkpoint screening procedures are also generated based on covert testing conducted by TSA officials and the DHS Office of Inspector General (OIG). Covert tests are designed to assess vulnerabilities in the checkpoint screening system to specific threats, such as vulnerability to the various methods by which terrorists may try to conceal handguns, knives, and improvised explosive devices (IED). We have ongoing work evaluating TSA’s covert testing efforts and expect to report our results later this year.

TSA Is Exploring New Technologies to Enhance Detection of Explosives and Other Threats

The ever-changing terrorist threat also necessitates continued research and development of new technologies and the fielding of these technologies to strengthen aviation security. The President’s fiscal year 2007 budget request notes that emerging checkpoint technology may enhance the detection of prohibited items, especially firearms and explosives, on passengers. Furthermore, the DHS OIG has reported that significant improvements in screener performance may not be possible without greater use of new technology, and has encouraged TSA to expedite its technology testing programs and give priority to technologies that will enable screeners to better detect both weapons and explosives. TSA has recently put increased focus on the threats posed by IEDs and is investing in technology for this purpose. For example, since the September 11 attacks, 94 explosive trace detection portal machines have been installed at 37 airports. (These machines detect vapors and residues of explosives, including those used in IEDs.)
In addition, as of May 2006, TSA had conducted, or planned to conduct, evaluations of nine new types of passenger screening technology, including, for example, technology that would screen bottles for liquid explosives. It is important that TSA continue to invest in and develop technologies for detecting explosives. This is especially important in light of the alleged August 2006 plot to detonate liquid explosives on board multiple commercial aircraft bound for the United States from the United Kingdom. We are currently evaluating DHS’s and TSA’s progress in planning for, managing, and deploying research and development programs in support of airport checkpoint screening operations. We expect to report our results later this year. (See app. III for a list of GAO products related to passenger checkpoint screening.)

As with passenger prescreening, the checkpoint screening system in place today is far more robust, reflects more rigorous screening requirements, and deploys better-trained staff than the system in the years leading up to the terrorist attacks. In its list of recommended actions that the government should take to protect against and prepare for future terrorist attacks, the 9/11 Commission suggested that improving checkpoint screening should be a priority. TSA has largely accomplished this goal, though as with all aspects of aviation security, efforts to further enhance and strengthen procedures are ongoing. For example, new and emerging technologies for detecting threat objects are likely to help enhance the checkpoint screening process.

Security protocols and policies for preparing for or responding to threats that occur on board flights already in progress, and for coordinating responses to such security events from the ground, have changed significantly since 9/11. With respect to on-board security measures, the airline cabin and flight crews on duty on 9/11 were neither trained for nor prepared to deal with the events that unfolded once the hijackers were on board.
Though in-flight security was regarded as a layer of defense in the commercial aviation system, FAA’s security training guidelines at the time did not contemplate suicide hijackers using aircraft as guided missiles as a likely scenario. Flight crews had been taught to cooperate, rather than resist, during an emergency. As with the prescreening and checkpoint screening processes, the ability of the hijackers to manipulate flight crews and penetrate the captain’s cockpit revealed serious weaknesses in in-flight security.

In-flight security has since been strengthened in several ways to help reduce the likelihood of terrorists being able to take over an aircraft. For example, TSA established the Federal Flight Deck Officer program in 2002. The program trains eligible flight crew members in the use of force to defend against an act of criminal violence or air piracy. These flight deck officers are deputized as federal law enforcement officers and may transport and carry a TSA-issued firearm in a manner approved by TSA. In addition, FAA directed air carriers to harden their cockpit doors, and Congress expanded the decades-old Federal Air Marshal Service by mandating in ATSA the deployment of air marshals on board all high-security-risk flights. Before 9/11, there were 33 air marshals altogether; now there are thousands. A key aspect of air marshals’ operating procedures is their discreet (semicovert) movement through airports as they check in for their flights, transit screening checkpoints, and board the aircraft.

TSA has also taken steps to ensure that flight and cabin crew members—among the last lines of defense—are prepared to handle potential threat conditions on board commercial aircraft. The revised guidance and standards TSA developed for air carriers to follow in developing and delivering their flight and cabin crew member security training are a positive step forward in strengthening security on board commercial aircraft.
This training includes, among other things, teaching crew members how to search a cabin for explosive devices. Congress also mandated that TSA implement an advanced voluntary self-defense training program for flight and cabin crew members; this training is ongoing.

With respect to coordinating responses to on-board threats from the ground, the events of 9/11 revealed the importance of prompt interagency communication to allow for a unified, coordinated response to airborne threats. Once an in-flight security threat is identified, rapid and effective information sharing among agencies on the ground is critical to ensure that each agency can respond according to its mission and that the security threat is handled in a safe manner. The 9/11 Commission Report stated that a weakness in aviation security exploited by the terrorists included a lack of protocols and capabilities for executing a coordinated FAA and military response to multiple hijackings and suicidal hijackers. According to the commission, the response on 9/11 of the Department of Defense’s North American Aerospace Defense Command (NORAD), which is responsible for securing U.S. airspace, was hindered in part by a lack of real-time communications with FAA and defense and intelligence agencies. For instance, a shootdown authorization was not communicated to the NORAD air defense sector until 28 minutes after United 93 had crashed in Pennsylvania. Moreover, the commission noted, the pilots of the intercepting aircraft did not know where to go or what targets they were to intercept. And once the shootdown order was given, it was not communicated to the pilots.

To address the communications and coordination problems that were highlighted by 9/11, many federal agencies, including the FAA, DOD, and TSA, have taken action. For example, the FAA—which is responsible for managing aircraft traffic entering into or operating in U.S.
airspace—established an unclassified teleconference system, called the Domestic Events Network, designed to gather and disseminate information about all types of security threats. The network is monitored by approximately 60 users from a variety of federal agencies as well as state and local entities. This network was originally established as a conference call on the morning of 9/11 to coordinate the federal response to the hijacked aircraft, and it has remained in existence since then, serving as a basis for interagency cooperation. Any Domestic Events Network user can broadcast information, allowing other agencies on the network to communicate and monitor a situation in real time. According to FAA officials, domestic air carriers have recently been given the capability to link into the Domestic Events Network, allowing an air carrier to provide real-time situational updates as they are received from the flight crew on board the aircraft in question, without relying on an intermediary party.

Another important interagency communications tool is the Defense Red Switch Network, a secure, classified network administered by DOD that allows multiple agencies to discuss intelligence information over a secure line. In addition, TSA has established the Transportation Security Operations Center (TSOC), a national center that operates around the clock and coordinates the multi-agency response to in-flight security threats. Air carriers are required to report to TSOC all incidents and suspicious activity that could affect the security of U.S. civil aviation, including any incidents of interference with a flight crew, specific or nonspecific bomb threats, and any correspondence received by an aircraft operator that could indicate a potential threat to civil aviation.
We have ongoing work analyzing the processes that federal agencies follow to identify, assess, and respond to in-flight security threats; the extent to which interagency coordination problems have occurred, if at all; and the steps agencies have taken to address identified problems. The results of this work, which will be issued in early 2007, will be classified. (See app. III for a list of GAO products related to in-flight security.)

Several actions taken in the months after 9/11—notably, hardened cockpit doors, better emergency response training for airborne flight crews, and the presence of federal air marshals on certain flights—have helped to ensure that aircraft are both physically safer and better protected from the actions of on-board hijackers or terrorists. Federal actions also have been taken in response to the communications and coordination failures that occurred on 9/11 in order to enhance coordinated responses to on-board security threats from the ground. Our ongoing work will discuss, among other things, the process federal agencies follow to identify, assess, and respond to security threats; the challenges, if any, that have arisen in agencies’ coordination efforts; and the steps taken to deal with them.

Two aspects of commercial aviation that were not directly implicated in the 9/11 scenario—checked baggage screening and air cargo screening—are nonetheless recognized as important components of a layered system of aviation defense. Congress and TSA have taken steps to enhance the security of both in the years since 9/11, though resource and technology challenges remain. The infrastructure of commercial airport properties, which can pose risks to security by enabling criminals or terrorists to penetrate sensitive areas (such as boarding areas or baggage facilities), also has received congressional and federal attention.
In addition, Congress and federal agencies have taken actions to enhance security in the noncommercial aviation sector, specifically at the nation’s general aviation airports—small airports that are home to flight training schools as well as privately owned aircraft.

With respect to checked baggage screening, at the time of the attacks there was no federal requirement to screen all checked baggage on domestic flights. In some cases, air carriers screened checked baggage on commercial flights for bulk quantities of explosives using X-ray screening equipment similar to that used for medical CAT scans. As the Congressional Research Service reported a month after the attacks, the availability and cost of baggage screening X-ray equipment, along with the time it took to screen a bag, did not permit its use in all airports, on all flights at airports where it was used, or even on all bags on any given flight. In addition, passengers selected by the passenger prescreening process for additional pre-flight scrutiny were either to have their checked bags scanned for explosives or held until they boarded the aircraft. As noted earlier, five of the eight hijackers selected by the passenger prescreening system in place on 9/11 had their checked bags held prior to boarding, and three had their bags scanned for explosives.

After the attacks, Congress, through ATSA, mandated that all checked baggage at commercial airports be screened using explosive detection systems. TSA has worked to overcome equipment and other challenges in order to fulfill this mandate, and it now reports having the capability to screen 100 percent of checked baggage using two types of screening equipment—explosive detection systems (EDS), which use X-rays to scan bags for explosives, and explosive trace detection systems (ETD), in which bags are swabbed to test for chemical traces of explosives.
TSA considers screening with EDS to be superior to screening with ETD because EDS machines process more bags per hour and automatically detect explosives without direct human involvement. As of June 2006, in order to screen all checked baggage for explosives at over 400 airports, TSA had procured and installed about 1,600 EDS and 7,200 ETD machines. TSA has begun shifting its focus away from placing these systems primarily in airport lobbies, as had been done initially, because of problems that arose from this configuration. For instance, TSA’s placement of stand-alone EDS and ETD machines in airport lobbies resulted in passenger crowding, which presented unsafe conditions and may have added security risks for passengers and airport workers. TSA has begun to focus instead on systematically deploying, at many airports, the configuration of baggage screening equipment that TSA considers the most efficient, least labor-intensive, and most cost-effective—in-line EDS. These systems are integrated with airports’ baggage conveyor and sorting systems (see fig. 3 for an illustration of the checked-baggage screening system using an in-line EDS machine). TSA has also developed smaller and less expensive stand-alone EDS equipment that may be effective at smaller airports or closer to airline check-in counters. A TSA cost-benefit analysis, conducted in May 2004, of in-line EDS machines being installed at nine airports showed that they could yield significant savings for the federal government and achieve other benefits—including reduced screener staffing requirements and increased baggage throughput (the rate at which bags are processed). Specifically, TSA estimated that in-line baggage screening systems at these nine airports could save the federal government about $1 billion over 7 years.
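For a rough sense of the scale of TSA’s estimate, the $1 billion figure can be averaged across the nine airports and 7 years. The simple average below is our own arithmetic on the figures in the text, not TSA’s airport-by-airport breakdown, which would vary with each airport’s baggage volume.

```python
# Back-of-the-envelope scale check on TSA's May 2004 estimate:
# about $1 billion in savings, nine airports, 7 years.
total_savings = 1_000_000_000        # dollars (TSA estimate)
airports, years = 9, 7

per_airport_per_year = total_savings / (airports * years)
print(f"${per_airport_per_year / 1e6:.1f} million per airport per year")
# $15.9 million per airport per year
```

Even as a crude average, the result makes clear why TSA treated in-line EDS deployment at high-volume airports as a priority relative to the more labor-intensive stand-alone configurations.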
The Intelligence Reform and Terrorism Prevention Act of 2004 mandated, and the conference report accompanying the fiscal year 2005 DHS Appropriations Act directed, TSA to, among other things, develop a comprehensive plan for expediting the installation of in-line explosive detection systems. To assist TSA in planning for the optimal deployment of checked baggage screening systems, we recommended in March 2005 that TSA systematically evaluate baggage screening needs at airports, including the costs and benefits of installing in-line EDS systems at airports that did not yet have such systems installed. We suggested that such planning should include analyzing which airports should receive federal support for in-line EDS systems based on cost savings that could be achieved from more effective and efficient baggage screening operations and on other factors, including enhanced security. And we recommended that TSA identify and prioritize the airports where the benefits of replacing stand-alone baggage screening systems with in-line systems are likely to exceed the costs of the systems, or where the systems are needed to address security risks or related factors.

In February 2006, in response to our recommendation and a legislative requirement to submit a schedule for expediting the installation and use of in-line systems and the replacement of ETD equipment with EDS machines, TSA provided to Congress its strategic planning framework for its checked baggage screening program. This framework introduced a strategy intended to increase efficiency by deploying EDS to as many airports as practicable, lowering life-cycle costs for the program, minimizing impacts on TSA and airport/airline operations, and providing a flexible security infrastructure for accommodating growing airline traffic and potential new threats.
The framework is an initial step toward (1) finding the ideal mix of higher-performance and lower-cost alternative screening solutions for the 250 airports with the highest checked baggage volumes, and (2) prioritizing funding schedules by airport, by identifying the top 25 airports that should first receive federal funding for projects related to the installation of EDS, based on quantitative modeling of security and economic factors, and other factors. In addition, partly in response to other recommendations we made, TSA is collaborating with airport operators, air carriers, and other key stakeholders to identify funding and cost-sharing strategies (in order to determine how to allocate investments in baggage equipment between the federal government and air carriers) and is focusing its research and development efforts on the next generation of EDS technology. For airports where in-line systems may not be economically justified because of high investment costs, we suggested that a cost-effectiveness analysis be used to determine the benefits of additional stand-alone EDS machines to screen checked baggage in place of the more labor-intensive ETD machines. According to TSA, the agency is analyzing the airports that rely heavily on ETD machines to determine whether they would also benefit from stand-alone EDS equipment. (See app. III for a list of GAO products related to checked baggage screening.)

In the aftermath of the 9/11 terrorist attacks, the security of cargo carried on both passenger and all-cargo aircraft became a growing concern both to the public and to members of Congress. Since the attacks, several instances of human stowaways in the cargo holds of all-cargo aircraft have further heightened the concern over air cargo security by revealing vulnerabilities that could potentially threaten the entire air transportation system.
TSA is responsible for ensuring the security of air cargo, including, among other things, establishing security rules and regulations covering domestic and foreign passenger carriers that transport cargo, domestic and foreign all-cargo carriers, and domestic indirect air carriers (companies that consolidate air cargo from multiple shippers and deliver it to air carriers to be transported). TSA is also responsible for overseeing air carriers’ and indirect air carriers’ implementation of air cargo security requirements through compliance inspections. In general, TSA inspections are designed to ensure that air carriers comply with air cargo security requirements, while air carrier inspections focus on ensuring that cargo does not contain weapons, explosives, or stowaways (see fig. 4). Because safeguarding the nation’s air cargo transportation system is a shared public and private sector responsibility, air carriers are generally responsible for meeting TSA’s air cargo security requirements, including requirements for how employees are to handle and physically inspect cargo. As we reported in October 2005, TSA has implemented a variety of actions intended to strengthen oversight of domestic air cargo security operations conducted by air carriers. TSA has increased the number of dedicated air cargo inspectors used to assess air carrier and indirect air carrier compliance with security requirements, issued a regulation in May 2006 to enhance and improve the security of air cargo transportation, and taken other actions. However, our work identified factors that may limit the effectiveness of these measures. For example, TSA has primarily relied on its Known Shipper program (allowing individuals or businesses with established histories to ship cargo on passenger carriers) to ensure that cargo transported on passenger air carriers is screened in accordance with ATSA, and that unknown shipments are not placed on passenger aircraft.
However, at the time of our review, we reported that the Known Shipper program had weaknesses and might not provide adequate assurance that shippers are trustworthy and that air cargo transported on passenger air carriers is secure. For example, the information in TSA’s database on known shippers was incomplete because participation was voluntary, and the information in the database may not have been reliable. TSA has addressed this issue through its May 2006 regulation on air cargo security requirements, which requires air carriers and indirect air carriers to submit data on their known shippers to this database. TSA established a requirement for random inspection of air cargo, reflecting the agency’s position that inspecting 100 percent of air cargo was not technologically feasible and would be potentially disruptive to the flow of air commerce. However, this requirement contained exemptions, based on the nature and size of cargo, that may leave the air cargo system vulnerable to terrorist attack. We recommended in 2005 that TSA reexamine the rationale for existing air cargo inspection exemptions, determine whether such exemptions leave the air cargo system unacceptably vulnerable to terrorist attack, and make any needed adjustments to the exemptions. In September 2006, TSA revised the criteria for exemptions for cargo transported within or from the United States on passenger aircraft. TSA is reviewing the remaining inspection exemptions to determine whether they pose an unacceptable vulnerability to the air cargo transportation system. TSA conducts audits of air carriers and indirect air carriers to ensure that they are complying with existing air cargo security requirements. However, TSA has not developed performance measures to determine to what extent air carriers and others are complying with air cargo security requirements.
Without performance measures to gauge air carrier and indirect air carrier compliance with air cargo security requirements, TSA cannot effectively focus its inspection resources on those entities posing the greatest risk. In addition, without measures to determine an acceptable level of compliance with air cargo security requirements, TSA cannot assess the performance of individual air carriers or indirect air carriers against national performance averages or goals that would allow TSA to target inspections and other actions on those that fall below acceptable levels of compliance. We recommended that TSA assess the effectiveness of enforcement actions, including the use of civil penalties, in ensuring air carrier and indirect air carrier compliance with air cargo security requirements. We also recommended that TSA develop measures to gauge air carrier and indirect air carrier compliance with air cargo security requirements to assess and address potential security weaknesses and vulnerabilities. TSA had not analyzed the results of air cargo security inspections to systematically target future inspections on those entities that pose a higher security risk to the domestic air cargo system, or assessed the effectiveness of its enforcement actions in ensuring air carrier compliance with air cargo security requirements. Such targeting is important because TSA may not have adequate resources to inspect all air carriers and indirect air carriers on a regular basis. We recommended that TSA develop a plan for systematically analyzing the results of air cargo compliance inspections and use the results to target future inspections and identify systemwide corrective actions. According to TSA officials, the agency has been working on developing short-term and long-term outcome measures for air cargo security and has begun to analyze inspection results to target future inspections. 
Finally, with respect to TSA’s regulation on air cargo security requirements, TSA estimated in May 2006 that implementing all the provisions in the regulation (including actions already ongoing, such as requiring air carriers to randomly inspect a percentage of air cargo) will cost approximately $2 billion over a 10-year period (2005-2014). Before the regulation was finalized, industry stakeholders representing air carriers and airport authorities had stated that several of the provisions, such as securing air cargo facilities, screening all individuals boarding all-cargo aircraft, and conducting security checks on air cargo workers, would be costly to implement. We have not assessed how this regulation, or its costs, may affect TSA or stakeholders, nor have we undertaken additional work to determine the extent to which TSA’s subsequent actions have addressed the weaknesses identified above and our related recommendations. In our work, we concluded that while the cost of enhancing air cargo security can be significant, the potential costs of a terrorist attack, in terms of both the loss of life and property and long-term economic impacts, would also be significant, although difficult to predict and quantify. TSA’s regulation also covers inbound air cargo security requirements (for cargo originating outside the United States). We currently have an ongoing review assessing the security of inbound air cargo, including the regulation’s relevant requirements, and expect to issue this work early this year.

Like most other aspects of the aviation system, the security of commercial airport facilities also came under heightened scrutiny after 9/11. Congress included provisions in ATSA to address this aspect of airport security. In particular, ATSA granted TSA the authority to oversee U.S.
airport operators’ efforts to maintain and improve the security of airport perimeters (such as airfield fencing and access gates), the adequacy of controls restricting unauthorized access to secured areas (such as building entry ways leading to aircraft), and security measures pertaining to individuals who work at airports. Apart from ongoing concerns about the potential for terrorists to gain access to these areas, in 2004, concerns also were raised about security breaches and other illegal activities, such as drug smuggling, taking place at some airports. These events highlighted the importance of strengthening security in these areas. Taken as a whole, airport perimeter security and related areas, along with passenger and baggage screening, comprise key elements of the aviation security environment at commercial airports. We reported in 2004 that TSA had begun evaluating commercial airport security by conducting compliance inspections, among other things, but needed a better approach for assessing how the results of these efforts would be used to make improvements to the entire commercial airport system. We also reported that TSA had helped some airport operators to enhance perimeter and access control security by providing funds for security equipment, such as electronic surveillance systems. However, TSA had not, at the time of our review, set priorities for these and other efforts or determined how they were to be funded. We also found that while TSA had taken some steps to reduce the potential security risks posed by airport workers, the agency did not require fingerprint-based criminal history checks for all workers, as ATSA required. 
To help ensure that TSA is able to articulate and justify future decisions on how best to proceed with security evaluations, fund and implement security improvements (including new security technologies), and implement additional measures to reduce the potential security risks posed by airport workers, we recommended that TSA develop a plan for Congress describing how it would meet the applicable requirements of ATSA. Since our report was issued, TSA has made several improvements in these areas through the issuance of a series of security directives that required enhanced background checks and improved access controls for airport employees who work in restricted airport areas. We have new work planned in this area that will, among other things, examine TSA’s further progress in meeting ATSA requirements for reducing the potential security risks posed by airport workers, such as requiring fingerprint-based criminal history checks and security awareness training for all airport workers. We have also recently issued work examining progress toward establishing the Transportation Worker Identification Credential (TWIC) program. TWIC is intended to establish a uniform identification credential for 6 million workers who require unescorted physical or cyber access to secured areas of transportation facilities, including airports. While TWIC was initially intended to meet an ATSA recommendation that TSA consider using biometric access control systems to verify the identity of individuals who seek to enter secure airport areas, as of September 2006, TSA had determined that TWIC would be implemented first for workers requiring unescorted access to secure areas at commercial seaports, and that there were no immediate plans to implement the program in the airport environment.

General aviation, as distinguished from commercial aviation, encompasses a wide variety of activities, aircraft types, and airports.
Federal intelligence agencies have reported in the past that terrorists have considered using general aviation aircraft for terrorist acts—and the 9/11 terrorists learned to fly at flight schools based at general aviation airports in Florida, Arizona, and Minnesota. We have noted in our work that the extent of general aviation’s vulnerability to terrorist attack is difficult to determine. Nevertheless, as we reported in November 2004, TSA and the FAA have taken steps to address security risks to general aviation through regulation and guidance. For example, TSA has promulgated regulations requiring background checks of foreign candidates for U.S. flight training schools and has issued security guidelines for general aviation airports. Prior to the September 11 attacks, FAA did not require background checks of anyone seeking a pilot’s license. Other measures taken to enhance general aviation security since then include actions by nonfederal general aviation stakeholders, who have partnered with the federal government and have individually taken steps to enhance security. For example, industry associations have developed best practices and recommendations for securing general aviation and have worked with TSA to develop other security initiatives. While these actions represent progress toward enhancing general aviation security, at the time we reported on these efforts, TSA continued to face challenges. Although TSA has issued a limited assessment of threats associated with general aviation, a systematic assessment of the threats to, and vulnerabilities of, general aviation that would show how to better prepare against terrorist attacks had not been conducted at the time of our November 2004 review, because such assessments were considered costly and impractical to conduct at the nearly 19,000 general aviation airports.
We recommended that TSA develop and implement a plan to identify threats and vulnerabilities and include, among other things, estimates of funding requirements. Should TSA establish new security requirements for general aviation airports, competing funding needs could challenge the ability of general aviation airport operators to meet them. General aviation airports have received some federal funding for implementing security upgrades since September 11, but have funded most security enhancements on their own. General aviation stakeholders we contacted expressed concern that they may not be able to pay for any future security requirements that TSA may establish. In addition, TSA and FAA are unlikely to be able to allocate significant levels of funding for general aviation security enhancements, given the competing priorities of commercial aviation and other modes of transportation. (We made no recommendations related to funding challenges.) We have not undertaken additional work to determine the extent to which subsequent actions taken by DHS or TSA have enhanced general aviation security or addressed our recommendations.

TSA’s efforts to address aspects of aviation security other than those directly implicated in the 9/11 attacks have been mixed. On the one hand, TSA has made significant progress in an area where it has direct operational authority—enhancing detection of threat objects in passengers’ checked baggage. Thanks to the increased use of technology (explosive detection systems), today’s checked baggage undergoes far more scrutiny than before the terrorist attacks. In other areas of aviation, however, where TSA has regulatory and oversight responsibility but does not take the operational lead, our past work indicates that TSA faced challenges.
With respect to air cargo, for example, TSA has implemented a variety of actions intended to strengthen oversight of domestic air cargo security operations conducted by air carriers, including increasing the number of inspectors used to assess air carriers’ compliance with air cargo security requirements, but opportunities exist to better ensure that this compliance process is working. Because we do not have recent work on progress made to enhance security at general aviation airports, we cannot comment further on the extent of progress made in this area. Our ongoing work on airport perimeter security and access controls will allow us to provide an updated assessment of progress later in 2007.

In the aftermath of the attacks on 9/11, Congress and the administration focused their energies first on shoring up our national layers of defense—particularly in the aviation sector, which had proven to be vulnerable to terrorist attacks. As of November 2006, TSA had substantially implemented the major aviation security mandates issued by Congress following the 9/11 attacks, particularly those ATSA mandates designed to address specific vulnerabilities exploited by the terrorists, such as the requirement to deploy federal personnel to screen passengers and baggage at airports. Congress, the 9/11 Commission, federal agencies, and we have recognized the need to develop strategies and take actions to protect against and prepare for terrorist attacks on critical parts of our transportation system other than aviation, which also are considered vulnerable to attack. These areas include passenger rail and the maritime industry—both considered vital components of the U.S. economy. In addition, other modes of transportation also remain vulnerable to attack, such as the nation’s highway infrastructure and commercial vehicles.
The passenger rail sector is one critical area of transportation where a number of federal departments and their component agencies have begun taking actions to enhance security. The U.S. passenger rail sector is a vital component of the nation’s transportation infrastructure, with subway and commuter rail systems, among others, carrying more than 11 million passengers each weekday. Characteristics of some passenger rail systems—high ridership, expensive infrastructure, economic importance, and location (e.g., large metropolitan areas or tourist destinations)—make them attractive targets for terrorists because of the potential for mass casualties and economic damage and disruption. Indeed, public transportation in general, and passenger rail in particular, have continued to be attractive targets for terrorist attack, as evidenced by the March 2004 terrorist bomb attacks on commuter trains in Madrid, Spain, in which 191 people were killed and 600 injured, and the July 2005 bomb attacks on London’s subway system, which resulted in over 50 fatalities and more than 700 injuries. Prior to the creation of TSA in 2002, the Federal Transit Administration (FTA) and Federal Railroad Administration (FRA) were the primary federal agencies involved in passenger rail security matters, and both undertook numerous initiatives before and after 9/11 to enhance security. For example, FTA conducted security readiness assessments of rail transit systems, sponsored security training, and developed security guidance for transit agencies. FRA has assisted commuter railroads and Amtrak in developing security plans, conducted security inspections of commuter railroads, and researched various security technologies, among other things. Since taking over as the lead federal agency responsible for transportation security, TSA has also taken a number of actions intended to enhance passenger rail security.
For example, in response to the commuter rail attacks in Madrid and federal intelligence on potential threats against U.S. passenger rail systems, TSA issued security directives for rail operators in May 2004. The directives required rail operators to implement a number of general security measures, such as conducting frequent inspections of stations, terminals, and other assets, or utilizing canine explosive detection teams, if available. The issuance of these directives was an effort to take swift action in response to a current threat. However, as we reported in September 2005, because these directives were issued with limited input and review by rail industry and federal stakeholders, they may not provide the industry with baseline security standards based on industry best practices. Furthermore, no permanent rail security standards had been promulgated, and clear guidance for rail operators was lacking. To ensure that future rail security directives are enforceable, transparent, and feasible, we recommended that TSA collaborate with the Department of Transportation and the passenger rail industry to develop rail security standards that reflect industry best practices and that can be measured, monitored, and enforced. Among other actions, TSA has also tested emerging rail security technologies for screening passenger baggage and has enlarged its national explosives detection canine program to train and place canine teams in the nation’s mass transit and commuter rail systems. (See app. III for information on GAO products related to passenger rail security.)

In addition to the U.S. passenger rail system, concerns have been raised about the nation’s highway infrastructure, which facilitates transportation for a vast network of interstate and intrastate trucking companies and others.
Vehicles and highway infrastructure play an essential role in the movement of goods, services, and people, yet more work needs to be done to assess and address vulnerabilities to acts of terrorism that may exist in these systems. Surface transportation provides terrorists with thousands of points from which to attack and easy escape routes, potentially causing significant loss of life and economic harm. Indeed, threat information and TSA assessments have identified that specific components of the commercial vehicle sector are potential targets—and are vulnerable—to terrorist attacks. Attackers can target bridges, tunnels, and trucks, among other assets, including using hazardous material trucks as weapons. Further, the diversity of the trucking industry poses additional challenges in effectively integrating security into both large, complex trucking operations and smaller owner/operator businesses. We have work under way to analyze federal efforts to strengthen the security of commercial vehicles, including vehicles carrying hazardous materials, and how federal agencies coordinate their efforts to secure the commercial vehicle sector. We expect to report on this work later this year.

The maritime sector is another critical area of transportation where a number of federal agencies and local stakeholders have taken many actions to secure seaports. Since the terrorist attacks of September 11, the nation’s 361 seaports have been increasingly viewed as potential targets for future terrorist attacks. These ports are vulnerable because they are sprawling, interwoven with complex transportation networks, close to crowded metropolitan areas, and easily accessible. Ports contain a number of specific facilities that could be targeted by terrorists, including military vessels and bases, cruise ships, passenger ferries, terminals, locks and dams, factories, office buildings, power plants, refineries, sports complexes, and other critical infrastructure.
The large cargo volumes passing through seaports, such as containers destined for further shipment by other modes of transportation such as rail or truck, also represent a potential conduit for terrorists to smuggle weapons of mass destruction or other dangerous materials into the United States. The potential consequences of the risks created by these vulnerabilities are significant as the nation’s economy relies on an expeditious flow of goods through seaports. Although no port-related terrorist attacks have occurred in the United States, terrorists overseas have demonstrated their ability to access and destroy infrastructure, assets, and lives in and around seaports. A successful attack on a seaport could result in a dramatic slowdown in the supply system, with consequences in the billions of dollars. Much was set in motion to address these risks in the wake of the 9/11 terrorist attacks. We have reported that a number of actions have been taken or are under way to address seaport security by a diverse mix of agencies and seaport stakeholders. Federal agencies, such as the Coast Guard, CBP, and TSA, have been tasked with responsibilities and functions intended to make seaports more secure, such as monitoring vessel traffic or inspecting cargo and containers, and procuring new assets such as aircraft and cutters to conduct patrols and respond to threats. In addition to these federal agencies, seaport stakeholders in the private sector and at the state and local levels of government have taken actions to enhance the security of seaports, such as conducting security assessments of infrastructure and vessels operated within the seaports and developing security plans to protect against a terrorist attack. 
The actions taken by these agencies and stakeholders are primarily aimed at three types of protections: (1) identifying and reducing vulnerabilities of the facilities, infrastructure, and vessels operating in seaports; (2) securing the cargo and commerce flowing through seaports; and (3) developing greater maritime domain awareness through enhanced intelligence, information-sharing capabilities, and assets and technologies. Our work indicated that assessments of potential targets have been completed at 55 of the nation’s most economically and militarily strategic seaports, and more than 9,000 vessels and over 3,000 facilities have developed security plans that have been reviewed by the Coast Guard. New assets are budgeted and are coming on line, including new Coast Guard boats and cutters and communication systems. Finally, new information-sharing networks and command structures have been created to allow more coordinated responses and increased awareness of activities going on in the maritime domain. Some of these efforts have been completed and others are ongoing; overall, the amount of effort has been considerable. (Federal efforts to secure container cargo crossing U.S. borders by land or sea are discussed later in this report.) (See app. III for information on our products related to maritime security.)

Even with all the actions taken since 9/11 by Congress and federal agencies to strengthen our transportation-related layers of defense, we have reported that it seems improbable that all risk can be eliminated, or that any security framework can successfully anticipate and thwart every type of potential terrorist threat that highly motivated, well-skilled, and adequately funded terrorist groups could devise. This is not to suggest that security efforts do not matter—they clearly do. However, it is important to keep in mind that total security cannot be bought, no matter how much is spent on it.
We cannot afford to protect everything against all threats—choices must be made about security priorities. Thus, great care needs to be taken to assign available resources to address the greatest risks, along with selecting those strategies that make the most efficient and effective use of resources. One approach we have advocated to help ensure that resources are assigned and appropriate strategies are selected to address the greatest risks is risk management—that is, defining and reducing risk. To help federal decision makers determine how best to allocate limited resources, we have advocated, the National Commission on Terrorist Attacks Upon the United States (the 9/11 Commission) has recommended, and the subsequent Intelligence Reform and Terrorism Prevention Act of 2004 requires that a risk management approach be employed to guide security decision making. We have concluded that without a risk management approach, there is limited assurance that programs designed to combat terrorism are properly prioritized and focused. A risk management approach is a systematic process for analyzing threats and vulnerabilities, together with the criticality (that is, the relative importance) of the assets involved. This process consists of a series of largely sequential analytical and managerial steps that can be used to assess vulnerabilities, determine the criticality of the assets being considered, determine the threats to those assets, and assess alternatives for reducing the risks. Once these are assessed and identified, actions to improve security and reduce the risks can be chosen from the alternatives for implementation. To be effective, this process must be repeated when threats or conditions change, to incorporate any new information and to adjust and revise the assessments and actions.
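The sequential steps described above can be sketched as a simple scoring exercise. This is a minimal illustration only, not an official DHS or TSA methodology; the multiplicative model and the example assets and ratings are assumptions chosen for clarity.

```python
# Minimal sketch of one step in a risk management process: combining threat,
# vulnerability, and criticality ratings to rank assets for mitigation.
# The multiplicative model and all example scores are illustrative assumptions,
# not an official DHS/TSA methodology.

def risk_score(threat, vulnerability, criticality):
    """Each factor rated on a 1-5 scale; a higher product means higher risk."""
    return threat * vulnerability * criticality

# Hypothetical assets with hypothetical ratings.
assets = {
    "airport perimeter fence": risk_score(3, 4, 3),
    "checked-baggage area":    risk_score(4, 2, 5),
    "air cargo facility":      risk_score(4, 4, 4),
}

# Rank assets so that mitigation alternatives are weighed for the highest
# risks first; the ranking is revisited whenever threats or conditions change.
for name, score in sorted(assets.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score}")
```

In practice, each rating would come from the assessments described above (threat intelligence, vulnerability assessments, and criticality determinations), and the ranking would inform which security alternatives to fund.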
In July 2005, in announcing his proposal for the reorganization of DHS, the Secretary of Homeland Security declared that, as a core principle of the reorganization, the department must base its work on priorities driven by risk. DHS has also taken steps to implement a risk-based approach to assessing risks in various transportation modes. For example, TSA completed an air cargo strategic plan 3 years ago that outlined a threat-based, risk management approach to secure the air cargo system by, among other things, targeting elevated-risk cargo for inspection. TSA also completed an updated cargo threat assessment in April 2005. However, we reported in November 2005 that TSA had not yet established a methodology and schedule for completing assessments of air cargo vulnerabilities and critical assets—two crucial elements of a risk-based management approach without which TSA may not be able to appropriately focus its resources on the most critical security needs. We recommended that TSA, among other things, complete its assessments of air cargo vulnerabilities and critical assets. (TSA has not provided any documentation to indicate that either the methodology or the schedule has since been completed.) By not yet fully evaluating the risks posed by terrorists to the air cargo transportation system through assessments of systemwide vulnerabilities and critical assets, including analyzing information on air cargo security breaches, TSA is limited in its ability to focus its resources on those air cargo vulnerabilities that represent the most critical security needs and to assure Congress that existing funds are being spent in the most efficient and effective manner. With respect to passenger rail, DHS’s Office of Grants and Training (OGT) has developed and implemented a risk assessment methodology that it has used to complete risk assessments at rail facilities around the country.
As we reported in September 2005, rail operators we interviewed stated that OGT’s risk management approach has helped them to allocate and prioritize resources to protect their systems. OGT has provided over $320 million in grants to rail transit agencies for certain security activities since fiscal year 2003. OGT has also leveraged its grant-making authority to promote risk-based funding decisions for passenger rail by requiring, for example, that operators complete a risk assessment to be eligible for a transit security grant. TSA has also recently begun to conduct risk assessments of the rail sector as part of a broader effort to assess risk to all transportation modes, but has not completed these efforts or determined how to analyze and characterize risks that are identified. Until these efforts are completed, TSA will not be able to prioritize passenger rail assets based on risk and help guide investment decisions about protecting them. We recommended in 2005 that TSA establish a plan and time line for completing its methodology for conducting risk assessments and evaluate whether the risk assessments used by OGT should be leveraged to facilitate the completion of risk assessments for rail and other transportation modes. Progress also has been made to analyze risks to other transportation sectors. For example, with respect to seaports, Coast Guard has been using a port security risk assessment tool for determining the risk associated with specific attack scenarios against key infrastructure or vessels in local ports. Under this approach, seaport infrastructure that is determined to be both a critical asset and a likely and vulnerable target would be a high priority for security enhancements or funding. 
In general, we have reported that the most progress has been made on fundamental steps, such as conducting risk assessments of individual assets, and that the least amount of progress has been made on developing ways to translate this information into comparisons and priorities across ports or across infrastructure sectors. Federal agencies with transportation security responsibilities should not expect to develop or implement enhanced security goals and standards for transportation without participation and input from other federal partners, as well as key state, local, private-sector, and international stakeholders. These stakeholders include, for example, federal transportation modal administrations such as FTA and FRA, local governments, air carriers and airports, rail and seaport operators, private industry trade associations, and foreign governments. It is important that all these stakeholders be involved, as applicable and appropriate, in coordinating security-related priorities and activities, and reviewing and sharing best practices on security-related programs and policies as a means of developing common security frameworks. Such efforts are important in part because we are increasingly interdependent when it comes to addressing security gaps. For example, we place Federal Air Marshals on international flights, and we match information from passengers on international flights bound for the United States against terrorist watch lists. This interdependence requires close coordination and opportunities to harmonize security standards and practices with critical stakeholders, such as foreign governments. Federal partnerships with various domestic stakeholders are under way throughout the transportation sector. 
In aviation, for example, TSA has been developing partnerships with private air carriers to conduct passenger prescreening, but continues to face challenges both in identifying and in supporting the roles it expects air carriers to play in the prescreening process, especially with regard to Secure Flight. In making recommendations to TSA on passenger prescreening, we have emphasized the need for TSA to continue to strengthen federal partnerships, and its partnerships with air carriers, in order to coordinate passenger screening programs, such as Secure Flight. For passenger rail, as mentioned previously, we have also recommended that TSA collaborate with the Department of Transportation and private industry rail operators on developing security standards that reflect industry best practices. In response, TSA is taking action to strengthen its partnerships with these stakeholders and is currently working with the American Public Transportation Association on developing passenger rail security standards based upon best practices. Establishing federal partnerships with foreign governments and industry associations tackling similar transportation security challenges can provide important strategic opportunities to learn about security practices and programs that have worked elsewhere. As European Union countries and others throughout the world become more focused on aviation and transportation security, and with the establishment of international aviation security standards, TSA officials have acknowledged the importance of coordinating and collaborating with foreign countries on security matters. We have ongoing work examining TSA’s efforts to coordinate with foreign governments on aviation security and expect to report on our results in the first quarter of 2007. In our work on passenger rail security, we identified some practices used abroad whose feasibility, costs, and benefits U.S. rail operators and the federal government had not studied.
For example, covert testing to determine whether security personnel comply with established security standards, which has been conducted at rail stations in the United Kingdom and elsewhere, is one approach TSA and rail industry stakeholders could consider. We recommended, among other things, that TSA evaluate the potential benefits and applicability—as risk analyses warrant and as opportunities permit—of implementing covert testing processes and other security practices that were not in use in the United States at the time of our September 2005 report. In response, TSA, through DHS, stated that it had been working with foreign counterparts on rail and transit security issues in order to share and glean best practices and intended to continue to do so. It is understandable that in the months and years following the 9/11 attacks, Congress and federal departments focused primarily on meeting the aviation security deadlines contained in ATSA and, in general, addressing the aviation-related vulnerabilities exploited by the terrorists. Over time, recognizing the threats and vulnerabilities facing other transportation modes, TSA and other agencies have begun to address other transportation security needs that were not the focal point of 9/11, including passenger rail, the maritime sector, and surface transportation modes. In these areas, TSA and other agencies have begun to identify and set priorities, based on risk and other factors, in order to allocate finite resources to enhance protection of the nation’s passenger rail systems, seaports, highways, and other critical transportation assets. Agencies have made some progress but have a long way to go toward working with domestic and international partners to identify critical transportation assets, develop strategies for protecting them, and use a risk-based approach to prioritize and allocate resources across competing transportation security requirements.
The visa process is a first layer of border security to prevent terrorists or criminals from gaining entry into the United States. Citizens of other countries seeking to enter the country temporarily for business and other reasons generally must apply for and obtain a visa. Before 9/11, U.S. visa operations focused primarily on illegal immigration concerns; after the attacks, greater emphasis was placed on using the visa process as a counterterrorism tool. Congress, DHS, and State have taken numerous actions to help strengthen the visa process by, among other things, expanding the name-check system used to screen applicants (including portions of the consolidated watch list), requiring in-person interviews for nearly all applicants, revamping consular training to focus on counterterrorism, and augmenting staff at consular posts. Steps also have been taken to help detect and prevent visa fraud. In addition, State and DHS officials have acknowledged that immigrant visa processes—whereby immigrants seeking permanent residency in the United States must obtain a certain type of visa—may warrant further review because these visa types could also pose potential security risks. Citizens of other countries seeking to enter the United States temporarily for business and other reasons generally must apply for and obtain a U.S. travel document, called a visa, at U.S. embassies or consulates abroad before arriving at U.S. ports of entry. The main steps required to obtain a visa are generally the same before and after 9/11: visa applicants must submit an application to a consulate or embassy; consular officials review the applicant’s documentation; the applicant’s information is checked against a name-check system maintained by State; officials then issue, or decline to issue, a visa, which the applicant may then present to CBP officials (formerly Immigration and Naturalization Service inspectors) for inspection prior to entering the United States. 
While the general visa process has remained intact, the focus before 9/11 was primarily on screening applicants to determine whether they intended to work or reside illegally in the United States, though screening for terrorists was also part of this process. The 9/11 Commission staff reported that no U.S. agency at the time of the attacks thought of the visa process as an antiterrorism tool, and noted that consular officers were not trained to screen for terrorists. Overseas consular posts, which administer the visa process, were encouraged to promote international travel, and were given substantial discretion in determining the level of scrutiny applied to visa applications. For example, posts had latitude to routinely waive in-person interviews as part of their overall visa applicant screening process. In making decisions about who should receive a visa, consular officials relied on a State Department name-check database that incorporated information from many agencies on individuals who had been refused visas in the past, had other immigration violations, and had raised terrorism concerns. This name-check database was the primary basis for identifying potential terrorists and other ineligible applicants. With these policies and State’s name-check system in place, the 19 hijackers exploited this process and were able to obtain visas. (See app. I for details on the hijackers’ visa applications and a time line of visas issued to hijackers during this period.) Specifically, the hijackers were issued a total of 23 visas at five different consular posts from April 1997 through June 2001 (multiple visas were issued over this period, for different stays). These visas were issued based on the belief that the applicants were “good cases,” that is, they were not perceived as security risks and were thought likely to return to their country at the end of their allotted time in the United States. 
Post policies in Saudi Arabia and the United Arab Emirates, for example, were to consider all citizens of those countries as “good cases” for visas. Thus, it was policy for consular officers in these countries to issue visas to most Saudi and Emirati applicants without interviewing them unless their names showed up in the name-check database or they had indicated on their applications that they had a criminal history. In addition, consular managers at these posts said that the posts had accepted applications from Saudi and Emirati nationals that were not completely filled out and lacked supporting documentation. As it turned out, 17 of the 19 hijackers were citizens of either Saudi Arabia or the United Arab Emirates. None of the visa applications for which we were able to obtain documentation was completely filled out, and consular officers granted visas to all but 2 of the 15 hijackers for whom records were available without conducting an interview. Moreover, while consular officers who issued visas to the hijackers followed established procedures for checking to see if these individuals were included in the name-check database when they applied for visas, the database did not contain information on any of them. While the intelligence community notified State a few weeks prior to 9/11 that it had identified two of them as possible terrorists who should not receive visas, the visas had already been issued—and although they were subsequently revoked, by that time the hijackers had entered the country. As we reported in September 2005, State, DHS, and other agencies have taken many steps since the 9/11 attacks to strengthen the visa process as an antiterrorism tool. For example, the consular name-check database has been expanded—the information in this database now draws upon a subset of the Terrorist Screening Center’s consolidated watch list as well as other information.
Specifically, State, in cooperation with other federal agencies, has increased the amount of information available to consular officers in the name-check database fivefold—from 48,000 records in September 2001 to approximately 260,000 records in June 2005. An additional 8 million records on criminal history from the FBI also are now available for the name-check process. In addition, under the leadership of the Assistant Secretary of State for Consular Affairs, our work shows that consular officers are receiving clear guidance on the importance of security as the first priority of the visa process. Our observations of consular sections at eight posts in 2005 confirmed, for instance, that consular officers overseas regard security as their top priority, while also recognizing the importance of facilitating legitimate travel to the United States. Many new policies have been introduced, and existing policies revised, both to strengthen the visa process as a terrorist screening tool and to build in more structure for posts that have traditionally had discretionary latitude in handling visa matters. One key policy change, mandated in the Intelligence Reform and Terrorism Prevention Act of 2004 and previously recommended by us, requires that consular posts conduct in-person interviews with most applicants for nonimmigrant visas, with certain exceptions. Generally, applicants between the ages of 14 and 79 must submit to an in-person interview, though under certain circumstances such interviews can be waived. To ensure that these and other new policies for strengthening the visa process as an antiterrorism tool would be understood and implemented by all consular officers at all posts, State, in consultation with DHS, has issued more than 80 new standard operating procedures related to security and other matters.
For example, State has issued procedures implementing the legislative provision that places restrictions on the issuance of nonimmigrant visas to persons coming from countries that sponsor terrorism. Another new procedure informs consular offices about fingerprint requirements for visa applicants. State has also established management controls to ensure that visa applications are processed in a consistent manner at each post, in part to reinforce security-related policies and procedures. For example, the department created Consular Management Assistance Teams to conduct management reviews and field visits of consular sections worldwide, providing guidance to posts on standard operating procedures. Over 90 of these reviews have been conducted, in which the teams evaluate operations and make recommendations to mitigate a range of potential vulnerabilities they identify in their visits. In addition, as a means of adding a layer of security review prior to issuing new visas, DHS has, as directed by Congress, assigned visa security officers in Saudi Arabia to review all visa applications prior to adjudication by State’s consular officers, and to provide expert advice and training to consular officers on visa security at selected U.S. embassies and consulates. This effort, known as the Visa Security Program, is being expanded to other posts. According to State’s consular officers, the deputy chief of mission, and DHS officials in Saudi Arabia, the visa security officers deployed in Riyadh and Jeddah, Saudi Arabia, strengthen visa security because of their law enforcement and immigration experience, as well as their ability to access and use information from law enforcement databases not immediately available, by law, to consular officers. Based on recommendations we made in 2005, DHS has developed performance data to assess the results of this program at each post. Consular officers’ training has been revamped and expanded to emphasize counterterrorism. 
For example, the basic consular training course has been lengthened from 26 days to 31 days to provide added emphasis on visa security, counterterrorism awareness, and interviewing techniques. And last year, State initiated training to enhance interviewing techniques, specifically designed to help consular officers spot inconsistencies in a visa applicant’s story or in the applicant’s demeanor; such observations may form a sufficient basis for denying a visa. State Department officials believe this training is important to help consular officers determine, during the interview period, whether applicants whose documents do not indicate any terrorist ties show signs of deception. To complement efforts taken to implement new guidance, policies and procedures, and management controls, State also has taken actions to address the potential for visa fraud at consular posts. As the 9/11 Commission staff noted, 2 of the 19 terrorist hijackers used passports that had been manipulated in a fraudulent manner to obtain visas needed to enter the country. State has since deployed 25 visa fraud investigators to U.S. embassies and consulates and developed ways for consular officers in the field to learn about fraud prevention including, for example, an on-line discussion group, comprised of more than 500 members, where information on, and lessons learned from, prior fraud cases may be shared. Training on fraud prevention also has been bolstered. For example, State expanded fraud prevention course offerings for managers from 2 to 10 times annually; DHS’s ICE provides training to State’s fraud prevention managers; and ICE’s Forensic Document Laboratory provides training on forensic documentation and analysis to combat travel and identity document fraud. 
Acting on a recommendation we made in 2005 on fraud prevention, State’s Vulnerability Assessment Unit has begun to conduct more in-depth analyses of the visa information that is collected as a means of detecting patterns and trends that may indicate the potential for fraud and determining whether additional investigation may be needed. Using data-mining techniques (searching large volumes of data for patterns), this unit can, for example, use its internal databases to trigger alerts when specific keywords or activities arise, such as visas issued to individuals associated with certain organizations with terrorist ties, or sudden increases in visas issued to individuals residing in countries where they are not citizens. This proactive analysis may result in investigations and further mitigates potential fraud risks in the visa process. In addition, the Intelligence Reform and Terrorism Prevention Act of 2004 required State, in coordination with DHS, to conduct a survey of each diplomatic and consular post to assess the extent to which fraudulent documents are presented by visa applicants. The act mandates that State, in coordination with DHS, identify the posts experiencing the greatest frequency of fraudulent documents being presented by visa applicants and place in those posts at least one full-time antifraud specialist. The presence of full-time fraud officers at high-fraud posts is particularly important given that entry-level officers may serve as fraud prevention managers on a part-time basis, in addition to their other responsibilities. According to State officials, as of July 2006, State had completed its review of fraud levels at posts, and is continuing to refine its methodology for determining which posts have the highest levels of fraud in the visa process. In addition to implementing new policies, procedures, and antifraud measures, State also has taken some steps to address staffing and language proficiency issues at consular posts.
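The two kinds of alerts described above—keyword matches against records and counts of visas issued to noncitizen residents of a country—can be sketched in simplified form. This is purely an illustrative sketch: the Vulnerability Assessment Unit’s actual systems, data fields, and watch terms are not described in this report, so every name and field below is hypothetical.

```python
from collections import Counter

# Hypothetical watch terms (e.g., organizations of concern); the real
# watch list and matching rules are not public.
WATCH_TERMS = {"example_front_org", "example_charity"}

def scan_records(records):
    """Flag records whose free-text fields mention a watch term, and
    count issuances by (residence country, citizenship) so spikes in
    visas issued to noncitizen residents can be spotted."""
    alerts = []
    residence_mismatch = Counter()
    for rec in records:
        text = (rec.get("employer", "") + " " + rec.get("sponsor", "")).lower()
        if any(term in text for term in WATCH_TERMS):
            alerts.append(rec["id"])
        if rec["residence_country"] != rec["citizenship"]:
            residence_mismatch[(rec["residence_country"], rec["citizenship"])] += 1
    return alerts, residence_mismatch

# Hypothetical records for illustration only.
records = [
    {"id": "A1", "employer": "Example_Front_Org Ltd", "sponsor": "",
     "residence_country": "X", "citizenship": "Y"},
    {"id": "A2", "employer": "Acme", "sponsor": "",
     "residence_country": "X", "citizenship": "X"},
]
alerts, mismatch = scan_records(records)
print(alerts)          # ["A1"] — matched a watch term
print(dict(mismatch))  # {("X", "Y"): 1} — one noncitizen-resident issuance
```

A production system would of course work against internal databases and far richer matching rules; the point here is only the shape of the analysis: rule-based flags on individual records plus aggregate counts that surface unusual trends.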
Though State added hundreds of Foreign Service consular positions after 9/11, and an additional 150 consular officer positions have been authorized annually from fiscal year 2006 through fiscal year 2009, State has reported that a staffing shortage at consular posts persists. We have reported on multiple occasions that State has a shortage of mid-level, supervisory consular officers at key overseas posts and that the department has not assessed its overall consular staffing needs. Staff shortages have also led to extensive wait times for visa interview appointments at some posts. We are currently reviewing this issue and expect to report on our findings early this year. Moreover, in our earlier work, we found that not all consular officers were proficient enough in the languages spoken at their posts to conduct interviews with visa applicants. To remedy this shortage, State has focused its recruitment efforts on attracting more consular officers who are proficient in languages it deems critical. (See app. III for a list of our products related to the visa process.) While State and other agencies have enhanced and strengthened policies and procedures for screening applicants for nonimmigrant visas, State and DHS have acknowledged that the visa process for immigrants seeking to reside in the United States on a permanent basis may warrant further review because these visa types could also pose potential security risks. Immigrant visas are issued on the basis of certain family relationships or types of employment, refugee status, or other circumstances adjudicated by officials at several federal agencies, including the departments of Homeland Security, Labor, and Justice. We have recently begun a review to identify the security risks associated with various immigrant visa programs, and plan to issue a report later this year.
One immigrant visa program singled out by the State OIG 3 years ago as potentially risky was the Diversity Visa program, established by Congress in 1995. It authorizes the issuance of up to 50,000 immigrant visas annually to persons from countries that are underrepresented among the 400,000 to 500,000 immigrants coming to the United States each year, and who qualify for a visa on the basis of their education level and/or work experience. This program is commonly referred to as the visa lottery because “winners” are selected through a computer-generated random drawing. The applicants who receive a visa under this program are authorized to live and work permanently in the United States. The State OIG reported as a concern in 2003 that the Diversity Visa program did not generally prohibit the issuance of visas to aliens from countries that sponsor terrorism. (The nonimmigrant visa process, by contrast, places restrictions on the issuance of visas to persons from countries sponsoring terrorism.) Steps have since been taken by the State Department to address this concern. In 2005, the OIG reported that revised consular procedures and heightened awareness generally provided greater safeguards against terrorists entering through the Diversity Visa process than in the past. For example, the OIG noted that consular officers interview all Diversity Visa winners and check applicants’ police and medical records. In addition, all immigrant visa applicants (as well as nonimmigrant applicants) are required to be fingerprinted; the fingerprint system helps to identify fraudulent applicants using false names. Despite these actions, the OIG continues to believe that the program still poses significant risks to national security from hostile intelligence officers, criminals, and terrorists attempting to use the program for entry into the United States as permanent residents. 
We are also reviewing the potential security risks of the Diversity Visa program as part of our ongoing review of immigrant visa programs. The range of actions that State and DHS have undertaken to strengthen the nonimmigrant visa process as an antiterrorism tool—in part in response to our past recommendations—has, when considered altogether, gone a long way toward reducing the likelihood that terrorists can obtain the visas needed to enter the United States and wreak havoc. While it is generally acknowledged that the visa process can never be entirely failsafe—and that it will never be possible to entirely eliminate the risk of terrorists obtaining nonimmigrant visas issued by the United States government—the federal government has done a creditable job overall of strengthening the visa process as a first line of defense. Separate concerns have been raised about potential risks associated with certain immigrant visa programs, and we have initiated a review to identify and analyze these potential security risks. The processes for screening and inspecting travelers arriving at the nation’s air, land, and sea ports represent a key layer of border security defense. Many measures have been put in place to enhance security in these and related areas, but policies and programs can still be strengthened. For example, the Visa Waiver Program, which enables travelers from certain countries to seek entry into the United States without visas, carries inherent security, law enforcement, and illegal immigration risks because, among other things, visa waiver travelers are not subject to the same degree of screening as those travelers required to obtain visas. In addition, the potential misuse of lost or stolen passports from visa waiver countries is a serious security problem that terrorists and others can potentially exploit.
Since 9/11, in response to congressional requirements, DHS has begun taking steps designed to mitigate the risks posed by visa waiver travelers; however, we have reported that additional actions are needed to further mitigate the risks posed by the use of fraudulent identity documentation, including actions to ensure that foreign governments report information on lost or stolen passports. Separately, a border security initiative designed to verify travelers’ identities— US-VISIT—has helped to process and authenticate travelers seeking entry (or reentry) to the country. A key goal of US-VISIT—tracking those who overstay their authorized stay—cannot be fully implemented, however, because, among other things, the exit portion of the initiative has not been developed. Steps also have been taken by various federal agencies to enhance detection of hazardous cargo shipped over land and to identify oceangoing cargo containers that also may contain hazardous materials or weapons, but more work is needed in both areas. While significant progress has been made to ensure that terrorists do not obtain visas as a prelude to gaining entry to the United States, visa holders are by no means the only foreign travelers coming to the United States. Under the Visa Waiver Program, millions of travelers seek entry into the United States each year without visas. The Visa Waiver Program is intended to facilitate international travel and commerce, and ease consular workload at overseas posts, by enabling citizens of 27 participating countries to travel to the United States for tourism or business for 90 days or less without first obtaining a nonimmigrant visa from U.S. embassies and consulates. (See app. II for a map of Visa Waiver Program member countries.) 
While the Visa Waiver Program provides many benefits to the United States, there are inherent security, law enforcement, and illegal immigration risks in the program because some foreign citizens may exploit the program to enter the United States. In particular, visa waiver travelers are not subject to the same degree of screening as those travelers who must first obtain a visa before arriving in the United States. Furthermore, lost and stolen passports from visa waiver countries could be used by terrorists, criminals, and immigration law violators to gain entry into the United States. While DHS established a unit in 2004 to oversee the program and conduct mandated assessments of program risks, we reported in July 2006 that the assessment process has weaknesses and the unit was unable to effectively monitor risks on a continuing basis because of insufficient resources. Furthermore, while DHS has taken some actions to mitigate program risks, the department has faced difficulties in further mitigating the risks of the program, particularly regarding lost and stolen passports—a key vulnerability. In fiscal year 2005, nearly 16 million travelers entered the United States under the Visa Waiver Program, and visa waiver travelers have represented roughly one-half of all nonimmigrant admissions to the United States in recent years. The program is beneficial, according to federal officials, because it facilitates international travel for millions of foreign citizens seeking to visit the United States each year, provides reciprocal visa-free travel for Americans visiting visa waiver member countries, and creates substantial economic benefits for the United States. Moreover, the program allows State to allocate its limited resources to visa-issuing posts in countries with higher-risk applicant pools. By design, visa waiver travelers are not subject to the same degree of screening as those travelers who must first obtain a visa before arriving in the United States. 
Travelers who must apply for visas receive two levels of screening as they are first screened by consular officers overseas and then by CBP officers before entering the country. However, visa waiver travelers are first screened in person by a CBP inspector upon arrival at a U.S. port of entry. For all travelers, CBP primary officers observe the applicant, examine that person’s passport, collect the applicant’s fingerprints as part of the U.S. Visitor and Immigrant Status Indicator Technology program (US-VISIT), and check the person’s name against automated databases and watch lists, which contain information regarding the admissibility of aliens, including known terrorists, criminals, and immigration law violators. However, according to the DHS Office of Inspector General, CBP’s primary border officers are disadvantaged when screening visa waiver travelers because they may not know the alien’s language or local fraud trends in the alien’s home country, nor have the time to conduct an extensive interview. In contrast, non-visa waiver travelers, who must obtain a visa from a U.S. embassy or consulate, undergo an interview by consular officials overseas, who conduct a rigorous screening process when deciding to approve or deny a visa. Moreover, consular officers have more time to interview applicants and examine the authenticity of their passports, and may speak the visa applicant’s native language, according to consular officials. Fig. 5 provides a comparison of the process for visa waiver travelers and visa applicants. The Visa Waiver Program, while valuable, can pose risks to U.S. security, law enforcement, and immigration interests because some foreign citizens may try to exploit the program to enter the United States. Indeed, convicted 9/11 terrorist Zacarias Moussaoui and “shoe-bomber” Richard Reid both boarded flights to the United States with passports issued by Visa Waiver Program countries. 
Moreover, as we have reported, inadmissible travelers who need visas to enter the United States may attempt to acquire a passport from a Visa Waiver Program country to avoid the additional scrutiny that takes place in non-visa waiver countries. Since the terrorist attacks, the government has taken several actions intended to enhance the security of the Visa Waiver Program by improving program management, oversight, and efforts to assess and mitigate program risks, among other things. For example, shortly after 9/11, Congress required DHS to increase the frequency of mandated assessments to determine the effect of each country’s continued participation in the Visa Waiver Program on U.S. security, law enforcement, and immigration interests, from once every 5 years to once every 2 years (biennially). These assessments are important because they enable the United States to analyze individual participating countries’ border controls, security over passports and national identity documents, and other matters relevant to law enforcement, immigration, and national security. In April 2004, the DHS OIG reported that a lack of funding, training and other issues left DHS unable to comply with the congressionally mandated biennial country assessments. In response to the OIG’s findings, DHS established a Visa Waiver Program Oversight Unit to oversee Visa Waiver Program activities and monitor countries’ adherence to the program’s statutory requirements to help ensure that the United States is protected from those who wish to do it harm or violate its laws, including immigration laws. 
Actions taken by this unit include completing comprehensive assessments for 25 of the 27 visa waiver countries (with the remaining two under way); identifying risks through these assessments, which, for five countries, were brought to the attention of the host governments; working with countries seeking to join the program; and briefing foreign government representatives from participating countries on issues of interest and concern, such as new passport requirements for visa waiver travelers. While the move to a biennial review process and the establishment of the Visa Waiver Program Oversight Unit represent a good first step to better assess the inherent risks of the program, our recent work indicates that DHS could improve its administration of this effort and raises concerns about the agency’s ability to effectively monitor the law enforcement and security risks due to staffing and resource constraints. For example, in our July 2006 report, we identified several problems with DHS’s first biennial review cycle, conducted in 2004, including the lack of clear criteria for assessing each country’s participation in the program to determine at what point security concerns in a particular country would trigger discussions with foreign governments to resolve them. Moreover, DHS did not issue to Congress in a timely manner the mandated summary report describing the findings from its 25 country assessments. DHS, State, and Justice officials acknowledged that the report—consisting of a six-page summary that lacked detailed descriptions of the law enforcement and security risks identified during the review process and was delivered more than a year after the site visits were made—took too long to complete. As a result of this lengthy process, the final report delivered to Congress did not necessarily reflect the current law enforcement and security risks posed by each country, and did not capture recent developments.
For example, the large-scale theft of blank passports in a visa waiver country that took place while the report was being processed was not reflected in the country’s report. Thus, there were missed opportunities to report timely information to Congress. In our July 2006 report, we recommended that DHS finalize clear, consistent, and transparent protocols for biennial country assessments and provide these protocols to stakeholders at relevant agencies at headquarters and overseas. These protocols should provide time lines for the entire assessment process, including the role of a site visit, an explanation of the clearance process, and deadlines for completion. In addition, we recommended to Congress that it establish a biennial deadline by which DHS must complete its assessments and report to Congress. In its formal comments to our report, DHS did not appear to support the establishment of a deadline. Instead, DHS suggested that Congress require continuous and ongoing evaluations of the risks of each country’s program. With respect to staffing and resources to carry out these assessment efforts and other program oversight responsibilities, we reported that DHS cannot effectively monitor the law enforcement and security risks posed by 27 visa waiver countries on a consistent, ongoing basis because it has not provided the oversight unit with adequate staffing and funding resources. Without adequate resources, the unit may be unable to monitor and assess participating countries’ compliance with the program. We recommended that additional resources be provided to strengthen the program oversight unit’s monitoring activities. Until this is achieved, staffing and resource constraints may hamper the effectiveness of the Visa Waiver Program and could jeopardize U.S. security interests. DHS has stated that it expects the administration to seek resources appropriate for the oversight unit’s tasks. 
In addition to efforts to improve administration and oversight and assess the overall risks of the Visa Waiver Program, federal actions also have been taken to mitigate one specific risk: the potential misuse of lost or stolen passports. DHS intelligence analysts, law enforcement officials, and forensic document experts all acknowledge that the greatest security problem posed by the program is the potential exploitation by terrorists, immigration law violators, and other criminals of a country’s lost or stolen passports—whether they have been issued (used) or are blank (unused). Lost and stolen passports from visa waiver countries are highly prized among those travelers seeking to conceal their true identities or nationalities. In 2004, the DHS OIG reported that aliens applying for admission to the United States using lost or stolen passports had little reason to fear being caught. DHS has acknowledged that an undetermined number of inadmissible aliens may have entered the United States using a stolen or lost passport from a visa waiver country, and, in fact, passports from Visa Waiver Program countries have been used illegally by travelers attempting to enter the United States. For example, in a 6-month period in 2005, DHS confiscated at U.S. ports of entry 298 fraudulent or altered passports that had been issued by visa waiver countries. Visa waiver countries that do not consistently report the losses or thefts of their citizens’ passports, or of blank passports, put the United States at greater risk of allowing inadmissible travelers to enter the country. DHS has begun taking steps intended to help mitigate the risks related to lost and stolen passports. For example, in 2004, the DHS OIG reported that a lack of training hampered CBP border inspectors’ ability to detect passport fraud among visa waiver travelers and recommended that CBP officers receive additional training in fraudulent document detection. 
In response, DHS has doubled the time devoted to fraudulent document detection training for new officers from 1 day to 2 days, and provides additional courses for officers throughout their assignments at ports of entry. Nevertheless, training officials said that fraudulent and counterfeit passports are extremely difficult to detect, even for the most experienced border officers. Congress and DHS have taken additional actions designed to mitigate this risk. For example, all passports issued to visa waiver travelers between October 26, 2005, and October 25, 2006, must contain a digital photograph printed in the document, and DHS is enforcing this requirement. When Italy and France failed to meet the deadline for issuing new passports encoded with digital photographs, DHS began requiring citizens with noncompliant passports to obtain a visa before visiting the United States. In addition, passports issued to visa waiver travelers after October 25, 2006, must be electronic (e-passports). E-passports aim to enhance the security of travel documents, making it more difficult for imposters or inadmissible aliens to misuse the passport to gain entry into the United States. Travelers with passports issued after the deadlines that do not meet these requirements are required to obtain a visa from a U.S. embassy or consulate overseas before departing for the United States. On October 26, 2006, DHS announced that 24 of the 27 Visa Waiver Program countries had met the deadline to begin issuing e-passports. While e-passports may help officers to identify fraudulent and counterfeit passports, because many passports issued from a visa waiver country before the October 2006 deadline are not electronic—and remain valid for years to come—it remains imperative that lost and stolen passports from visa waiver countries be reported to the United States on a timely basis. 
In 2002, Congress made the timely reporting of stolen blank passports, in particular, a condition for continued participation in the program and required that a country must be terminated from the Visa Waiver Program if the Secretary of Homeland Security and the Secretary of State jointly determine that this information was not reported on a timely basis. According to DHS, detecting stolen blank passports at U.S. ports of entry is extremely difficult and some thefts of blank passports have not been reported to the United States until years after the fact. For example, in 2004, a visa waiver country reported to the United States the theft of nearly 300 blank passports more than 9 years after the theft occurred. DHS and State have chosen not to terminate from the program countries that failed to report these incidents. DHS officials told us that the inherent political, economic, and diplomatic implications associated with removing a country from the Visa Waiver Program make it difficult to enforce the statutory requirement. Nevertheless, recognizing the importance of timely reporting of this information, DHS has taken steps to address this issue. For example, in 2004, during its assessment of Germany’s participation in the Visa Waiver Program, DHS determined that several thousand blank German temporary passports had been lost or stolen, and that Germany had not reported some of this information to the United States. In response, after a series of diplomatic discussions, temporary passport holders from Germany were no longer allowed to travel to the United States without a visa. In addition, because lost or stolen issued passports can be altered, DHS issued guidance in 2005 to visa waiver countries requiring that they certify their intent to report lost or stolen passport data on issued passports. Some visa waiver countries do not provide this information to the United States, due in part to concerns over the privacy of their citizens’ biographical information. 
While we acknowledge the complexities and challenges of enforcing the statutory requirement and collecting information on both blank and issued stolen and lost passports, our recent work has identified areas where DHS could do more to help ensure that countries report this information—and do so in a timely manner. For example, as of June 2006, DHS had not yet issued guidance or standard operating procedures on what information must be shared, with whom, and within what time frame. In July 2006, we recommended that DHS require all visa waiver countries to provide the United States with nonbiographical data from lost or stolen issued passports, as well as from blank passports, and develop and communicate clear standard operating procedures for the reporting of these data, including a definition of timely reporting and a designee to receive the information. In a separate effort to mitigate risks from lost and stolen passports, the U.S. government announced in 2005 its intention to require visa waiver countries to certify their intent to report information on lost and stolen blank and issued passports to the International Criminal Police Organization (Interpol)—the world’s largest international police organization. State reported to Congress in 2005 that it had instructed all U.S. embassies and consulates to take every opportunity to persuade host governments to share these data with Interpol. Interpol already has a database of lost and stolen travel documents to which its member countries may contribute on a voluntary basis. As of June 2006, this database contained more than 11 million records of lost and stolen passports. However, the way visa waiver countries and the United States interact with and utilize the Interpol database system could be improved. While most of the 27 visa waiver countries use and contribute to Interpol’s database, 4 do not. Moreover, some countries that do contribute do not do so on a regular basis, according to Interpol officials. 
In addition, Interpol’s data on lost and stolen travel documents are not automatically accessible to U.S. border officers at primary inspection—which is one reason why it is not an effective border screening tool, according to DHS, State, and Justice officials. According to the Secretary General of Interpol, until DHS can automatically query Interpol’s data, the United States will not have an effective screening tool for checking passports. However, DHS has not yet finalized a plan to acquire this systematic access to Interpol’s data. We recently recommended that DHS require all visa waiver countries to provide Interpol with nonbiographical data from lost or stolen issued or blank passports, and implement a plan to make Interpol’s database automatically available during primary inspection at U.S. ports of entry. The Visa Waiver Program aims to facilitate international travel for millions of people each year and promote the effective use of government resources. Effective oversight of the program entails balancing these benefits against the program’s potential risks. To find this balance, as we have reported, the U.S. government needs to fully identify the vulnerabilities posed by visa waiver travelers, and be in a position to mitigate them. However, we found weaknesses in the process by which the U.S. government assesses these risks, and DHS’s Visa Waiver Program oversight unit is not able to manage the program with its current resource levels. While actions are under way to address these issues, they have not all been resolved. Specifically, in response to our recommendation that additional resources be provided to strengthen the program oversight unit’s monitoring activities, DHS stated that it expected the administration to seek resources appropriate for the unit’s tasks. Until this is achieved, as we have reported, staffing and resource constraints may hamper the effectiveness of the Visa Waiver Program and could jeopardize U.S. security interests. 
Moreover, DHS has not communicated clear reporting requirements for lost and stolen passports—a key risk—nor can it automatically access all stolen passport information when it is most needed—namely, at the primary inspection point at U.S. ports of entry. We recently recommended that DHS require all visa waiver countries to provide the United States and Interpol with nonbiographical data from lost or stolen issued passports, as well as from blank passports, and implement a plan to make Interpol’s lost and stolen passport database automatically available during the primary inspection process at U.S. ports of entry. DHS is in the process of implementing these recommendations. Finding ways to address these and other challenges, including those related to program staffing and managing the visa waiver country review process, is especially important, given that, while it does not appear there will be any expansion of the Visa Waiver Program in the short term, many countries are actively seeking admission into the program, and the President has announced his support for the program’s expansion. Over the last decade, the United States has, at the direction of Congress, been developing a border security initiative intended to serve as a comprehensive system for recording the entry and exit of most foreign travelers. Prior to 9/11, this system, now known as US-VISIT, was the responsibility of the INS and focused primarily on trying to ensure that nonimmigrant travelers (including those from visa waiver countries) who arrived at U.S. ports of entry (POE) did not overstay their authorized visitation periods in order to work illegally in the country. 
Our work in the years leading up to the 9/11 attacks, and work by the Justice Department OIG, found weaknesses in overstay processes, in part because the INS did not collect and maintain records that would enable officials to identify all of the foreign nationals who either left the country or who remained past the expiration date of their authorized stay. US-VISIT was initially conceived as one means of addressing this problem. After the terrorist attacks, while immigration enforcement remained an important priority, the ability to track overstays through an entry/exit border inspection system, and to authenticate the identity of travelers arriving at ports of entry, took on added importance, given that three of the six terrorist pilots had managed to remain in the U.S. after their visas had expired. In prior reports on US-VISIT, we have identified numerous challenges that DHS faces in delivering program capabilities and benefits on time and within budget. We have reported, for example, that the US-VISIT program is a risky endeavor, in part because it is large, complex, and potentially costly. (See app. III for a list of our products related to overstay tracking and US-VISIT.) US-VISIT is designed to use biographic information (e.g., name, nationality, and date of birth) and biometric information (e.g., digital fingerprint scans) to verify the identity of those covered by the program, which is being rolled out over a 5-year period, from 2002 to 2007. The program applies to certain visitors whether they hold a nonimmigrant visa, or are traveling from a country that has a visa waiver agreement with the United States under the Visa Waiver Program. Foreign nationals subject to US-VISIT who intend to enter the country encounter different inspection processes at different types of ports of entry (POEs) depending on their mode of travel. 
Foreign nationals subject to US-VISIT who enter the United States at an air or sea POE are to be processed, for purposes of US-VISIT, in the primary inspection area upon arrival. Generally, these visitors are subject to prescreening before they arrive via passenger manifests, which are forwarded to CBP by commercial air or sea carrier in advance of arrival. By contrast, foreign nationals intending to enter the United States at a land POE are generally not subject to prescreening because they arrive in private vehicles or on foot and there is no manifest to record their pending arrival. Thus, when foreign nationals subject to US-VISIT arrive at a land POE, they are directed by CBP officers from the primary inspection area to the secondary inspection area for further processing. As we have recently reported, DHS has deployed an entry capability for US-VISIT at over 300 air, sea, and land POEs, including 154 land ports along the northern and southwestern borders where hundreds of millions of legitimate border crossings take place annually. Biographic and biometric information, including digital fingerprint scans and digital photographs, are used at these ports to verify the identity of visitors. With respect to land ports specifically (the subject of our most recent US-VISIT work), CBP officials at 21 land POE sites we visited where US-VISIT entry capability had been deployed reported that the program had enhanced their ability to verify travelers’ identities, among other things. However, many land POE facilities, which are small and aging, face ongoing operational challenges, including space constraints and traffic congestion, as they continue to operate the entry capability of US-VISIT while also processing other travelers entering the United States. Moreover, Congress’s goal for US-VISIT—to record entry, reentry, and exit—has not been fully achieved because a biometric exit capability has not been developed or deployed. 
According to DHS officials, implementing a biometrically based exit program like that used to record those entering or re-entering the country is potentially costly (an estimated $3 billion), would require new infrastructure, and would produce major traffic congestion because travelers would have to stop their vehicles upon exit to be processed—an option officials consider unacceptable. Officials stated that they expect that a viable technology for developing a biometric exit capability for US-VISIT that would not require travelers to stop at a facility will become available within the next 5 to 10 years. Without some type of biometric exit capability, however, the government cannot provide certainty that the person exiting the country is the person who entered—and thus cannot determine which visitors have remained in the U.S. past the expiration date of their authorized stay. In November 2006, we recommended, among other things, that DHS finalize a mandated report to Congress describing how a comprehensive biometrically based entry and exit system would work and how an interim nonbiometric exit solution—one is currently being tested—is to be developed or deployed. DHS agreed with our recommendation. While the goal of US-VISIT is in part to ensure that lawful travelers enter and exit the country using valid identity documents, the program is not intended to verify the identities of all travelers. In particular, U.S. citizens, lawful permanent residents, and most Canadian and Mexican citizens are exempt from being processed under US-VISIT upon entering and exiting the country. It is still possible for travelers such as these to use fraudulent documents as a basis for entering the country. For example, U.S. citizens and citizens of Canada and Bermuda are not generally required to present a passport when they enter the United States via land ports of entry. 
Instead, as we have reported, they may use other forms of identifying documentation, such as driver’s licenses or birth certificates, which can be easily counterfeited and used by terrorists to travel into and out of the country. In 2003, 2004, and again in 2006, our undercover investigators were able to successfully enter the United States from Canada and Mexico using fictitious names and counterfeit driver’s licenses and birth certificates. CBP has acknowledged that its officers are not able to identify all forms of counterfeit documentation of identity and citizenship presented at land ports of entry, and the agency fully supports a new statutory initiative designed to address this vulnerability. This initiative requires DHS and State to develop and implement a plan by no later than June 2009 whereby U.S. citizens and foreign nationals of Canada, Bermuda, and Mexico must present a passport or other document or combination of documents deemed sufficient to show identity and citizenship to enter or reenter the United States; such documentation is not currently needed by many of these travelers. While this effort, known as the Western Hemisphere Travel Initiative (WHTI), may address concerns about counterfeit documents, it still faces hurdles. For example, key decisions have yet to be made about what documents other than a passport would be acceptable when U.S. and Canadian citizens enter or return to the United States via land ports of entry—a decision critical to determining how DHS is to inspect individuals entering the country. Nor has DHS decided what types of security features should be utilized to protect personal information contained in travel documents that may be required, such as an alternative type of passport containing an electronic tag encoded with information to identify each traveler. 
DHS also has not determined whether, or how, WHTI border inspection processes would fit strategically or operationally with other current and emerging border security initiatives. The emergence of fraud-prevention efforts such as WHTI poses additional challenges for DHS’s oversight of US-VISIT. For example, DHS has not yet determined how US-VISIT is to align with emerging land border security initiatives and mandates like WHTI, and thus cannot ensure that these programs work in harmony to meet mission goals and operate cost effectively. As we reported 3 years ago, agency programs need to properly fit within a common strategic context governing key aspects of program operations, such as what functions are to be performed and rules and standards governing the use of technology. Although a strategic plan defining an overall immigration and border management strategy has been drafted, DHS has not approved it, raising questions about DHS’s overall strategy for effectively integrating border security programs and systems at land POEs. Until decisions about WHTI and other initiatives are made, it remains unclear how US-VISIT will be integrated with emerging border security initiatives, if at all—raising the possibility that CBP would be faced with managing differing technology platforms and border inspection processes at each land POE. Knowing how US-VISIT is to work in concert with other border security and homeland security initiatives could help Congress, DHS, and others better understand what resources and tools are needed to ensure their success. We recommended in November 2006 that DHS direct the US-VISIT Program Director to finalize in its required report to Congress (as noted earlier) a description of how DHS plans to align US-VISIT with other emerging land border security initiatives. DHS agreed with our recommendation. We have ongoing work looking at many aspects of US-VISIT. 
Developing and deploying complex technology that records the entry and exit of millions of visitors to the United States, verifies their identities to mitigate the likelihood that terrorists or criminals can enter or exit at will, and tracks persons who remain in the country longer than authorized is a worthy goal in our nation’s effort to enhance border security in a post-9/11 era. But doing so also poses significant challenges; foremost among them is striking a reasonable balance between US-VISIT’s goals of providing security to U.S. citizens and visitors while facilitating legitimate trade and travel. DHS has made considerable progress making the entry portion of the US-VISIT program at land ports of entry operational, and border officials have clearly expressed the benefits that US-VISIT technology and biometric identification tools have afforded them. With respect to DHS’s effort to create an exit verification capability, developing and deploying this capability for US-VISIT at land POEs has posed a set of challenges that are distinct from those associated with entry. US-VISIT has not determined whether it can achieve, in a realistic time frame or at an acceptable cost, the legislatively mandated capability to record the exit of travelers at land POEs using biometric technology. Finally, DHS has not articulated how US-VISIT fits strategically and operationally with other land-border security initiatives, such as the Western Hemisphere Travel Initiative and Secure Border Initiative. As we have recently reported, without knowing how US-VISIT is to be integrated within the larger strategic context governing DHS operations, DHS faces substantial risk that US-VISIT will not align or operate with other initiatives at land POEs and thus not cost-effectively meet mission needs. 
We recently recommended that DHS finalize a mandated report to Congress on US-VISIT that would include a description of how a comprehensive biometrically based entry and exit system would work and how DHS plans to align US-VISIT with other emerging land border security initiatives. DHS agreed with these recommendations. In addition to the challenges posed by travelers at U.S. ports of entry, various types of cargo also pose security challenges. Preventing radioactive material from being smuggled into the United States—perhaps to be used by terrorists in a nuclear weapon or in a radiological dispersal device (a so-called dirty bomb)—has become a key national security objective. DHS is responsible for providing radiation detection capabilities at U.S. ports of entry and implementing programs to combat nuclear smuggling. The departments of Energy, Defense, and State are also implementing programs to combat nuclear smuggling in other countries by providing radiation detection equipment and training to foreign border security personnel. Our work in this area suggests that while the nation may always be vulnerable to some extent to this type of threat, DHS has improved its use of radiation detection equipment at U.S. ports of entry and is coordinating with other agencies to conduct radiation detection programs. DHS has, for example, improved its use of radiation detection equipment and its adherence to the agency’s inspection procedures implemented since 2003. We have nevertheless identified potential weaknesses in procedures for ensuring both that radioactive material is being obtained and used legitimately in the United States and that appropriate documentation, such as bills of lading, is provided when this material is transported across our borders. 
For example, we have conducted covert testing to determine whether it was possible to make several purchases of small quantities of radioactive material and to use counterfeit documents to cross the border even if radiation monitors detected the radioactive sources we carried. Our purchases of the radioactive material were not challenged because suppliers are not required to determine whether a buyer has a legitimate use for the material. Nor are purchasers required to produce a document from the Nuclear Regulatory Commission when making purchases of small quantities. During our testing, the radiation monitors properly signaled the presence of radioactive material when our two teams conducted simultaneous border crossings and the vehicles were inspected. However, our investigators were able to enter the United States with the material because they used counterfeit documents. Specifically, the investigators were able to successfully represent themselves as employees of a fictitious company and present a counterfeit bill of lading and a counterfeit Nuclear Regulatory Commission document during inspections. CBP officers never questioned the authenticity of our investigators’ counterfeit documents. In response to our work, officials with the Nuclear Regulatory Commission told us that they are aware of the potential problems with counterfeit documentation and are working to resolve these issues. In other work, we have identified other potential weaknesses related to the regulation and inspection of radioactive materials being shipped to the United States. We found, for example, that while radiological materials being transported into the United States are generally required to have a Nuclear Regulatory Commission license, regulations do not require that the license accompany the shipment. Further, CBP officers do not have access to data that could be used to verify that shippers have acquired the necessary documentation. 
And CBP inspection procedures do not require officers to open containers and inspect them after an initial alarm is triggered, although under some circumstances, doing so could improve security. DHS has sponsored research, development, and testing activities to address the inherent limitations of currently fielded detection equipment. However, much work remains to achieve consistently better detection capabilities. We have recently recommended to DHS and CBP that, among other things, CBP’s inspection procedures be revised to include physically opening cargo containers in certain circumstances where external inspections prove inconclusive and that federal officials find ways to authenticate licenses that accompany radiological shipments. DHS agreed with our recommendations and has committed to implementing them. (See app. III for a list of our products related to hazardous materials crossing our borders.) In addition to the hazards posed by certain types of land-based cargo, government officials recognize that terrorism also poses risks to oceangoing cargo traveling to and from commercial U.S. seaports. Ocean cargo containers play a vital role in the movement of cargo between global trading partners. In 2004 alone, nearly 9 million ocean cargo containers arrived and were offloaded at U.S. seaports. Responding to heightened concern about national security since 9/11, several U.S. government agencies have focused efforts on preventing terrorists from smuggling weapons of mass destruction in cargo containers from overseas locations to attack the United States and disrupt international trade. To help address its responsibility to ensure the security of this cargo, CBP has in place a program known as the Container Security Initiative. The program aims to target and inspect high-risk cargo shipments at foreign seaports before they leave for destinations in the United States. 
Under the program, foreign governments agree to allow CBP personnel to be stationed at foreign seaports to use intelligence and risk assessments to target shipments and identify those at risk of containing weapons of mass destruction or other terrorist contraband. As of February 2005 (the date of our most recent work), the Container Security Initiative program was operational at 34 foreign seaports, with plans to expand to an additional 11 ports by the end of fiscal year 2005. We have advocated in recent testimony that CBP’s targeting system should, among other things, take steps to assess the risks posed by oceangoing cargo. (See app. III for a list of our products related to other cargo security initiatives.) Whether the security challenge facing federal authorities at ports of entry involves persons or cargo, the job of securing the nation’s borders is daunting. The task involves the oversight and management of nearly 7,500 miles of land borders with Canada and Mexico, and hundreds of legal ports of entry through which millions of travelers are inspected annually. After 9/11, the government took immediate steps to tackle some of the major border-related vulnerabilities and challenges that we and others had identified, such as those related to passport and document fraud and tracking overstays. While it may never be possible to ensure that all terrorists, criminals, or those violating immigration laws are prevented from entering the country, DHS and other agencies must remain vigilant in developing and implementing programs and policies designed to reduce breaches in our borders and ensure that hazardous cargoes are interdicted. Five years after 9/11 and in the wake of new terrorist threats and tactics, Congress, DHS, and other federal agencies face an array of strategic challenges that potentially affect the ability of each to effectively oversee or execute the ambitious goals and programs that are under way or planned to enhance homeland security. U.S. 
leaders and policy makers continue to face the need to choose an appropriate course of action going forward—setting priorities, allocating resources, and assessing the social and economic costs of the measures that may be taken governmentwide to further strengthen domestic security. Balancing the trade-offs inherent in these choices—and aligning policies to support them—will not be easy, but is nonetheless essential. Accomplishing this critical task will be further challenged by (1) the federal government’s continued struggle to share information needed to combat terrorism across federal departments and with state and local governments; (2) the need to implement a system that assesses the relative risk reduction achieved by investing scarce dollars among varied and competing security alternatives; and (3) DHS’s continued struggle to become a fully integrated and effectively functioning organization that is prepared and positioned to protect the homeland from future terrorist threats. Numerous challenges cut across branches of the federal government and must be addressed broadly and in a coordinated fashion at the highest levels. One of the most important and conspicuous of these cross-cutting challenges involves the sharing of information related to terrorism. The former vice chairman of the 9/11 Commission identified the inability of federal agencies to effectively share information about suspected terrorists and their activities as the government’s single greatest failure in the lead-up to the 9/11 attacks. As discussed earlier in this report, FAA’s no-fly list contained only 12 names of potential terrorists on 9/11 because information collected by other agencies, such as the CIA and FBI, about terrorist suspects was not shared with FAA at the time. 
According to the 9/11 report, this undistributed information would have helped identify some of the terrorists, but such information was shared only on a need-to-know rather than a need-to-share basis. The Commission recommended, among other things, that terrorism-related information contained in agency databases should be shared across agency lines. Because of the significance of this issue, we designated information sharing for homeland security as a governmentwide high-risk area in 2005. Responding to the lessons of 9/11, Congress and federal departments have taken steps to improve information sharing across the federal government and in conjunction with state and local governments and law enforcement agencies, but these efforts are not without challenges. The FBI has increased its field Joint Terrorism Task Forces, bringing together personnel from all levels of government in their counterterrorism missions. DHS implemented the Homeland Security Information Network to share homeland security information with states, localities, and the private sector. States and localities are creating their own information “fusion” centers, some with FBI and DHS support, to provide state and local leaders with information on threats to their communities, a topic on which we have ongoing work. And DHS has implemented a program to encourage the private sector to provide information on the vulnerabilities and security measures in place at critical infrastructure assets, such as nuclear and chemical facilities, by guaranteeing to protect that information from public disclosure. 
But the DHS Inspector General found that users of the Homeland Security Information Network were confused and frustrated with the system, in part because it does not provide them with useful situational awareness or classified information, and as a result they do not use the system regularly. How well fusion centers will be integrated into federal information sharing efforts also remains to be seen. And DHS has still not won all of the private sector’s trust that the agency can adequately protect and effectively use the information that sector provides. These challenges will require longer-term actions to resolve. They also require policies, procedures, and plans that integrate these individual initiatives and establish a clear, governmentwide framework for sharing terrorism-related information. But as we reported in March 2006, the nation still has not implemented the governmentwide policies and processes that the 9/11 Commission recommended and that Congress mandated. Responsibility for creating these policies has shifted over time—from the White House to the Office of Management and Budget, to the Department of Homeland Security, and then to the Office of the Director of National Intelligence. Nevertheless, the Intelligence Reform and Terrorism Prevention Act required that action be taken to facilitate the sharing of terrorism information by establishing an “information sharing environment” that would combine policies, procedures, and technologies that link people, systems, and information among all appropriate federal, state, local, and tribal entities and the private sector. One purpose of this information sharing environment is to represent a partnership among all levels of government, the private sector, and our foreign partners. While this environment was to be established by December 2006, program managers told us that a 3-year road map is to be released in November 2006. 
According to these officials, the plan will define key tasks and milestones for developing the information sharing environment, including identifying barriers and ways to resolve them, as GAO recommended. Completing the information sharing environment is a complex task that will take multiple years, require long-term administration and congressional support and oversight, and pose cultural, operational, and technical challenges requiring a collaborative response. Addressing the diffuse nature of terrorist threats—and protecting the vast array of assets and infrastructure potentially vulnerable to attack—requires trade-offs that balance security needs with competing priorities for limited resources. Shortly after 9/11, new federal policies acknowledged the importance of weighing these trade-offs. For example, as reflected in the National Strategy for Homeland Security of 2002, the United States is to “carefully weigh the benefit of each homeland security endeavor and only allocate resources where the benefit of reducing risk is worth the amount of additional cost.” The strategy recognizes that the need for homeland security is not tied solely to the current terrorist threat but to enduring vulnerability from a range of potential threats that could include weapons of mass destruction and bioterrorism. In addition, Homeland Security Presidential Directive-7, issued in December 2003, charged DHS with integrating the use of risk management into homeland security activities related to the protection of critical infrastructure. The directive called for the department to develop policies, guidelines, criteria, and metrics for this effort. Federal officials are also well aware of the need to take a risk-based approach to allocating scarce resources for homeland security. The Secretary of DHS testified in June 2005 on the need for managing risk by developing plans and allocating resources in a way that balances security and freedom. 
He noted the importance of assessing the full spectrum of threats and vulnerabilities, conducting risk assessments, setting realistic priorities, and guiding decisions about how to best organize to prevent, respond to, and recover from an attack. In our January 2005 report on high-risk areas in the federal government, we noted the importance of completing comprehensive national threat and risk assessments and identified risk management as an emerging area. At that time, DHS was in the early stages of adopting a risk-based strategic framework for making important resource decisions involving billions of dollars annually. In part, this is because the process is difficult and complex; it requires comprehensive information on risks and vulnerabilities and employs sophisticated assessment methodologies. The process also requires careful trade-offs that balance security concerns with economic and other competing interests. DHS, with a fiscal year 2007 budget of about $35 billion, has begun allocating grants based on risk criteria and has begun risk assessments at individual infrastructure facilities. But it has not completed all of the necessary risk assessments mandated by the Homeland Security Act of 2002 to set priorities to help focus its resources where most needed. In addition, DHS’s risk management framework for critical infrastructure protection requires the support of a comprehensive national inventory of critical infrastructure assets, which DHS refers to as the National Asset Database, and that inventory remains incomplete. According to the DHS OIG, the agency is still identifying and collecting critical infrastructure data for this tool, and the database is not yet comprehensive enough to support the management and resource allocation decision making needed to meet the requirements of HSPD-7. Nonetheless, agencies are making progress in using risk as a basis for decision making. 
We found, for example, that the Coast Guard had made the greatest progress among three DHS agencies we reviewed in conducting risk assessments—that is, evaluating individual threats, the degree of vulnerability to attack, and the consequences of a successful attack. Also, we found that TSA has begun to assess risks within other transportation modes, such as rail, in an effort to begin allocating scarce resources toward the greatest risks and vulnerabilities. Nevertheless, DHS is still faced with the formidable task of developing a more formal and disciplined approach to risk management and answering questions such as what level of risk is acceptable to guide homeland security strategies and investments and what criteria should be used to target federal funding for homeland security to maximize results and mitigate risks within available resource levels. Doing so will not be easy. However, as we noted in our analysis of homeland security challenges for the 21st century, defining an acceptable, achievable level of risk within constrained budgets is imperative to addressing current and future threats. In the longer term, progress in implementing a risk-based approach will rest heavily on how well DHS coordinates homeland security risk management efforts with other federal departments, as well as state, local, and private-sector partners that oversee or operate critical infrastructure and assets. Currently, our work shows that while various risk assessment approaches are being used within DHS, they are neither consistent nor comparable—that is, there is no common basis, or framework, for evaluating risk assessments within sectors (such as transportation) or across sectors (such as transportation, energy, and agriculture). DHS faces challenges in establishing uniform assessment policies, approaches, guidelines, and methodologies so that a common risk framework can be developed and implemented within and across sectors. 
Overall, DHS has much more to do to effectively manage risk as part of its homeland security responsibilities within current and expected resource levels. DHS faces significant management and organizational transformation challenges as it works to protect the nation from terrorism and simultaneously establish itself. It must continue to integrate approximately 180,000 employees from 22 originating agencies, consolidate multiple management systems and processes, and transform into a more effective organization with robust planning, management, and operations. For these reasons, in January 2005, we continued to designate the implementation and transformation of the department as high risk. DHS’s Inspector General also reported, in December 2004, that integrating DHS’s many separate components into a single effective, efficient, and economical department remains one of its biggest challenges. Failure to effectively address these management challenges could have serious consequences for our national security. The task of transforming 22 agencies—several with major management challenges of their own—into one department with the critical, core mission of protecting the country against another terrorist attack has presented many challenges to the department’s managers and employees. While DHS has made progress, it still has much to do to establish a cohesive, efficient, and effective organization. Successful transformations of large organizations, even those facing less strenuous reorganizations and less pressure for immediate results than DHS, can take from 5 to 7 years to take hold on a sustainable basis. 
For DHS to successfully address its daunting management challenges and transform itself into a more effective organization, we have stated that it needs to take the following actions: develop a departmentwide implementation and transformation strategy that adopts risk management and strategic management principles and establishes key milestones and performance measures; improve management systems, including financial systems, information management, human capital, and acquisitions; and implement corrective actions to address programmatic and partnering challenges. The DHS OIG, in its report on the major management challenges facing DHS, identified consolidating the department’s components as a challenge but noted that the 2005 departmental restructuring resulted in changes to the DHS organizational structure that refocused it on risk and consequence management and further involved its partners in other federal agencies, state and local governments, and private sector organizations. However, the IG concluded that much more remains to be done. After spending billions of dollars on people, policies, procedures, and technology to improve security, we are better prepared than we were at the time of the attacks, but much more needs to be done as terrorists change tactics and new vulnerabilities emerge. Consequently, we must remain ever vigilant. Today, we are more alert to the possibility of threats. DHS is engaged in a number of individual efforts and initiatives as it works to implement its vision of an integrated, unified department. The momentum generated by the attacks of 9/11 to create a successful homeland security function could be lost if DHS does not continue to work quickly to put in place key merger and transformation practices that would enable it to be more effective in taking a comprehensive and sustained approach to its management integration. 
Moreover, it remains vitally important for DHS to continue to develop and implement a risk-based framework to help target where the nation’s resources should be invested to strengthen security and determine how these investments should be directed—toward people, processes, or technology. And we must continue to improve the sharing of terrorism-related information across organizational and intergovernmental cultures and “stovepipes.” Finally, Congress continues to play an important role in overseeing the nation's homeland security efforts and has asked GAO to assist in this oversight. Our work, the work of the Inspectors General, and the work of other accountability organizations have helped identify where Congress can provide solutions and enhance our homeland security investments. We will send copies of this report to the Secretary of Homeland Security, the Secretary of State, and interested congressional committees. We will make copies available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at [email protected] or (202) 512-8777. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The 19 hijackers who participated in the September 11 terrorist attacks received a total of 23 visas at five different consular posts from April 1997 through June 2001 (see fig. 6). Fifteen of them were citizens of Saudi Arabia. They obtained their visas in their home country, at the U.S. consulate in Jeddah (11 hijackers) and the U.S. embassy in Riyadh (4 hijackers). Two others, citizens of the United Arab Emirates, also received their visas in their home country, at the U.S. embassy in Abu Dhabi and at the U.S. consulate in Dubai. 
The remaining 2 hijackers obtained their visas at the U.S. embassy in Berlin. They were considered third-country national applicants because they were not German citizens: one was a citizen of Egypt, the other of Lebanon. Of the 19 hijackers, 18 received visas for temporary visits for business and pleasure, and 1 received 2 student visas. These visas allowed the holders to enter the United States multiple times during the visas’ validity period, subject to the approval of the immigration officer at the port of entry. Of the 23 issued visas, 4 were valid for a period of 1 year; 15 were valid for 2 years; 2 for 5 years; and 2 for 10 years. Aviation Security: Efforts to Strengthen International Passenger Prescreening are Under Way, but Planning and Implementation Issues Remain. GAO-07-55SU. Washington, D.C.: Nov. 20, 2006. Transportation Security Administration’s Office of Intelligence: Responses to Post Hearing Questions on Secure Flight. GAO-06-1051R. Washington, D.C.: August 4, 2006. Aviation Security: Management Challenges Remain for the Transportation Security Administration’s Secure Flight Program. GAO-06-864T. Washington, D.C.: June 14, 2006. Aviation Security: Enhancements Made in Passenger and Checked Baggage Screening, but Challenges Remain. GAO-06-371T. Washington, D.C.: April 4, 2006. Aviation Security: Transportation Security Administration Has Made Progress in Managing a Federal Security Workforce and Ensuring Security at U.S. Airports, but Challenges Remain. GAO-06-597T. Washington, D.C.: April 4, 2006. Aviation Security: Significant Management Challenges May Adversely Affect Implementation of the Transportation Security Administration’s Secure Flight Program. GAO-06-374T. Washington, D.C.: Feb. 9, 2006. Aviation Security: Transportation Security Administration Did Not Fully Disclose Uses of Personal Information During Secure Flight Program Testing in Initial Privacy Notices, but Has Recently Taken Steps to More Fully Inform the Public. GAO-05-864R. 
Washington, D.C.: July 22, 2005. Aviation Security: Screener Training and Performance Measurement Strengthened, but More Work Remains. GAO-05-457. Washington, D.C.: May 2, 2005. Aviation Security: Secure Flight Development and Testing Under Way, but Risks Should Be Managed as System Is Further Developed. GAO-05-356. Washington, D.C.: March 28, 2005. Follow-Up Audit of Passenger and Baggage Screening Procedures at Domestic Airports (Unclassified Summary). Department of Homeland Security Office of Inspector General, OIG-05-16. Washington, D.C.: March 2005. Aviation Security: Measures for Testing the Effect of Using Commercial Data for the Secure Flight Program. GAO-05-324. Washington, D.C.: Feb. 23, 2005. Aviation Security: Challenges Delay Implementation of Computer-Assisted Passenger Prescreening System. GAO-04-504T. Washington, D.C.: March 17, 2004. Aviation Security: Computer-Assisted Passenger Prescreening System Faces Significant Implementation Challenges. GAO-04-385. Washington, D.C.: Feb. 13, 2004. Aviation Security: Challenges Exist in Stabilizing and Enhancing Passenger and Baggage Screening Operations. GAO-04-440T. Washington, D.C.: Feb. 12, 2004. Airport Passenger Screening: Preliminary Observations on Progress Made and Challenges Remaining. GAO-03-1173. Washington, D.C.: Sept. 24, 2003. Aviation Security: Further Study of Safety and Effectiveness and Better Management Controls Needed If Air Carriers Resume Interest in Deploying Less-than-Lethal Weapons. GAO-06-475. Washington, D.C.: May 26, 2006. Aviation Security: Federal Air Marshal Service Could Benefit from Improved Planning and Controls. GAO-06-203. Washington, D.C.: Nov. 28, 2005. Aviation Security: Flight and Cabin Crew Member Security Training Strengthened, but Better Planning and Internal Controls Needed. GAO-05-781. Washington, D.C.: Sept. 6, 2005. Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce, but Additional Actions Needed. 
GAO-04-242. Washington, D.C.: Nov. 19, 2003. Aviation Security: Information Concerning the Arming of Commercial Pilots. GAO-02-822R. Washington, D.C.: June 28, 2002. Aviation Security: TSA Oversight of Checked Baggage Screening Procedures Could Be Strengthened. GAO-06-869. Washington, D.C.: July 28, 2006. Aviation Security: TSA Has Strengthened Efforts to Plan for the Optimal Deployment of Checked Baggage Screening Systems but Funding Uncertainties Remain. GAO-06-875T. Washington, D.C.: June 29, 2006. Aviation Security: Better Planning Needed to Optimize Deployment of Checked Baggage Screening Systems. GAO-05-896T. Washington, D.C.: July 13, 2005. Aviation Security: Systematic Planning Needed to Optimize the Deployment of Checked Baggage Screening Systems. GAO-05-365. Washington, D.C.: March 15, 2005. Aviation Security: Federal Action Needed to Strengthen Domestic Air Cargo Security. GAO-06-76. Washington, D.C.: Oct. 17, 2005. Aviation Security: Federal Action Needed to Strengthen Domestic Air Cargo Security. GAO-05-446SU. Washington, D.C.: July 29, 2005. Aviation Safety: Undeclared Air Shipments of Dangerous Goods and DOT’s Enforcement Approach. GAO-03-22. Washington, D.C.: Jan. 10, 2003. Aviation Security: Vulnerabilities and Potential Improvements for the Air Cargo System. GAO-03-344. Washington, D.C.: Dec. 20, 2002. Homeland Security: Agency Resources Address Violations of Restricted Airspace, but Management Improvements Are Needed. GAO-05-928T. Washington, D.C.: July 21, 2005. General Aviation Security: Increased Federal Oversight Is Needed, but Continued Partnership with the Private Sector Is Critical to Long-Term Success. GAO-05-144. Washington, D.C.: Nov. 10, 2004. Aviation Security: Further Steps Needed to Strengthen the Security of Commercial Airport Perimeters and Access Controls. GAO-04-728. Washington, D.C.: June 4, 2004. Aviation Security: Challenges in Using Biometric Technologies. GAO-04-785T. Washington, D.C.: May 19, 2004. 
Nonproliferation: Further Improvements Needed in U.S. Efforts to Counter Threats from Man-Portable Air Defense Systems. GAO-04-519. Washington, D.C.: May 13, 2004. Aviation Security: Factors Could Limit the Effectiveness of the Transportation Security Administration’s Efforts to Secure Aerial Advertising Operations. GAO-04-499R. Washington, D.C.: March 5, 2004. The Department of Homeland Security Needs to Fully Adopt a Knowledge-based Approach to Its Counter-MANPADS Development Program. GAO-04-341R. Washington, D.C.: Jan. 30, 2004. Transportation Security Administration: More Clarity on the Authority of Federal Security Directors Is Needed. GAO-05-935. Washington, D.C.: Sept. 23, 2005. Aviation Security: Improvement Still Needed in Federal Aviation Security Efforts. GAO-04-592T. Washington, D.C.: March 30, 2004. Aviation Security: Efforts to Measure Effectiveness and Strengthen Security Programs. GAO-04-285T. Washington, D.C.: Nov. 20, 2003. Aviation Security: Efforts to Measure Effectiveness and Address Challenges. GAO-04-232T. Washington, D.C.: Nov. 5, 2003. Aviation Security: Progress Since September 11, 2001, and the Challenges Ahead. GAO-03-1150T. Washington, D.C.: Sept. 9, 2003. Airport Finance: Past Funding Levels May Not Be Sufficient to Cover Airports’ Planned Capital Development. GAO-03-497T. Washington, D.C.: Feb. 25, 2003. Aviation Security Costs, Transportation Security Agency. Department of Homeland Security Office of Inspector General, CC-003-066. Washington, D.C.: Feb. 5, 2003. Airport Finance: Using Airport Grant Funds for Security Projects Has Affected Some Development Projects. GAO-03-27. Washington, D.C.: Oct. 15, 2002. Commercial Aviation: Financial Condition and Industry Responses Affect Competition. GAO-03-171T. Washington, D.C.: Oct. 2, 2002. Aviation Security: Transportation Security Administration Faces Immediate and Long-Term Challenges. GAO-02-971T. Washington, D.C.: July 25, 2002. 
Challenges Facing TSA in Implementing the Aviation and Transportation Security Act. Department of Homeland Security Office of Inspector General, CC-2002-88. Washington, D.C.: Jan. 23, 2002. Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations. GAO-01-1171T. Washington, D.C.: Sept. 25, 2001. Actions Needed to Improve Aviation Security. Department of Homeland Security Office of Inspector General, CC-2001-313. Washington, D.C.: Sept. 25, 2001. Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities. GAO-01-1165T. Washington, D.C.: Sept. 21, 2001. Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation’s Airports. GAO-01-1162T. Washington, D.C.: Sept. 20, 2001. Aviation Security: Terrorist Acts Illustrate Severe Weaknesses in Aviation Security. GAO-01-1166T. Washington, D.C.: Sept. 20, 2001. Aviation Security in the United States. Department of Homeland Security Office of Inspector General, CC-2001-308. Washington, D.C.: Sept. 20, 2001. Rail Transit: Additional Federal Leadership Would Enhance FTA’s State Safety Oversight Program. GAO-06-821. Washington, D.C.: July 26, 2006. Maritime Security: Information-Sharing Efforts Are Improving. GAO-06-933T. Washington, D.C.: July 10, 2006. Information Technology: Customs Has Made Progress on Automated Commercial Environment System, but It Faces Long-Standing Management Challenges and New Risks. GAO-06-580. Washington, D.C.: May 31, 2006. Passenger Rail Security: Evaluating Foreign Security Practices and Risk Can Help Guide Security Efforts. GAO-06-557T. Washington, D.C.: March 29, 2006. Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-06-181T. Washington, D.C.: Oct. 20, 2005. Passenger Rail Security: Enhanced Federal Leadership Needed to Prioritize and Guide Security Efforts. GAO-05-851. Washington, D.C.: Sept. 9, 2005. 
Maritime Security: Enhancements Made, But Implementation and Sustainability Remain Key Challenges. GAO-05-448T. Washington, D.C.: May 17, 2005. Maritime Security: New Structures Have Improved Information Sharing, but Security Clearance Processing Requires Further Attention. GAO-05-394. Washington, D.C.: April 15, 2005. Information Technology: Customs Automated Commercial Environment Program Progressing, but Need for Management Improvements Continues. GAO-05-267. Washington, D.C.: March 14, 2005. Maritime Security: Better Planning Needed to Help Ensure an Effective Port Security Assessment Program. GAO-04-1062. Washington, D.C.: Sept. 30, 2004. Mass Transit: Federal Action Could Help Transit Agencies Address Security Challenges. GAO-03-263. Washington, D.C.: Dec. 13, 2002. Transportation Security: DHS Should Address Key Challenges Before Implementing the Transportation Worker Identification Program. GAO-06-982. Washington, D.C.: September 2006. Transportation Security: Systematic Planning Needed to Optimize Resources. GAO-05-357T. Washington, D.C.: Feb. 15, 2005. Transportation Security R&D: TSA and DHS Are Researching and Developing Technologies, but Need to Improve R&D Management. GAO-04-890. Washington, D.C.: Sept. 30, 2004. Transportation Security: Federal Action Needed to Enhance Security Efforts. GAO-03-1154T. Washington, D.C.: Sept. 9, 2003. Transportation Security: Federal Action Needed to Help Address Security Challenges. GAO-03-843. Washington, D.C.: June 30, 2003. Federal Aviation Administration: Reauthorization Provides Opportunities to Address Key Agency Challenges. GAO-03-653T. Washington, D.C.: April 10, 2003. Transportation Security: Post-September 11th Initiatives and Long- Term Challenges. GAO-03-616T. Washington, D.C.: April 1, 2003. Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture. GAO-03-190. Washington, D.C.: Jan. 17, 2003. 
Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program. GAO-06-1090T. Washington, D.C.: Sept. 7, 2006. Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program. GAO-06-854. Washington, D.C.: July 28, 2006. Process for Admitting Additional Countries into the Visa Waiver Program. GAO-06-835R. Washington, D.C.: July 28, 2006. Border Security: More Emphasis on State’s Consular Safeguards Could Mitigate Visa Malfeasance Risks. GAO-06-115. Washington, D.C.: Oct. 6, 2005. Border Security: Strengthened Visa Process Would Benefit From Improvements in Staffing and Information Sharing. GAO-05-859. Washington, D.C.: Sept. 13, 2005. Border Security: Actions Needed to Strengthen Management of Department of Homeland Security’s Visa Security Program. GAO-05-801. Washington, D.C.: July 29, 2005. Border Security: Reassessment of Consular Security Resource Requirements Could Help Address Visa Delays. GAO-06-542T. Washington, D.C.: April 4, 2005. Border Security: Streamlined Visas Mantis Program Has Lowered Burden on Foreign Science Students and Scholars, but Further Refinements Needed. GAO-05-198. Washington, D.C.: Feb. 18, 2005. Implementation of the United States Visitor and Immigrant Status Indicator Technology Program at Land Border Ports of Entry. Department of Homeland Security Office of Inspector General, OIG-05-11. Washington, D.C.: Feb. 2005. A Review of the Use of Stolen Passports from Visa Waiver Countries to Enter the United States. Department of Homeland Security Office of Inspector General, OIG-05-07. Washington, D.C.: Dec. 2004. Border Security: State Department Rollout of Biometric Visas on Schedule, but Guidance Is Lagging. GAO-04-1001. Washington, D.C.: Sept. 9, 2004. An Evaluation of DHS Activities to Implement Section 428 of the Homeland Security Act of 2002. Department of Homeland Security Office of Inspector General, OIG-04-33. Washington, D.C.: August 2004. 
Border Security: Additional Actions Needed to Eliminate Weaknesses in the Visa Revocation Process. GAO-04-795. Washington, D.C.: July 13, 2004. An Evaluation of the Security Implications of the Visa Waiver Program. Department of Homeland Security Office of Inspector General, OIG-04-26. Washington, D.C.: April 2004. Border Security: Improvements Needed to Reduce Time Taken to Adjudicate Visas for Science Students and Scholars. GAO-04-371. Washington, D.C.: Feb. 25, 2004. Border Security: New Policies and Increased Interagency Coordination Needed to Improve Visa Process. GAO-03-1013T. Washington, D.C.: July 15, 2003. Border Security: New Policies and Procedures Are Needed to Fill Gaps in the Visa Revocation Process. GAO-03-798. Washington, D.C.: June 18, 2003. Review of Nonimmigrant Visa Policy and Procedures, memorandum report. Department of State Office of Inspector General, ISP-I-03-26. Washington, D.C.: Dec. 2002. Border Security: Implications of Eliminating the Visa Waiver Program. GAO-03-38. Washington, D.C.: Nov. 22, 2002. Border Security: Visa Process Should Be Strengthened as an Antiterrorism Tool. GAO-03-132NI. Washington, D.C.: Oct. 21, 2002. Border Security: US-VISIT Faces Strategic, Technological, and Operational Challenges at Land Ports of Entry. GAO-07-248. Washington, D.C.: Dec. 06, 2006. Border Security: Continued Weaknesses in Screening Entrants into the United States. GAO-06-976T. Washington, D.C.: August 2, 2006. Information Technology: Immigration and Customs Enforcement Is Beginning to Address Infrastructure Modernization Program Weaknesses but Key Improvements Still Needed. GAO-06-823. Washington, D.C.: July 27, 2006. Homeland Security: Contract Management and Oversight for Visitor and Immigrant Status Program Need to Be Strengthened. GAO-06-404. Washington, D.C.: June 9, 2006. Observations on Efforts to Implement Western Hemisphere Travel Initiative on the U.S. Border with Canada. GAO-06-741. Washington, D.C.: May 25, 2006. 
Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: March 30, 2006. Border Security: Investigators Successfully Transported Radioactive Sources Across Our Nation’s Borders at Selected Locations. GAO-06-545R. Washington, D.C.: March 28, 2006. Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Combating Nuclear Smuggling: Corruption, Maintenance, and Coordination Problems Challenge U.S. Efforts to Provide Radiation Detection Equipment to Other Countries. GAO-06-311. Washington, D.C.: March 14, 2006. Homeland Security: Visitor and Immigrant Status Program Operating, but Management Improvements Are Still Needed. GAO-06-318T. Washington, D.C.: Jan. 25, 2006. Cargo Security: Partnership Program Grants Importers Reduced Scrutiny With Limited Assurance of Improved Security. GAO-05-404. Washington, D.C.: March 11, 2005. US-VISIT System Security Management Needs Strengthening (Redacted). Department of Homeland Security Office of Inspector General. OIG-06-16. Washington, D.C.: Dec. 2005. Information Technology: Management Improvements Needed on Immigration and Customs Enforcement’s Infrastructure Modernization Program. GAO-05-805. Washington, D.C.: Sept. 7, 2005. Review of the Immigration and Customs Enforcement Compliance Enforcement Unit. Department of Homeland Security Office of Inspector General, OIG-05-50. Washington, D.C.: Sept. 2005. Border Security: Opportunities to Increase Coordination of Air and Marine Assets. GAO-05-543. Washington, D.C.: August 12, 2005. Homeland Security: Key Cargo Security Programs Can Be Improved. GAO-05-466T. Washington, D.C.: May 26, 2005. Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. 
Washington, D.C.: April 26, 2005. Homeland Security: Some Progress Made, but Many Challenges Remain on U.S. Visitor and Immigrant Status Indicator Technology Program. GAO-05-202. Washington, D.C.: Feb. 23, 2005. Homeland Security: Management Challenges Remain in Transforming Immigration Programs. GAO-05-81. Washington, D.C.: Oct. 14, 2004. Immigration Enforcement: DHS Has Incorporated Immigration Enforcement Objectives and Is Addressing Future Planning Requirements. GAO-05-66. Washington, D.C.: Oct. 8, 2004. Overstay Tracking: A Key Component of Homeland Security and a Layered Defense. GAO-04-82. Washington, D.C.: May 21, 2004. Homeland Security: First Phase of Visitor and Immigration Status Program Operating, but Improvements Needed. GAO-04-586. Washington, D.C.: May 11, 2004. Security: Counterfeit Identification Raises Homeland Security Concerns. GAO-04-133T. Washington, D.C.: Oct. 1, 2003. Homeland Security: Risks Facing Key Border and Transportation Security Program Needs to Be Addressed. GAO-03-1083. Washington, D.C.: Sept. 19, 2003. Security: Counterfeit Identification and Identification Fraud Raise Security Concerns. GAO-03-1147T. Washington, D.C.: Sept. 9, 2003. Land Border Ports of Entry: Vulnerabilities and Inefficiencies in the Inspections Process. GAO-03-1084R. Washington, D.C.: Aug. 18, 2003. Counterfeit Documents Used to Enter the Country from Certain Western Hemisphere Countries Not Detected. GAO-03-713T. Washington, D.C.: May 13, 2003. Weaknesses in Screening Entrants into the United States. GAO-03-438T. Washington, D.C.: Jan. 30, 2003. Technology Assessment: Using Biometrics for Border Security. GAO-03-174. Washington, D.C.: Nov. 15, 2002. 
Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: Oct. 2006. Terrorist Watch List Screening: Efforts to Help Reduce Adverse Effects on the Public. GAO-06-1031. Washington, D.C.: Sept. 29, 2006. Critical Infrastructure Protection: DHS Leadership Needed to Enhance Cybersecurity. GAO-06-1087T. Washington, D.C.: Sept. 13, 2006. Information Sharing: DHS Should Take Steps to Encourage More Widespread Use of Its Program to Protect and Share Critical Infrastructure Information. GAO-06-383. Washington, D.C.: April 17, 2006. Information Sharing: The Federal Government Needs to Establish Policies and Processes for Sharing Terrorism-Related and Sensitive but Unclassified Information. GAO-06-385. Washington, D.C.: March 17, 2006. Review of the Terrorist Screening Center. Department of Homeland Security Office of Inspector General, Audit Report 05-27. Washington, D.C.: June 2005. DHS Challenges in Consolidating Terrorist Watch List Information. Department of Homeland Security Office of Inspector General, OIG-04-31. Washington, D.C.: Aug. 2004. Critical Infrastructure Protection: Improving Information Sharing with Infrastructure Sectors. GAO-04-780. Washington, D.C.: July 9, 2004. Homeland Security: Communication Protocols and Risk Communication Principles Can Assist in Refining the Advisory System. GAO-04-682. Washington, D.C.: June 25, 2004. Homeland Security: Efforts to Improve Information Sharing Need to Be Strengthened. GAO-03-760. Washington, D.C.: August 27, 2003. Information Technology: Terrorist Watch Lists Should Be Consolidated to Promote Better Integration and Sharing. GAO-03-322. Washington, D.C.: April 15, 2003. GAO’s High Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006. Progress in Developing the National Asset Database. Department of Homeland Security Office of Inspector General, OIG-06-40. Washington, D.C.: June 10, 2006. 
Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: Dec. 15, 2005. Major Management Challenges Facing the Department of Homeland Security. Department of Homeland Security Office of Inspector General, OIG-06-14. Washington, D.C.: Dec. 2005. Department of Homeland Security: Strategic Management of Training Important for Successful Transformation. GAO-05-888. Washington, D.C: Sept. 23, 2005. Strategic Budgeting: Risk Management Principles Can Help DHS Allocate Resources to Highest Priorities. GAO-05-824T. Washington, D.C.: June 29, 2005. Homeland Security: Overview of Department of Homeland Security Management Challenges. GAO-05-573T. Washington, D.C.: April 20, 2005. Department of Homeland Security: A Comprehensive and Sustained Approach Needed to Achieve Management Integration. GAO-05-139. Washington, D.C.: March 16, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: Feb. 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: Jan. 2005. Homeland Security: Agency Plans, Implementation, and Challenges Regarding the National Strategy for Homeland Security. GAO-05-33. Washington, D.C.: Jan. 14, 2005. 9/11 Commission Report: Reorganization, Transformation, and Information Sharing. GAO-04-1033T. Washington, D.C.: Aug. 3, 2004. Status of Key Recommendations GAO Has Made to DHS and Its Legacy Agencies. GAO-04-865R. Washington, D.C.: July 2, 2004. Homeland Security: Selected Recommendations from Congressionally Chartered Commissions and GAO. GAO-04-591. Washington, D.C.: March 31, 2004. Major Management Challenges and Program Risks: Department of State. GAO-03-107. Washington, D.C.: Jan. 1, 2003. Homeland Security: A Framework for Addressing the Nation’s Efforts. GAO-01-1158T. Washington, D.C.: Sept. 21, 2001. 
In addition to the individual named above, key contributors to the report include Katie Bernet, Amy Bernstein, Cathleen Berrick, John Brummet, Sally Gilley, David Hooper, Kirk Kiester, Sarah Lynch, Octavia Parks, Susan Quinlan, Brian Sklar, Richard Stana, and Maria Strudwick. | Five years after the terrorist attacks of September 11, 2001, GAO is taking stock of key efforts by the President, Congress, federal agencies, and the 9/11 Commission to strengthen or enhance critical layers of defense in aviation and border security that were directly exploited by the 19 terrorist hijackers. Specifically, the report discusses how: (1) commercial aviation security has been enhanced; (2) visa-related policies and programs have evolved to help screen out potential terrorists; (3) federal border security initiatives have evolved to reduce the likelihood of terrorists entering the country through legal checkpoints; and (4) the Department of Homeland Security (DHS) and other agencies are addressing several major post-9/11 strategic challenges. The report reflects conclusions and recommendations from a body of work issued before and after 9/11 by GAO, the Inspectors General of DHS, State, and Justice, the 9/11 Commission, and others. It is not a comprehensive assessment of all federal initiatives taken or planned in response to 9/11. GAO is not making any new recommendations at this time since over 75 prior recommendations on aviation security, the Visa Waiver Program, and U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT), among others, are in the process of being implemented. Continued monitoring by GAO will determine whether further recommendations are warranted. While the nation cannot expect to eliminate all risks of terrorist attack upon commercial aviation, agencies have made progress since 9/11 to reduce aviation-related vulnerabilities and enhance the layers of defense directly exploited by the terrorist hijackers. 
In general, these efforts have resulted in better airline passenger screening procedures designed to identify known or suspected terrorists, weapons, and explosives and to prevent them from getting aboard aircraft. Nevertheless, the nation's commercial aviation system remains a highly visible target for terrorism, as evidenced by recent alleged efforts to bring liquid explosives aboard aircraft. DHS and others need to follow through on outstanding congressional requirements and recommendations by GAO and others to enhance security and coordination of passengers and checked baggage, and improve screening procedures for domestic flights, among other needed improvements. GAO's work indicates that the government has strengthened the nonimmigrant visa process as an antiterrorism tool. New measures added rigor to the process by expanding the name-check system used to screen applicants, requiring in-person interviews for nearly all applicants, and revamping consular officials' training to focus on counterterrorism. Nevertheless, the immigrant visa process may pose potential security risks, and we are reviewing this issue. To enhance security and screening at legal checkpoints (air, land, and sea ports) at the nation's borders, agencies are using technology to verify foreign travelers' identities and detect fraudulent travel documents such as passports. However, DHS needs to better manage risks posed by the Visa Waiver Program, whereby travelers from 27 countries need not obtain visas for U.S. travel. For example, GAO recommended that DHS require visa-waiver countries to provide information on lost or stolen passports that terrorists could use to gain entry. We also recommended that DHS provide more information to Congress on how it plans to fully implement US-VISIT--a system for tracking the entry, exit, and length of stay of foreign travelers. 
While much attention has been focused on mitigating the specific risks of 9/11, other critical assets ranging from passenger rail stations to power plants are also at risk of terrorist attack. Deciding how to address these risks--setting priorities, making trade-offs, allocating resources, and assessing social and economic costs--is essential. Thus, it remains vitally important for DHS to continue to develop and implement a risk-based framework to help target where and how the nation's resources should be invested to strengthen security. The government also faces strategic challenges that potentially affect oversight and execution of new and ongoing homeland security initiatives, and GAO has deemed three challenges in particular--information sharing, risk management, and transforming DHS as a department--as areas needing urgent attention. DHS and the Department of State reviewed a draft of this report and both agencies generally agreed with the information. Both agencies provided technical comments that were incorporated as appropriate. |
Private sector data have become increasingly available to researchers, and several studies have established that significant geographic variation in spending exists in the private sector. For example, in a recent comprehensive assessment of geographic variation in private sector spending, the Institute of Medicine (IOM) reported on the presence of substantial spending variation, concluding that a large amount of the variation remained unexplained after adjusting for enrollee demographic and health status factors, insurance plan factors, and market-level factors, and suggesting that inefficiency is one of the causes of the current levels of variation. Using private sector claims data from two nationwide databases from 2007 through 2009, IOM found unadjusted spending for the area at the 90th percentile was 36 to 42 percent higher than the area at the 10th percentile, depending on the database used. The spending differences existed at all levels of geography IOM studied, including MSAs, and these differences persisted over time. IOM also found that price is a major determinant of geographic variation in the private sector, and estimated that, after adjusting for underlying costs, price accounted for 70 percent of the geographic variation in private sector spending. The researchers attributed the large impact of price in explaining private sector geographic spending variation to the relatively strong market power of providers in some areas. Other studies, including one by GAO, have reached similar conclusions. The Medicare Payment Advisory Commission (MedPAC) examined geographic variation in private sector spending and estimated that in 2008, hospital inpatient spending for the MSA at the 90th percentile was 90 percent higher than for the MSA at the 10th percentile. MedPAC also found that spending for physician services varied, but less so than hospital inpatient spending. Physician spending at the 90th percentile was 50 percent higher than that at the 10th percentile. 
Early work by GAO analyzing 2001 private sector claims in the Federal Employees Health Benefits Program also found substantial geographic variation in private sector hospital inpatient prices, physician prices, and spending. IOM also found that areas with relatively high prices tended to have relatively low utilization and vice versa. In addition, IOM found that private sector utilization varied more for some service types than others. For example, emergency department use was 50 to 100 percent higher for the area at the 90th percentile of utilization relative to the 10th percentile, and hospital outpatient visits were 30 to 46 percent higher. In addition, consistent with other research, use of discretionary services varied substantially. For example, the utilization rate for hip replacement, considered a discretionary procedure, for the area at the 90th percentile was 53 percent higher than the area at the 10th percentile, and other discretionary procedures, such as hysterectomies, lower back surgeries, and nuclear stress tests, had even larger differences. Researchers from the National Institute for Health Care Reform recently examined geographic variation in spending for hip and knee replacement episodes of care using 2011 claims data for autoworkers and their dependents in nine geographic areas in six states. They defined episodes as those beginning with a hospital admission and including all services up to 30 days postdischarge. Average spending per episode across the nine markets ranged from below $25,000 in Louisville, Kentucky, to above $30,000 in Buffalo, New York. However, spending across the 36 hospitals within these markets varied more than twofold, and all but one of the markets had a lower-spending hospital option, defined as having average episode spending below $25,000. To get a broader measure of variation in episode spending, these researchers also examined all episode types across hospitals. 
The spending variations observed for knee and hip replacements held true for other conditions, and hospitals with high spending for one service line (cardiology, orthopedics, etc.) were also likely to have high spending for other service lines. In addition, the price of the initial hospital stay accounted for more than 80 percent of the variation in overall spending. Variation in the prices and volume of physician and other services together accounted for less than one-tenth of the variation in episode spending. These researchers noted that reasons for higher-priced hospitals in some areas included their provision of specialized service lines that other nearby hospitals did not offer, being part of a local hospital system with greater bargaining clout, having unusually good clinical reputations, and being part of a large teaching hospital. We noted variation in episode spending across MSAs for all three procedures, even after adjusting for geographic differences in the cost of doing business and differences in demographics and health status of enrollees in each MSA. For example, average adjusted episode spending across all MSAs in our analysis for laparoscopic appendectomy was $12,506; however, MSAs in the highest-spending quintile had average adjusted episode spending of $17,047, which was almost 94 percent higher than the average adjusted episode spending of $8,802 for MSAs in the lowest-spending quintile. Average adjusted episode spending for this procedure for individual MSAs ranged from $25,924 in Salinas, California, to $6,166 in Joplin, Missouri. We found similar results for the other two procedures we studied, coronary stent placement and total hip replacement. Average adjusted episode spending for MSAs in the highest-spending quintile was about 84 percent and 74 percent higher than for MSAs in the lowest-spending quintile, respectively. (See fig. 1; also, see app. II for complete rankings of MSAs by procedure.) 
We found greater geographic variation in average episode spending than the research from the National Institute for Health Care Reform, likely because our study included many more geographic areas. For all three procedures, adjustments to control for geographic differences in the cost of doing business and for differences in demographics and health status of enrollees reduced the extent of variation in spending across MSAs. For example, before adjustment, average episode spending for laparoscopic appendectomy in the highest-spending MSA (Salinas, California) was 511 percent higher than in the lowest-spending MSA (Joplin, Missouri); and, after adjustment, spending was 320 percent higher. MSAs with higher spending on one procedure generally had higher spending on the other two procedures. For example, Salinas, California, and Fort Wayne, Indiana, were among the highest-spending MSAs for all three procedures, while Hartford, Connecticut, and Youngstown, Ohio, were among the lowest-spending MSAs for all three procedures. We examined average adjusted episode spending in the 78 MSAs that had a sufficient number of episodes for all three procedures and found that the extent of correlation for each pair of procedures for the 78 MSAs ranged from 0.68 to 0.83, consistent with the research from the National Institute for Health Care Reform. (See fig. 2.) The price of the initial hospital inpatient admission was the largest contributor to differences in private sector episode spending across MSAs. Differences in the price of the initial admission accounted for 91 percent or more of the difference in average adjusted episode spending between the lowest- and highest-spending quintiles. For example, for total hip replacement, the difference in average adjusted episode spending in the MSAs in the lowest- and highest-spending quintiles was $14,506, and $13,198 of that difference—or 91 percent—was attributable to differences in the price of the initial inpatient admission. 
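The total hip replacement decomposition just described reduces to simple arithmetic: the share of the spending gap explained by a component is that component's gap divided by the total gap. A minimal sketch (not GAO's actual code; the function name is ours) using the dollar figures reported above:

```python
# Share of a low-vs-high spending gap explained by one spending component.
def gap_share(component_gap, total_gap):
    return component_gap / total_gap

# Total hip replacement figures from the report (dollars): the episode
# spending gap between the lowest- and highest-spending quintiles, and
# the portion attributable to the price of the initial admission.
total_gap = 14_506
admission_gap = 13_198

share = gap_share(admission_gap, total_gap)
print(f"{share:.0%}")  # prints 91%
```

The same calculation applies to any of the service-category contributions reported in the tables.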
Similarly, differences in initial inpatient admission prices accounted for 92 and 96 percent of the differences in episode spending between MSAs in the lowest- and highest-spending quintiles for coronary stent placement and laparoscopic appendectomy, respectively (see table 1). The role of inpatient admission price as the primary driver of geographic differences in spending in the private sector has been reported in the literature, such as by the National Institute for Health Care Reform. The price of the initial inpatient admission contributed most to geographic differences in average adjusted episode spending for two reasons. First, the price of the initial admission represented the largest percentage of adjusted episode spending. For the lowest- and highest-spending quintiles in each of the three procedures, at least two-thirds of episode spending was for the price of the hospital inpatient admission. For example, for total hip replacement, the price of the initial admission was $17,134, representing 76 percent of the $22,463 in total episode spending for MSAs in the lowest-spending quintile and $30,332, representing 82 percent of the $36,969 in total episode spending for MSAs in the highest-spending quintile. Second, the average price of the initial inpatient admission varied considerably across MSAs. The difference in the price of the initial inpatient admission in MSAs in the lowest- and highest-spending quintiles ranged from 77 percent to 121 percent, depending on the procedure. For example, for laparoscopic appendectomy, the price of the initial admission was 121 percent higher for MSAs in the highest-spending quintile compared with MSAs in the lowest-spending quintile (see fig. 3). 
Specifically, MSAs in the highest-spending quintile had an average price of $13,177 for the initial admission—and ranged from $11,087 in Colorado Springs, Colorado, to $23,432 in Salinas, California—whereas MSAs in the lowest-spending quintile had an average price of $5,971 for the initial admission—and ranged from $4,528 in Las Vegas, Nevada, to $7,430 in San Antonio, Texas. (See app. IV for average adjusted episode spending by procedure and service category, and app. V for complete rankings of hospital inpatient spending, initial admission price, and number of days by MSA and procedure.) We provided a draft of this product to the Department of Health and Human Services, which did not comment on our findings but provided technical comments. We incorporated these technical comments as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and the Administrator of the Centers for Medicare & Medicaid Services. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. This appendix describes the data and methods we used in our study. We created episodes of care based on inpatient admissions for three procedures—coronary stent placement, laparoscopic appendectomy, and total hip replacement—using private health insurance claims and enrollment data from the Truven Health Analytics MarketScan® Commercial Claims and Encounters Database for 2009 and 2010. 
We identified procedures based on the presence of specific procedure codes in the hospital inpatient and professional service claims. We selected these procedures because they were commonly performed in the years we analyzed and were associated with high levels of national spending in the MarketScan database. In addition, we selected procedures that were generally provided by different medical specialties, and we selected hospital-based procedures because the United States spends more nationally on hospital services than any other type of health care service. We included all services in the episode from the day of admission to 30 days after discharge, and certain services in the 3 days prior to admission. Specifically, we included any outpatient services received by an enrollee in the 3 days prior to admission at the same hospital where the inpatient admission occurred, because those services may be related to the admission. In the episode, we included any drugs provided during the hospital inpatient admission because these drugs were part of the hospital inpatient claims. However, we excluded outpatient drugs, such as prescription drugs, due to limitations of the claims in the MarketScan database. We excluded enrollees from our study who had inpatient admissions for any of the three procedures outside the 50 states and the District of Columbia, had secondary insurance, or were enrolled in a managed care or other capitated plan. We also excluded enrollees with conditions that could increase spending for reasons unrelated to the procedure analyzed. For example, we excluded enrollees who received the procedure more than one time during the episode, enrollees whose overall initial hospital admission was coded as being for a reason unrelated to the procedure analyzed, enrollees with diagnoses of end-stage renal disease, enrollees who were pregnant, and enrollees with a hospice stay. 
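The episode-window rule described above (day of admission through 30 days after discharge, plus outpatient services in the 3 days before admission at the admitting hospital) can be sketched as follows. The claim records and field names are hypothetical, not the MarketScan schema:

```python
from datetime import date, timedelta

# Sketch of the episode-window rule (hypothetical claim records, not
# the actual MarketScan processing): a claim belongs to the episode if
# it falls between admission and 30 days after discharge, or if it is
# an outpatient claim at the admitting hospital in the 3 days before
# admission.
def in_episode(claim, admit, discharge, hospital_id):
    window_start = admit - timedelta(days=3)
    window_end = discharge + timedelta(days=30)
    if admit <= claim["date"] <= window_end:
        return True
    # Pre-admission services count only if outpatient at the same hospital.
    if window_start <= claim["date"] < admit:
        return claim["setting"] == "outpatient" and claim["hospital_id"] == hospital_id
    return False

admit, discharge = date(2010, 3, 10), date(2010, 3, 13)
claims = [
    {"date": date(2010, 3, 8), "setting": "outpatient", "hospital_id": "H1"},    # in: pre-admit, same hospital
    {"date": date(2010, 3, 8), "setting": "outpatient", "hospital_id": "H2"},    # out: different hospital
    {"date": date(2010, 4, 12), "setting": "professional", "hospital_id": None},  # in: day 30 after discharge
    {"date": date(2010, 4, 13), "setting": "professional", "hospital_id": None},  # out: past 30 days
]
flags = [in_episode(c, admit, discharge, "H1") for c in claims]
print(flags)  # [True, False, True, False]
```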
In addition, we excluded enrollees under the age of 18 for coronary stent placement and total hip replacement episodes, and we excluded enrollees with a diagnosis of appendix rupture for laparoscopic appendectomy episodes. We analyzed average episode spending across metropolitan statistical areas (MSA) for each procedure. We assigned episodes to MSAs based on the location of the hospital inpatient admission, and we had a sufficient number of episodes to support our analyses of coronary stent placement in 155 MSAs, laparoscopic appendectomy in 139 MSAs, and total hip replacement in 141 MSAs. For some analyses where we draw comparisons across procedures, we report data on only the 78 MSAs that had a sufficient number of episodes to support our analyses for all three procedures. For each procedure, we estimated unadjusted spending and spending adjusted for geographic differences in the cost of doing business and differences in the demographics and health status of enrollees in each MSA. To estimate unadjusted spending, we summed the insurer’s allowed payment amount for all services within the episode, including the amount paid by the insurer and any cost-sharing paid by the enrollee. We adjusted for geographic differences in the cost of doing business by using Medicare’s payment-adjustment methodology. For services provided by physicians and certain other health professionals, we applied the Geographic Practice Cost Index, which is Medicare’s estimate of the geographic differences in the costs of operating a medical practice, to the unadjusted spending for professional services. For services provided by hospitals, such as during an inpatient admission, and by certain other facilities, we applied the Hospital Wage Index value, which is Medicare’s estimate of differences in the wage-related component of the costs of doing business, to a portion of the unadjusted spending for those services. 
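As a rough illustration of the index-based adjustment just described, the sketch below deflates professional spending by a practice-cost index and the wage-related portion of facility spending by a wage index. The index values and the 62 percent wage-related share are illustrative assumptions only, not figures from this report or from Medicare's actual methodology:

```python
# Illustrative index-based cost adjustment (assumed values throughout).
WAGE_RELATED_SHARE = 0.62  # assumed share of facility costs that is wage-related

def adjust_professional(spending, practice_cost_index):
    # Deflate professional spending by the local practice-cost index.
    return spending / practice_cost_index

def adjust_facility(spending, wage_index):
    # Deflate only the wage-related portion of facility spending.
    wage_part = spending * WAGE_RELATED_SHARE
    other_part = spending * (1 - WAGE_RELATED_SHARE)
    return wage_part / wage_index + other_part

# Hypothetical MSA with above-average input costs.
prof = adjust_professional(2_000.0, practice_cost_index=1.10)
fac = adjust_facility(10_000.0, wage_index=1.25)
adjusted_total = round(prof + fac, 2)
print(adjusted_total)
```

The effect is that an MSA's spending is lowered (or raised) to remove differences that merely reflect local input prices, leaving the residual variation to be explained by demographics, health status, and market factors.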
We additionally adjusted for differences in the demographics and health status of enrollees in each MSA by using a regression-based approach. In the regression, the dependent variable was total cost-adjusted episode spending, and the independent variables were enrollee-level factors (such as age, gender, number of readmissions, and certain comorbidities) and MSA-level indicator variables to identify the portion of the remaining variation in episode spending that was attributable to specific geographic areas. Using all MSAs in our analyses, we reported the distribution of average adjusted episode spending for each procedure. Using the 78 MSAs with a sufficient number of episodes for all three procedures, we reported the correlation coefficient to show the extent to which MSAs with high or low episode spending for one procedure also had high or low episode spending for another procedure. We also examined whether MSAs in the lowest- and highest-spending quintiles were concentrated in particular regions of the nation. To examine how one of the components of spending—mix of service types—contributes to variation in episode spending across geographic areas, we assigned all adjusted spending within an episode to one of five service categories based on the procedure code for the service and place of service. The five service categories were (1) hospital inpatient, (2) hospital outpatient, (3) postdischarge, (4) professional, and (5) ancillary. For the 78 MSAs with a sufficient number of episodes for all three procedures, we compared the MSAs in the lowest- and highest-spending quintiles for each procedure, and we reported the extent to which those differences in spending for each service category contributed to differences in episode spending. We also reported the difference in adjusted spending by service category between the quintiles. 
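The cross-procedure comparison described above boils down to a Pearson correlation between MSA-level average adjusted spending vectors (the report's actual correlations ranged from 0.68 to 0.83). A self-contained sketch with toy spending values, not the report's data:

```python
import math

# Pearson's r between MSA-level average adjusted episode spending for
# two procedures. The spending vectors below are toy values.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical average adjusted episode spending by MSA (dollars).
stent = [21_000, 24_500, 19_800, 30_100, 26_700]
hip = [29_400, 33_000, 28_100, 41_500, 35_200]
r = pearson_r(stent, hip)
print(round(r, 2))
```

A value near 1 means MSAs that are expensive for one procedure tend to be expensive for the other, which is the pattern the report describes.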
To examine how the other components of spending contribute to variation in episode spending for private payers, we analyzed volume, intensity, and price of services for hospital inpatient and professional services. For hospital inpatient services, we measured volume as the number of days of the hospital stay, and we measured price by the amount of spending on the initial hospital inpatient admission (which excluded spending on any subsequent readmissions) because hospitals are generally paid one amount per admission regardless of the patient’s length of stay or the services delivered. In addition, we calculated the extent to which the price of the initial inpatient admission contributed to differences in episode spending between MSAs in the lowest- and highest-spending quintiles. For professional services, we measured volume as the number of services, measured intensity based on the relative value unit (RVU), which is an estimate of the resources needed to provide a given service, and calculated the price per unit of intensity by dividing average spending on professional services by the total units of intensity (number of RVUs) associated with those services. We used a regression-based approach to control for differences in the demographics and health status of enrollees in each MSA. In addition, we compared differences in volume, intensity, and price per unit of intensity between MSAs in the lowest- and highest-spending quintiles. This appendix ranks metropolitan statistical areas (MSA) by average adjusted episode spending for each of the three procedures we analyzed—coronary stent placement, laparoscopic appendectomy, and total hip replacement. 
This appendix presents average adjusted episode spending for each of the three procedures we analyzed—coronary stent placement, laparoscopic appendectomy, and total hip replacement—by service category for metropolitan statistical areas (MSA) in the lowest- and highest-spending quintiles. This appendix presents hospital inpatient spending, initial admission price, and number of days, by metropolitan statistical area (MSA), for each of the three procedures we analyzed—coronary stent placement, laparoscopic appendectomy, and total hip replacement. This appendix presents professional service spending, number of services, intensity, and price, by metropolitan statistical area (MSA), for each of the three high-cost procedures we analyzed—coronary stent placement, laparoscopic appendectomy, and total hip replacement. In addition to the contact named above, Christine Brudevold, Assistant Director; Ramsey Asaly; Greg Giusto; Andy Johnson; Corissa Kiyan; Elizabeth T. Morrison; Vikki Porter; and Dan Ries made key contributions to this report. | Research shows that spending on health care varies by geographic area and that higher spending in an area is not always associated with better quality of care. While a substantial body of research exists on geographic variation in spending in Medicare, less research has been done on variation in private sector health care spending, although this spending accounts for about a third of overall health care spending. As U.S. health expenditures continue to rise, policymakers and others have expressed interest in better understanding spending variation and how health care systems can operate efficiently—that is, providing equivalent or higher quality care while maintaining or lowering current spending levels. GAO was asked to examine geographic variation in private sector health care spending. 
GAO examined (1) how spending per episode of care for certain high-cost procedures varies across geographic areas for private payers, and (2) how the mix of service types, and the volume, intensity, and price of services contribute to variation in episode spending across geographic areas for private payers. Using a large private sector claims database for 2009 and 2010, GAO examined spending by MSA for episodes of care for three commonly performed inpatient procedures and examined spending by hospital inpatient, hospital outpatient, postdischarge, professional, and ancillary service categories. For inpatient and professional services, GAO examined the volume, intensity, and price of services. GAO's findings may not be generalizable to all private insurers due to data limitations. Spending for an episode of care in the private sector varied across metropolitan statistical areas (MSA) for coronary stent placement, laparoscopic appendectomy, and total hip replacement, even after GAO adjusted for geographic differences in the cost of doing business and differences in enrollee demographics and health status. MSAs in the highest-spending quintile had average adjusted episode spending that was 74 to 94 percent higher than MSAs in the lowest-spending quintile, depending on the procedure. MSAs with higher spending on one procedure generally had higher spending on the other two procedures. High- or low-spending MSAs were not concentrated in particular regions of the nation. The price of the initial hospital inpatient admission accounted for 91 percent or more of the difference in episode spending between MSAs in the lowest- and highest-spending quintiles. The price of the initial admission was the largest contributor to the difference for two reasons. First, it represented the largest percentage of adjusted episode spending. 
For example, for total hip replacement, the average price of the initial admission was $17,134, representing 76 percent of the $22,463 in total episode spending for MSAs in the lowest-spending quintile and $30,332, representing 82 percent of the $36,969 in total episode spending for MSAs in the highest-spending quintile. Second, the price of the initial admission varied considerably across MSAs. For MSAs in the highest-spending quintile, the average price of the initial admission for total hip replacement was 77 percent higher than for MSAs in the lowest-spending quintile. Professional services—office visits and other services provided by a physician or other health professional—were the second largest contributor to geographic differences in episode spending, but accounted for 7 percent or less of the difference in episode spending between MSAs in the lowest- and highest-spending quintiles. (See table.) MSAs in the highest-spending quintile had higher average prices and intensity (a measure of the resources needed to provide a service) but fewer services (volume) than MSAs in the lowest-spending quintile for all three procedures. The Department of Health and Human Services provided technical comments on a draft of this report, which were incorporated as appropriate. |
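The shares and percentage differences quoted for total hip replacement follow directly from the reported dollar figures; a quick arithmetic check:

```python
# Reproducing the total hip replacement figures cited above:
# initial-admission price as a share of episode spending, and the
# price gap between highest- and lowest-spending quintiles.
low_price, low_episode = 17134, 22463     # lowest-spending quintile
high_price, high_episode = 30332, 36969   # highest-spending quintile

low_share = round(100 * low_price / low_episode)               # share of episode spending
high_share = round(100 * high_price / high_episode)
price_gap = round(100 * (high_price - low_price) / low_price)  # percent higher
print(low_share, high_share, price_gap)  # prints: 76 82 77
```

This matches the reported 76 and 82 percent shares and the 77 percent price difference between quintiles.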
Over the past decade, the number of acres burned annually by wildland fires in the United States has substantially increased. Federal appropriations to prepare for and respond to wildland fires, including appropriations for fuel treatments, have almost tripled. Increases in the size and severity of wildland fires, and in the cost of preparing for and responding to them, have led federal agencies to fundamentally reexamine their approach to wildland fire management. For decades, federal agencies aggressively suppressed wildland fires and were generally successful in decreasing the number of acres burned. In some parts of the country, however, rather than eliminating severe wildland fires, decades of suppression contributed to the disruption of ecological cycles and began to change the structure and composition of forests and rangelands, thereby making lands more susceptible to fire. Increasingly, the agencies have recognized the role that fire plays in many ecosystems and the role that it could play in the agencies’ management of forests and watersheds. The agencies worked together to develop a federal wildland fire management policy in 1995, which for the first time formally recognized the essential role of fire in sustaining natural systems; this policy was subsequently reaffirmed and updated in 2001. The agencies, in conjunction with Congress, also began developing the National Fire Plan in 2000. To align their policies and to ensure a consistent and coordinated effort to implement the federal wildland fire policy and National Fire Plan, Agriculture and Interior established the Wildland Fire Leadership Council in 2002. In addition to noting the negative effects of past successes in suppressing wildland fires, the policy and plan also recognized that continued development in the wildland-urban interface has placed more structures at risk from wildland fire at the same time that it has increased the complexity and cost of wildland fire suppression. 
Forest Service and university researchers estimated in 2005 that about 44 million homes in the lower 48 states are located in the wildland-urban interface. To help address these trends, current federal policy directs agencies to consider land management objectives—identified in land and fire management plans developed by each local unit, such as a national forest or a Bureau of Land Management district—and the structures and resources at risk when determining whether or how to suppress a wildland fire. When a fire starts, the land manager at the affected local unit is responsible for determining the strategy that will be used to respond to the fire. A wide spectrum of strategies is available to choose from, some of which can be significantly more costly than others. For example, the agencies may fight fires ignited close to communities or other high-value areas more aggressively than fires on remote lands or at sites where fire may provide ecological or fuel-reduction benefits. In some cases, the agencies may simply monitor a fire, or take only limited suppression actions, to ensure that the fire continues to pose little threat to important resources, a practice known as “wildland fire use.” The Forest Service and Interior agencies have initiated a number of steps to address issues that we and others have identified as needing improvement to help federal agencies contain wildland fire costs, but the effects of these steps on containing costs are unknown, in part because many of the steps are not yet complete. Dozens of studies by federal agencies and other organizations examining federal agencies’ management of wildland fire have repeatedly identified a number of similar issues needing improvement to help contain wildland fire costs. These issues generally fall into one of three operational areas—reducing accumulated fuels, acquiring and using firefighting assets, and selecting firefighting strategies. 
Recent studies have also raised concerns about the framework used to share the cost of fighting fires between federal and nonfederal entities. First, federal firefighting agencies have made progress in developing a system to help them better identify and set priorities for lands needing treatment to reduce accumulated fuels. Many past studies have identified fuel reduction as important for containing wildland fire costs because accumulated fuels can contribute to more-severe and more costly fires. The agencies are developing a geospatial data and modeling system, called LANDFIRE, intended to produce consistent and comprehensive maps and data describing vegetation, wildland fuels, and fire regimes across the United States. The agencies will be able to use this information to help identify fuel accumulations and fire hazards across the nation, help set nationwide priorities for fuel-reduction projects, and assist in determining an appropriate response when wildland fires do occur. According to Forest Service and Interior officials, the agencies completed mapping the western United States in April 2007; mapping of the eastern states is scheduled to be completed by 2008 and of Alaska and Hawaii by 2009. The agencies, however, have not yet finalized their plan for ensuring that collected data are routinely updated to reflect changes to fuels, including those from landscape-altering events, such as hurricanes, disease, or wildland fires themselves. Forest Service and Interior officials told us that they recognize the importance of ensuring that data are periodically updated and are developing a plan to operate and maintain the system, including determining how often data will be updated. The agencies expect to submit this plan to the Wildland Fire Leadership Council for approval in June 2007. 
Second, the agencies have also taken some steps to improve how they acquire and use firefighting personnel, aviation resources, and equipment—assets that constitute a major cost of responding to wildland fires—but much remains to be done. The agencies have improved their systems for dispatching and monitoring firefighting assets and for gathering and analyzing cost data. However, they have yet to complete the more fundamental step of determining the appropriate type and quantity of firefighting assets needed for the fire season. Over the past several years, the agencies have been developing a Fire Program Analysis (FPA) system, which was proposed and funded to help the agencies determine national budget needs by analyzing budget alternatives at the local level—using a common, interagency process for fire management planning and budgeting—and aggregating the results; determine the relative costs and benefits for the full scope of fire management activities, including potential trade-offs among investments in fuel reduction, fire preparedness, and fire suppression activities; and identify, for a given budget level, the most cost-effective mix of personnel and equipment to carry out these activities. We have said for several years—and the agencies have concurred—that FPA is critical to helping the agencies contain wildland fire costs and plan and budget effectively. Recent design modifications to the system, however, raise questions about the agencies’ ability to fully achieve these key goals. A midcourse review of the developing system resulted in the Wildland Fire Leadership Council’s approving in December 2006 modifications to the system’s design. FPA and senior Forest Service and Interior officials told us in April 2007 they believed the modifications will allow the agencies to meet the key goals. The officials said they expected to have a prototype developed for the council’s review in June 2007 and to substantially complete the system by June 2008. 
We have yet to systematically review the modifications, but after reviewing agency reports on the modifications and interviewing knowledgeable officials, we have concerns that the modifications may not allow the agencies to meet FPA’s key goals. For example, under the redesigned system, local land managers will use a different method to analyze and select various budget alternatives, and it is unclear whether this method will identify the most cost-effective allocation of resources. In addition, it is unclear how the budget alternatives for local units will be meaningfully aggregated on a nationwide basis, a key FPA goal. Third, the agencies have clarified certain policies and are improving analytical tools to assist agency officials in identifying and implementing an appropriate response to a given fire. Officials have a wide spectrum of strategies available to them when responding to wildland fires, some of which can be significantly more costly than others. For individual fires, past studies have found that officials may not always consider the full range of available strategies and may not select the most appropriate one, which would consider the cost of suppression; value of structures and other resources threatened by the fire; and, where appropriate, any benefits the fire may provide to natural resources. The agencies call a strategy that considers these factors the “appropriate management response.” The agencies updated their policies in 2004 to require officials to consider the full spectrum of available strategies when selecting one to use. Nevertheless, other policies limit the agencies’ use of less aggressive strategies, which typically cost less. 
The Forest Service and Interior agencies are working together to revise these policies—revisions that could, for example, allow different areas of the same fire to be managed for suppression and wildland fire use concurrently or allow a fire that was previously being suppressed to be managed instead for wildland fire use. The agencies are also continuing to refine existing tools, and to develop new ones, for analyzing both fuel and predicted weather conditions to model expected fire behavior, information that officials can use to identify appropriate suppression strategies; these tools are still being designed and tested. It is still too early to tell, however, to what extent the policy changes being considered or the new tools being developed will help to contain costs. Finally, we and others have also reported that the existing framework for sharing firefighting costs between federal and nonfederal entities insulates state and local governments from the cost of protecting homes and communities in or near wildlands, which may reduce those governments’ incentive to adopt building codes and land use requirements that could help reduce the cost of suppressing wildland fires. Federal agencies, working with nonfederal entities, have recently taken steps to clarify guidance and better ensure that firefighting costs are shared consistently for fires that threaten both federal and nonfederal lands and resources. In early 2007, the Forest Service and Interior agencies approved an updated template that land managers can use when developing master agreements—which establish the framework for sharing costs between federal and nonfederal entities—as well as agreements on how to share costs for a specific fire. Because master agreements are normally updated every 5 years, however, it may take several years to fully incorporate this new guidance. 
Although the new guidance states that managers must document their rationale for selecting a particular cost-sharing method, officials told us that the agencies have no clear plan for how they will provide oversight to ensure that appropriate cost-sharing methods are used. Despite steps taken to strengthen their management of cost-containment efforts, the agencies have neither clearly defined their cost-containment goals and objectives nor developed a strategy for achieving them—steps that are fundamental to sound program management. To manage their cost-containment efforts effectively, the Forest Service and Interior agencies should, at a minimum, have (1) clearly defined goals and measurable objectives, (2) a strategy to achieve the goals and objectives, (3) performance measures to track their progress, and (4) a framework for holding appropriate agency officials accountable for achieving the goals. First, although the agencies have established a broad goal of suppressing wildland fires at minimum cost considering firefighter and public safety and the resources and structures to be protected, they have established neither clear criteria by which to weigh the relative importance of these often-competing priorities nor measurable objectives by which to determine if they are meeting their goal. Without such criteria and objectives, according to agency officials we interviewed and reports we reviewed, officials in the field lack a clear understanding of the relative importance that the agencies’ leadership places on containing costs and, therefore, are likely to select firefighting strategies without due consideration of costs. Second, the agencies have yet to establish an overall cost-containment strategy. 
Without a strategy designed to achieve clear cost-containment goals, the agencies (1) have no assurance that the variety of steps they are taking to help contain wildland fire costs are prioritized so that the most important steps are undertaken first and (2) are unable to determine to what extent these steps will help contain costs and if a different approach may therefore be needed. Third, the agencies recently adopted a new performance measure—known as the stratified cost index—that may improve the agencies’ ability to evaluate their progress in containing costs, but the measure may take a number of years to fully refine. Also, although the agencies have in recent years improved their data on suppression costs and fire characteristics, additional improvement is needed. In particular, cost data for “fire complexes”—that is, two or more fires burning in proximity that are managed as a single incident—are particularly difficult to identify. Thus, the costs of many of the largest fires are not included in the index, limiting its effectiveness. Further, to date, the index is based solely on fires managed by the Forest Service. Forest Service researchers are currently developing, at Interior’s request, a similar index for fires managed by the Interior agencies, but it will be several years, at the earliest, before enough data have been collected for the index to be useful. In addition, because the stratified cost index is based on costs from previous fires—and because the agencies have only recently begun to emphasize the importance of using less aggressive suppression strategies—we are concerned that the index does not include data from many fires where less costly firefighting strategies were used. As a result, the index may not accurately identify fires where more, or more-expensive, resources were used than needed. 
According to Forest Service officials, data from recent fires will be added annually; over time, the index should therefore include more fires where less aggressive firefighting strategies were used. Finally, the agencies have also taken, or are beginning to take, steps to improve their oversight and accountability framework, although the extent to which these steps will assist the agencies in containing costs is unknown. For example, the agencies have issued guidance clarifying that land managers, not fire managers, have primary responsibility for containing wildland fire costs, but they have not yet determined how the land managers are to be held accountable for doing so. Rather, the agencies have taken several incremental steps intended to assist land managers in carrying out this responsibility—such as assigning “incident business advisors” to observe firefighting operations and work with fire managers to identify ways those operations could be more cost-effective, and requiring land managers to evaluate fire managers for how well they achieve cost-containment goals. The utility of these steps, however, may be limited because the agencies have yet to establish a clear measure to evaluate the benefits and costs of alternative firefighting strategies. Some past studies have concluded that the absence of such a measure fundamentally weakens the agencies’ ability to provide effective oversight. Continuing concerns about the cost of preparing for and responding to wildland fires have spurred numerous studies and actions by federal wildland fire agencies, but little in the way of a coordinated and focused effort to rein in these costs. Although the agencies have taken—and continue to take—steps intended to contain wildland fire costs, the effect of these steps on containing costs is unknown, in part because the agencies lack a clear vision for what they want to achieve. 
Without clearly defined cost-containment goals and objectives, federal land and fire managers in the field are more likely to select strategies and tactics that favor suppressing fires quickly over those that seek to balance the benefits of protecting the resources at risk and the costs of protecting them. Further, without clear goals, the agencies will be unable to develop consistent standards by which to measure their performance. Perhaps most important, without a clear vision of what they are trying to achieve and a systematic approach for achieving it, the agencies—and Congress and the American people—have little assurance that cost-containment efforts will lead to substantial improvement. Thus, to help the agencies manage their ongoing efforts to contain wildland fire costs effectively and efficiently, and to assist Congress in its oversight role, we recommended in our report that the Secretaries of Agriculture and the Interior work together to direct their respective agencies to (1) establish clearly defined goals and measurable objectives for containing wildland fire costs, (2) develop a strategy to achieve these goals and objectives, (3) establish performance measures that are aligned with these goals and objectives, and (4) establish a framework to ensure that officials are held accountable for achieving the goals and objectives. Because of the importance of these actions and continuing concerns about the agencies’ response to the increasing cost of wildland fires—and so that the agencies can use the results of these actions to prepare for the 2008 fire season—the agencies should provide Congress with this information no later than November 2007. In commenting on a draft of our report, the Forest Service and Interior generally disagreed with the characterization of many of our findings; they neither agreed nor disagreed with our recommendations. 
In particular, the Forest Service and Interior stated that they did not believe we had accurately portrayed some of the significant actions they had taken to contain wildland fire costs, and they identified several agency documents that they believe provide clearly defined goals and objectives that make up their strategy to contain costs. Although documents cited by the agencies provide overarching goals and objectives, we believe that they lack the clarity and specificity needed by their land management and firefighting officials in the field to help manage and contain wildland fire costs. Therefore, we believe that our recommendations, if effectively implemented, would help the agencies better manage their cost-containment efforts and improve their ability to contain wildland fire costs. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. David P. Bixler, Assistant Director; Ellen W. Chu; Jonathan Dent; Janet Frisch; Chester Joy; and Richard Johnson made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Annual appropriations to prepare for and respond to wildland fires have increased substantially over the past decade, in recent years totaling about $3 billion. 
The Forest Service within the Department of Agriculture and four agencies within the Department of the Interior (Interior) are responsible for responding to wildland fires on federal lands. GAO determined what steps federal agencies have taken to (1) address key operational areas that could help contain the costs of preparing for and responding to wildland fires and (2) improve their management of their cost-containment efforts. This testimony is based on GAO's June 2007 report, Wildland Fire Management: Lack of Clear Goals or a Strategy Hinders Federal Agencies' Efforts to Contain the Costs of Fighting Fires (GAO-07-655). The Forest Service and Interior agencies have initiated a number of steps to address key operational areas previously identified as needing improvement to help federal agencies contain wildland fire costs, but the effects on containing costs are unknown, in part because many of these steps are not yet complete. First, federal firefighting agencies are developing a system to help them better identify and set priorities for lands needing treatment to reduce fuels, but they have yet to decide how they will keep data in the system current. Second, federal agencies have taken some steps to improve how they acquire and use personnel, equipment, and other firefighting assets--such as implementing a computerized system to more efficiently dispatch and track available firefighting assets--but have not yet completed the more fundamental step of determining the appropriate type and quantity of firefighting assets needed for the fire season. Third, the agencies have clarified certain policies and are improving analytical tools that assist officials in identifying and implementing an appropriate response to a given fire, but several other policies limit the agencies' use of less aggressive firefighting strategies, which typically cost less. 
Fourth, federal agencies, working with nonfederal entities, have recently taken steps to clarify guidance to better ensure that firefighting costs are shared consistently for fires that threaten both federal and nonfederal lands and resources, but it is unclear how the agencies will ensure that this guidance is followed. The agencies have also taken steps to address previously identified weaknesses in their management of cost-containment efforts, but they have neither clearly defined their cost-containment goals and objectives nor developed a strategy for achieving them--steps that are fundamental to sound program management. Although the agencies have established a broad goal of suppressing wildland fires at minimum cost--considering firefighter and public safety and resources and structures to be protected--they have no defined criteria by which to weigh the relative importance of these often-competing priorities. As a result, according to agency officials and reports, officials in the field lack a clear understanding of the relative importance the agencies' leadership places on containing costs and, therefore, are likely to select firefighting strategies without due consideration of the costs of suppression. The agencies have also yet to develop a vision of how the various cost-containment steps they are taking relate to one another or to determine the extent to which these steps will be effective. The agencies are working to develop a better cost-containment performance measure, but the measure may take a number of years to fully refine. Finally, the agencies have taken, or are beginning to take, steps to improve their oversight and increase accountability--such as requiring agency officials to evaluate firefighting teams according to how well they contained costs--although the extent to which these steps will assist the agencies in containing costs is unknown. |
Similar to drunk driving, drug-impaired driving can result in crashes leading to death or injury of vehicle occupants and pedestrians, along with other safety and traffic issues for individuals and society. Taking into consideration these possible harms, Congress has authorized grant funding to states to combat impaired driving through transportation legislation—most recently the Moving Ahead for Progress in the 21st Century Act (MAP-21). These grant programs are designed to encourage states to adopt and implement effective programs to reduce driving under the influence of alcohol, drugs, or the combination of alcohol and drugs. Historically, these programs have been focused on reducing alcohol-impaired driving. For example, in the late 1990s, Congress made grant funds available to states to encourage them to lower the illegal per se driving blood-alcohol concentration (BAC) limit to 0.08. In other words, with respect to a BAC limit of 0.08, anyone whose blood contains 8/100th of 1 percent of alcohol (or higher) would be deemed to be driving while intoxicated. All 50 states and the District of Columbia have 0.08 laws as well as laws making it illegal to drive while impaired by drugs. NHTSA administers grant programs for safety initiatives to assist states in their efforts to reduce traffic-related fatalities, including fatalities involving drug- and alcohol-impaired driving. NHTSA also provides guidance and technical assistance to states, and conducts research on drivers’ behavior and traffic safety. As part of such research, NHTSA works with traffic safety organizations, such as GHSA and MADD, and other federal agencies, such as ONDCP. In addition to NHTSA, other federal agencies conduct research and implement programs that, in whole or in part, seek to increase knowledge about the problem of drug-impaired driving and to identify and implement policies and programs to reduce drug-impaired driving. 
These agencies include ONDCP; HHS’s SAMHSA, NIH, FDA, and CDC; Department of Justice (DOJ); and NTSB. Drug-impaired driving may be caused by use of illegal drugs, legally prescribed or OTC drugs that are misused, and some legally prescribed or OTC drugs even when used as intended. While 23 states and the District of Columbia have legalized the use of marijuana for medical purposes, according to the NCSL, and two states—Colorado and Washington—allow the use of marijuana for recreational purposes, the federal government continues to consider marijuana as an illegal drug, with no medical use. Throughout the report, we have noted instances in which data may include marijuana as either a legal or illegal drug. According to the FDA, some prescription and OTC drugs can impair driving ability, while others have no effect or can even enable patients to drive more safely. For the purposes of this report, we have used the following terminology: Drugged driving: driving with the presence of drugs in one’s system regardless of impairment. Drug-impaired driving: driving with a diminished ability to operate a vehicle due to drug use. Drug test: the toxicological analysis of a biological specimen—blood, urine, or oral fluid (saliva)—to determine the presence or absence of specific drugs or their metabolites. DUI: driving under the influence of alcohol and/or drugs. DUID: driving under the influence of drugs. Impairment: a diminished ability to perform specific functions. Metabolites: the products of drug metabolism found in bodily fluids, which indicate prior drug use. Tetrahydrocannabinol (THC): the main psychoactive compound found in the cannabis (marijuana) plant. Toxicology: the study of the effects of drugs—whether illegal, prescription, or over-the-counter—on humans. Drugs may be categorized in several ways including by the chemical type or by the way the drug is used. 
For example, this report uses the following terms to classify drugs, among others: Antidepressants: drugs used to treat depression and other conditions, including anxiety disorders. Cannabinoids: compounds contained in marijuana. Narcotics: drugs including opium and those derived from it, such as heroin and codeine. Depressants: drugs that inhibit the activity of the brain and may result in muscle relaxation, lowered blood pressure and heart rate, and slowed breathing; includes anxiety and seizure medications. Stimulants: drugs that may result in increased alertness and elevated heart rate and respiration; includes cocaine and amphetamines. Synthetics: synthetic drugs, as opposed to natural drugs such as marijuana, are chemically produced in a laboratory to mimic the effects of other drugs. Synthetic drugs may be developed in order to circumvent existing drug laws. Examples include synthetic cannabinoids and cathinones. There is no national source of data on the extent of drug-impaired driving in the United States, but various state and national sources of data on drugged driving can provide limited information on the extent to which drivers in the United States have drugs in their systems. For example, national and state roadside surveys provide information on the prevalence of drugged driving in respective survey areas. Other sources of data provide some information on drugged and drug-impaired driving, such as surveys on self-reported drugged-driving behavior, impaired- driving arrests and toxicology results, and crash data. However, limitations to the currently available data include underreporting and a lack of centralization or standardization in reporting. National and state roadside surveys provide data on the prevalence of drugged driving in statistically representative samples of drivers. 
For example, NHTSA’s 2007 National Roadside Survey of Alcohol and Drug Use by Drivers (NRS) provides information on drivers testing positive for illegal, prescription, and OTC drugs in a nationally representative sample of weekend-nighttime and Friday-daytime drivers. Based on the 2007 survey, NHTSA estimated that 16.3 percent of nighttime drivers nationwide would have tested positive for at least one drug, with marijuana being the most common drug found in test results (see table 1). While NRS survey data provide useful information on the estimated prevalence of drugged driving, these results do not measure the extent to which drivers are impaired by the drugs in their systems, as the presence of drugs or drug metabolites does not necessarily indicate impairment. For example, marijuana metabolites can be detected in blood samples several weeks after daily users’ last use. Nonetheless, the 2007 NRS provided the first objective data on drug use among drivers in the United States and, according to officials at the National Institute on Drug Abuse (NIDA) and SOFT, served as a wake-up call regarding the extent of drugged driving in the United States. According to NHTSA, the survey was repeated in 2013-2014 following the same general methodology as the 2007 survey, and results will be available in 2015. Preliminary results from that survey estimated that 20 percent of nighttime weekend drivers would have tested positive for illegal, prescription, or OTC drugs that have been identified as potentially impairing. An assessment of trends in drugged driving in the United States may be feasible as future results beyond the 2013-2014 NRS become available. Table 1 shows additional drug test results from this survey, including poly-use (more than one class of drug). The NRS methodology, described by Lacey et al., involves randomly stopping drivers at selected locations; during nighttime weekend periods, drivers were tested for the presence of drugs using oral fluid, and a positive oral fluid test was counted as a positive drug test.
California’s Office of Traffic Safety found results similar to the 2007 NRS during its 2012 California Roadside Survey of Nighttime Weekend Drivers’ Alcohol and Drug Use (see table 1). Specifically, 14 percent of nighttime weekend drivers tested positive for at least one drug, with marijuana being the most frequent drug identified. Additionally, the Washington Traffic Safety Commission has commissioned the first of two roadside surveys, using what NHTSA describes as comparable methodology to the 2007 NRS, meant to measure drug and alcohol use among drivers before and after implementation of the 2012 state law legalizing recreational marijuana use. Results from the survey are expected in 2015. The NRS and California surveys also collected self-reported data on drug use among drivers who participated in the survey. For instance, during the California Survey, approximately 14 percent of all drivers who reported having used marijuana in the past reported having used it within 2 hours of driving in the past year. In the NRS, all drivers who provided an oral fluid sample were asked to report if they had used a drug before driving and, if so, what type. For the subset of Friday and nighttime weekend drivers who tested positive for at least one drug based on the oral fluid sample, the drivers’ answers regarding prior drug use were compared to positive oral fluid-analysis results to determine agreement between self-reported behavior and the oral fluid test. 
Results of this comparison include:
an estimated 7.5 percent of nighttime weekend drivers testing positive for cocaine reported they had used cocaine in the past 24 hours,
an estimated 25.7 percent of nighttime weekend drivers testing positive for marijuana reported they had used marijuana in the past 24 hours,
an estimated 59.9 percent of nighttime weekend drivers testing positive for pain medication reported they had used pain medication in the past 24 hours, and
an estimated 66.4 percent of nighttime weekend drivers testing positive for antidepressants reported they had used antidepressants in the past 24 hours.
Agreement between reported drug use in the past 24 hours and positive oral-fluid-analysis results for the nighttime-driving samples was greatest among users of antidepressants, cough suppressants, and pain medications and lowest for amphetamines and barbiturates. While self-reported data may be useful in tracking trends in reported drug use and attitudes about drugged driving, the NRS methodology noted that it may under-report actual activity and therefore be insufficient for estimating the extent of drugged driving. Additional studies compile self-reported data on attitudes and behaviors regarding drug use and driving, which may be helpful in tracking trends in behavior. SAMHSA’s National Survey on Drug Use and Health (NSDUH) is an annual survey of a nationally representative sample of the United States population. The NSDUH includes questions specific to driving while under the influence of illegal drugs. According to this survey, driving under the influence of illegal drugs is most common in respondents aged 18–25 (an estimated 10.6 percent in 2013, the latest survey results available). Overall, the percentage of respondents aged 12 or older who report driving under the influence of illegal drugs in the past year has been around 4 percent from 2008 through 2012.
NIH’s NIDA also supports various studies on drug use, including the College Life Study on health-related behaviors of college students and Monitoring the Future, which measures attitudes of adolescents related to drug and alcohol use. According to the 2012 Monitoring the Future Survey, an estimated 10.6 percent of high school seniors drove a car, truck, or motorcycle in the prior 2 weeks after having smoked marijuana. Data on drug-impaired driving arrests and toxicology results in our seven selected states provide some information on drug-impaired driving, but are limited by a lack of separation of data from driving under the influence (DUI) arrests, underreported instances of drug-impaired driving, decentralized reporting, and a lack of standardization in drug testing. For example, officials in six of the seven selected states told us that state arrest data do not currently separate drug-impaired driving and alcohol-impaired driving cases across law enforcement agencies. Officials from California stated that although the state recently revised its vehicle code to delineate DUI into three separate reportable sections, it could be several years until any data generated from the new system can be considered complete and accurate. Arizona’s Governor’s Office of Highway Safety tracks arrests for driving under the influence of alcohol and drugs separately, and data show an increase in drug-impaired driving arrests in the past 5 years, but this rise does not necessarily indicate an increase in drug-impaired driving. For example, Arizona data show a 966 percent increase in drug-impaired driving arrests from 2005 through 2013. However, the increase in arrests may be due to the 659 percent increase in the number of officers participating in impaired-driving enforcement activities, the 1,604 percent increase in the number of traffic stops, and better reporting of those traffic stops, rather than an increase in drug-impaired driving.
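To illustrate why raw arrest counts alone cannot establish a trend, the following sketch uses invented counts (not Arizona's actual stop and arrest figures) to show how arrests can rise more than ninefold while the arrest rate per traffic stop actually falls:

```python
# Hypothetical illustration (invented counts, not Arizona data): raw DUID
# arrest counts can rise sharply while the arrest *rate* per traffic stop
# stays flat or falls, if enforcement activity grows even faster.

def pct_increase(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

stops_2005, stops_2013 = 10_000, 170_400    # ~1,604% more traffic stops
arrests_2005, arrests_2013 = 100, 1_066     # ~966% more DUID arrests

rate_2005 = arrests_2005 / stops_2005       # share of stops ending in arrest
rate_2013 = arrests_2013 / stops_2013       # lower than in 2005

print(f"arrests up {pct_increase(arrests_2005, arrests_2013):.0f}%")
print(f"stops up {pct_increase(stops_2005, stops_2013):.0f}%")
print(f"arrest rate per stop: {rate_2005:.2%} -> {rate_2013:.2%}")
```

With these invented numbers, arrests rise 966 percent while the per-stop arrest rate declines, mirroring the report's point that the rate of contacts resulting in DUID arrests need not increase along with the counts.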
The rates of contacts resulting in DUID arrests do not show an increasing trend. See table 2. In addition, drug-impaired driving may be underreported as drivers impaired by both alcohol and drugs will likely be tested and prosecuted only for alcohol impairment because, according to officials from NHTSA and six of the seven selected states, evidence collection and prosecution are much easier for alcohol-impaired driving. Officials from NHTSA and six states said that in general, if a person suspected of impaired driving has a BAC over 0.08, the individual is not tested further for the presence of drugs, regardless of whether drug impairment is also suspected. As a result, drivers impaired by both drugs and alcohol may not be reported accurately in arrest data, contributing to a lack of knowledge about the number of drivers impaired by both drugs and alcohol. Further, based on our review of information from the seven selected states on arrests and toxicology results for DUID cases, we found that state data on impaired driving are often not centralized or complete. Among the selected states, data on DUID arrests are generally collected by local law enforcement agencies, and one of the seven selected states collects statewide data on DUIDs in a centralized database. While Arizona collects arrest data from a majority of its local law enforcement agencies in a central database, a small number of local agencies do not participate. One official estimated about 2 percent of agencies do not submit data. Similarly, statewide toxicology/drug-testing data may not be easily available because it is decentralized. In five of the seven selected states, toxicology data are maintained by individual state and local law enforcement agencies and toxicology labs (including private and public labs), with no centralized database. For example in California, DUID testing varies by jurisdiction and can be completed by one of 22 private labs or 6 public labs at the local or state level. 
In contrast, Vermont and Washington have centralized results of all DUID drug tests, which are conducted by a single laboratory in each state. While Vermont toxicology data do not show any clear trends, a study using Washington state toxicology data indicates that the prevalence of marijuana in suspected impaired-driving cases increased after marijuana was legalized, from an average of 19.1 percent of cases positive for tetrahydrocannabinol (THC) from 2009 through 2012 to 24.9 percent in 2013 (post-legalization). However, according to one of the authors, it is unclear whether this increase is due to factors other than an increase in marijuana-impaired driving, such as an increased focus on marijuana impairment in the state. Over the same period, the prevalence of alcohol and drugs other than marijuana in the population of suspected impaired drivers remained relatively stable in Washington. Further, drug test results may not be comparable among laboratories. Officials from three of the seven selected states, as well as representatives from SOFT, stated that a lack of standardization among labs means that test results from different labs cannot necessarily be compared. For example, labs do not have uniform reporting-level cutoffs for drugs (the level at which a drug is reported as present). Therefore, for the same sample, one lab may report the sample as positive for the presence of a drug while another lab may report the sample as negative because the amount present is below the reporting-level cutoff for the second lab. According to officials from HHS, while there are federal standards for forensic toxicology testing for federal agencies, and states may establish standards for forensic testing, there are currently no federal laboratory certification requirements for forensic laboratories conducting toxicology testing for state and local law-enforcement agencies.
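The effect of differing reporting-level cutoffs can be sketched with a minimal example; the lab names and cutoff values below are hypothetical, chosen only to show how the same specimen can be reported differently by two labs:

```python
# Illustrative sketch (hypothetical labs and cutoffs, not real standards):
# two labs analyze the same specimen but apply different reporting-level
# cutoffs, so one reports "positive" and the other "negative".

LAB_CUTOFFS_NG_PER_ML = {
    "lab_a": {"thc": 1.0},  # hypothetical: reports THC at >= 1.0 ng/mL
    "lab_b": {"thc": 5.0},  # hypothetical: reports THC at >= 5.0 ng/mL
}

def report_result(lab: str, drug: str, concentration_ng_ml: float) -> str:
    """Return 'positive' if the measured concentration meets the lab's
    reporting cutoff for that drug, else 'negative'."""
    cutoff = LAB_CUTOFFS_NG_PER_ML[lab][drug]
    return "positive" if concentration_ng_ml >= cutoff else "negative"

# The same 3.0 ng/mL specimen yields conflicting reports:
sample_ng_ml = 3.0
print(report_result("lab_a", "thc", sample_ng_ml))  # prints "positive"
print(report_result("lab_b", "thc", sample_ng_ml))  # prints "negative"
```

This is why, absent uniform cutoffs, aggregated positivity counts from different labs are not directly comparable.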
Despite these limitations, drug-testing results for DUID cases can provide information on the types and amounts of drugs present in drivers’ systems. A 2004–2005 survey of labs in the United States conducted by SOFT, in conjunction with the National Safety Council and the American Academy of Forensic Sciences, shows that the most common drug encountered in DUID cases is cannabis (marijuana), followed by benzodiazepines (anti-anxiety medications with sedative properties), then narcotics (including cocaine, hydrocodone, and morphine/codeine). The most common drugs, and how drugs are categorized, differ from region to region, as indicated by toxicological results we gathered from seven state toxicology labs. Four of the selected states do not separate DUID toxicological testing from other toxicological testing. For the three selected states that have separate toxicology data for DUID cases, the most common drug found in the most recent data available was marijuana; the second most common drug or drug category found was methamphetamine in two states and benzodiazepines in one state. (Laurel J. Farrell, Sarah Kerrigan, and Barry K. Logan, “Recommendations for Toxicological Investigation of Drug Impaired Driving,” Journal of Forensic Sciences, vol. 52, no. 5 (2007).) These state results cannot be readily compared, however, because of the limitations and variations in data collection, testing, and reporting noted above, among other things. In addition to limited data on the extent of drugged and drug-impaired driving, federal and state officials we spoke with cited difficulty in defining drug impairment as a significant challenge to addressing drug-impaired driving. Compared to alcohol, which is chemically simple and has relatively predictable effects, defining and identifying impairment due to drugs is much more complicated due to the large number of available drugs and their unpredictable side effects.
The lack of a definition of drug impairment, in turn, exacerbates challenges in enforcing drug-impaired driving laws and informing the public about the dangers of driving under the influence of drugs. Toxicologists in three of seven selected states, officials from NIDA and SAMHSA, and representatives from SOFT stated that identifying a link between impairment and drug concentrations in the body, similar to the 0.08 BAC threshold established for alcohol, is complex and, according to officials from SOFT, possibly infeasible. Alcohol is a chemically simple molecule that is absorbed and metabolized at a relatively consistent and predictable rate. In contrast, most drugs are chemically complex molecules; various drugs are absorbed and eliminated from an individual’s system at different rates. As a result, impairment does not necessarily correspond to a specific concentration level in the blood, and detectable amounts of certain drugs may remain even after impairing effects wear off. For example, as noted earlier, marijuana can be detected in a daily marijuana user’s system up to 30 days after using the drug. Toxicologists in four states and representatives from SOFT stated that, as a result, a positive drug test does not necessarily indicate impairment. Additionally, drugs can have varying and unpredictable effects on individuals. For example, individuals with prescriptions for central nervous system depressants, such as a prescription sleep aid, can develop a tolerance, which can reduce some of the impairing effects. During the first few days of taking a prescribed central-nervous-system depressant, a person can feel sleepy and uncoordinated, but as the body becomes accustomed to the effects of the drug and tolerance develops, these side effects begin to disappear. As a result, drug concentrations that would be impairing for one individual may not be impairing to another.
Further, drivers may combine more than one drug or mix drugs with alcohol, which can have unpredictable results and cause impairment more quickly than the same amounts of each substance taken alone. According to literature we reviewed, when combined, multiple drugs or drugs and alcohol can have a synergistic effect, rather than a simple additive effect, so each substance may increase the impairing effects of the others. Drug testing is more time-consuming and expensive than testing for alcohol because rather than the single blood or breath test needed to determine blood alcohol level, separate tests must be conducted for each suspected drug class (e.g., pain relievers, antidepressants), and the required instrumentation is sophisticated and costly. According to toxicologists from two states and representatives from SOFT, it is more expensive to test for drugs than alcohol. For instance, one toxicologist stated that standard equipment for alcohol analysis costs between $100,000 and $120,000, but equipment needed to test for certain types of drugs can cost up to $500,000. Additionally, the number of potentially impairing legal and illegal drugs is large. For example, the NRS tested drivers for 75 illegal, prescription, and OTC drugs identified as potentially impairing, and while some medications do not affect or can even improve driving ability, the FDA has identified eight common classes of prescription and OTC medications as potentially impairing. In addition, in 2013, the National Safety Council’s Alcohol, Drugs and Impairment Division reviewed and recommended a list of 33 drugs that should be included in the scope of drug testing. Further, new drugs are continually being developed for both legal and illegal markets, especially synthetics.
For instance, the United Nations Office on Drugs and Crime reported that in the United States, 51 synthetic cannabinoids (developed to reproduce the effects of THC/marijuana) and 31 synthetic cathinones (mimicking the effects of amphetamines) were identified in 2012. According to toxicologists in two states, to pursue detection of new drugs, even if the molecular structure is only slightly different from other known drugs, labs need to develop a new testing methodology and then validate that methodology through extensive testing on each individual instrument. For example, according to one toxicologist, developing and validating testing methods for a new drug recently cost about $31,000. Validation of new methodologies is a complicated task and requires qualified personnel, time, and money. Additionally, one toxicologist stated that, to run tests on synthetic cannabinoids and other synthetics, a standard sample against which to test must be purchased from either a chemical company or another source. The entire process is lengthy and expensive. State prosecutors and highway-safety office officials in three of the seven selected states said that there is a lack of knowledge among law enforcement about drug impairment in drivers. Furthermore, according to officials from NHTSA, GHSA, and IACP, basic training for officers on impaired-driving enforcement is insufficient for identifying drivers who may be impaired by drugs. For example, officers may be trained to administer the Standardized Field Sobriety Test, which focuses on detecting alcohol impairment in drivers; however, officers may not be trained to recognize drug impairment. One prosecutor stated that there is a misperception among some officers who have not received training to identify drug impairment that a drug-impaired driver should exhibit symptoms similar to a drunken one, including slurred speech and difficulty maintaining balance.
As a result, officers who are not trained to detect drug impairment may mistakenly think that a driver is not impaired. See table 3 for a comparison of some of the possible symptoms of alcohol and drug impairment, which vary depending on the type of drug used. The time between arrest and collection of a sample for drug testing can affect the quality of biological evidence, such as blood samples, because the concentration of drugs in the body is constantly changing. Specifically, logistical challenges and legal requirements pertaining to evidence collection can increase the time between arrest and sample collection, reducing evidence quality. Currently, there is no validated roadside drug-testing device, such as the evidential breath-testing device for alcohol, that would facilitate faster sample collection. Drug testing can be conducted from a blood, oral fluid, or urine sample. According to toxicologists from two states, representatives from SOFT, and literature we reviewed, blood sample analysis is currently the most accurate method of detecting recent drug use. Officials in two of the states that we selected said that it can be time-consuming to obtain a search warrant for a blood sample, because it requires approval by a judge. For example, officials from a state highway-safety office stated that a DUID arrest can take 3 to 4 hours if blood is being collected, because arresting officers must wait for a warrant signed by a judge to conduct the blood test. Moreover, depending on local requirements and resources, potential offenders may need to be transported by law enforcement to a hospital or other location for a phlebotomist or nurse to collect a blood sample, leading to further delays. As the arresting officer waits to collect the sample, the drug content in the suspect’s blood can decrease significantly, resulting in a less accurate measure of the drug content in the blood at the time of the actual traffic stop.
According to toxicologists in four states, the lack of qualified lab personnel and testing equipment can contribute to a backlog of samples that need to be tested for drugs, which can result in long waits for toxicology results. In five of the states that we selected, officials told us that current lab backlogs ranged from no backlog to about 2,000 cases (the oldest case being 2 years old). As a result of a lab backlog, officials in two of the states that we selected said that a prosecutor may have to move forward with a drug-impaired driving case without toxicology results due to legal time constraints for prosecution. Additionally, toxicologists from two states said that some drug compounds continue to degrade once blood samples have been collected and may not be detectable in the sample three to six months after collection, making the evidence less useful or of no use to prosecutors. State prosecutors, toxicologists, law enforcement and highway-safety office officials from all of the selected states, as well as NIDA, told us that they believe that there is a lack of public awareness about the dangers of driving after using prescription medications and marijuana. According to prosecutors whom we spoke to in three states, alcohol-impaired driving is easy for people to understand, because the public has been educated about the dangers of drunk driving through various campaigns. However, they noted that people believe there is less danger associated with prescription medications and, in some cases, marijuana. As a result, jurors may have a more difficult time understanding the dangers associated with driving under the influence of prescription medication, for example, based on their personal experience of taking similar medications without perceiving they are impaired or having a driving incident. 
Additionally, according to state prosecutors, toxicologists, and law-enforcement and highway-safety office officials from all of the selected states, as well as NIDA, the public perceives that driving after using marijuana is not dangerous. Moreover, officials from two state highway safety offices and NIDA stated that in their view, the public is generally unaware of the unpredictable effects of combining multiple drugs. As a result of this perceived lack of awareness, members of the public may risk unknowingly driving while impaired, potentially leading to vehicle collisions, injuries, and fatalities. Officials in four states said that there is a lack of focus on drug impairment in highway-safety public-education campaigns. For example, the traffic-safety-marketing communications resources related to impaired driving that are available on NHTSA’s website for states, partner organizations, and highway-safety professionals are generally focused on reducing drunk driving: “Drive Sober or Get Pulled Over” and “Buzzed Driving is Drunk Driving.” See figure 1. Additionally, campaigns in the states we selected generally use language that may suggest a focus on impairment due to alcohol, rather than drugs, for example, “Drive Hammered, Get Nailed.” Federal and selected state agencies are taking actions to address drug-impaired driving, including the challenges previously cited. These actions include improvements in the areas of research and data, education of law enforcement and court personnel, evidence quality, legal remedies, and public awareness. Furthermore, NHTSA, ONDCP, and states have coordinated their efforts to address drug-impaired driving challenges. However, NHTSA’s current public-awareness initiatives do not clearly include drug-impaired driving, and state officials we spoke with stated that NHTSA could do more to increase public awareness about the dangers of drugged driving.
Research on Drug Impairment: Federal agencies have completed and are conducting research to increase knowledge about the relationship between drugs, impairment, and crash risk. For example, NHTSA is currently researching the crash risk of drug and alcohol use (including illegal, prescription, and OTC drugs) by collecting samples from more than 10,000 crash-involved and non-crash-involved drivers in one city for 20 months. The results of this study are expected in 2015. Additionally, components of HHS, including NIDA, have researched the impairing effects of various drugs, including the effects of habitual marijuana use. For example, NIDA has conducted research on the length of time marijuana can be detected in blood after use (up to about 30 days in daily users). Regarding prescription and OTC drugs, the FDA uses information from studies conducted by drug manufacturers to assess new medications for adverse effects, including drowsiness, and requires that those effects are appropriately discussed in labeling, including package inserts. Data Availability, Consistency, and Timeliness: To increase the availability of data on drug-impaired driving, NHTSA has recommended that states distinguish whether impaired-driving cases involve alcohol, drugs, or both. Some states, including Colorado and Arizona, are implementing systems to track whether impaired-driving arrests involved drugs, alcohol, or both. NHTSA officials stated that such state efforts may also help improve federal data sources, such as FARS. Additionally, California, Hawaii, and New York have separated driving under the influence of alcohol, drugs, or the combined influence of drugs and alcohol in their impaired-driving statutes, a move that may result in more detailed data on the extent of drug-impaired and alcohol-impaired driving.
In 2013, the National Safety Council’s Alcohol, Drugs and Impairment Division reviewed and updated a set of minimum recommendations to toxicologists for drug testing in suspected impaired-driving cases and fatal crashes, including recommendations to improve the consistency of data on the frequency with which specific drugs are linked with impaired driving. Specifically, the recommendations included standards for the type of sample tested (blood, oral fluid, or urine), the scope of drugs for which to test, and cutoff values for reporting the presence of a drug. The NTSB has recommended that NHTSA develop and disseminate similar standards to state officials. According to NHTSA officials, they have discussed these types of standards with officials from SAMHSA, NTSB, and ONDCP. SAMHSA has recently developed oral fluid drug-testing standards for federal workplaces. These standards are currently under review and have not yet been released for public comment. NHTSA officials stated that they plan to wait until these workplace standards are further along in the approval process in order to develop guidance for states that is generally consistent with SAMHSA’s workplace standards. Further, some states have initiated or implemented plans to increase the capability of toxicology labs to improve the timeliness and availability of data. For example, Kansas has made recent efforts to increase the capacity of its forensic lab, housed in the Kansas Bureau of Investigation, through increased funding to retain specialized technicians, increased toxicology staffing, and building a new facility. According to an official from the lab, the improvements should help the state decrease its backlog of 2,000 toxicology cases (as of August 2014).
Additionally, according to a lab official, the Ohio Crime Lab received federal grant funding in 2013 to purchase needed instrumentation and coordinates with the Indiana Department of Toxicology and the Kentucky State Police Toxicology Laboratories to share information and validation techniques for new drug-testing methodologies. Drug Recognition Expert Program: One strategy for increasing knowledge about drug impairment among law enforcement mentioned by officials in all seven selected states is the Drug Recognition Expert (DRE) program, which provides training to law enforcement officers and others to identify drivers under the influence of drugs. For this program, IACP and NHTSA coordinated to leverage training originally developed in California. The training includes 72 hours of classroom training and between 40 and 60 hours of field training. Law enforcement officers who complete this training are certified by states as Drug Recognition Experts (DRE) and qualified to perform a 12-step evaluation protocol to assess subjects for drug impairment, which includes psychophysical tests and physical examinations. According to IACP’s 2013 annual report on the DRE program, as of December 2013, about 6,750 DREs had been certified in all 50 states and the District of Columbia. While officials in all of the selected states said that the DRE program was effective, some also discussed challenges related to the program, including:
Training is time-consuming and expensive: Beyond the cost for training, which is often covered through state and federal grants, departments may need to pay for travel and lodging costs as well as overtime pay and additional coverage while officers attend training.
Retention of certified officers: Officials from three selected states as well as IACP told us that high attrition among DREs makes it difficult to maintain enough certified officers.
Reasons cited for this attrition include:
o DRE-certified officers tend to be high-performing officers and are quickly promoted out of traffic units.
o DRE re-certification requirements are time-consuming and expensive and may be difficult for small departments to fulfill.
(International Association of Chiefs of Police, The 2013 Annual Report of the IACP Drug Recognition Section (Alexandria, VA: Oct. 20, 2014).)
NHTSA’s database of DRE reports is difficult to use: NHTSA maintains a database of DRE reports (submitted voluntarily by DRE officers) as a source of data on the program and drug-impaired driving. However, according to law enforcement officials from four of seven selected states, the DRE database is difficult to use, and the data are not currently available in a format that allows tracking of evaluations conducted by individual officers or departments. For example, officers from two states said that they have trouble accessing the system. They noted that, as such, some officers do not report evaluations, making the database incomplete. According to NHTSA officials, they periodically provide system improvements to make the database easier and more effective for officers to use; for example, they are currently determining what identifying information may be added to the system to make tracking easier, without compromising privacy or security. Additionally, NHTSA has plans to improve the system interface and software. We were not able to identify any comprehensive study on the effectiveness of the program through our literature review, but NHTSA is currently conducting a study of a sample of DRE reports to examine the predictive validity of each of the components of the DRE evaluation; the results are expected in early 2015.
Advanced Roadside Impaired Driving Enforcement Program: In addition to the DRE program, law enforcement agencies in all seven of the states we selected have implemented Advanced Roadside Impaired Driving Enforcement (ARIDE) training, which is meant to bridge the gap between the basic training on impaired driving provided in most police academies and the more intensive DRE program. The 16-hour ARIDE training program, developed through coordination between NHTSA, IACP, and the Virginia Association of Chiefs of Police, trains officers to identify and assess drivers suspected of being under the influence of drugs. Additionally, ONDCP, NHTSA, and IACP have coordinated to create an online version of the ARIDE class, which could improve access to drug-impairment evaluation training for law enforcement agencies with more limited resources. However, officials from five of the seven states said that they do not allow their officers to take the online version of the ARIDE class because, in their view, it is not a good substitute for the classroom training. NHTSA is currently conducting an evaluation of the ARIDE program, including a comparison of the original training with the online version, with an expected reporting date of early 2016. Education for legal professionals: To increase the chances of successful prosecution of drug-impaired drivers, NHTSA grant funding may be used for state- and regional-level Traffic Safety Resource Prosecutor (TSRP) and Judicial Outreach Liaison (JOL) positions to provide training, education, and technical assistance to state prosecutors, judges, law enforcement officials, and toxicologists. For example, TSRPs in Arizona and California train toxicologists on providing effective testimony during trials. TSRPs also provide technical support to state and local prosecutors both generally and on a case-by-case basis to increase local ability to convict impaired drivers.
Guidance: Federal agencies, including NHTSA and DOJ, have provided states with guidance regarding the enforcement of drug-impaired driving laws. For instance, DOJ's Community Oriented Policing Services component issued guidance on drug-impaired driving as part of its Problem-Specific Guides series. The guide includes a general description of drug-impaired driving and its causes as well as strategies to address enforcement challenges (many described above) and considerations for implementing the strategies described. Similarly, NHTSA published Saturation Patrols and Sobriety Checkpoints Guide: A How-to Guide for Planning and Publicizing Impaired Driving Enforcement Efforts, which guides state and local law enforcement agencies in planning and conducting high visibility enforcement campaigns (discussed below). To improve the likelihood that drug testing results will accurately reflect drug concentrations at the time of a traffic stop or crash, some states have taken actions to reduce the time between initial contact with law enforcement and collection of evidence. Roadside testing: Development of an accurate roadside drug-testing device, comparable to breath sensors for alcohol detection, could increase law enforcement officers' ability to identify drivers who have used drugs. Oral-fluid testing devices that are currently available test for a limited scope of drugs; representatives from SOFT stated that the scope includes the most common drugs found in drivers. NIH has conducted studies to validate the results of oral fluid and breath testing devices for certain drugs in controlled settings. Further, NHTSA is currently conducting research on the feasibility of incorporating available roadside oral fluid-testing devices in criminal justice processes, with results expected by early 2016. Additionally, a pilot program using various roadside oral fluid-testing devices has been conducted in Miami, with varying results depending on the device and type of drug.
For example, one device was more accurate than the other overall, and accuracy for certain drugs was higher than for others. Electronic warrant systems: Washington and Arizona have established or are in the process of establishing electronic warrant systems, through which applications for warrants to collect biological samples are submitted, reviewed, and either granted or denied via electronic means (telephone, fax, or e-mail). According to law enforcement officials in those states, these systems can decrease the time between arrest and collection of samples for drug testing, in an effort to preserve evidence quality. Increased access to phlebotomy services: For the past 8 to 9 years, Arizona has been training law enforcement officers as phlebotomists to reduce the time between arrest and collection of samples for drug testing, thus preserving evidence quality. Sentencing policies: Officials from state agencies in four of seven states said that sentencing strategies such as the use of impaired-driving courts reduce recidivism through programs that use a model of post-conviction supervision and treatment, combined with punishment such as fines, in order to change behavior. We have previously reported that participants who received such additional supervision and treatment through adult drug courts, including designated impaired driving courts, were generally less likely to be re-arrested than comparison group members drawn from the criminal court system. Zero-tolerance per se laws: ONDCP, GHSA, and others have recommended that states establish "zero tolerance" laws, which make it illegal per se (in itself) to drive with a detectable amount of a prohibited drug (defined by state law) in one's system, regardless of whether there is evidence of impairment.
According to a 2010 NHTSA report on the effectiveness of such laws, the compelling argument for zero tolerance laws is that, in their absence, a driver under the influence of an illegal substance is less likely to be prosecuted for impaired driving than a driver under the influence of alcohol. This problem exists because a maximum threshold linked to impairment has been established for alcohol, but there is no practical way to establish such a level for drugs. This study found some anecdotal support that zero tolerance laws increased prosecution rates, but a lack of reliable data prevented NHTSA from conclusively determining the effectiveness of such laws. Further, officials from NTSB stated there is no evidence that zero tolerance laws reduce impaired driving (since a driver need not be impaired to be prosecuted under the law). As of December 2014, 15 states had enacted laws that prohibit driving with a detectable prohibited substance in the driver's body, without any other evidence of impairment. For example, Illinois prohibits drivers from having a detectable amount of any illegal substance or other prohibited substances listed in the statute in his or her system, which would include certain medications such as hyoscyamine, which is used to control symptoms associated with gastrointestinal disorders. See figure 2. Per se laws/drug concentration limits: Some officials recommend establishing "per se" limits, or thresholds, for certain drugs, similar to the 0.08 BAC limit established for alcohol. These laws make it illegal per se (in itself) for a driver to have a specific amount of a certain drug in his or her blood, oral fluid, or urine, regardless of detectable impairment. As of December 2014, six states had enacted per se laws based on drug concentration limits for one or more drugs (see fig. 2). For instance, Washington has established a limit of 5 nanograms of THC per milliliter of blood for drivers.
Colorado has implemented a similar law establishing 5 nanograms of THC per milliliter of blood as a "permissible inference," which, according to a state official, means jurors may infer that the defendant was impaired but are not required to do so. Additionally, Nevada and Ohio have developed per se thresholds for certain controlled substances including illegal drugs such as cocaine and heroin as well as legal drugs such as amphetamines, which can be used to treat conditions such as attention deficit disorder and narcolepsy. Per se laws based on drug concentration limits may increase prosecutions of drivers who are over the established limits, but the effectiveness of these laws is unknown, and they may have unintended consequences. Officials from Colorado, Ohio, and SOFT stated that per se limits make prosecution of drivers who are over the limits more likely. However, others, including officials from California and Washington, stated that such limits may also make prosecution of drivers who were observed to be impaired but whose drug test results were under the established limit more difficult: once thresholds are established, drivers and jurors may develop the false assumption that driving below the established limit is legal, even if there is observable impairment. Some toxicologists, including representatives from SOFT, stated that per se laws based on thresholds may serve a particular policy goal of increasing prosecutions, but that a link between the established thresholds and impairment levels cannot be supported scientifically. A representative from SOFT also stated that, because illegal drugs generally have no medical purpose, there is a significant difference between establishing per se threshold levels for illegal drugs versus per se threshold levels for prescription and OTC medications.
According to the representative, setting per se limits for prescription and OTC medications may cause problems for those who are taking medications as prescribed and may not be impaired. Some federal and selected state agencies have implemented drug-impaired driving awareness campaigns to increase public knowledge about the dangers of drugged driving. For instance, ONDCP developed the Teen Drugged Driving: Parent, Coalition, and Community Group Activity Guide, which provides coalitions, prevention groups, and parent organizations with facts on the dangers and extent of teen and young adult drugged driving, parent and community activities for effective prevention, and resources to further assist in prevention activities. At the state level, Colorado, Washington, and Ohio have conducted public awareness campaigns focused on drug-impaired driving. For example, Colorado has aired a series of public service announcements focusing on the dangers of driving after using marijuana and emphasizing that driving impaired remains illegal, even as marijuana has been legalized at the state level. While NHTSA has also established impaired-driving public awareness programs, materials associated with these programs do not explicitly include information on the dangers of drug-impaired driving. NHTSA's public awareness programs include high-visibility enforcement campaigns such as the "Drive Sober or Get Pulled Over" and "Buzzed Driving is Drunk Driving" campaigns (see fig. 1, presented previously), which, according to NHTSA officials, include drug-impaired driving. For these campaigns, NHTSA provides media, such as television and radio advertisements, to states to help inform the public about the dangers of impaired driving and provides grant funding for state and local police to perform highly visible checkpoints and patrols to reinforce the concept that impaired drivers are at a high risk of being caught and prosecuted.
However, using the terms "sober" and "drunk" in the campaign slogans may indicate that the campaigns are about the dangers of driving after consuming alcohol as opposed to drugs. NHTSA's mission is to support state traffic safety efforts. However, officials from six of seven selected states as well as representatives from GHSA stated that public education more explicitly focused on the dangers of drugged driving is needed, particularly on impairment due to prescription and OTC medications and marijuana. Officials from some states recommended actions such as increased education and requirements for medical professionals regarding prescription drug use and drug-impaired driving, but also recommended that NHTSA expand the current messaging on impairment to include the dangers of marijuana and prescription drugs, which are not explicitly addressed through NHTSA's impaired driving advertising campaigns. According to NHTSA officials, the current lack of data on impairment thresholds and the broad range of drug effects make it more difficult to concisely communicate the dangers of drug-impaired driving compared to alcohol-impaired driving and have prevented them from including drugs more explicitly in current messaging. However, the messaging for current alcohol-impaired driving campaigns—such as Drive Sober or Get Pulled Over—does not specifically allude to the 0.08 BAC limit. Increased focus on information about the potential dangers of driving after using drugs could provide an important reminder to drivers that alcohol is not the only substance that may impair driving ability. Adding more explicit messaging about drug-impaired driving could be relatively simple and could potentially reduce crashes and associated injuries and fatalities. NHTSA officials also said they have other plans to improve public awareness about the dangers of drug-impaired driving.
For example, NHTSA officials plan to conduct a recurring survey of driver attitudes and behaviors regarding drugged and drug-impaired driving. Data from this survey could help NHTSA more fully understand any gaps in public awareness about the dangers of drug-impaired driving and develop appropriate public awareness campaigns to address those gaps. NHTSA officials also plan to provide training for physicians and other medical professionals on how to inform patients about the dangers of driving after taking some prescription and OTC medications. These efforts to improve public awareness are in the initial planning stages and could take several years to implement. To leverage the expertise of various stakeholders to address drug-impaired driving, federal agencies—including NHTSA, ONDCP, HHS, and NTSB—and states have coordinated to identify strategies to address drug-impaired driving. For instance, ONDCP and NHTSA convened a roundtable of drug testing and criminal justice experts to examine new drug testing technology in 2012, and have since coordinated to initiate the additional research and testing of roadside oral-fluid-testing devices previously discussed. Additionally, Colorado and Washington have established impaired-driving working groups to develop and implement strategies for addressing drug- and alcohol-impaired driving. These working groups include state and local law-enforcement, traffic-safety, public-health, and motor-vehicle agencies as well as representatives from the court system, professional organizations, the marijuana industry, and others. The lack of complete and reliable data on the extent and nature of drug-impaired driving presents federal, state, and local agencies with challenges to developing and implementing effective countermeasures.
Ongoing and planned activities by NHTSA, ONDCP, and others are intended to increase available information on drug-impaired driving and strategies to address the problem, and coordination across the various federal, state, and local stakeholders is essential to fully implement any strategy. For example, development and validation of a roadside oral-fluid-testing device may improve evidence collection processes for local and state law enforcement, but continued efforts to standardize lab procedures, collect and maintain data, educate law enforcement to recognize potential drug impairment, and educate prosecutors are also important to realize the benefits of faster evidence collection. Despite limited data and the challenge of defining impairment, federal and state agencies have identified and implemented promising activities—such as the DRE Program, initiatives to reduce the time to collect and analyze evidence, and public awareness campaigns—to combat drug-impaired driving and associated crashes, fatalities, and injuries. For example, the DRE Program and high-visibility enforcement campaigns have already been implemented in many jurisdictions. NHTSA and other federal agencies have initiated, supported, and continue to improve these activities. However, state officials consistently noted that their public awareness efforts would benefit from additional support from NHTSA to help increase public knowledge of the potential dangers of drug-impaired driving, including impairment due to some prescription medications and marijuana. While NHTSA's plans to improve public awareness of drug-impaired driving through a survey on public behaviors and attitudes and training for medical professionals are promising, these initiatives will take time to implement. Additional efforts, such as general messaging reminding the public about the impairing effects of some drugs and the dangers of driving after using drugs, could help improve public awareness in the near term.
We recommend that the Secretary of Transportation direct the Administrator of NHTSA to identify actions—in addition to the agency's currently planned efforts—to support state efforts to increase public awareness of the dangers of drug-impaired driving. This effort should be undertaken in consultation with ONDCP, HHS, state highway-safety offices, and other interested parties as needed. We provided a draft of this report to DOT, ONDCP, and HHS for review and comment. In written comments (reproduced in appendix II), DOT agreed with our findings and recommendation. ONDCP had no comments. HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Transportation, the Director of the White House's Office of National Drug Control Policy, the Secretary of Health and Human Services, interested congressional committees, and other interested parties. The report also is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The Senate Report accompanying the Transportation, Housing and Urban Development, and Related Agencies Appropriations Bill, 2014, required us to conduct a study on the strategies that NHTSA, ONDCP, and states have taken to address drug impairment and assess the challenges they face in detecting and reducing drug-impaired driving. Pursuant to that mandate, we reviewed the actions of relevant federal agencies and selected states as well as relevant literature to identify actions taken to address drug-impaired driving and associated challenges.
Specifically, we analyzed (1) what is known about the extent of drug-impaired driving in the United States; (2) what challenges, if any, exist for federal, state, and local agencies in addressing drug-impaired driving; and (3) what actions federal and state agencies have taken to address drug-impaired driving and what gaps exist, if any, in the federal response to drug-impaired driving. This review defines drug-impaired driving as driving while impaired by illegal drugs or prescription and over-the-counter (legal) medications. This review does not include impaired driving among commercial motor carriers, for which different laws and regulations apply than for members of the general public. To describe what is known about the extent of drug-impaired driving in the United States, to identify challenges to addressing drug-impaired driving, and to identify actions federal and state agencies have taken to mitigate those challenges, we conducted a literature search to identify sources of data on the extent of drugged and drug-impaired driving in the United States and studies on the issue of drug-impaired driving, including challenges and strategies for addressing the problem. We identified existing studies from peer-reviewed journals, government reports, and conference papers based on searches of various databases, such as ProQuest and Transportation Research International Documentation. Search parameters included international studies, studies across the U.S. and in specific states, and research on drug-impaired-driving challenges and countermeasures. These parameters resulted in 394 abstracts, which we narrowed to 225 by eliminating, for example, studies addressing only the extent of drugged or drug-impaired driving in countries other than the United States or studies on the broader topic of drug abuse.
We further divided the literature into studies on the extent of drugged or drug-impaired driving in the United States and studies on drug-impaired driving challenges and countermeasures. Studies including data on the extent of drugged or drug-impaired driving in the United States were reviewed to identify the source of the data and limitations. We reviewed these studies and determined that they were sufficiently reliable for the purposes of this report. Studies on countermeasures to address drug-impaired driving and challenges were used to provide additional context and information when needed. We also reviewed state laws to develop information regarding state zero-tolerance per se laws and state per se laws based on drug concentration limits. Additionally, we reviewed documentation, such as research studies and plans and agency guidance, and interviewed officials from relevant governmental and non-governmental organizations to identify (1) sources of data and their limitations, (2) challenges to addressing drug-impaired driving, and (3) actions taken by federal and state agencies to address drug-impaired driving as well as gaps in the federal response. Federal agencies, advocacy organizations, and professional organizations were chosen based on having a mission relevant to the issue of drug-impaired driving and recommendations from relevant stakeholders.
We interviewed officials at relevant federal agencies including the National Highway Traffic Safety Administration (NHTSA); the White House's Office of National Drug Control Policy (ONDCP); National Transportation Safety Board (NTSB); and Department of Health and Human Services' (HHS) components including the Substance Abuse and Mental Health Services Administration (SAMHSA), Centers for Disease Control and Prevention (CDC), Food and Drug Administration (FDA), and National Institutes of Health (NIH). Additionally, we reviewed documentation obtained from and interviewed officials in seven states: Arizona, California, Colorado, Kansas, Ohio, Vermont, and Washington. We reviewed documentation and interviewed officials at state agencies responsible for highway-safety and drug-impairment programs, advocacy organizations, and professional organizations based on recommendations from the state highway-safety office. For example, we interviewed officials from state highway-safety offices, departments of public health and motor vehicles, state law-enforcement agencies, Drug Recognition Expert (DRE) program coordinators, state Traffic Safety Resource Prosecutors (TSRP), associations of police chiefs and district attorneys, state and local toxicologists, and local interest groups. We selected these states based on recommendations from federal officials and representatives from advocacy and professional organizations and to represent a variety of laws, programs, and other factors. Our selection included states with legalized recreational marijuana, states that geographically border states in which recreational marijuana use has been legalized, states with legalized medical marijuana, states representing a variety of drug-impaired driving laws, and states identified as having robust programs dealing with driving under the influence of drugs.
We also reviewed documentation and interviewed representatives from advocacy and professional organizations including the Governors Highway Safety Association (GHSA), National District Attorneys Association, Society of Forensic Toxicologists, Inc. (SOFT), Mothers Against Drunk Driving (MADD), International Association of Chiefs of Police (IACP), Insurance Institute for Highway Safety, and National Conference of State Legislatures (NCSL). We conducted this performance audit from April 2014 through February 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact above, Sara Vermillion (Assistant Director), Ria Bailey-Galvis, Melissa Bodeau, D. Kyle Fowler, Katie Hamer, Sara Ann Moessbauer, Cheryl Peterson, and Maria C. Staunton made key contributions to this report.

The issue of alcohol-impaired driving has received broad attention over the years, but drug-impaired driving also contributes to fatalities and injuries from traffic crashes. However, knowledge about the drug-impaired-driving problem is less advanced than for alcohol-impaired driving. Through Senate Report No. 113-45 (2013), Congress required GAO to report on the strategies NHTSA, ONDCP, and states have taken to address drug-impaired driving and challenges they face in detecting and reducing such driving. This report discusses (1) what is known about the extent of drug-impaired driving in the United States; (2) challenges that exist for federal, state, and local agencies in addressing drug-impaired driving; and (3) actions federal and state agencies have taken to address drug-impaired driving and what gaps exist in the federal response.
GAO reviewed literature to identify sources of data on drug-impaired driving; reviewed documentation and interviewed officials from NHTSA, ONDCP, and HHS; and interviewed officials from relevant advocacy and professional organizations and seven selected states. States were selected based on the legal status of marijuana, proximity to states with legalized marijuana, and drugged-driving laws. Various state and national-level data sources—including surveys, arrest data, drug-testing results, and crash data—provide limited information on the extent of drugged and drug-impaired driving in the United States. For example, based on preliminary results from a representative sample of weekend-nighttime and Friday-daytime drivers, the National Highway Traffic Safety Administration's (NHTSA) 2013-2014 National Roadside Survey of Alcohol and Drug Use by Drivers (NRS) estimated that 20 percent of drivers would have tested positive for at least one drug, with marijuana being the most common drug. However, the survey does not capture the extent to which drivers were impaired by drugs. Arrest data and drug-testing results provide some information on drug-impaired driving, but these data are limited. For example, data for drug impairment may not be separated from that for alcohol impairment, and drug testing is not standardized. According to NHTSA officials, currently available data on drug involvement in crashes are generally unreliable due to variances in reporting and testing. The lack of a clear link between impairment and drug concentrations in the body makes it difficult to define drug impairment, which, in turn, exacerbates challenges related to enforcement and public awareness. Compared to alcohol, defining and identifying impairment due to drugs is more complicated due to the large number of available drugs and their unpredictable effects. For example, the NRS includes tests for 75 illegal, prescription, and over-the-counter (OTC) drugs identified as potentially impairing.
Additionally, law enforcement processes for obtaining samples for drug testing can be time consuming and result in a loss of evidence. For example, there is no validated device for roadside drug testing, and obtaining a search warrant to collect a blood sample to confirm the presence of drugs in a driver's system could take several hours, during which time the concentration of the drug in the driver's system could dissipate. Further, state officials identified limited public awareness about the dangers of drugged driving as a challenge. As a result, members of the public may drive while impaired without knowing the risks, potentially leading to collisions, injuries, and fatalities. Federal and state agencies—including NHTSA, the White House Office of National Drug Control Policy (ONDCP), and the Department of Health and Human Services (HHS)—are taking actions to address drug-impaired driving, including improvements in the areas of research and data, education for police officers, evidence gathering, and legal changes. For example, NHTSA is currently conducting research to assess the crash risk associated with drug use (including illegal, prescription, and OTC drugs) by collecting samples from more than 10,000 drivers. However, public awareness of the dangers of drug-impaired driving is an area in which state officials told us that NHTSA could do more to support their efforts. As part of its mission to support state safety efforts, NHTSA has provided media and other materials to states for impaired-driving awareness programs, but these materials are focused on alcohol-impaired driving. While NHTSA plans to improve public awareness through initiatives to conduct surveys on drug-impaired-driving behaviors and attitudes as well as training for medical professionals, these plans could take several years to implement. 
Additional efforts, such as general messaging reminding the public about the impairing effects of drugs, could help improve public awareness in the near term. GAO recommends that NHTSA take additional actions to support states in emphasizing to the public the dangers of drug-impaired driving. DOT agreed with GAO's recommendation.
The FTS2001 program is the successor to the two programs that provided long distance telecommunications to the federal government: the Federal Telecommunications System (FTS) and the FTS 2000 program. Each program represented an improvement over its predecessor in terms of available services and technology. The programs’ principal differences in acquiring and delivering long distance telecommunications services are summarized in table 1. A significant difference between the FTS 2000 and the FTS2001 programs is that, unlike the FTS 2000 program, the FTS2001 program is not mandatory. That is, agencies are not required to use FTS2001 for their telecommunications needs. Nevertheless, all but one federal agency represented on the IMC agreed in October 1997 to transfer their core telecommunications requirements expeditiously from FTS 2000 to FTS2001 contracts upon award of those contracts. Between 1994 and 1997, IMC and GSA cooperatively developed, revised, and issued a post-FTS 2000 program strategy, during that time considering and incorporating comments from industry as well as from the Congress. IMC and GSA set two goals for the FTS2001 program: to ensure the best service and price for the government and to maximize competition for services. An integral part of the basic strategy to achieve those goals was ultimately to move beyond offering only long distance telecommunications services by adding integrated end-to-end telecommunications services, that is, permitting each contractor to offer both local and long distance services. Consistent with this original program strategy, the overall FTS2001 program allows further competition in the long distance market beyond the two contractors already awarded FTS2001 contracts. 
For example, service providers who are awarded contracts under GSA’s Metropolitan Area Acquisition (MAA) program—which provides local telecommunications services in selected geographic areas—may be permitted to compete for FTS2001 business (1) if allowed by law and regulation, (2) after the FTS2001 contracts have been awarded for a year, and (3) if GSA determines that it is in the government’s best interests to allow such additional competition. In implementing this program strategy, GSA awarded two contracts for FTS2001 long distance services—one to Sprint in December 1998 and one to MCI WorldCom in January 1999. Services offered to agencies under these contracts include toll-free and other voice services; international voice and data services; Internet- and intranet-based services; and low- speed and high-speed data communications services. Each contract is for 4 base years from the date of award, with four 1-year options, and each vendor is guaranteed minimum revenues of $750 million over the life of the contracts. Although to date it has also made MAA contract awards to 8 service providers in 19 metropolitan areas across the country, GSA has not yet allowed MAA contractors to offer FTS2001 long distance services. Observing that the FTS2001 minimum revenue guarantees may take longer to meet than the 4-year base period of the Sprint and MCI WorldCom contracts, the GSA Administrator considered those guarantees to be a major factor in deciding when to open the FTS2001 long distance market to MAA contractors. Therefore, the sooner the federal government can be assured of satisfying its FTS2001 minimum revenue guarantees, the sooner GSA can add more long distance options and maximize the ability of federal agencies to achieve basic program objectives cost effectively. 
In an April 2000 report to the Chairman of the Committee on Government Reform, we assessed the FTS2001 minimum revenue guarantees and their constraining effect on GSA's ability to add competition to the FTS2001 program. To support service continuity during the FTS2001 transition period, GSA awarded sole-source extension contracts, effective in December 1998, to the two FTS 2000 contractors. These contracts had a 12-month base period with two 6-month options. The AT&T and Sprint extension contracts were originally valued at $801.3 million and $285.5 million, respectively. The second 6-month option on the FTS 2000 extension contracts expired on December 6, 2000, thereby establishing this date as the goal for completing the FTS2001 transition. The transition of the federal government's long distance telecommunications services from its FTS 2000 contracts with Sprint and AT&T to its FTS2001 contracts with Sprint and MCI WorldCom is a sizable and complex undertaking. For example, the multibillion-dollar FTS 2000 long distance services contracts ultimately reached more than 1.7 million users during the contracts' 10-year existence. FTS 2000 revenues for fiscal year 1999 alone approached $752 million for a variety of voice, data, and video communications services to users throughout the federal government. The significant differences between the government's FTS 2000 transition and its transition to FTS2001 are highlighted in table 2. Although the FTS2001 long distance contracts are administered by GSA, several parties share responsibility for moving to and implementing those contracts. In particular, agencies themselves must select which of the two service providers best meets their service requirements and cost objectives. (Agencies can also select both providers if that arrangement best suits their needs.) This selection is the first step in the transition process.
Once this selection is made, the next step is for the selected FTS2001 contractor to complete a site survey of agency requirements and develop a site transition plan. Agencies then order services. The FTS2001 contractors then must complete the order for the service to be transitioned. At this point in the process, local exchange carriers become involved. In coordination with the two FTS2001 long distance service providers, local carriers provide the facilities and network connectivity that link a customer agency’s premises to the FTS2001 contractor’s network. Finally, after the transition order is completed, the agency must issue a disconnect order to the incumbent FTS 2000 service provider, who must then execute it. For some agencies, this shared responsibility shifts some control over the transition process away from GSA. Rather than actively managing and directing the FTS2001 program transition, as it did with FTS 2000, GSA views itself as a facilitator. Principal responsibility for transition rests with the agencies, in partnership with their selected service providers, where an agency chooses to manage its own transition. Nevertheless, GSA does have important program-level responsibility for transition planning. For example, GSA’s Federal Technology Service organization is responsible for FTS2001 program management and contract administration; centralized customer service; ongoing coordination and procurement of services; billing support to agencies; and engineering, planning, and performance support through review of transition plans and contractor performance monitoring. Oversight is also provided by the IMC’s Transition Task Force, which was established to aid transition efforts by sharing information and lessons learned, identifying and solving common problems, and advising GSA FTS managers on transition management and contractual issues.
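The multistep handoff described above can be sketched as a simple ordered model. This is an illustrative sketch only: the step names below are our own shorthand, not terminology from the FTS2001 contracts.

```python
from enum import IntEnum

# Illustrative model of the FTS2001 transition steps described in the text;
# the step names are shorthand, not contract terminology.
class Step(IntEnum):
    SELECT_PROVIDER = 1      # agency selects Sprint, MCI WorldCom, or both
    SITE_SURVEY = 2          # contractor surveys requirements and plans the site
    ORDER_SERVICE = 3        # agency orders FTS2001 services
    COMPLETE_ORDER = 4       # contractor and local carrier turn up the service
    DISCONNECT_FTS2000 = 5   # agency issues, and the incumbent executes, a disconnect order

def transition_complete(last_step_finished: Step) -> bool:
    """A site is fully transitioned only when the FTS 2000 disconnect is executed."""
    return last_step_finished == Step.DISCONNECT_FTS2000

print(transition_complete(Step.COMPLETE_ORDER))     # service moved, but FTS 2000 still billing
print(transition_complete(Step.DISCONNECT_FTS2000))
```

The model deliberately separates COMPLETE_ORDER from DISCONNECT_FTS2000, because progress counted by completed service orders can overstate progress counted by executed disconnect orders.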
This IMC Transition Task Force began meeting with agency, contractor, and GSA staffs in December 1999 to oversee and support transition activities. According to the IMC’s Transition Task Force, about 88 percent of FTS2001 transition service orders were completed as of February 2001, whereas the original schedule called for the transition to be complete by December 6, 2000. Transition progress varies by the type of service ordered. According to transition management reports prepared by IMC’s Transition Task Force, the government had by February 2001 transferred most voice services from FTS 2000 to FTS2001 and substantially completed the transition of its dedicated transmission services. However, the transition of switched data services—primarily large agency data communications networks using frame relay or ATM (asynchronous transfer mode) technologies—was lagging significantly. These transition results are summarized in table 3. Revised schedules developed by Sprint and MCI WorldCom for the IMC Transition Task Force in February 2001 projected that the contractors would complete their FTS2001 service orders in April 2001 and June 2001, respectively. As the transition progresses, trends suggest that the final services to be transitioned are the most time-consuming. As summarized in figures 1 and 2 below, the number of days on average from the time a contractor receives an order for service until it completes the order has significantly increased in recent months, particularly with respect to data communications services. There are several reasons for FTS2001 transition delays, which involve all the key players in the program, including GSA, federal agencies, FTS2001 contractors, and local exchange carriers: The FTS2001 contractors did not provide GSA with the management data it needed to manage and measure this complex transition process. GSA was not able to rapidly add all the services to the FTS2001 contracts required by agencies to complete their transition.
Customer agencies were slow to order FTS2001 services. FTS2001 contractors had staffing shortfalls and turnover on account teams, as well as billing and procedural problems, which impaired their support of agency transition activities. Local exchange carriers had problems delivering facilities and services on time to the FTS2001 contractors. Although progress has been made to correct these problems, they prevented the completion of FTS2001 transition actions by the original December 6, 2000, deadline. As transition manager, GSA plays a critical role in coordinating the efforts of the other players, but it is having a difficult time collecting the accurate and comprehensive data it needs to carry out its responsibilities. While GSA developed an automated system to help track transition data and develop reports, the FTS2001 contractors did not furnish GSA with the data it needed to populate this management system. As a result, GSA and agency transition managers are not receiving the timely, up-to-date information they need to effectively manage transition activities. In April 1999, GSA awarded the SETA Corporation a task order, valued at $245,000, to develop a Transition Status and Monitoring System that could be used by both GSA and agency transition managers to actively manage the FTS2001 transition. The system was intended to provide managers with up-to-date status reports, event notices, and jeopardy reports based on overall contractor transition plans and current progress. Managers could then select these reports by contractor, agency, bureau, location, service type, and transition phase. Using detailed, up-to-date transition information to be provided by the two FTS2001 contractors’ respective on-line transition management plans and databases, this management system was to provide GSA and agency transition managers with the information they needed to measure transition progress and identify variances from transition plans. 
Although SETA developed the system and delivered it to GSA in September 1999, it has not been used to manage the transition as planned. According to GSA managers, the system is not operational because the basic management information it needs to operate was not provided by the FTS2001 contractors. The FTS2001 contracts require the contractors to develop on-line versions of their respective transition management plans and to update the information in these plans daily. In addition, the contractors are required to develop and maintain information on transition schedules, along with a summary of all information contained in transition management plans, in a transition database. This database information was required to be fully up-to-date for a given location at the time access service was ordered for that location, and the contractors were to update it as required to maintain its currency and accuracy until transition was complete. GSA transition managers were not able to obtain usable and complete transition management information from the contractors until recently, however, which prevented the use of this information in populating the automated transition management system as planned. GSA managers cited two reasons for this problem. First, the FTS2001 contractors were slow to develop this on-line information. For example, GSA did not receive a usable version of a transition database from MCI WorldCom until December 2000; in January 2001, GSA was considering how to use that information to populate its management system to support future telecommunications planning and acquisition efforts. GSA is continuing to work with Sprint to obtain its transition database and expects to receive that information in March 2001. 
Second, because the contractors were slow to develop the required information, SETA, GSA, and the FTS2001 contractors could not agree on a common interface format that would have allowed SETA to populate the transition management system with any available information sooner. In the interim, GSA and others have been gauging the progress of the transition from information on service orders submitted by agency managers, agency activity reports, and contractor activity reports. In doing so, GSA used time-consuming, ad hoc processes to obtain transition event and status information, including manually reconciling changes as they were reported. In addition to GSA’s efforts, the IMC’s Transition Task Force has been verifying transition-reporting data with agencies and contractors in order to improve the accuracy of their transition measurements. In spite of these efforts, GSA cannot be certain that the information it gathers presents a full accounting of transition progress. Although both the IMC Transition Task Force and GSA report transition progress in terms of transition orders completed, their reports provide an incomplete perspective because they do not report on the final step in the transition process—the issuance and completion of disconnect orders required to turn off FTS 2000 services. Reporting of this final step can significantly affect perceptions of progress. For example, as a means of tracking transition completion, monthly reports from the U.S. Department of Agriculture’s (USDA) FTS2001 transition manager include information on both transition orders completed and FTS 2000 billing statistics. That is, USDA managers are using their FTS 2000 billing information to confirm that service disconnect orders are completed by AT&T. As illustrated in table 4 below, although orders completed indicate that USDA is making substantial transition progress, this progress is substantially reduced when viewed in terms of completed service disconnection.
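The gap between the two yardsticks, orders completed versus FTS 2000 services actually disconnected, can be illustrated with a short calculation. The counts below are invented for illustration and are not USDA’s actual statistics.

```python
def percent_complete(done: int, required: int) -> float:
    """Progress as a percentage of the service orders required for transition."""
    return 100.0 * done / required

# Hypothetical agency counts (not USDA's actual figures):
orders_required = 400
orders_completed = 360       # FTS2001 service orders finished by the contractor
disconnects_completed = 240  # FTS 2000 disconnect orders actually executed

print(f"by orders completed:      {percent_complete(orders_completed, orders_required):.0f}%")
print(f"by services disconnected: {percent_complete(disconnects_completed, orders_required):.0f}%")
```

Measured by completed orders, this hypothetical agency appears 90 percent done; measured by executed disconnects, it is only 60 percent done. This is why progress reports that track only service order completion can overstate how much of the transition is actually finished.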
GSA receives disconnect reports from AT&T and is comparing the data in those reports to its inventory of FTS 2000 services and to reports from the FTS2001 contractors of transition orders completed. Where it appears that FTS 2000 services have not yet been disconnected, GSA flags those instances and reports them to the affected agency. However, GSA does not use this information to report formally on transition progress. As a result, transition progress reports that focus only on service order completion will not indicate full transition completion because of the time lag between the completion of an FTS2001 service order and the disconnection of the FTS 2000 service that it replaces. In addition to its responsibility for overseeing the transition, GSA has administrative responsibility for processing and authorizing contract modifications. This function is critical to the ongoing transition because, at the time of their initial award, the FTS2001 contracts did not contain all the services that agencies need to complete their transition. To transfer their services from FTS 2000 contracts, agencies must be able to order suitable replacement services from their FTS2001 contractors. Adding all the services needed to complete transition to the FTS2001 contracts has taken time, however, which has in turn delayed agency transition efforts. Although GSA set a target of completing a contract modification within 60 days of receipt of proposal from the contractor, the time for completion has actually varied widely, ranging from 1 week to more than 15 months. For example, for nine transition-critical modifications completed by October 23, 2000, the processing time averaged 162 days from the time the contractors’ proposal was received to the time the modification was completed. Six of those nine modifications required over 60 days to complete processing. 
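The modification-processing statistics cited above can be reproduced with a short calculation. The nine per-modification durations below are invented, chosen only to be consistent with the reported 162-day average and the six modifications exceeding the 60-day target; the report gives only those aggregates, not per-modification figures.

```python
TARGET_DAYS = 60  # GSA's target for completing a contract modification

# Hypothetical durations (in days) for nine transition-critical modifications;
# constructed to match the reported aggregates, not actual per-modification data.
durations = [7, 30, 55, 90, 120, 150, 200, 350, 456]

average = sum(durations) / len(durations)
over_target = sum(1 for d in durations if d > TARGET_DAYS)

print(f"average processing time: {average:.0f} days")                            # 162 days
print(f"exceeded {TARGET_DAYS}-day target: {over_target} of {len(durations)}")   # 6 of 9
```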
Modifications can take longer than expected to complete because GSA and the contractors must negotiate the terms, and according to GSA managers, customer agencies’ need for customized services also contributes to delays in processing contract modifications. One modification—a 7.5 kHz dedicated transmission service for the FBI that affected over 225 service orders—was delayed for more than 11 months by pricing issues. Other transition-critical modifications are still in evaluation, such as modifications for managed network services required to support transition efforts at the Social Security Administration, Treasury, Interior, and Coast Guard. GSA has taken steps to improve its processing of contract modifications, and workarounds have been used to minimize the effect of these delays. For example, in August 2000, on the advice of the IMC Transition Task Force, GSA began prioritizing its processing of transition-related contract modifications. By February 21, 2001, all but one contract modification required to complete the Sprint FTS2001 transition had been made, and six transition-related contract modifications required for the MCI WorldCom contract were still in process. GSA expected to complete the most critical of these modifications by the end of February 2001 and the remainder by the end of April 2001. Further, agencies are receiving managed network services on a trial basis as a workaround while the managed network services modifications with MCI WorldCom are being developed and processed. Although IMC specifically recognized the time-critical nature of the FTS2001 transition when it chartered the Transition Task Force, this did not result in prompt FTS2001 service ordering. The delay in issuing transition service orders has been significant. Both FTS2001 contracts were awarded by January 12, 1999, with the planned completion date for the transition being December 6, 2000.
As of January 2000, halfway through the allotted transition period, less than a third of the total service orders required for transition had been submitted by agencies. After February 2000, the pace of agency order submissions increased significantly. Nevertheless, for transitioning switched data services, where the least progress has been made, agencies had submitted only about half the service orders required for transition by June 2000—18 months after the final FTS2001 contract was awarded and 12 months after the start of transition activity. The slow pace of orders was associated with two factors. First, the initial 12 months of the FTS2001 contracts coincided with agency planning and preparation associated with the Year 2000 computer issue. As a result, many transition activities were suspended during this period. Second, agency efforts were hindered by a reported lack of resources devoted to transition planning and management. For example, 7 of 11 transition managers at federal agencies that planned to move to FTS2001 told us that agency resource limits hampered their transition progress. Recognizing the need for assistance, GSA stepped in and made contractor support resources available to agencies, covering the cost of those resources out of the FTS2001 transition fund. As of February 2001, agencies had submitted almost all orders for switched voice and dedicated transmission services, with orders for less than 4 percent of switched data services still outstanding. Reported shortcomings with FTS2001 contractors’ customer support inhibited agency transition efforts and contributed to transition delay. For example, 10 of 12 agency transition managers we spoke with stated that initial transition efforts were hampered by turnover in contractor account teams and inadequate contractor procedures. 
These issues were specifically raised by the Treasury Chief Information Officer in a November 1999 letter to GSA’s FTS Commissioner expressing dissatisfaction with Treasury’s service provider, noting the contractor’s continual inability to meet customer due dates, failure to provide adequate transition resources, and unacceptable project planning and scheduling. The Treasury’s Office of the Comptroller of the Currency (OCC), which began its transition in June 1999, terminated that effort in August 1999 because of contractor performance concerns and in February 2000 was threatening to leave the FTS2001 program. In response to these shortcomings, both Sprint and MCI WorldCom took steps to substantially increase their resources supporting transition efforts and to improve their procedures. As a result, following discussion with its Sprint contractor on performance concerns, OCC restarted its transition in February 2000. The second major problem area undermining transition progress was a lack of accurate, up-to-date billing information and the improper billing of services. The IMC Transition Task Force Chairman stated at that group’s September 2000 meeting that billing was emerging as the number one transition-related issue. We were not able to obtain data to quantify the severity of billing problems across all agencies. However, we did document instances where the National Park Service, the Bureau of Land Management, the Tennessee Valley Authority, and bureaus within the U.S. Department of Agriculture were improperly billed by MCI WorldCom at higher commercial rates instead of at FTS2001 program rates after moving to FTS2001. In some cases these commercial bills led to collection activities against the agency for nonpayment and in a few instances actually resulted in the disconnection of service.
Rather than focusing on transition matters such as ordering services, these agencies had to redirect resources to resolve incorrect billings, respond to and try to resolve collection actions that had been improperly initiated, and restore erroneously disconnected services. The National Park Service and the Bureau of Land Management either suspended or threatened to suspend their service ordering and transition efforts as a result of these problems and the time and effort required to solve them. These billing problems arose because GSA did not ensure that the FTS2001 contractors met all billing requirements. For example, MCI WorldCom was required to have a contract-compliant service ordering and billing system in place before agencies began ordering services, but only recently has GSA completed acceptance testing for that system. GSA had waived the test and acceptance requirement for an indefinite period pending completion of testing to allow MCI WorldCom to begin accepting and processing FTS2001 service orders. However, GSA suspended acceptance testing in May 2000 because the MCI WorldCom billing system experienced persistent problems with the quality and timeliness of the monthly invoices it was producing for GSA. GSA escalated these billing issues with MCI WorldCom, and since September 2000 has held biweekly, executive-level meetings to resolve them. After receiving more timely and complete invoices from MCI WorldCom, GSA restarted service order and billing system acceptance testing in December 2000 and completed testing in February 2001; formal acceptance is expected in March 2001. FTS2001 billing problems are not limited to MCI WorldCom. GSA has been trying to solve problems regarding approximately 23 contract deliverable items (including nine billing-related requirements) that Sprint has either not yet provided to the government or has not delivered in an acceptable form. GSA is continuing to address these issues with Sprint as well.
The completion of FTS2001 service orders has also been delayed because of difficulties obtaining required network access services and facilities from local carriers when and where needed. The IMC Transition Task Force chairman reported in March 2000 that 46 percent of agency locations that required local carrier access had experienced delays completing their service orders ranging from a few days to months. This problem has been worse where agencies wish to obtain higher speed access facilities in rural locations, such as Idaho Falls, Idaho, and in metropolitan areas that are experiencing a competing high demand for services and facilities, such as the Washington, D.C., metropolitan area. Further compounding this issue was the recent strike by employees of the local exchange carrier, Verizon, which adversely affected more than 1,200 FTS2001 transition orders in the Northeast and Mid-Atlantic areas of the country. These particular problems, which affect all users seeking to expeditiously obtain services from their local carriers, are not unique to the FTS2001 contracts. Nevertheless, they contributed to delays in implementing these contracts. FTS2001 transition delays have three important effects on the program goals of ensuring the best service and price for the government and maximizing competition. First, delays in transitioning services increase the costs of those services. Second, because the FTS2001 contracts waive service performance requirements until the transition is complete, the government cannot ensure that service delivery meets expectations. Third, delays in transitioning services slow the accumulation of revenues to meet the FTS2001 contracts’ minimum revenue guarantees, which makes GSA reluctant to add more contractors offering long-distance services. Delays in completing the FTS2001 transition will increase the cost of telecommunications for those agencies that have not completed their transition. 
There are several reasons why costs will rise for these agencies: Discounts that Sprint offered under FTS 2000 expired on September 30, 2000, increasing the cost of services contracted after that date by approximately 20 to 25 percent. The modification made to AT&T’s FTS 2000 extension contract in December 2000 discontinues discounts of 20 to 65 percent that had been in effect for a variety of services. The AT&T extension contract modification made in December 2000 also required a one-time payment to AT&T of $8 million. GSA is raising the $8 million payment by assessing a 20 percent surcharge against user agencies’ monthly FTS 2000 bills through June 6, 2001. For FTS 2000 contractors Sprint and AT&T, volume discounts for voice services are in effect. That is, the unit price that agencies will pay for these services will increase as the volume of traffic on the FTS 2000 extension contracts decreases. For example, the per-minute price of a telephone call placed with AT&T increases by more than 77 percent, to almost 10 cents per minute, once aggregate calling volume declines to less than 50 million minutes. Moreover, this increase does not include increases in access costs, which are also sensitive to call volume. The FTS2001 contract waives basic contract performance requirements until the FTS2001 transition has been completed, thereby restricting the government’s ability to hold the FTS2001 contractors accountable for shortcomings in performance. These performance requirements include such things as the timeliness of service delivery, the availability of services, the quality or grade of service, and the restoration of failed or degraded service. As a result, transition delays not only increase the price the government pays for telecommunications services, they also hinder the government’s ability to hold the FTS2001 contractors accountable for timely and effective service delivery.
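The volume-sensitive AT&T price escalation quoted earlier in this section implies a pre-increase rate of roughly 5.6 cents per minute. The sketch below works that out and applies it to a hypothetical monthly calling volume; the 2-million-minute figure is our own assumption, chosen only for illustration.

```python
# A rise of more than 77 percent brings the AT&T rate to almost 10 cents/minute,
# implying a pre-increase rate of about new_rate / (1 + 0.77).
new_rate_cents = 10.0
increase = 0.77
old_rate_cents = new_rate_cents / (1 + increase)
print(f"implied pre-increase rate: {old_rate_cents:.1f} cents/minute")  # 5.6

# Effect on a hypothetical agency placing 2 million minutes of calls a month:
minutes_per_month = 2_000_000
added_cost = minutes_per_month * (new_rate_cents - old_rate_cents) / 100  # in dollars
print(f"added monthly cost: ${added_cost:,.0f}")
```

Even at this modest assumed volume, the rate escalation alone adds tens of thousands of dollars to a single month’s bill, which is why delay in moving off the FTS 2000 extension contracts is costly.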
In developing the FTS2001 program strategy, IMC and GSA envisioned that FTS2001 contractors would be allowed to compete to offer services in the local MAA telecommunications markets and that MAA contractors would be allowed to compete in the FTS2001 long-distance market. This strategy would benefit agencies by allowing them to competitively acquire telecommunications services on an end-to-end local and long-distance service basis. There are several potential advantages to this approach. First, agencies might be able to obtain services at lower cost than they would otherwise because of opportunities to aggregate multiple service requirements with one provider. Second, using a single contractor would permit agencies to reduce the cost and effort associated with managing multiple contractors. Third, customer agencies might be able to obtain better network performance guarantees by purchasing end-to-end services from a service provider who owns or operates that infrastructure. These advantages—obtaining reliable, high-quality telecommunications services at low cost—increase in importance as the federal government moves to deliver more information and services electronically. GSA’s ability to maximize competition for services and enable agencies to acquire end-to-end services is constrained by its need to meet the substantial FTS2001 revenue guarantees. Under the terms of the respective contracts, each of the FTS2001 contractors is guaranteed minimum revenues of $750 million over the life of the contracts, which may run from 4 to 8 years. Year 3 of the FTS2001 contracts began on October 1, 2000. When it awarded these contracts, GSA believed that they might be worth more than $5 billion over an 8-year period. However, a GSA analysis of FTS2001 savings completed on January 28, 1999, revealed that the contracts’ lowest prices could actually result in total contract revenues of only $2.3 billion over 8 years. 
Revised program estimates developed in February 2000 affirmed this $2.3 billion revenue estimate. Because of the need to meet the FTS2001 revenue commitments, GSA has not yet allowed other contractors into FTS2001 as originally envisioned. Delays in completing the FTS2001 transition slow the accumulation of revenue to meet the government’s contract commitment. Although FTS 2000 revenues do not correlate directly with FTS2001 revenues because of service and pricing differences, the available revenue data indicate that significant FTS 2000 expenditures are continuing that cannot be applied to meet FTS2001 minimum revenue guarantees. During fiscal year 2000, for example, more than $465 million was paid out for FTS 2000 services. In addition, GSA reported that although 84 percent of all FTS2001 agency locations had completed transition by January 3, 2001, agencies still spent almost $36.5 million on FTS 2000 services in December 2000, the last month for which data are available. Even for Sprint, which is both an incumbent FTS 2000 service provider and an FTS2001 contractor, payments made for services not moved to the FTS2001 contract do not reduce the government’s minimum revenue commitments to Sprint for FTS2001. Sprint’s monthly FTS 2000 billings were about $9.4 million in December 2000. Sprint expects to complete its portion of the FTS2001 transition in April 2001. In managing the contracts’ minimum revenue guarantees, GSA must cope not only with transition delay, but also with transition deferral and the loss of program customers. For example, despite some agency plans to transfer their FTS 2000 services to the FTS2001 contracts, 17 departments or agencies have since decided to use alternative suppliers for all or part of their services, which GSA values collectively at more than $78 million. A few examples illustrate these losses. 
The Internal Revenue Service, in order to minimize risk, has delayed transitioning its toll-free 800 number services until it completes its systems modernization. NASA decided that it would be more efficient to acquire its data communications services through the agency’s information technology support contract. (The agency is, however, transitioning its switched voice service to FTS2001.) The U.S. Postal Service, believing it could obtain better prices outside FTS2001, has awarded its own contract to meet most of its service needs. The Tennessee Valley Authority decided in October 2000 that it would not transfer its remaining services to FTS2001, partly due to problems encountered with billing and disconnected service. This decline in customer base further exacerbates the difficulty of managing FTS2001 revenue guarantees. If transition can be completed rapidly, and if there is no further loss of customers, FTS2001 will be in a better position to expeditiously meet the minimum revenue guarantees, which will give GSA greater latitude in adding contractors in order to achieve its basic program goals. Despite progress, the government did not meet its deadlines for transition to FTS2001 and has not yet completed this effort. The deadline was missed for numerous reasons: a lack of sufficient information to effectively oversee and manage this complex transition, slowness in completing all the contract modifications needed to add transition-critical services to the FTS2001 contracts, slowness of some customer agencies to order FTS2001 services, staffing shortfalls and billing problems on the part of FTS2001 contractors, and local exchange carriers’ difficulties providing facilities and services on time. Until GSA addresses the outstanding issues impeding transition and expeditiously completes this transition, it will be unable to fully achieve its basic FTS2001 goals of ensuring the best service and maximizing competition. 
To enable more accurate tracking of FTS2001 transition progress, we recommend that the Administrator of General Services direct the program manager for FTS2001 to obtain usable and complete management information, as required by contract, from the FTS2001 contractors by April 27, 2001; and track the status of FTS 2000 service disconnection orders and include that information in GSA’s transition progress reports from April 6, 2001, onward. To ensure achievement of FTS2001 program goals, we recommend that the Administrator direct the program manager for FTS2001 to promote the completion of the FTS2001 transition by ensuring that all remaining contract modification proposals related to the transition are processed expeditiously. To ensure prompt identification and resolution of any outstanding billing issues, we recommend that the Administrator direct the program manager for FTS2001 to work with IMC to catalog all billing problems raised since January 2000 during the meetings of IMC and the IMC’s Transition Task Force, GSA’s biweekly FTS2001 management meetings, and other agency working groups; document the status of problems raised, and how and when they were resolved, as appropriate; obtain and document agency confirmation of the resolution of closed problems; and develop an action plan that identifies all current billing problems, the actions taken to date to resolve those problems, and a schedule for correcting those problems by July 2, 2001. Further, we recommend that the Administrator direct the program manager for FTS2001 to continue efforts to obtain consideration from the FTS2001 contractors for failure to meet management information and billing requirements within the time frames established in the contracts. In written comments on a draft of this report, the Acting Administrator for General Services generally agreed with our report and our recommendations, and indicated that GSA was acting to implement all recommendations.
The Administrator stated, however, that the report did not reflect the success of the FTS2001 transition. We believe that we have fairly characterized progress made on the transition and GSA’s efforts to address those factors that are impeding completion. At the same time, we have noted that the deadline for completing transition was missed and as a result FTS2001 is experiencing delays in meeting its goals. We did not assess the cost savings that GSA mentions because this was not part of our review. GSA also disagreed with our use of transition progress measurements developed by the IMC Transition Task Force, asserting that those measurements are incomplete and misleading. GSA requested that we use statistics generated by its Transition Coordination Center, which measure transition progress by customer sites, because GSA has been using these statistics for 18 months and the methodology was endorsed by IMC. We do not concur with GSA’s position. The Transition Task Force’s measurements are based on the number of service orders completed—a measurement that GSA ultimately tracks as well—as reported to the Transition Task Force by contractor program management staff and verified with agency transition managers. While we report that there are limitations on available transition management information, we believe that the IMC Transition Task Force’s statistics represent a reasonably developed and independently derived assessment. In its comments, GSA lists four additional factors that it believes have contributed to transition delays: a lack of an accurate service inventory, time and effort required to arrange for procedural agreements and network gateways between FTS2001 and FTS 2000 contractors, customer agencies’ need to upgrade their facilities before or during transition, and customer agencies’ need for customized services. Because of the complexity of the transition process, we recognize that we did not discuss all the factors contributing to its delay.
Rather, we focused on presenting the most significant factors. GSA mentioned some other contributing factors that may be involved. With regard to GSA's statements on service inventories and the need to upgrade customer facilities, we agree that an agency should have an accurate service inventory and a clear understanding of its transition needs—including upgrade requirements—before ordering services. These factors may have contributed to the agency delays in ordering FTS2001 services described in our report. Further, we recognize that there were delays in establishing procedural agreements and network gateways between the contractors. We agree with GSA, therefore, that the delay from the time that this transition risk was identified to the time the agreements were reached likely impaired transition activity for some services. Finally, we recognize that some agencies' need for customized services was a reason for delays in the development and processing of contract modifications, and we have incorporated those comments where appropriate. GSA offered two technical comments with respect to our recommendations concerning completion of contract modifications and the pursuit of consideration for requirements not met that we have incorporated as appropriate. GSA provided a number of other technical comments that we have incorporated as appropriate. GSA's written comments are presented in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from its issue date. At that time, we will send copies of this report to Representative Janice Schakowsky, Ranking Minority Member, Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations; Representative Jim Turner, Ranking Minority Member, Subcommittee on Technology and Procurement Policy; and interested congressional committees. We are also sending copies to the Honorable Mitchell E.
Daniels, Jr., Director of the Office of Management and Budget, and the Honorable Thurman M. Davis, Sr., Acting Administrator of the General Services Administration. Copies will be made available to others upon request. The report will also be available on GAO’s home page at http://www.gao.gov. If you have any questions regarding this report, please contact me or Kevin Conway at (202) 512-6240 or by e-mail at [email protected] or [email protected], respectively. Other major contributors to this report were George L. Jones and Mary Marshall. Our objectives were (1) to determine the status of the FTS2001 transition, (2) to identify the reasons for delays in transitioning to FTS2001, and (3) to evaluate the potential effects of transition delays on meeting program goals of maximizing competition for services and ensuring best service and price. To determine the status of FTS2001 transition efforts, we obtained and analyzed the transition plans and related documentation prepared by GSA, the FTS2001 contractors, and select federal agencies. We obtained and reviewed transition management reports independently prepared by GSA FTS2001 program managers and by the Interagency Management Council’s Transition Task Force. To identify the reasons for the pace of the FTS2001 transition and to determine why the transition was taking so long to complete, we interviewed GSA’s FTS2001 program managers as well as FTS2001 contractors in order to better understand their respective transition processes and reasons for progress to date. We also reviewed transition documentation, including minutes and presentations from monthly IMC and IMC Transition Task Force meetings and GSA bi-weekly management sessions with the FTS2001 contractors. In addition, we interviewed transition managers in 12 agencies to understand the processes they had in place for the transition, progress made, and problems encountered. 
The agencies selected were the Departments of Defense, Energy, Housing and Urban Development, Treasury, Agriculture, Education, Health and Human Services, and Interior, as well as the Administrative Office of the U.S. Courts, the Tennessee Valley Authority, the National Aeronautics and Space Administration, and the U.S. Postal Service. We selected 10 of the 12 agencies because they represented the five leading agencies and five lagging agencies identified in a June 2000 GSA transition progress report. We subsequently identified through program management documents two additional agencies that could provide us with greater insight into the billing issues that were impeding transition progress. We also interviewed officials from an FTS 2000 service provider, AT&T, and a local exchange carrier, Verizon, to determine their roles in the transition process and to identify impediments they may have encountered while working with agencies and FTS2001 contractors to transfer telecommunications services to FTS2001. To evaluate the potential effect of transition delay on program goals, we reviewed program strategy documentation, FTS2001 contracts, and reports and documentation including weekly GSA transition status reports, minutes of monthly IMC and IMC Transition Task Force meetings, presentations from monthly IMC Transition Task Force meetings, and minutes of GSA bi-weekly management sessions with the FTS2001 contractors. Further, we reviewed government FTS 2000 and FTS2001 billing reports current through the month of December 2000 (the last month for which billing information was available) and revised FTS2001 contractor transition completion estimates. We also reviewed a September 2000 revenue analysis prepared for GSA by Mitretek Systems that considered the potential effect of transition delays and changes in revenue projections—positive and negative—on minimum revenue guarantees based on transition progress up to that date. 
We obtained documentation and reviewed the terms and conditions of FTS 2000 extension contract modifications that were made in December 2000 and interviewed GSA FTS2001 contracting staff to understand the implications of those modifications on FTS2001 minimum revenue guarantees. We performed our audit work from July 2000 through February 2001 in accordance with generally accepted government auditing standards. | Telecommunications services are increasingly critical to transforming the way the federal government does business; communicates internally and externally; and interacts with citizens, industry, and state, local, and foreign governments. Electronic government services based on reliable, secure, and cost-effective telecommunications can enable agencies to streamline the way they do business, reduce paperwork and delays, and increase operational efficiencies.
It is important that a far-reaching program, such as the FTS2001 program, take full advantage of new services offered by industry; that agencies effectively and efficiently implement these telecommunications services to improve operations; and that the program be successfully implemented to maximize benefits to the taxpayers. Despite progress, the government did not meet its deadlines for transition to FTS2001 and has not yet completed this effort. The government missed its deadline for several reasons, including a lack of sufficient information to effectively oversee and manage this complex transition, slowness in completing all the contract modifications needed to add transition-critical services to the FTS2001 contracts, slowness of some customer agencies to order FTS2001 services, staffing shortfalls and billing problems on the part of FTS2001 contractors, and local exchange carriers' difficulties providing facilities and services on time. Until the General Services Administration addresses the outstanding issues impeding transition and expeditiously completes this transition, it will be unable to fully achieve its basic FTS2001 goals of ensuring the best service and maximizing competition. |
Historically, the mining of hardrock minerals, such as gold, lead, copper, silver, and uranium, was an economic incentive for exploring and settling the American West. However, when the ore was depleted, miners often left behind a legacy of abandoned mines, structures, safety hazards, and contaminated land and water. Even in more recent times, after cleanup became mandatory, many parties responsible for hardrock mining sites have been liquidated through bankruptcy or otherwise dissolved. Under these circumstances, some hardrock mining companies have left it to the taxpayer to pay for cleanup of the mining sites. Four federal agencies—the Department of Agriculture's Forest Service, the Environmental Protection Agency (EPA), and the Department of the Interior's BLM and Office of Surface Mining Reclamation and Enforcement (OSM)—fund the cleanup and reclamation of some of these abandoned hardrock mine sites. BLM's and the Forest Service's Abandoned Mine Lands programs focus on the safety of their land by addressing physical and environmental hazards. EPA's funding, under its Superfund Program, among other things, focuses on the cleanup and long-term health effects of air, ground, or water pollution caused by abandoned hardrock mine sites, and is generally for mines on nonfederal land. OSM, under amendments to the Surface Mining Control and Reclamation Act of 1977, can provide grants to fund the cleanup and reclamation of certain hardrock mining sites. BLM and the Forest Service are responsible for managing more than 450 million acres of public land in their care, including land disturbed and abandoned by past hardrock mining activities. BLM manages about 258 million acres in 12 western states and Alaska. The Forest Service manages about 193 million acres across the nation.
In 1997, BLM and the Forest Service each launched a national Abandoned Mine Lands Program to remedy the physical and environmental hazards at thousands of abandoned hardrock mines on the federal land they manage. According to a September 2007 report by these two agencies, they had inventoried thousands of abandoned sites and, at many of them, had taken actions to clean up hazardous substances and mitigate safety hazards. BLM and the Forest Service are also responsible for managing and overseeing current hardrock operations on their land, including the mining operators' reclamation of the land disturbed by hardrock mining. Reclamation can vary by location, but it generally involves such activities as regrading and reshaping the disturbed land to conform with adjacent land forms and to minimize erosion, removing or stabilizing buildings and other structures to reduce safety risks, removing mining roads to prevent damage from future traffic, and establishing self-sustaining vegetation. One of the agencies' key responsibilities is to ensure that adequate financial assurances, based on sound reclamation plans and cost estimates, are in place to guarantee reclamation costs. If a mining operator fails to complete required reclamation, BLM or the Forest Service can take steps to obtain funds from the financial assurance provider to complete the reclamation. BLM requires financial assurances for both notice-level hardrock mining operations—those disturbing 5 acres of land or less—and plan-level hardrock mining operations—those disturbing over 5 acres of land and those in certain designated areas, such as the national wild and scenic rivers system. For hardrock operations on Forest Service land, agency regulations require reclamation of sites after operations cease. According to a Forest Service official, if the proposed hardrock operation is likely to cause a significant disturbance, the Forest Service requires financial assurances.
Both agencies allow several types of financial assurances to guarantee estimated reclamation costs for hardrock operations on their land. According to regulations and agency officials, BLM and the Forest Service allow cash, letters of credit, certificates of deposit or savings accounts, and negotiable U.S. securities and bonds in a trust account. BLM also allows surety bonds, state bond pools, trust funds, and property. EPA administers the Superfund Program, which was established under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 to address the threats that contaminated waste sites, including those on nonfederal land, pose to human health and the environment. The act also requires that the parties statutorily responsible for pollution bear the cost of cleaning up contaminated sites, including abandoned hardrock mining operations. Some contaminated hardrock mine sites have been listed on Superfund's National Priorities List—EPA's list of seriously contaminated sites. Typically, these sites are expensive to clean up, and the cleanup can take many years. For example, in 2004, EPA's Office of Inspector General determined there were 63 hardrock mining sites on the National Priorities List that would cost up to $7.8 billion to clean up, $2.4 billion of which was expected to be borne by taxpayers rather than the parties responsible for the contamination. Regarding financial assurances, EPA has statutory authority under the Superfund program to require businesses handling hazardous substances on nonfederal land to provide financial assurances and is taking steps to do so. In 2006, we testified that without the mandated financial assurances, significant gaps in EPA's environmental financial assurance coverage exist, thereby increasing the risk that taxpayers will eventually have to assume financial responsibility for cleanup costs. OSM's Abandoned Mine Land Program primarily focuses on cleaning up abandoned coal mine sites.
However, OSM, under amendments to the Surface Mining Control and Reclamation Act of 1977, can provide grants to fund the cleanup and reclamation of certain hardrock mining sites either (1) after a state certifies that it has cleaned up its abandoned coal mine sites and the Secretary of the Interior approves the certification or (2) at the request of a state or Indian tribe to address problems that could endanger life and property, constitute a hazard to public health and safety, or degrade the environment, and the Secretary of the Interior grants the request. In 2008, we reported that OSM had provided more than $3 billion to clean up dangerous abandoned mine sites. Its Abandoned Mine Land Program had eliminated safety and environmental hazards on 314,108 acres since 1977, including all high-priority coal problems and noncoal problems in 27 states and on the land of three Indian tribes. In 2008 and 2009, we reported that BLM and the Forest Service have had difficulty determining the number of abandoned hardrock mines on their land and have no definitive estimates on the number of such sites. Moreover, we reported that other estimates that had been developed about the number of abandoned hardrock mine sites on federal, state, and private land in the 12 western states and Alaska (where most of the mining takes place) varied widely and did not provide an accurate assessment of the number of abandoned mines in these states. For example, federal agency estimates included abandoned nonhardrock mines such as coal mines, and included a large number of sites on land with "undetermined" ownership, which may not all be on federal land. Similarly, we reviewed six studies conducted between 1998 and 2008 that estimated the number of abandoned hardrock mine sites in the 12 western states and Alaska, regardless of the type of land they were located on.
However, we found that the estimates in these studies varied widely in part because there was no generally accepted definition for what constitutes an abandoned hardrock mine site and because different states define these sites differently. In 2008, we developed a standard definition of an abandoned hardrock mining site and used this definition to determine how many such sites potentially existed on federal, state, and private land in the 12 western states and Alaska. Based on our survey of these states, we determined that there were at least 161,000 abandoned hardrock mine sites in these states, and at least 33,000 of these sites had degraded the environment, by, for example, contaminating surface water and groundwater or leaving arsenic-contaminated tailings piles. We also determined that these 161,000 sites had at least 332,000 features that may pose physical safety hazards, such as open shafts or unstable or decayed mine structures. In 2008, we reported that BLM, the Forest Service, and the U.S. Geological Survey (USGS) either do not routinely collect or do not consistently maintain data on the amount of hardrock minerals being produced on federal land, the amount of hardrock minerals remaining, and the total acreage of federal land withdrawn from hardrock mining operations. According to officials with BLM and the Forest Service, they do not have the authority to collect information from mine operators on the amount of hardrock minerals produced on federal land, or the amount remaining. In April 2011, we reported on this issue again and found that this information is not being collected. In contrast, USGS collects extensive data on hardrock mineral production through its mineral industry surveys and reports these data in monthly, quarterly, and annual reports, but mine operators' participation in these surveys is voluntary, and USGS does not collect land ownership data that would allow it to determine the amount of hardrock mineral production on federal land.
As a result, we found that it is not possible to determine hardrock mineral production on federal land from the USGS data. In addition, although USGS does publish the total amount of hardrock mineral production by mineral type, it is prohibited by law from reporting individual mine production and other company proprietary data unless the mine operator authorizes release of that information. In some cases, mine operators that respond to these surveys report consolidated data that covers production from several mines. Therefore, information on hardrock mineral production for every mine is not available to the public. Some hardrock mineral production data are available from state sources and through financial reports filed with the Securities and Exchange Commission. However, these data may not always provide the level of detail necessary to determine the amount of mineral production on federal land. BLM also does not centrally maintain data on the amount of federal land withdrawn from hardrock mining operations. BLM documents land withdrawn from hardrock mining operations on its master title plats—detailed paper maps maintained at BLM's state offices. These maps contain land survey information on federal land, including ownership information, land use descriptions, and land status descriptions. BLM's annual publication, Public Land Statistics, does report the total number of acres withdrawn each year, but these data do not account for instances in which multiple withdrawals may have overlapping boundaries, which can result in double-counting the number of acres withdrawn. Furthermore, the reason for withdrawing the land is not always indicated, making it difficult to determine whether it was withdrawn from mining or for other purposes.
In March 2008, we reported that over a 10-year period, four federal agencies—BLM, the Forest Service, EPA, and OSM—had spent at least a total of $2.6 billion to reclaim abandoned hardrock mines on federal, state, private, and Indian land. Of this amount, EPA had spent the most—$2.2 billion. The amount each agency spent annually varied considerably, and the median amount spent for abandoned hardrock mines on public land by BLM and the Forest Service was about $5 million and about $21 million, respectively. EPA spent substantially more—a median of about $221 million annually—to clean up abandoned mines that were generally on nonfederal land. Further, OSM provided grants with an annual median value of about $18 million to states and Indian tribes through its program for hardrock mine cleanups. As we have reported, contributing to the costs incurred by the federal government to reclaim land disturbed by mining operations are inadequate financial assurances required by BLM for current hardrock mining operations. Since 2005, we have reported several times that operators of hardrock mines on BLM land have provided inadequate financial assurances to cover estimated reclamation costs in the event that they fail to perform the required reclamation. Specifically, in June 2005 we reported that some current hardrock operations on BLM land did not have financial assurances, and some had no or outdated reclamation plans and/or cost estimates on which the financial assurances were based. At that time we concluded that BLM did not have an effective process and critical management information needed for ensuring that adequate financial assurances are actually in place, as required by federal regulations and BLM guidance. We made recommendations to strengthen BLM's management of financial assurances for hardrock operations on its land, which the agency generally implemented.
However, when we again looked at this issue in 2008, we found that although BLM had taken actions to strengthen its processes, the financial assurances that it had in place as of November 2007 were still inadequate to cover estimated reclamation costs. Specifically, as of November 2007, hardrock mining operators had provided financial assurances valued at approximately $982 million to guarantee the reclamation costs for 1,463 hardrock mining operations on BLM land in 11 western states, according to BLM's Bond Review Report. BLM's report indicated that 52 of the 1,463 hardrock mining operations had inadequate financial assurances—about $28 million less than needed to fully cover estimated reclamation costs. However, our review of BLM's assessment process found that BLM had inaccurately estimated the shortfall, and that in fact the financial assurances for these 52 operations should be more accurately reported as about $61 million less than needed to fully cover estimated reclamation costs. In addition, we found that BLM's approach for determining the adequacy of financial assurances is not useful because it does not clearly lay out the extent to which financial assurances are inadequate. For example, in California, BLM reported that, statewide, the financial assurances in place were $1.5 million greater than required, suggesting reclamation costs are being more than fully covered. However, according to our analysis of only those California operations with inadequate financial assurances, the financial assurances in place were nearly $440,000 less than needed to fully cover reclamation costs for those operations. Having adequate financial assurances to pay reclamation costs for BLM land disturbed by hardrock operations is critical to ensuring that the land is reclaimed if operators fail to complete reclamation as required.
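The California example above turns on an aggregation problem: when over-assured and under-assured operations are netted statewide, the surplus at one mine masks the shortfall at another, even though an operator's excess assurance cannot be applied to a different operation. A minimal Python sketch with hypothetical figures (the individual operation amounts below are illustrative, not from BLM's Bond Review Report) shows how the two calculations diverge:

```python
# Hypothetical (assurance_in_place, estimated_reclamation_cost) pairs,
# in dollars, for four mining operations in one state.
operations = [
    (2_000_000, 1_500_000),  # over-assured by $500,000
    (1_500_000, 1_200_000),  # over-assured by $300,000
    (300_000,   520_000),    # under-assured by $220,000
    (400_000,   620_000),    # under-assured by $220,000
]

# Statewide netting (the approach GAO criticized): surpluses offset
# shortfalls, so the state appears more than fully covered.
net_surplus = sum(assurance - cost for assurance, cost in operations)

# Shortfall-only view: sum uncovered costs at under-assured operations,
# since one operator's surplus cannot reclaim another operator's mine.
shortfall = sum(cost - assurance
                for assurance, cost in operations
                if assurance < cost)

print(net_surplus)  # 360000 -> looks adequate in aggregate
print(shortfall)    # 440000 -> actual uncovered reclamation cost
```

Under these assumed figures, the netted view reports a $360,000 surplus while $440,000 of reclamation cost is actually uncovered, which is the kind of distortion GAO's per-operation analysis of the California data revealed.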
When operators with inadequate financial assurances fail to reclaim BLM land disturbed by their hardrock operations, BLM is left with public land that requires tens of millions of dollars to reclaim and poses risks to the environment and public health and safety. In conclusion, Mr. Chairman, while it is critical to develop innovative approaches to clean up abandoned mines, our work also demonstrates the importance of federal agencies having accurate information on the number of abandoned hardrock mines to know the extent of the problem, and adequate financial assurances to prevent future abandoned hardrock mines that require taxpayer money to clean up. Chairman Lamborn, Ranking Member Holt, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions that you might have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Anu K. Mittal, Director, Natural Resources and Environment team, (202) 512-3841 or [email protected]. Key contributors to this testimony were Andrea Wamstad Brown and Casey L. Brown. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The General Mining Act of 1872 helped foster the development of the West by giving individuals exclusive rights to mine gold, silver, copper, and other hardrock minerals on federal land. However, miners often abandoned mines, leaving behind structures, safety hazards, and contaminated land and water.
Four federal agencies--the Department of the Interior's Bureau of Land Management (BLM) and Office of Surface Mining Reclamation and Enforcement (OSM), the Department of Agriculture's Forest Service, and the Environmental Protection Agency (EPA)--fund the cleanup of some of these hardrock mine sites. From 2005 through 2009, GAO issued a number of reports and testimonies on various issues related to abandoned and current hardrock mining operations. This testimony summarizes some of the key findings of these reports and testimonies focusing on the (1) number of abandoned hardrock mines, (2) availability of information collected by federal agencies on general mining activities, (3) amount of funding spent by federal agencies on cleanup of abandoned mines, and (4) value of financial assurances for mining operations on federal land managed by BLM. In 2005, GAO recommended that BLM strengthen the management of its financial assurances, which BLM generally implemented. BLM also agreed to take steps to address additional concerns raised by GAO in 2008. GAO's past work has shown that there are no definitive estimates of the number of abandoned hardrock mines on federal and other lands. For example, in 2008 and 2009, GAO reported that BLM and the Forest Service had difficulty determining the number of abandoned hardrock mines on their lands and had no definitive estimates. Similarly, estimates of the number of abandoned hardrock mine sites in the 12 western states and Alaska (where most of the mining takes place) varied widely because there was no generally accepted definition of what constitutes an abandoned hardrock mine site. 
In 2008, GAO developed a standard definition for abandoned hardrock mining sites and used this definition to determine that there were at least 161,000 abandoned hardrock mine sites in the 12 western states and Alaska, and at least 33,000 of these sites had degraded the environment, by contaminating surface water and groundwater or leaving arsenic-contaminated tailings piles. In 2008, GAO reported that BLM, the Forest Service, and the U.S. Geological Survey (USGS) either do not routinely collect or do not consistently maintain data on the amount of hardrock minerals being produced on federal land, the amount of hardrock minerals remaining, and the total acreage of federal land withdrawn from hardrock mining operations. According to BLM and Forest Service officials, they do not have the authority to collect information from mine operators on the amount of hardrock minerals produced on federal land or the amount remaining. In contrast, USGS collects extensive data on hardrock mineral production through its mineral industry surveys and reports these data in monthly, quarterly, and annual reports, but the agency does not collect land ownership data that would allow it to determine the amount of hardrock mineral production on federal land. As a result, comprehensive information on hardrock mineral production is generally not available to the public. From 1997 to 2008, four federal agencies--BLM, the Forest Service, EPA, and OSM--had spent at least a total of $2.6 billion to reclaim abandoned hardrock mines on federal, state, private, and Indian lands. Of this amount, EPA had spent the most--$2.2 billion. The amount each agency spent annually varied considerably, and the median amount spent for abandoned hardrock mines on public lands by BLM and the Forest Service was about $5 million and about $21 million, respectively. EPA spent substantially more--a median of about $221 million annually--to clean up abandoned mines that were generally on nonfederal land. 
OSM provided grants with an annual median value of about $18 million to states and Indian tribes through its program for hardrock mine cleanups. One factor that contributes to costs for reclamation of federal lands disturbed by mining operations is inadequate financial assurances required by BLM. Since 2005, GAO has reported several times that operators of hardrock mines on BLM lands have not provided financial assurances sufficient to cover estimated reclamation costs in the event that operators fail to perform the required reclamation. Most recently, in 2008, GAO reported that the financial assurances that were provided for 52 operations were about $61 million less than needed to fully cover estimated reclamation costs, which could leave the taxpayer with the bill for reclamation if the operator fails to do so.
The Assistant Secretary of Defense for Health Affairs leads the MHS and is ultimately responsible for the Defense COEs. In 2011, DOD leadership delegated responsibility for designating and overseeing Defense COEs to the Oversight Board. The Oversight Board includes members appointed by the Surgeons General of the Army, the Navy, and the Air Force; VA; and other components within DOD, as shown in table 1. The Oversight Board’s charter, which was signed by the Assistant Secretary of Defense for Health Affairs, delegates oversight of Defense COEs to the Oversight Board. The charter lists the Oversight Board’s responsibilities and activities, and the quarterly meeting minutes provide documentation of the Oversight Board’s activities, procedures, and decisions. For example, the Oversight Board’s charter requires the Board to conduct periodic reviews of Defense COEs’ performance and to review and recommend applicants for Defense COE designation. The charter was originally signed in September 2011 by the Assistant Secretary of Defense for Health Affairs, and updated and signed again in May 2015. The updated charter clarified responsibilities for the Oversight Board related to overseeing COEs, such as validating that a Defense COE is meeting objectives for which it was established and that the return from its work merits continued investment. For VA, VHA’s Under Secretary for Health is ultimately responsible for VHA COEs. Three program offices within VHA—the Office of Patient Care Services, the Office of Research and Development, and the Office of Academic Affiliations—have COEs. These three program offices delegate responsibility for their COEs to service offices within their organizational structures. The Office of Patient Care Services has three service offices with COEs—Mental Health Services, Specialty Care Services, and Geriatric and Extended Care Services. 
The Office of Research and Development has one service office with COEs— Rehabilitation Research and Development Service—and the Office of Academic Affiliations has two service offices with COEs, referred to as coordinating centers—Primary Care Education and Patient-Centered Specialty Care Education. (See fig. 1). DOD officials established criteria that COEs must meet to be designated a Defense COE and a uniform process for applicants. VHA’s service offices use a peer review process to designate COEs. However, unlike DOD, VHA has not established criteria for an entity to be designated as a COE. Criteria for designating entities as COEs. In 2011, following our study of one of DOD’s statutorily mandated COEs, DOD leadership determined that it needed to conduct a review of all existing COEs and develop a definition that would be used as criteria for designating entities as Defense COEs. MHS leadership developed a definition for Defense COEs, and the COE Oversight Board members refined and approved the criteria contained in this definition. Only entities that meet the criteria in the Oversight Board-approved definition can be given the designation of a Defense COE. The definition approved by the Oversight Board states that Defense COEs will focus on an associated group of clinical conditions and achieve improvement in outcomes through clinical, educational, and research activities. The criteria require Defense COEs to provide the entire clinical spectrum of care for a patient—from the prevention of diseases and treatment of clinical conditions through rehabilitation and transition to civilian life; for example, by developing clinical practice guidelines and educational materials and identifying research priorities and strategies for improving access to care. 
In addition, the Oversight Board developed other criteria that entities applying for Defense COE designation have to meet, such as clearly defining their mission, developing metrics to quantitatively assess their progress in meeting their mission, and determining whether the research they plan to conduct is needed because of existing research gaps. The Oversight Board acting chairman said the board developed criteria for a Defense COE because it is important for the board, as well as MHS leadership, to apply consistent criteria when designating entities as Defense COEs. The acting chairman said not having clear and consistent criteria could make it easier for entities to self-identify as a Defense COE without meeting rigorous requirements. In addition, the criteria may facilitate coordination among COEs to meet the agency's intended objectives for them, which are to improve the health of servicemembers, veterans, and their families and, ultimately, the military readiness of servicemembers. Approval process for Defense COEs. The Oversight Board established a uniform process that requires applicants to present consistent information when applying for Defense COE designation. Applicants are required to describe why designation as a Defense COE is important to them and how they meet the criteria for a Defense COE. The Oversight Board, after considering the applications from potential COEs, determines whether an applicant passes the preliminary review. Applicants that do not pass the preliminary review are instructed to provide additional information or clarify the information presented for reconsideration; there is no limit on the number of times they can resubmit their application. Those that pass this preliminary review are instructed to develop a concept of operations (CONOPS) and a briefing for the Oversight Board.
A CONOPS is a document that is designed to give an overall picture of the operation of the proposed Defense COE, explaining what the applicant intends to accomplish and how it will be done using available resources. The CONOPS includes a description of the value that the applicant brings to the MHS and a brief description of MHS needs and gaps that the applicant will address. The Oversight Board developed a CONOPS template to ensure that during required briefings information is consistently presented to the Board by applicants seeking Defense COE status. The applicant’s briefing to the Oversight Board is intended to provide an overview of the mission and goals of the applicant COE, including how the applicant meets the Defense COE criteria, as established by the Oversight Board. After reviewing documentation from applicants, Oversight Board members make recommendations about Defense COE designation to the Assistant Secretary of Defense for Health Affairs, who makes the final decision. According to DOD officials, in 2012, after the Oversight Board reviewed the briefings and CONOPS for the four statutorily mandated COEs and made its recommendations, the Assistant Secretary of Defense for Health Affairs designated these applicants as Defense COEs. Subsequently, the Oversight Board reviewed and approved three other applicants as Defense COEs, according to these officials. The seven Defense COEs are listed in table 2 along with the origin of the Defense COE—that is, whether the Defense COE was statutorily mandated or departmentally designated. No criteria for designating entities as COEs. VHA has not developed consistent criteria for designating an entity as a COE. VHA officials told us they believe the term COE is “a term of art” and does not lend itself to standard and consistent criteria. Furthermore, officials said they never considered a need for these criteria. 
Standards for Internal Control in the Federal Government provide that management should establish a control environment that serves as the framework for planning, directing, and controlling operations to achieve agency objectives, such as VHA's objectives for how COEs are to operate and what COEs are supposed to achieve. A good internal control environment requires that the agency's organizational structure clearly define key areas of authority and responsibility for operating activities. The organizational structure encompasses the operational processes needed to achieve management's objectives. Without criteria to establish, execute, control, and assess COEs, VHA management risks not meeting its objectives for COEs. The lack of standard and consistent criteria for designating COEs hinders VHA's ability to carry out the following functions. VHA cannot provide both a basis for determining whether COEs are meeting the agency's intended objectives for COEs and a coordinated direction for its COEs. These objectives include meeting the needs of veterans and their families, conducting pertinent research, and promoting innovative approaches to care delivery by VHA clinicians, according to VHA officials. VHA officials might not be able to determine the precise number of COEs within the agency as a basis for planning, directing, and controlling operations to achieve agency objectives. VHA officials reported to us that the agency has 70 COEs, with the largest number located within VHA's Office of Patient Care Services. The Office of Patient Care Services reported that it has 49 COEs—39 statutorily mandated and 10 VHA designated. The Rehabilitation Research and Development Service, under VHA's Office of Research and Development, reported it has 13 VHA-designated COEs, and VHA's Office of Academic Affiliations reported 8 VHA-designated COEs.
However, VHA’s Office of Patient Care Services Chief of Financial Operations could not confirm this number, telling us he could not provide a definite number of COEs because there were no criteria for him to use to identify entities designated as COEs. Other VHA officials have also had difficulty identifying the universe of VHA’s COEs. For example, VHA officials initially omitted the 20 Geriatric Research Education Clinical Centers (GRECC) from their list of VHA COEs until we informed them that GRECC officials had told us they were designated as VHA COEs. In addition, confusion exists within VHA about statutorily mandated COEs because of the lack of criteria. For the COEs that VHA officials said were statutorily mandated, the statutory language often uses the term “center.” VHA decided to designate some of these centers as COEs, even though the statutory language was the same or similar for many centers that VHA did not designate as COEs. For example, the National Center for Preventive Health has statutory language similar to the language establishing the National Center for Post-Traumatic Stress Disorder (PTSD); however, the National Center for PTSD is considered a VHA COE while the National Center for Preventive Health is not. Officials from VHA and VA’s Office of General Counsel were unable to explain why some centers listed in statutory language are designated as COEs while other centers with similar language are not, and they could not provide the criteria that were used to designate these centers as COEs. Process for designating entities as VHA COEs. VHA service offices use a peer review process to designate entities as VHA COEs. In general, a peer review process is often used by government agencies to determine the merit of proposals submitted by researchers applying for grants or some type of funding.
VHA service offices typically solicit applications or proposals from entities interested in being designated a COE, and interested entities complete an application or submit a proposal. Each application or proposal for COE designation may differ depending on the condition, disease, or specific health-related area being studied. For example, applications or proposals for a mental health COE designation may require applicants to address how their research will focus on bipolar disorder, borderline personality disorder, or schizophrenia, while applications or proposals for an educational COE may require applicants to address how they will develop and test innovative approaches for curricula related to patient-centered care or study new approaches and models of collaboration among health care professionals. Within each of the six service offices, applications or proposals are reviewed by a panel of subject matter experts, who prioritize them based on their strengths and weaknesses or on their merit ratings. The panel may be made up of experts from VHA entities or from entities external to VHA, according to VHA officials. Generally, the experts review information such as the focus of the planned research and available staffing and funding. Once the experts identify the best applicants, they forward this information to the appropriate service office officials. If the service office staff agrees with the list of best applicants, some service offices forward this list to the Under Secretary for Health, who ultimately makes the final decision with respect to designating an entity as a VHA COE, while other service offices forward the list to their program director, who makes the final decision. While all six service offices use a peer review process to review, approve, and designate entities as COEs, the processes may differ in several respects.
First, the content of submitted applications or proposals may differ among service offices. This is due, in part, to the lack of consistent and standard criteria within VHA that applicants must meet to be designated a VHA COE, such as how the applicant will meet the needs of veterans and their families and ensure that pertinent research is conducted to meet these needs—VHA’s intended objectives for its COEs, according to VHA officials. Second, the types and levels of review within service offices vary. For example, to help prioritize the best applicants, two VHA service offices developed a numerical scoring system to rate each application on scientific and technical merit, typically based on the requirements contained in the solicitation for applications or proposals. For instance, if the solicitation requires an evaluation plan that contains specific evaluation criteria, such as proposed outcome measurements, reporting methodology, expected findings, and potential implications for VHA and the community, applicants can be awarded up to 10 points for including these items. Other service offices do not include a scoring system as part of their approval process. Third, four of VHA’s six service offices conduct a site visit to the highest-rated applicants’ facilities as part of their approval process; the other two do not, according to VHA officials. An official from one service office told us the staff members who conduct the visits have seen many potential COEs, and such on-site inspections can help to determine an applicant’s potential viability as a COE. Our review of the Oversight Board’s charter found that it does not contain procedures for how oversight of Defense COEs will be documented.
Specifically, the Oversight Board charter does not explain (1) how the Oversight Board will provide and document its feedback to Defense COEs; (2) how the COEs will respond, if needed, to this feedback; and (3) how the Oversight Board will determine and document that the COEs’ actions resolved any identified problems. The acting chairman of the Oversight Board told us the board’s charter gives the board its authority to conduct oversight of Defense COEs, and that if these types of procedures are needed, the Oversight Board’s charter and meeting minutes will serve this purpose. The acting chairman said the board’s minutes document its activities and decisions, including the procedures followed when conducting COE oversight reviews and any problems identified. However, our review of the Oversight Board’s minutes, from its inception in 2011 to April 2015, shows that the minutes did not indicate the procedures followed when conducting COE reviews and did not explain how the Oversight Board documented and resolved identified problems. The Standards for Internal Control in the Federal Government state that transactions and events should be promptly documented to maintain their relevance and value to management in controlling operations and making decisions. Further, the standards state that significant events, such as, in this instance, the identification of problems during oversight and the actions taken to correct these problems, need to be clearly documented, and the documentation should be readily available for examination. Federal internal control standards also state that significant events should appear in management directives, policies, or operating manuals to help ensure management’s directives are carried out as intended.
The standards also state that, once controls are established, management should monitor and assess the quality of performance over time, including monitoring policies and procedures to ensure that the findings of reviews are promptly resolved. Oversight Board officials told us that feedback to Defense COEs from oversight reviews conducted by the board is typically provided verbally and has not been documented. Therefore, documentation of feedback, both positive and negative, is not always available. Officials said that negative feedback may be documented in the Oversight Board’s minutes; however, neither the board’s charter nor its meeting minutes require that Defense COEs provide written corrective action plans when the board identifies problems. As a result, there will be no record of the corrective action taken by a Defense COE or of whether the action resolved the problem identified by the Oversight Board. Absent specific procedures for how oversight should be conducted and how findings and corrective actions should be documented, DOD leadership lacks assurance that the Oversight Board has identified all problems and has taken appropriate action to determine that the problems have been resolved. Only one of six VHA service offices has written procedures for documenting the oversight of its COEs, including providing written critiques of findings from service office reviews to its COEs and requiring corrective actions from COEs when needed. The other five service offices do not have written procedures for documenting oversight activities. While most service offices do not have written procedures that require them to document their COE oversight, several currently provide written feedback to their COEs on the results of the service offices’ reviews. Specifically, three of the five service office directors provide written feedback to their COEs on the findings from the service office reviews.
However, only one of these three service office directors requests that his COEs provide written corrective action plans. Officials from three service offices told us they believe the process they currently have works fine because they provide written feedback to the COEs on the results of their reviews. However, if these directors leave the service offices, another director might not request written documentation of the oversight that is conducted, because the documentation procedures are not written. As discussed above, the Standards for Internal Control in the Federal Government state that transactions and significant events, such as the identification of problems during oversight and the actions taken to correct them, should be promptly and clearly documented; that this documentation should be readily available for examination and should appear in management directives, policies, or operating manuals; and that management should monitor and assess the quality of performance over time to ensure that the findings of reviews are promptly resolved. Without written procedures for documenting oversight activities, VHA leadership lacks assurance that its service offices are identifying and correcting all problems. Leadership also lacks evidence that the service offices conducted a review and lacks documentation of past and present problems that would allow it to identify potential patterns and take action quickly to minimize their effects. Unlike DOD, VHA has not developed standard criteria that entities must meet in order to be designated a VHA COE.
Without defined criteria, VHA lacks reasonable assurance that its COEs are meeting the agency’s intended objectives for COEs, such as meeting the needs of veterans and their families throughout VA’s health care system and operating with coordinated direction. By not having written procedures that outline how the agencies will document the activities through which they monitor and oversee the performance of COEs, both DOD and VHA lack assurance that oversight activities are performed consistently over time as intended. Written procedures would better ensure a common understanding of oversight activities among staff and enhance clear communication, especially as normal turnover occurs among the staff responsible for monitoring and providing feedback to COEs. Having systematic procedures for documenting oversight activities is necessary to better ensure that the agencies’ COEs are accountable for accomplishing the agencies’ objectives for them, such as meeting the health care needs of servicemembers and veterans. To help ensure that COEs are meeting VHA’s intended objectives for them, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to establish clear, consistent standard criteria that entities must meet to receive COE designation, and require all existing VHA COEs, as well as new applicants for COE status, to meet these criteria. To improve documentation of the activities DOD undertakes to oversee the Defense COEs, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to require the MHS Defense COE Oversight Board to develop written procedures on how to document oversight activities of Defense COEs, including requirements for documenting feedback, both positive and negative, and documenting the resolution of identified problems. 
To help improve VHA’s oversight of its COEs, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to require VHA service offices to develop written procedures on how to document their oversight activities of COEs, including requirements for documenting feedback, both positive and negative, and documenting the resolution of identified problems. VA provided written comments on a draft of this report, as well as an action plan for implementing our recommendations. We have reprinted VA’s comments and action plan in appendix III. In its comments, VA generally agreed with our conclusions and concurred with our recommendations. VA stated that a team of VHA subject matter experts will develop standards to be used in designating COEs and overseeing their performance. DOD also provided written comments on a draft of this report, which we have reprinted in appendix IV. In its comments, DOD concurred with our findings and recommendation and explained how it intends to implement the recommendation. DOD also provided technical comments, which we have incorporated in the report as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretary of Veterans Affairs; and other interested parties. We will also make copies available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The Veterans Health Administration (VHA) within the Department of Veterans Affairs has three program offices—the Office of Patient Care Services, the Office of Research and Development, and the Office of Academic Affiliations—that have centers of excellence (COE). 
These three program offices delegate responsibility for their COEs to six service offices within their organizational structure. The Office of Patient Care Services has three service offices with responsibility for VHA COEs— Mental Health Services, Specialty Care Services, and Geriatric and Extended Care Services. The Office of Research and Development has one service office with COEs—Rehabilitation Research and Development Service—and the Office of Academic Affiliations has two service offices with COEs, referred to as coordinating centers—Primary Care Education and Patient-Centered Specialty Care Education. VHA’s Office of Patient Care Services has all 39 of the COEs that were statutorily mandated, as well as 10 COEs that were departmentally designated. This office has three service offices that are responsible for these COEs: Mental Health Services, Specialty Care Services, and Geriatrics and Extended Care Services. Table 3 lists each COE in the Office of Patient Care Services and groups the COEs by their specific service office, as well as indicating their location. The table also indicates the origin of the COE—whether the COE was statutorily mandated or departmentally designated—and provides a brief description of the COE’s research, clinical, and/or educational focus, as provided by VHA. VHA’s Office of Research and Development has 13 COEs that were departmentally designated, according to VHA officials. This office has one service office—Rehabilitation Research and Development—that is responsible for these COEs. Table 4 lists each COE in this service office and provides the location of the COE, as well as a brief description of the COE’s research, clinical, and/or educational focus, as provided by VHA. VHA’s Office of Academic Affiliations has eight COEs that were departmentally designated, according to VHA officials. 
This office has two service offices, referred to as coordinating centers—Primary Care Education and Patient-Centered Specialty Care Education—that are responsible for COEs. Primary Care Education has five COEs and Patient-Centered Specialty Care Education has three COEs. Table 5 lists each COE by service office and gives the location of the COE and a brief description of the research, clinical, and/or educational focus of the respective COEs. To describe the collaboration efforts of the Defense Centers of Excellence (COE) in the Department of Defense (DOD) and the Veterans Health Administration (VHA) COEs in the Department of Veterans Affairs (VA), we sent a Web-based, structured questionnaire to COE directors, identified by DOD and VHA officials, to obtain information about how their COEs collaborate. We sent the questionnaire to the COE directors between December 2014 and January 2015. The questionnaire asked COE directors to describe the extent to which Defense and VHA COE staff collaborate internally (with other staff from within their agencies) and externally (with staff from other federal agencies and academic organizations). The questionnaire also asked the COE directors whether they use certain tools, such as written agreements; staff participation in committees, working groups, councils, or task forces; or other tools or mechanisms to coordinate or collaborate. All 7 Defense COE directors responded to the questionnaire, and 60 of VHA’s 70 COE directors, or 86 percent, responded to the collaboration section of the questionnaire. Tables 6 through 11 provide information about Defense and VHA COE collaboration efforts. Defense COEs report using written agreements or other tools to collaborate. Table 6 shows the Defense COEs and their reported collaboration activities. VHA’s Office of Patient Care Services has three service offices that have established 49 COEs, with 39 of them statutorily mandated and 10 departmentally designated, according to VHA officials.
The service offices are Mental Health Services, Specialty Care Services, and Geriatric and Extended Care Services. The COEs that responded to the collaboration section of the questionnaire report using written agreements or other tools to collaborate. Tables 7, 8, and 9 provide information on the collaboration activities of the three service offices within the Office of Patient Care Services that have COEs. VHA’s Office of Research and Development has one service office that has COEs—the Rehabilitation Research and Development Service. This service office reports that it has 13 COEs, all departmentally designated, each focusing on a selected area of research relevant to veterans with disabilities. The COEs report using written agreements or other tools to collaborate. Table 10 indicates the collaboration activities of the COEs. VHA’s Office of Academic Affiliations reports that it has eight COEs, all departmentally designated. This office has two service offices, referred to as coordinating centers: Primary Care Education and Patient-Centered Specialty Care Education. Primary Care Education has five COEs and Patient-Centered Specialty Care Education has three. The Primary Care Education and Patient-Centered Specialty Care Education COEs that responded to the collaboration section of our questionnaire indicated that they use written agreements or other tools to collaborate. Table 11 shows the collaboration activities of the COEs that responded to our questionnaire. In addition to the contact named above, Marcia A. Mann, Assistant Director; Mary Ann Curran Dozier; Martha Fisher; Carolyn Fitzgerald; Carolina Morgan; and Jacquelyn Hamilton made key contributions to this report.

Both DOD and VA's VHA have COEs that are expected to improve certain services throughout both agencies' health care systems. To date, DOD and VHA have designated 7 and 70 COEs, respectively.
Congressional hearings have raised questions about DOD's and VHA's oversight of the COEs, including the criteria used to designate them, and whether they are meeting their intended missions. GAO was asked to review DOD and VHA COEs. GAO (1) examined the criteria and processes DOD and VHA use to designate entities as COEs and (2) assessed how DOD and VHA document the oversight activities related to their agencies' COEs. GAO compared agency criteria against federal internal control standards, and analyzed relevant laws, committee reports, and available agency documents. GAO also analyzed documents from the 7 Defense COEs and from the 6 VHA service offices responsible for the 70 VHA COEs to understand the criteria and processes used to designate them and how oversight activities are documented. GAO interviewed officials from both agencies to obtain additional information about their COEs. The Department of Defense (DOD) has developed criteria to designate an entity as a Defense Center of Excellence (COE), but the Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) has not. Health-focused COEs are intended to bring together treatment, research, and education to support health provider competencies; identify gaps in medical research and coordinate research efforts; and integrate new knowledge into patient care delivery. GAO found that DOD leadership and its Defense COE Oversight Board established and refined the definition and criteria for designating entities as Defense COEs. DOD's criteria require its Defense COEs, for example, to achieve improvements in clinical care outcomes and produce optimal value for servicemembers. The Oversight Board developed these criteria in order to have a consistent basis for designating entities as Defense COEs and to limit entities from self-identifying as Defense COEs without meeting the criteria. DOD also developed a uniform process for designating COEs. 
VHA service offices use a peer review process to designate their COEs. However, unlike DOD, VHA has not developed criteria for designating its COEs. Federal internal control standards provide that management should establish a control environment that serves as the framework for planning, directing, and controlling operations to achieve agency objectives, such as VHA's objectives for how COEs are to operate and what COEs are supposed to achieve. Without defined criteria, VHA lacks reasonable assurance that its COEs are meeting the agency's intended objectives for COEs. The Defense COE Oversight Board and most service offices responsible for overseeing VHA COEs lack written procedures for documenting oversight activities related to their COEs, including requirements for documenting identified problems and their resolution. GAO found that the Oversight Board's charter does not explain how (1) the board will provide and document its feedback, (2) the Defense COEs will respond to this feedback, and (3) the board will document resolution of identified issues. The Oversight Board's acting chairman told GAO that the charter gives the board its authority to conduct oversight of Defense COEs and that, if these types of procedures are needed, the Oversight Board's charter and meeting minutes will serve this purpose. However, GAO's review of the charter and minutes found that they do not contain these types of procedures. Likewise, GAO found that five of six VHA service offices have no written procedures for documenting their findings and the corrective actions taken by COEs. VHA officials told GAO that they do not see a need to develop specific written procedures for documenting oversight of their COEs. Federal internal control standards state that transactions and events should be promptly documented to maintain their relevance and value to management in controlling operations.
Further, significant events, such as the identification of problems and the actions taken to correct them, need to be clearly documented, and these events should appear in management directives, policies or operating manuals to help ensure management's directives are carried out as intended. Absent written oversight procedures, both DOD and VHA lack reasonable assurance that oversight procedures are consistently and routinely performed over time, and that issues raised during oversight are resolved. GAO recommends that VHA establish criteria for designating entities as COEs. GAO also recommends that DOD and VHA develop written procedures for documenting oversight of their COEs. VA and DOD concurred with GAO's recommendations and provided an action plan for implementing them.
Risk assessments are conducted to estimate whether and/or how much damage or injury can be expected from exposures to a given risk agent and to assist in determining whether these effects are significant enough to require action, such as regulation. The effects of concern can be diseases such as cancer, reproductive and genetic abnormalities, workplace injuries, or various types of ecosystem damage. The risk agent analyzed in an assessment can be any number of things, including chemicals, radiation, transportation systems, or a manufacturing process. The product of a risk assessment is a quantitative and/or qualitative statement regarding the probability that an exposed population will be harmed and to what degree. Risk assessment, particularly quantitative risk assessment, is a relatively new discipline, developed in the first half of the 20th century to establish various health and safety codes and standards. The role of risk assessment in the regulatory process was accelerated by the enactment of various health, safety, and environmental statutes in the early 1970s. The development of chemical risk assessment procedures has traditionally followed two different tracks—one for assessments of cancer risks and another for assessments of noncancer risks. The procedures associated with cancer risks have historically assumed that there is no “threshold” below which an agent would not cause adverse effects. In contrast, procedures for assessments of noncancer risks were largely developed under the assumption that there is such a threshold—that exposures up to a certain level would not be expected to cause harm. 
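The distinction between the two conventions can be illustrated with a simplified sketch; the dose-response functions, slopes, thresholds, and doses below are hypothetical illustrations, not actual agency models.

```python
# Hypothetical illustration of the two dose-response conventions described
# above. All doses, slopes, and thresholds are invented for illustration.

def no_threshold_risk(dose, slope=0.05):
    """Linear no-threshold convention: any nonzero dose carries some risk."""
    return slope * dose

def threshold_risk(dose, threshold=0.5, slope=0.05):
    """Threshold convention: doses at or below the threshold are assumed safe."""
    if dose <= threshold:
        return 0.0
    return slope * (dose - threshold)

for dose in (0.1, 0.5, 1.0):
    print(f"dose={dose}: no-threshold risk={no_threshold_risk(dose):.4f}, "
          f"threshold risk={threshold_risk(dose):.4f}")
```

Under the no-threshold convention a small dose still yields a small estimated risk, while the threshold convention treats the same dose as posing no risk, mirroring the historical split between cancer and noncancer assessments.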
In 1983, NAS identified four steps in the risk assessment process: (1) hazard identification (determining whether a substance or situation could cause adverse effects), (2) dose-response assessment (determining the relationship between the magnitude of exposure to a hazard and the probability and severity of adverse effects), (3) exposure assessment (identifying the extent to which exposure actually occurs), and (4) risk characterization (combining the information from the preceding analyses into a conclusion about the nature and magnitude of risk). This paradigm, originally intended to address assessments of long-term health risks, such as cancer, has become a standard model for conducting risk assessments, but is not the only model (e.g., different models are used for ecological risk assessments). According to NAS, the results of the risk assessment process should be conceptually distinguished from how those results are used in the risk management process (e.g., the decision on where to establish a particular standard). As illustrated by figure 1, the risk management decision considers other information in addition to the risk characterization. More recent reports have updated and expanded on these original concepts. In 1996, NAS urged risk assessors to update the original concept of risk characterization as a summary added at the end of a risk assessment. Instead, the report suggested that risk characterization should be a “decision-driven” activity directed toward informing choices and solving problems and one that involves decision makers and other stakeholders from the very inception of a risk assessment. In this updated view, the nature and goals of risk characterization are dictated by the goals of the risk management decisions to be made. 
Similarly, the Presidential/Congressional Commission on Risk Assessment and Risk Management (hereinafter referred to as the Presidential/Congressional Commission) recommended in 1997 that the performance of risk assessments be guided by an understanding of the issues that will be important to risk management decisions and to the public’s understanding of what is needed to protect public health and the environment. A substantial number and volume of chemical substances and mixtures are produced, imported, and used in the United States. For example, there are over 70,000 commercial chemicals in EPA’s Toxic Substances Control Act (TSCA) Chemical Substances Inventory, and the agency receives about 1,500 petitions each year requesting the approval of new chemicals or new uses of existing chemicals. However, there is relatively little empirical data available on the toxicity of most chemicals and the extent to which people or the environment might be exposed to the chemicals. For example, we previously reported that EPA’s Integrated Risk Information System (IRIS), which is a database of the agency’s consensus on the potential health effects of chronic exposure to various substances found in the environment, lacks basic data on the toxicity of about two-thirds of known hazardous air pollutants. Furthermore, to the extent that data on health effects are available, the data are more often from toxicological studies involving animal exposures than from epidemiological studies involving human exposures. As a consequence, chemical risk assessments must often rely on extrapolation from animal studies and are quite different from risk assessments that use epidemiological studies or actuarial data (such as accident statistics). The limited nature of information on chemical toxicity was illustrated in a 1998 EPA report on the data that were publicly available on approximately 3,000 high-production-volume (HPV) chemicals. 
For each of these chemicals, EPA examined the available data corresponding to six basic tests that have been internationally agreed to as necessary for a minimum understanding of a chemical’s toxicity. As shown in figure 2, the agency concluded that the full set of basic toxicity data was available for only about 200 (7 percent) of the chemicals, and that 43 percent of the chemicals did not have publicly available data for any of the six tests. There are also significant gaps in the available data on the extent to which people are exposed to chemicals. For example, last year we reviewed federal and state efforts to collect human exposure data on more than 1,400 naturally occurring and manmade chemicals considered by HHS, EPA, and other entities to pose a threat to human health. We reported that, taken together, HHS and EPA surveys measured the degree of exposure in the general population for only 6 percent of those chemicals. Even for those chemicals that were measured, information was often insufficient to identify smaller population groups at high risk (e.g., women, children, and the elderly). There is an ongoing debate about the appropriate application of risk assessment in federal regulation. 
In 1990, Congress mandated that a commission be formed to “make a full investigation of the policy implications and appropriate uses of risk assessment and risk management in regulatory programs under various Federal laws to prevent cancer and other chronic human health effects which may result from exposure to hazardous substances.” The Presidential/Congressional Commission published its final report in 1997, and noted that often “the controversy arises from what we don’t know and from what risk assessments can’t tell us.” NAS has also emphasized that science cannot always provide definitive answers to questions raised during the course of conducting a risk assessment, so risk assessors must use assumptions throughout the process that reflect professional judgments and policy choices. One focus of the risk assessment debate has been agencies’ use of precautionary assumptions and analytical methods. The term “precautionary” refers to the use of methods and assumptions that are intended to produce estimates that should not underestimate actual risks. Some critics of federal risk assessment practices believe agencies use assumptions that are unjustifiably precautionary in the face of new scientific data and methods, thereby producing estimates that overstate actual risks. The critics contend that this effect is compounded when multiple precautionary assumptions are used. Others, however, criticize agency practices for not being precautionary enough in the face of scientific uncertainties, failing, for example, to adequately account for the synergism of exposures to multiple chemicals or the risks to persons most exposed or most sensitive to a particular toxic agent. 
Other observers, including NAS, have expressed concerns about whether the agencies’ procedures and assumptions are sufficiently transparent, thereby providing decision makers and the public with adequate information about the scientific and policy bases for agencies’ risk estimates as well as the limitations and uncertainties associated with those estimates. We have discussed these issues in several previous reports. For example, in 1993, we noted that EPA used precautionary assumptions throughout the process that it used to assess risk at Superfund hazardous waste sites, and that the agency had been criticized for overstating risk by combining precautionary estimates. In September 2000, we reported on EPA’s use of precautionary “safety factors” pursuant to the Food Quality Protection Act of 1996. In October 2000, we said that three factors influenced EPA’s use of precautionary assumptions in assessing health risks: (1) the agency’s mission to protect human health and safeguard the natural environment, (2) the nature and extent of relevant data (e.g., animal versus human studies), and (3) the nature of the health risk being evaluated (e.g., cancer versus noncancer risks). The context in which chemical risk assessments are conducted plays an important role in determining what type of assessments federal regulatory agencies perform and why certain approaches are used. Two dimensions seem particularly important to understanding the context for an agency’s chemical risk assessment activities: (1) the general statutory and legal framework underlying the agency’s regulation of chemicals and (2) how the agency plans to use the risk assessment information. The statutory and legal framework determines the general focus and goals of an agency’s chemical risk assessment activities and also can shape how risk assessments for those activities are supposed to be conducted. 
The specific tasks and purposes for which an agency will use the results of a particular risk assessment determine the questions that the assessment needs to address and the scope and level of detail of the assessment. A diverse set of statutes addresses potential health, safety, and environmental risks associated with chemical agents. These statutory mandates generally focus on different types and sources of exposure to chemicals, such as consumption of pesticide residues in foods, occupational exposures to chemicals, or inhalation of toxic air pollutants. Therefore, different agencies (and different offices within those agencies) have distinctive concerns regarding chemical risks. For example, each major program office within EPA (e.g., the Office of Air and Radiation or the Office of Water) is responsible for addressing the risk-related mandates of one or more statutes (e.g., the Clean Air Act, the Clean Water Act, or the Safe Drinking Water Act). Also, international agreements provide important legal context for transportation risk assessment activities. For example, criteria for classifying dangerous chemicals in transportation have been internationally harmonized through the United Nations’ Recommendations on the Transport of Dangerous Goods. The legal framework underlying chemical regulation influences both the extent to which risk assessment is needed for regulatory decision making and how risk assessments are supposed to be conducted. Some statutes require regulatory decisions to be based solely on risk (considering only health and environmental effects), some require technology-based standards (such as requiring use of the best available control technology), and still others require risk balancing (requiring consideration of risks, costs, and benefits). 
For example, section 112 of the Clean Air Act (CAA), as amended, has a technology-based mandate requiring the use of the maximum achievable control technology to control emissions of hazardous air pollutants. A risk assessment is not needed to determine such technology, but would be used to evaluate residual risks that remain after that technology is in use. Some statutes also place the primary responsibility for conducting risk assessments and compiling risk-related data for a particular chemical or source of exposure to chemical agents with industry, states, or local entities, rather than with the federal regulatory agencies. For example, industry petitioners have the primary responsibility to provide the data needed to support registration and tolerances from EPA for their pesticides, including information on the toxicological effects of the pesticides. Statutes can also affect risk assessment by specifically defining what will be considered a hazard, directing the agency to take certain methodological steps, or specifying the exposure scenario of regulatory concern. For example, in response to the “Delaney Clause” amendments to the Federal Food, Drug, and Cosmetic Act, FDA identifies any food additive for which an adequately conducted animal cancer study indicates that the additive produces cancer in animals as a carcinogen under the conditions of the study. No further corroboration or weight-of-evidence analysis is required. The Food Quality Protection Act of 1996 requires EPA to add an additional 10-fold safety factor to protect infants and children when deriving standards for allowable pesticide residues in foods, unless reliable data show that a different factor will be safe. Provisions in the Occupational Safety and Health Act focus OSHA’s risk assessments on estimating the risks to workers exposed to an agent for a working lifetime. 
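The safety-factor arithmetic referenced above, such as the Food Quality Protection Act's additional 10-fold factor for infants and children, can be sketched as a simple division; the starting dose and the particular factors below are hypothetical, not values from any agency assessment.

```python
# Sketch of safety-factor arithmetic: a "no-effect" dose from an animal
# study is divided by the product of the applicable factors to derive an
# allowable human exposure level. All values here are hypothetical.

def apply_safety_factors(no_effect_dose, factors):
    """Divide the no-effect dose (mg/kg-day) by the product of all factors."""
    product = 1
    for factor in factors:
        product *= factor
    return no_effect_dose / product

# Conventional hypothetical example: 10x for animal-to-human extrapolation
# and 10x for variability among humans.
allowable = apply_safety_factors(10.0, [10, 10])
print(f"Allowable dose: {allowable} mg/kg-day")  # 0.1 mg/kg-day

# Adding an additional 10-fold children's safety factor lowers the
# value by another order of magnitude.
allowable_extra = apply_safety_factors(10.0, [10, 10, 10])
print(f"With additional 10x factor: {allowable_extra} mg/kg-day")  # 0.01 mg/kg-day
```

Each added factor divides the allowable exposure level by 10, which is why the choice and number of factors can matter as much as the underlying study data.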
However, in most cases the statutes simply provide a general framework within which the agencies make specific risk assessment assumptions and methodological choices. For example, section 109 of the CAA requires EPA to set national ambient air quality standards that in the judgment of the EPA Administrator—and allowing for an “ample margin of safety”—are requisite to protect the public health. EPA risk assessors translate that general requirement into specific risk assessment assumptions and methods (e.g., whether to assume a threshold or no-threshold relationship between dose and response at low doses). The specific purpose or task of an assessment determines the kinds of risk information needed for the agency to make its risk management decisions, and can significantly influence the scope and level of detail required of a risk assessment. For example:

- If the agency’s task is to set a specific health-based standard (e.g., a national air quality standard), a rigorous and detailed estimate of risks at particular exposure levels might be required.
- If the agency’s task is to decide whether to approve the production and use of commercial chemicals or pesticides, risk assessors may initially focus on potential upper-bound exposures (e.g., assuming that a chemical agent will be used at the maximum level permitted by law or focusing on individuals who consume the greatest amounts of a food containing residues of the agent at issue). If such upper-bound estimates exhibit no cause for concern, the agency may have no need to complete a more comprehensive and refined risk assessment.
- A decision on whether to add or remove a chemical from the list of potential hazards might focus the risk assessors on determining whether the potential risk is above or below a specific threshold level, such as the risk of 1 extra cancer case over the lifetime of 1 million people.
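The screening comparison described above, against a benchmark such as 1 extra cancer case over the lifetime of 1 million people, can be sketched as follows; the slope factor and dose are hypothetical values chosen for illustration only.

```python
# Hypothetical screening sketch: under a linear no-threshold assumption,
# lifetime excess cancer risk is approximated as a cancer slope factor
# (per mg/kg-day) multiplied by the lifetime average daily dose.

SCREENING_LEVEL = 1e-6  # 1 extra cancer case per 1 million people exposed

def lifetime_excess_risk(slope_factor, daily_dose):
    """Risk = slope factor (mg/kg-day)^-1 * dose (mg/kg-day)."""
    return slope_factor * daily_dose

# Hypothetical values, not drawn from any agency assessment:
risk = lifetime_excess_risk(slope_factor=0.05, daily_dose=1e-5)
print(f"Estimated lifetime excess risk: {risk:.1e}")
if risk > SCREENING_LEVEL:
    print("Above the screening level; further assessment may be warranted")
else:
    print("Below the screening level")
```

A result on either side of the benchmark answers only the narrow listing question; it does not by itself provide the detailed exposure-level estimates that standard setting would require.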
The influence of the specific regulatory task at hand is illustrated by a method commonly used by agencies for risk assessments of noncancer health effects. Agencies such as EPA and FDA have historically attempted to identify the highest dose of a chemical at which no adverse effect is observed in animal experiments, known as the no-observed-adverse-effect level (NOAEL), or the lowest-observed-adverse-effect level (LOAEL) if every tested dose exhibited some effect. They then divided that NOAEL or LOAEL dose by multiple “safety” or “uncertainty” factors to account for the possibility that humans may be more sensitive to the chemical than animals, as well as for other uncertainties. This procedure is designed to identify a dose not likely to result in harm to humans, not to provide an explicit quantitative estimate of the risks associated with a given chemical. In other words, sometimes the focus of federal agencies’ “risk” assessments could more accurately be described as a safety assessment (i.e., estimating a “safe” level of exposure to chemical agents or a dose below which no significant risk is expected to occur) rather than a risk assessment (i.e., estimating the actual risks associated with exposures to chemical agents). Because of contextual differences, the risk assessment procedures used, the resulting risk estimates (and regulatory actions based upon those estimates), and even whether a substance would be subject to risk assessment can vary among different agencies and programs within the same agency. The following examples illustrate how contextual differences affect the conduct of risk assessments. Because regulation of certain wastes may be impractical or otherwise undesirable, regardless of the hazards that the waste might pose, Congress and EPA exempted certain materials (e.g., agricultural or mining and mineral processing wastes) from the definitions of hazardous wastes. 
If a material meets one of the categories of exemptions, it cannot be identified as a hazardous waste even if it otherwise meets the criteria for listing as a hazardous waste. For example, according to EPA’s RCRA Orientation Manual, wastes generated in raw material, product storage, or process (e.g., manufacturing) units are exempt from EPA’s hazardous waste regulation while the waste remains in such units. However, OSHA might assess and regulate risks associated with such materials as part of its mission to protect the health of employees in the workplace. FDA and EPA both assess potential human health risks associated with ingestion of chemical substances. If a substance is being assessed by FDA as a food additive and results from any adequate study indicate that the substance produces cancer in animals, FDA labels that additive as a carcinogen without considering other scientific evidence (per the Delaney clause of the Federal Food, Drug, and Cosmetic Act, as amended). However, when assessing the risks associated with consumption of residues from animal drugs (FDA) and pesticides (EPA) the agencies may need to consider many scientific studies in determining whether and under what conditions an agent might cause cancer or other adverse health effects in humans. EPA’s risk assessments of commercial chemicals under TSCA vary depending on whether the chemical at issue is “existing” or “new.” For EPA to control the use of an existing chemical, the agency must make a legal finding that the chemical will present an unreasonable risk to human health or the environment. EPA said this standard requires the agency to have conclusive data on risks associated with that particular chemical. By comparison, newly introduced chemicals can be regulated based on whether they may pose an unreasonable risk, and this finding of risk can be based on data for structurally similar chemicals, not just data on that particular chemical. 
Because industrial chemicals in commerce were “grandfathered” under TSCA into the inventory of existing chemicals more than 20 years ago, without considering whether they were hazardous, there are situations in which existing chemicals might not be controlled while, at the same time, EPA would act to control a new chemical of similar or less toxicity. Within EPA’s Office of Water, risk assessments vary depending on whether the assessment is done to establish drinking water standards or standards for ambient water (e.g., bodies of water such as lakes and rivers). Risk assessments for drinking water standards focus solely on human health effects, but assessments used to establish ambient water quality criteria consider both human health and ecological effects. Even when considering just the human health risks, an important difference between the ambient and drinking water risk assessments is an additional focus for ambient water on exposures to contaminated water through consumption of contaminated fish or shellfish. This additional factor is a primary reason for potential differences in drinking water and ambient water risk estimates and standards for the same chemical. Appendices II through V describe the relevant contextual factors for each of the four selected agencies in greater detail. All four of the agencies included in our review have standard procedures for conducting risk assessments, although the agencies vary in the extent to which their procedures are documented in written guidance. In general, there are more similarities than differences across EPA, FDA, and OSHA procedures, because each of these agencies generally follows the four-step NAS risk assessment process. The procedures address the same basic questions regarding hazard identification, dose-response assessment, and exposure assessment. 
The specific analytical methods and approaches in those procedures are also very similar (e.g., extrapolating from animal study data to model dose-response relationships in humans, and generally using different procedures for assessing cancer and noncancer risks). The most substantive differences across and within these agencies are related to exposure assessment, reflecting the diversity in the agencies’ regulatory authorities regarding chemical agents across different kinds or sources of exposure. For example, both OSHA and EPA consider methylene chloride (also known as dichloromethane) to be a probable human carcinogen. However, this same chemical can be identified as a significant hazard by one agency in one exposure setting (OSHA for purposes of assessing health risks associated with occupational exposures) but as a low hazard by another agency in a different setting (EPA for purposes of Superfund hazard ranking screening). RSPA, although sharing a concern over identifying risks and analyzing their consequences and probabilities of occurrence, has a different structure to its risk assessments than the other three agencies because of its focus on risks associated with unintentional releases of hazardous materials during transportation. In general, all four agencies are incorporating more complex analytical models and methods into their risk assessment procedures. However, some of the advanced models require much more detailed information than may be currently available for many chemicals. EPA has extensive written internal risk assessment procedures. 
For example, EPA has agencywide guidelines, policy memoranda, and handbooks covering the following aspects of risk assessment:

- carcinogen risk assessment,
- neurotoxicity risk assessment,
- reproductive toxicity risk assessment,
- developmental toxicity risk assessment,
- mutagenicity risk assessment,
- health risk assessment of chemical mixtures,
- exposure assessment,
- ecological risk assessment,
- evaluating risk to children,
- use of probabilistic analysis in risk assessment,
- use of the benchmark dose approach in health risk assessment, and
- use of reference dose and reference concentration in health risk assessment.

EPA also has numerous program-specific guidelines and policy documents, such as the Risk Assessment Guidance for Superfund series and a set of more than 20 science policy papers and guidelines from the Office of Pesticide Programs in response to the Food Quality Protection Act of 1996. Many of the agency’s guidance documents are draft revisions to earlier documents or procedures or draft guidance on new issues that have not previously been addressed by EPA. Although such drafts are not yet final official statements of agency policies or procedures, they may better represent the current practice of risk assessment in EPA than earlier “final” documents. EPA generally follows the NAS four-step risk assessment process. (The major exception is the agency’s Chemical Emergency Preparedness and Prevention Office, which follows a different set of procedures because of its focus on risks associated with accidental chemical releases from fixed facilities. See app. II for a discussion of this office’s risk assessment procedures.) EPA’s risk assessment activities generally involve both the program offices (e.g., the Office of Air and Radiation or the Office of Solid Waste) and the Office of Research and Development (ORD), which is the principal scientific and research arm of the agency. 
ORD often does risk assessment work for EPA program offices that focuses on the first two steps in the four-step NAS process—hazard identification and dose-response assessment—in particular, the development of “risk per unit exposed” numbers. Preparation of the final two steps in the process—exposure assessment and risk characterization—tends to be the responsibility of the relevant program offices. Several programs, for example, frequently use a single hazard assessment, but for different exposure scenarios. There are, however, exceptions to this generalization. For example, ORD carries out all steps for highly complex, precedent-setting risk assessments, such as those for dioxin and mercury. There are also instances when EPA program offices carry out all four steps of the process. In some situations, EPA agencywide procedures also depart slightly from the NAS paradigm. For example, when assessing noncancer health effects, EPA’s normal practice is to do hazard identification in conjunction with the analysis of dose-response relationships, rather than as distinct steps. According to EPA’s guidelines, this is because the determination of a hazard is often dependent on whether a dose-response relationship is present. In the case of ecological risk assessments, EPA’s guidelines suggest a three-step process consisting of (1) problem formulation, (2) analysis, and (3) risk characterization, rather than the four-step process used for health risk assessments. EPA has identified several new directions in its approach to exposure assessment. First is an increased emphasis on total (aggregate) exposure to a particular agent via all pathways. EPA policy directs all regulatory programs to consider in their risk assessments exposures to an agent from all sources, direct and indirect, and not just from the source that is subject to regulation by the office doing the analysis. 
Another area of growing attention is the consideration of cumulative risks, when individuals are exposed to many chemicals at the same time. The agency is also increasing its use of probabilistic modeling methods to analyze variability and uncertainty in risk assessments and provide better estimates of the range of exposure, dose, and risk to individuals in a population than are provided by single point estimates. EPA’s guidance on probabilistic methods outlines standards that exposure data prepared by industry or other external analysts must meet to be accepted by EPA. FDA and OSHA also generally follow the NAS risk assessment paradigm, but neither FDA nor OSHA had written internal guidance specifically on conducting risk assessments at the time of our review. However, both agencies’ standard procedures are well documented in the records of actual risk assessments and in summary descriptions that have appeared in scientific and professional literature. In addition, FDA has published volumes of guidance on risk assessments for use by external parties affected by the agency’s regulations (e.g., animal drug manufacturers seeking FDA approval for their products). According to FDA officials, the documents are meant to represent the agency’s current thinking on the scientific data and studies considered appropriate for assessing the safety of a product, and sometimes include detailed descriptions of the risk assessment methods deemed appropriate to satisfy FDA’s requirements under various statutory provisions. However, these guidelines do not preclude the use of alternative procedures by either FDA or external parties. The responsibility for conducting risk assessments in FDA is divided among the agency’s program offices. 
For example, FDA’s Center for Food Safety and Applied Nutrition (CFSAN) is responsible for assessing risks posed by food additives and contaminants, while the Center for Veterinary Medicine (CVM) is responsible for assessing risks posed by animal drug residues in food. In addition, FDA’s National Center for Toxicological Research conducts scientific research to support the agency’s regulatory needs, including research aimed at understanding the mechanisms of toxicity and carcinogenicity and at developing and improving risk assessment methods. FDA officials said that there are variations in the risk assessment approaches used among the agency’s different product centers and, in some cases, within those centers. In general, those variations are traceable to differences in factors such as the substances being regulated, the nature of the health risks involved (particularly carcinogens versus noncarcinogens), and whether the risk assessment is part of the process to review and approve a product before it can be marketed and used (premarket) or part of the process of monitoring risks that arise after a product is being used (postmarket). For example, risk assessments by CFSAN’s Office of Food Additive Safety and Office of Nutritional Products, Labeling and Dietary Supplements are mandatory for new dietary ingredients (and are used for premarket review of such ingredients) but discretionary for other food (and are associated with postmarket review). A unique characteristic of the hazard identification phase of risk assessment in FDA is that, by statute, if there is an adequate study that indicates a food additive can cause cancer in animals, that additive is labeled as a carcinogen under the conditions of the study. No additional corroboration or weight-of-evidence analysis is required in such cases, and there is no need to complete the other three risk assessment steps before proceeding to a regulatory decision. 
FDA’s CVM is permitted to allow the use of carcinogenic drugs in food-producing animals under the DES proviso of the Federal Food, Drug, and Cosmetic Act, as amended, provided that “no residue of such drug will be found.” OSHA’s Directorate of Health Standards Programs is primarily responsible for conducting the agency’s chemical risk assessments. Such assessments focus specifically on the potential risks to workers associated with exposures to chemicals in an occupational setting. In contrast to agencies regulating environmental exposures to toxic substances, OSHA frequently has relevant human data available on occupational exposures. Even when the agency assesses risks based on animal data, OSHA said that the workplace exposures of concern are often not far removed from levels tested in the animal studies. Therefore, OSHA’s risk assessments do not extrapolate as far beyond the range of observed toxicity as might be necessary to characterize environmental exposure risks. OSHA’s risk assessment procedures have also evolved to consider data from advanced physiologically based pharmacokinetic (PBPK) models on the relationship between administered doses and effective doses (i.e., the amounts that actually reach a target organ or tissue). However, PBPK models are complicated and require substantial data, which may not be available for most chemicals. OSHA therefore developed a set of 11 criteria to judge whether available data are adequate to permit the agency to rely on PBPK analysis in place of administered exposure levels when estimating human equivalent doses. The applicable risk assessment guidance for RSPA is generally documented within broader DOT-wide guidance on conducting regulatory analyses and also in materials describing the agency’s Hazardous Materials Safety Program. Because of the particular regulatory context in which it operates, RSPA does not apply the NAS four-step paradigm for risk assessment used by EPA, FDA, and OSHA. 
RSPA is primarily concerned with potential risks associated with the transportation of hazardous materials. In particular, it is concerned with short-term or acute health risks due to relatively high exposures from unintentional release of hazardous materials. For its purposes, RSPA identifies chemicals as hazardous materials according to a regulatory classification system that is harmonized with internationally recognized criteria and EPA-defined hazardous substances. This classification system defines the type of hazard associated with a given material according to chemical, physical, or nuclear properties (e.g., whether it is an explosive, a flammable liquid, or a poisonous substance) that can make it dangerous in or near transporting conveyances. Therefore, a chemical’s toxicity is only one of its characteristics of concern to RSPA, rather than being the primary focus of analysis as in assessments of the other three agencies. The risk analyses by RSPA focus on identifying the potential circumstances under which unintentional releases of hazardous materials could occur during transit (e.g., due to transportation accidents) and assessing their consequences and probability of occurrence. Analysis of different modes (e.g., via truck, rail, or aircraft) and routes of transportation is an important component of RSPA’s consequence and probability analyses. Through DOT databases, directly relevant data on the incidence and severity of hazardous materials transportation accidents are available to assist RSPA in identifying and analyzing hazard scenarios. Appendices II through V provide more detailed descriptions of the standard procedures for chemical risk assessments in each of the four selected agencies. Assumptions and methodological choices are an integral and inescapable part of risk assessment. They are often intended to address uncertainty in the absence of adequate scientific data. 
However, those assumptions and methods may also reflect policy choices, such as how to address variability in exposures and effects among different individuals and populations, or particular contextual requirements. To the extent that the four agencies identified the specific reasons for selecting their major assumptions or methods, they most often attributed their choices to an evaluation of available scientific data, the precedents established in prior risk assessments, or policy decisions related to their regulatory missions. Agencies’ statements regarding the likely effects of their preferred assumptions and methods most often addressed the extent to which the default options would be considered precautionary. Some of the major assumptions and methodological choices of EPA, FDA, and OSHA address similar issues and circumstances during the risk assessment process, especially regarding assessment of a chemical’s toxicity. Agency procedural guidelines and officials we contacted during our review identified a large number and wide variety of major assumptions and methodological choices that they might use when conducting chemical risk assessments, in the absence of information that would indicate the particular assumption or method is not valid in a given case. Some of these assumptions and methodological choices were very broad (e.g., the common assumption that, in the absence of evidence to the contrary, substances that produce adverse health effects in experimental animals pose a potential threat to humans). Other assumptions and choices were more specific, covering particular details in the analytical process (e.g., identifying the preferred options for extrapolating high dose-response relationships to low doses). EPA and OSHA identified some of their choices as the default assumptions and methods of their agencies. 
FDA officials said that their agency does not require the use of specific default assumptions or risk assessment methods, but there are assumptions and methods that typically have been used as standard choices in FDA risk assessments. Although assumptions are also needed in RSPA’s risk assessments, RSPA officials said that they do not have any default assumptions. Instead, they said that their assumptions are specific to, and must be developed as part of, each risk assessment. Appendices II through V present detailed information on some of what the agencies identified as their major assumptions and methodological choices in chemical risk assessments. The tables illustrate both the number and variety of assumptions that agencies may use when conducting those assessments. The following sections summarize information that was available from the four agencies’ procedures and related documents on (a) when the agencies employ major assumptions and methods, (b) their reasons for selecting these options, (c) the likely effects of these options on risk assessment results, and (d) how they compare to the assumptions and choices used by other agencies or programs in similar circumstances. In some cases the agencies’ documents did not contain this information, but there is no requirement that they provide it. Also, the reason for using a particular assumption and its effect on risk assessment results can vary on a case-by-case basis, and therefore might not be addressed in general risk assessment guidance. Nevertheless, both NAS and the Presidential/Congressional Commission recommended greater transparency regarding the procedures, assumptions, and results of agencies’ risk assessments. Also, as will be discussed more fully later in this report, the agencies’ own risk characterization policies and practices emphasize the value of such transparency in communicating information about risk assessment procedures and results.
Recent regulatory reform proposals considered by Congress have had provisions requiring transparency in the use of assumptions. As previously mentioned, NAS and the Presidential/Congressional Commission have both emphasized that science cannot always provide definitive answers to questions raised during a risk assessment. For example, in 1983, NAS identified at least 50 points during the course of a cancer risk assessment when choices had to be made on the basis of professional judgment, not science. EPA’s guidelines similarly point out that, because there is no instance in which a set of data on an agent or exposure is complete, all risk assessments must use general knowledge and policy guidance to bridge data gaps. Except in the case of RSPA, default or standard assumptions and methods may be used by agencies to address these gaps in knowledge, and to encourage consistency in the efforts of agencies’ risk assessors to address such basic issues as: uncertainty in the underlying data, model parameters, or state of scientific understanding of how exposure to a particular chemical could lead to adverse effects; variability in the potential extent of exposure and probability of adverse effects for various subgroups or individuals within the general population; and statutory requirements (and the related general agency missions) to be protective of public health and the environment (e.g., to set standards with “an adequate margin of safety”). However, agency risk assessors have considerable flexibility regarding whether to use particular assumptions and methods, even when the agency has default or standard options. For example, EPA stated that its revised guidelines for carcinogen risk assessment were intended to be both explicit and more flexible than in the past concerning the basis for making departures from defaults, recognizing that expert judgment and peer review are essential elements of the process. 
The Executive Director of ORD’s Risk Assessment Forum pointed out that, although EPA’s guidelines always permitted such flexibility, without detailed guidance on departing from default assumptions there had been a tendency for analysts to not do so. He also stated that when determining whether to use a default, the decision maker must consider available information on an underlying scientific process and agent-specific data, and that scientific peer review, peer consultation workshops, and similar processes are the principal ways of determining the strength of thinking and the general acceptance of these views within the scientific community. FDA officials emphasized that their agency does not presume that there is a “best way” of doing a risk assessment and does not require the use of a specific risk assessment protocol or of specific default assumptions, but they are continually updating procedures and techniques with the goal of using the “best available science.” Agencies identified assumptions and methodological choices throughout the risk assessment process, and each of the first three steps in the process can have its own set of issues and choices that risk assessors need to address. During hazard identification, agencies must make choices about which types of data to use and what types of adverse effects and evidence will be considered in their analyses. For example, risk assessors need to decide whether data on benign tumors should be used along with data on malignant tumors as the basis for quantitative estimates of cancer risks, or whether only data on malignant tumors should be used. During dose-response assessment, agencies may need to make assumptions when extrapolating effects from animals to humans (e.g., how to determine equivalent doses across different species).
In particular, choices among assumptions and methods are needed when estimating dose-response relationships at doses that are much lower than those used in the scientific studies that provided the data for quantitative analysis. During exposure assessments, assumptions might be needed to address issues such as when exposures occur (e.g., in infancy or childhood versus as an adult), how long exposures last (e.g., short versus long term and continuous versus episodic), differences in exposures and effects for the population as a whole versus those affecting subpopulations and individuals, and questions about the concentration and absorption of chemical agents. Assumptions about human behavior also affect the relative likelihood of different exposure scenarios. For example, in assessing children’s residential exposures to a pesticide, risk assessors might need to make assumptions about how long children play in a treated area, the extent to which they are wearing clothing, and potential hand-to-mouth exposure to treated soil, among other factors. Agencies generally indicated that they use their major assumptions and methodological choices in risk assessments when professional judgments or policy choices must substitute for scientific information that is not available or is inconclusive. We examined risk assessment guidance documents and procedures in the four agencies to determine whether the agencies stated a specific scientific or policy basis for their choices, as recommended by NAS and the Presidential/Congressional Commission. In approximately three-quarters of the choices that we reviewed, the agencies provided at least some rationale for the use of particular assumptions or methods. The reasons most commonly cited were (1) an evaluation of available scientific data, (2) the precedents established in prior risk assessments, and (3) policy decisions related to their regulatory mandates. 
In some instances, the agencies cited more than one reason in support of their choices. For example, officials from FDA’s Center for Veterinary Medicine said they assume that an adult weighs 60 kilograms when converting an acceptable daily intake (ADI) to an intake level of residues in food because of historical precedent and because this assumption should protect women, growing adolescents, and the elderly. Of the three reasons, the agencies most often cited their evaluation of available scientific evidence as a reason for selecting particular assumptions or analytical methods. For example, one of the default assumptions in EPA’s carcinogen risk assessment guidance is that positive effects in animal cancer studies indicate that the agent under study can have carcinogenic potential in humans. EPA cited scientific research supporting this assumption, such as the evidence that nearly all agents known to cause cancer in humans are carcinogenic in animals in tests with adequate protocols. Other EPA guidelines stated that, in general, a threshold is assumed for the dose-response curve for agents that produce developmental toxicity. EPA’s guidelines noted that this assumption is based on the known capacity of the developing organism to compensate for or repair a certain amount of damage at the cellular, tissue, or organ level. OSHA cited scientific evidence and the views of the Office of Science and Technology Policy on chemical carcinogenesis (the origin or production of a tumor) to support its choice to combine data on benign tumors with the potential to progress to malignancies with data on malignant tumors occurring in the same tissue and the same organ site. Even when basing a choice upon available scientific studies and data, professional judgment may still be required regarding which particular method or assumption to choose among competing alternatives. 
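The 60-kilogram conversion described above can be sketched as a short calculation. This is a minimal illustration only; the ADI and the daily consumption figure below are hypothetical placeholders, not values for any actual drug or FDA standard portion.

```python
# Illustrative conversion of an acceptable daily intake (ADI) into a
# permitted residue concentration in food, using the 60-kg adult body
# weight assumption cited by FDA's Center for Veterinary Medicine.
# The ADI and consumption values are hypothetical.

ADI_UG_PER_KG_BW = 1.0       # hypothetical ADI, micrograms per kg body weight per day
BODY_WEIGHT_KG = 60.0        # assumed adult body weight
MUSCLE_CONSUMED_G = 300.0    # assumed daily consumption of muscle tissue, grams

# Total residue the assumed 60-kg adult may safely ingest per day.
safe_daily_intake_ug = ADI_UG_PER_KG_BW * BODY_WEIGHT_KG

# Spread over the assumed daily portion, this yields the maximum
# permitted residue concentration in the tissue (micrograms per gram).
safe_concentration_ug_per_g = safe_daily_intake_ug / MUSCLE_CONSUMED_G

print(safe_concentration_ug_per_g)  # 0.2 µg/g, i.e., 0.2 ppm
```

Because the 60-kg weight is below that of a typical adult, the resulting permitted concentration is lower, which is the protective effect the FDA officials described.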
The scientific evidence might show a range of assumptions or methods that provide plausible results and may, in specific cases, vary in terms of which one best fits the available evidence. For example, different mathematical models can be used for estimating the low-dose effects of exposure to suspected carcinogens. A basic problem for risk assessors is that, while the results produced by different models may be similar at higher doses, the estimates can vary dramatically at the low doses that are of concern to agency regulators. One study of 5 dose-response models showed that all of the models produced essentially the same dose-response curves at higher doses, but the models’ estimates differed by 3 or 4 orders of magnitude (values 1,000 to 10,000 times different) at lower doses. Because the mechanism of carcinogenesis is not sufficiently understood, none of the mathematical procedures for extrapolation has a fully adequate biological basis. Furthermore, because of the limitations in the ability of toxicologic or epidemiologic studies to detect small responses at very low doses, dose-response relationships in the low-dose range are practically unknowable. Agencies can encounter similar problems in attempting to determine how much of a chemical will produce the same effect in humans that was observed in animals.
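This kind of low-dose divergence can be illustrated by calibrating two simple model forms to the same high-dose observation. The model forms and numbers below are generic illustrations, not any agency's prescribed procedures.

```python
import math

# Two dose-response models calibrated to the same observed high-dose
# response: 50% tumor incidence at dose 100 (arbitrary units).

d_obs, p_obs = 100.0, 0.5

# One-hit model (approximately linear at low doses): P(d) = 1 - exp(-q1*d)
q1 = -math.log(1 - p_obs) / d_obs

# Quadratic multistage-style model (sublinear at low doses):
# P(d) = 1 - exp(-q2*d**2)
q2 = -math.log(1 - p_obs) / d_obs**2

def p_linear(d): return 1 - math.exp(-q1 * d)
def p_quadratic(d): return 1 - math.exp(-q2 * d**2)

# Both reproduce the observed point exactly at the experimental dose...
assert abs(p_linear(d_obs) - 0.5) < 1e-9
assert abs(p_quadratic(d_obs) - 0.5) < 1e-9

# ...but at a dose 1,000 times lower their risk estimates diverge by
# roughly three orders of magnitude.
low = 0.1
ratio = p_linear(low) / p_quadratic(low)
print(f"linear: {p_linear(low):.2e}  quadratic: {p_quadratic(low):.2e}  ratio ~ {ratio:.0f}")
```

The two curves are experimentally indistinguishable in the observable dose range, which is why the choice between them must rest on judgment or policy rather than on the bioassay data alone.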
An interagency group of federal scientists that studied this issue noted that, although many alternatives had been developed for such cross-species scaling, and despite considerable study and debate, “no alternative has emerged as clearly preferable, either on empirical or theoretical grounds.” The group noted further that the various federal agencies conducting chemical risk assessments therefore developed their own preferences and precedents, and this variation “stands among the chief causes of variation among estimates of a chemical’s potential human risk, even when assessments are based on the same data.” For purposes of consistency in federal risk assessments, the group recommended a method intermediate between the two methods most commonly used by federal agencies, but reiterated that methodologies in use “have not been shown to be in error.” Other reasons cited by the agencies for selecting assumptions or methods included the precedents established in prior risk assessments and policy decisions related to their regulatory missions and mandates. For example, FDA officials said that their practice of using the most sensitive species and sex when calculating the ADI of animal drug residues in food was based on historical precedents dating back to at least 1954. In other instances, FDA said that its use of precautionary assumptions was based on the agency’s statutory responsibility to ensure to a “reasonable certainty” that the public will not be harmed. Similarly, EPA guidelines pointed out that the default assumptions used in the agency’s risk assessments were chosen to be health protective because EPA’s overall goal is public health protection. 
For example, EPA’s neurotoxicity guidelines said that a choice to use the most sensitive animal species to estimate human risk “provides a conservative estimate of sensitivity for added protection to the public.” The agencies provided information in their guidelines on the likely effects of using particular assumptions or methods in about half of the examples that we reviewed. When that information was provided, it was usually in the context of whether and to what extent the agencies’ choices could be considered precautionary. In a number of cases, EPA and FDA characterized their assumptions and methods as precautionary in that they were intended to avoid underestimating risks in the interest of protecting public health. Such assumptions tend to raise an agency’s estimate of risk and lower the levels of exposure that are of regulatory concern. Precautionary assumptions and methodological choices were a common component of programs that have “tiered” approaches for conducting risk assessments (e.g., EPA’s Superfund and pesticides programs). In these tiered risk assessment approaches, agencies move from initial rough screening efforts to increasingly more refined and detailed levels of analyses. The initial screening assessments will typically involve very precautionary “upper-bound” or even “worst-case” assumptions to determine whether there is cause for concern. Successive tiers of assessment, if deemed necessary, are characterized in agency documents as more detailed and focused assessments that require more extensive data and rigorous analysis. For example, EPA indicated that its screening assessments might well use precautionary upper-bound point estimates of exposures (e.g., that a chemical is used on 100 percent of the eligible crop and at the maximum permissible limit). 
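A screening-tier point estimate of the kind just described can be sketched as a one-line calculation. All numeric values below are hypothetical, chosen only to show the upper-bound style of a first-tier dietary exposure screen.

```python
# Screening-tier dietary exposure estimate using upper-bound assumptions:
# residues at the maximum permissible limit and 100 percent of the crop
# treated. All values are hypothetical.

TOLERANCE_PPM = 2.0            # maximum permissible residue, mg chemical per kg food
FRACTION_CROP_TREATED = 1.0    # screening assumption: 100% of the eligible crop treated
CONSUMPTION_KG_PER_DAY = 0.25  # assumed daily consumption of the commodity, kg
BODY_WEIGHT_KG = 70.0          # assumed adult body weight, kg

# Estimated daily dose in mg chemical per kg body weight per day.
screening_dose = (TOLERANCE_PPM * FRACTION_CROP_TREATED *
                  CONSUMPTION_KG_PER_DAY) / BODY_WEIGHT_KG
print(f"{screening_dose:.4f} mg/kg bw/day")
```

If this upper-bound dose falls below the level of concern, no further analysis is needed; if not, the assessment proceeds to the more refined tiers described next.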
However, subsequent tiers of assessments might refine those estimates through the use of probability distributions of exposure parameters or the use of monitoring data on actual exposures, when feasible. OSHA and RSPA also use precautionary assumptions in certain parts of their risk assessment procedures. However, these agencies identified few of their risk assessment assumptions and methods as precautionary. In fact, OSHA sometimes selected assumptions or methods that it explicitly characterized as less precautionary than those used by other agencies in similar circumstances. For example, OSHA stated that its standard approach to low-dose extrapolation can be much less precautionary than EPA’s or FDA’s approaches because it tends to use central estimates of potency rather than upper-bound confidence limits. OSHA officials also noted that the algorithm they use is less precautionary because it may lead to models that are sublinear at low doses. The effect on risk estimates of using any one assumption is likely to be less significant than that of applying a series of assumptions while conducting a risk assessment, particularly if the assessment is compounding a string of largely precautionary assumptions. As we previously pointed out, assumptions and choices may be needed at many points during each step of an agency’s analysis. The agency’s policy may well be to use precautionary choices at most, if not all, of those points, if adequate information is not available to indicate that the precautionary choice is invalid in a specific case. The potential for such a string of precautionary assumptions is illustrated by the set of standard choices identified by FDA for risk assessments of carcinogenic animal drug residues in foods consumed by humans.
1. Regulation is based on the target tissue site exhibiting the highest potential for cancer risk for each carcinogenic compound.
2. If tumors are produced at more than one tissue site, the minimum concentration of the compound that produced a tumor is used.
3. Cancer risk estimates are generally based on animal bioassays, using upper 95-percent confidence limits of carcinogenic potency.
4. Low-dose extrapolation is done using a nonthreshold, conservative, linear-at-low-dose procedure (i.e., assuming that there is no dose that would not cause cancer and that effects vary in proportion to the amount of the dose).
5. It is assumed that the carcinogenic potency in humans is the same as that in animals.
6. The concentration of the residue in the edible product is at the permitted concentration.
7. Consumption is equal to that of the 90th percentile consumer.
8. All marketed animals are treated with the carcinogen.
9. In the absence of information about the composition of the total residue in edible tissue, assume that the entire residue is of carcinogenic concern.
FDA’s description of its risk assessment procedures acknowledged that these assumptions “result in multiple conservatisms” and stated that some of these choices are likely to overestimate risk by an unknown amount (although the fourth assumption could also underestimate risk by an order of magnitude). However, the agency also said that these assumptions are prudent because of the uncertainties involved and cited its statutory responsibility to ensure to a reasonable certainty that the public will not be harmed. It is important to keep in mind that the primary purposes for preparing such assessments are to identify safe concentration levels in edible tissues and residue tolerances (the amount permitted to remain on food) for postmarket monitoring rather than to produce a general estimate of the risk posed by use of the animal drug. Agency documents very rarely made direct comparisons of their assumptions and methodological choices to those used by other agencies, and there is no requirement that they do so.
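The cumulative effect of a string of precautionary choices such as FDA's is multiplicative. The sketch below uses purely hypothetical inflation factors (none drawn from an actual FDA analysis) to show how several individually modest conservatisms can compound into a much larger overall margin.

```python
from functools import reduce

# Hypothetical illustration of compounding conservatism. Each factor is
# the amount by which one precautionary assumption might inflate a risk
# estimate relative to a central ("best") estimate. All values invented.

precaution_factors = {
    "most sensitive tissue site": 3.0,
    "upper 95% confidence limit on potency": 2.5,
    "90th-percentile consumer": 1.5,
    "all marketed animals assumed treated": 2.0,
    "residue at the permitted maximum": 2.0,
}

# Multiplying the factors gives the combined conservatism.
overall = reduce(lambda a, b: a * b, precaution_factors.values())
print(f"combined conservatism factor: {overall:.0f}x")  # 45x
```

Even though no single factor here exceeds 3, the product is 45, which is why a compounded string of precautionary assumptions can dominate the final risk estimate.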
Our review indicated that EPA, FDA, and OSHA risk assessment procedures have many basic assumptions in common—for example, that one can use results of animal experiments to estimate risks to humans, and that most potential carcinogens do not have threshold doses below which adverse effects would not occur. There are other default or standard assumptions and models in the three agencies’ risk assessment procedures that are similar, but not identical. For example, all three agencies employ a linear mathematical model for low-dose extrapolation (in the absence of information indicating that a linear model is inappropriate in a particular case). However, the agencies prefer different options in the details of fitting such models, such as the point of departure to low doses. EPA and FDA also consider similar, but not identical, sets of uncertainty or safety factors when using the NOAEL approach for noncancer risk assessments. Finally, as the discussion above regarding low-dose extrapolation illustrates, there are also instances in which the agencies use different assumptions in similar circumstances. Table 1 compares and contrasts some of the risk assessment assumptions or analytical methods identified in the guidelines or other descriptive documents of EPA, FDA, and OSHA for use under similar circumstances. (Note that, for comparability, the examples in table 1 all focus on carcinogen risk assessments based on animal studies, but the agencies’ major assumptions and methods are not limited to only carcinogen risk assessments. Note also that the “circumstances” listed in the table also include that the assumption or method would be used in the absence of data to the contrary.) There appears to be some convergence in the agencies’ risk assessment assumptions in at least one area where there had been significant differences—their methods for cross-species dose scaling. 
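The cross-species scaling conventions at issue can be compared with a simple numeric sketch. The rat dose and body weights below are illustrative defaults, not values from any agency assessment; the point is only that the ¾-power convention yields human equivalent doses between those of the two historical approaches.

```python
# Comparison of three cross-species dose-scaling conventions for
# converting a rat dose (mg per kg body weight per day) into a human
# equivalent dose (HED). Values are illustrative.

animal_dose = 10.0   # hypothetical rat dose, mg/kg bw/day
bw_animal = 0.35     # kg, typical adult rat
bw_human = 70.0      # kg, default adult human

# Body-weight scaling (historical FDA/OSHA practice): same mg/kg dose.
hed_bw = animal_dose

# Surface-area scaling (historical EPA practice, dose rate ~ BW^(2/3)):
# per-kg doses scale by (BW_animal / BW_human)**(1/3).
hed_sa = animal_dose * (bw_animal / bw_human) ** (1 / 3)

# BW^(3/4) scaling, the intermediate convention the agencies have been
# moving toward: per-kg doses scale by (BW_animal / BW_human)**(1/4).
hed_34 = animal_dose * (bw_animal / bw_human) ** (1 / 4)

print(f"body weight: {hed_bw:.2f}  surface area: {hed_sa:.2f}  BW^3/4: {hed_34:.2f}")
assert hed_sa < hed_34 < hed_bw  # the 3/4-power result lies between the two
```

The gap between the body-weight and surface-area results (here roughly a factor of six) illustrates why this choice alone was among the chief causes of variation among agencies' risk estimates from the same data.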
In the absence of adequate information on differences between species, EPA’s standard practice in carcinogenic risk assessments had been to scale daily administered doses by body surface area, whereas FDA’s and OSHA’s standard practice had been to scale doses by body weight. Recently, the agencies have either adopted, or consider as one of their options, the expression of doses in terms of daily amount administered per unit of body weight to the ¾ power. All four of the agencies included in our review have also been incorporating more complex analytical methods and models into their risk assessment procedures. Some of these methods (such as the use of probabilistic analyses to provide distributions of exposure parameters) help to address issues of uncertainty and variability in risk assessments and lessen the need for some precautionary assumptions. Other advances, such as the use of PBPK models, can provide better insights into how and to what extent a chemical might produce adverse effects in humans. One outcome of the integration of these methods into agencies’ procedures is a diminishing of the traditional distinction between cancer and noncancer risk assessment methods. EPA, in particular, has noted that it is less likely to consider cancer and noncancer endpoints in isolation as it develops and incorporates more advanced scientific methods to measure and model the biological events leading to adverse effects. According to EPA, the science of risk assessment is moving toward a harmonization of the methodology for cancer and noncancer assessments. The use of newer, more complex models and methods also opens up a new range of choices and assumptions in the analysis—along with the potential for risk estimates to diverge because of the different assumptions that might be used. For example, in its methylene chloride final rule OSHA reported on the results of its analyses as well as risk assessments submitted to OSHA by other risk assessors. 
Although most of the risk assessments used a linearized multistage model to predict risk, there were differences in the estimates produced by these assessments. OSHA pointed out that the differences in risk estimates were not generally due to the dose-response model used, but to whether the risk assessor used PBPK modeling to estimate target tissue doses and what assumptions were used in the PBPK modeling. Appendices II through V present more detailed information on some of the major assumptions and methodological choices in each of the four selected agencies. In the risk characterization step of a risk assessment, agencies bring together the results of the preceding analyses in the form of estimates and conclusions about the nature and magnitude of a potential risk. Agencies’ risk characterizations play a crucial role in explaining to decision makers and other interested parties what the agency’s risk assessors have concluded and on what basis they reached those conclusions. Both EPA and DOT have agencywide written policies on risk characterization that emphasize the importance of providing comprehensive and transparent characterizations of risk assessment results. Although FDA and OSHA do not have written risk characterization policies, officials of those agencies pointed out that, in practice, they also tend to emphasize comprehensive characterizations of risk assessment results, discussions of limitations and uncertainties, and disclosure of the data and analytic methodologies on which the agencies relied. EPA’s program offices are generally responsible for completing risk characterizations, and EPA’s agencywide guidance on this issue includes a risk characterization policy, a guidance memorandum, and a handbook. EPA’s policy stipulates that risks should be characterized in a manner that is clear, transparent, reasonable, and consistent with other risk characterizations of similar scope. 
EPA said that all assessments “should identify and discuss all the major issues associated with determining the nature and extent of the risk and provide commentary on any constraints limiting fuller exposition.” EPA’s policy documents also recommend that risk characterization should (1) bridge the gap between risk assessment and risk management decisions; (2) discuss confidence and uncertainties involving scientific concepts, data, and methods; and (3) present several types of risk information (e.g., a range of exposures and multiple risk descriptors such as high-end estimates and central tendencies). It is also EPA’s policy that major scientifically and technically based work products related to the agency’s decisions normally should be peer-reviewed. In its guidelines for carcinogen risk assessment, EPA also suggests preparing separate “technical” characterizations to summarize the findings of the hazard identification, dose-response assessment, and exposure assessment steps. The agency’s risk assessors are then to use these technical characterizations to develop an integrative analysis of the whole risk case, followed by a less extensive and nontechnical summary intended to inform the risk manager and other interested readers. EPA identified several reasons for preparing separate characterizations of each analysis phase before preparing the final integrative summary. One is that different people often do the analytical assessments and the integrative analysis. The second is that there is very often a lapse of time between the conduct of hazard and dose-response analyses and the conduct of the exposure assessment and integrative analysis. Thus, according to EPA, it is necessary to capture characterizations of assessments as the assessments are done to avoid the need to go back and reconstruct them. Finally, several programs frequently use a single hazard assessment for different exposure scenarios. 
DOT’s policy principles regarding how the results of its risk or safety assessments should be presented are straightforward and encourage agency personnel to:
- make public the data and analytic methods on which the agency relied (for replication and comment);
- state explicitly the scientific basis for significant assumptions, models, and inferences underlying the risk assessment, and explain the rationale for these judgments and their influence on the risk assessment;
- provide the range and distribution of risks for both the full population at risk and for highly exposed or sensitive subpopulations, and encompass all appropriate risks to health, safety, and the environment;
- place the nature and magnitude of risks being analyzed in context (including appropriate comparisons to other risks); and
- use peer review for issues with significant scientific dispute.
FDA does not have a written risk characterization policy, but FDA officials said that, in practice, the agency uses a standard approach that is similar to EPA’s official policy. They said that FDA’s general policy is to reveal the risk assessment assumptions that have the greatest impact on the results of the analysis, and to state whether the assumptions used in the assessment were conservative. FDA officials also said that their risk assessors attempt to show the implications of different distributions and choices (e.g., the results expected at different levels of regulatory intervention). FDA may employ probabilistic methods, such as Monte Carlo analysis, to provide additional information on the effects of variability and uncertainty on estimates of risk, and there are some differences in FDA risk characterization procedures depending on the products being regulated and the nature of the risks involved.
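A Monte Carlo exposure analysis of the kind mentioned above can be sketched in a few lines. Every distribution and parameter here is hypothetical; the point is that drawing exposure inputs from assumed distributions yields a distribution of doses, from which both a central tendency and an upper percentile can be reported.

```python
import random
import statistics

# Minimal Monte Carlo sketch of a probabilistic dietary exposure analysis.
# Instead of a single upper-bound point estimate, each exposure parameter
# is drawn from an assumed (hypothetical) distribution.

random.seed(42)  # fixed seed so the illustration is reproducible

def simulated_daily_dose():
    residue_ppm = random.lognormvariate(-1.0, 0.5)        # mg chemical per kg food
    consumption_kg = random.triangular(0.05, 0.5, 0.2)    # kg food per day
    body_weight_kg = random.normalvariate(70.0, 12.0)     # kg
    return residue_ppm * consumption_kg / body_weight_kg  # mg/kg bw/day

doses = sorted(simulated_daily_dose() for _ in range(100_000))

central = statistics.median(doses)        # a central-tendency estimate
p95 = doses[int(0.95 * len(doses))]       # an upper-percentile estimate
print(f"median: {central:.2e}  95th percentile: {p95:.2e} mg/kg bw/day")
```

Reporting both the median and the 95th percentile, rather than a single worst-case number, is exactly the kind of multiple risk descriptor that the agencies' characterization policies call for.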
Although OSHA does not have written risk characterization policies, in recent rules the agency emphasized (1) comprehensive characterizations of risk assessment results; (2) discussions of assumptions, limitations, and uncertainties; and (3) disclosure of the data and analytic methodologies on which the agency relied. The agency devoted considerable effort to addressing uncertainty and variability in its risk estimates. Such efforts included performing sensitivity analyses and providing estimates produced by alternative analyses and assumptions (including analyses by risk assessors outside of OSHA). In its risk characterizations, OSHA provided both estimates of central tendency and upper limits (such as the 95th percentile of a distribution). Appendices II through V provide more detailed descriptions of the risk characterization policies or approaches of each of the four selected agencies. Risk assessment is an important, but extraordinarily complex, element in federal agencies’ regulation of potential risks associated with chemicals. The assessments can help agencies decide whether to regulate a particular chemical, select regulatory options, and estimate the benefits associated with regulatory decisions. Scientific studies in such areas as toxicology and epidemiology are often used to produce the information needed for risk assessment decisions. However, assessors frequently must produce estimates of risk without complete scientific information about the extent of exposures to potentially hazardous substances and the effects of those exposures on human health and safety or the environment. Therefore, professional judgment with regard to assumptions and methodological choices is an inherent part of conducting risk assessments. The appendices to this report identify many of the major assumptions and methods that can be used in risk assessments prepared for EPA, FDA, OSHA, and RSPA. 
The number and variety of those assumptions and methods illustrate the range of issues that risk assessors confront during the course of their analyses. Although there were more similarities than differences in the general risk assessment procedures of three of the four agencies, there were also some notable differences in the agencies’ specific approaches, methods, and assumptions. These differences can significantly affect the results and conclusions drawn from the assessments. Therefore, risk estimates prepared by different agencies, or by different program offices within those agencies, may not be directly comparable, even if the same chemical agent is the subject of the risk assessment. In some cases, the reasons for those differences are readily apparent, such as when agencies focus on different types of adverse effects (e.g., cancer versus noncancer) or different types and sources of exposure. For example, the same chemical (e.g., methylene chloride) might be identified as a significant hazard by one agency in one exposure setting (OSHA for occupational exposures) but as a low hazard by another agency in a different setting (EPA for Superfund hazard ranking screening). In other cases, the reasons for different estimates may be more subtle and harder to discern within the many layers of analyses and professional judgments used to prepare the risk assessment. Because of the range of assumptions and methods that are scientifically plausible in a given situation, the risk characterization phase of the risk assessment process takes on added importance. 
In their risk characterization policies or procedures, the four agencies acknowledge the importance of not only clearly communicating their conclusions about the nature and likelihood of a given risk but also disclosing (1) the assumptions, methods, data, and other choices that had the greatest impact on risk estimates; (2) why those choices were made; and (3) the effect that alternative choices would have had on the results of a risk assessment. Transparency is important both in individual risk assessments and in agencies’ general procedures for how the assessments should be conducted. Those procedures encourage consistency in how agencies conduct risk assessments and provide insights into agencies’ decision making when analyzing risks. For example, frameworks delineated by EPA and OSHA for departing from certain default assumptions inform both agency personnel and external parties as to whether particular data or analyses are acceptable to the agency. Our review focused on describing the framework for agencies’ chemical risk assessments. We did not evaluate how that framework is applied in practice, or how risk assessment results affect risk management decisions by agencies and other policymakers. Nevertheless, our report highlights the value of policymakers and other interested parties becoming aware of the underlying risk assessment context, procedures, assumptions, and policies when using risk assessment data for risk management and other public policy decisions.
For example, prudent use of risk data requires the user to be aware of the extent to which the data:

- represent estimates from screening assessments (which may rely heavily on precautionary assumptions) or estimates from subsequent, more rigorous assessments (which are likely to rely on more detailed and case-specific data and analyses);
- show the distribution of exposures and potential adverse effects across the population, including the extent to which the data address risks of the most exposed or sensitive subgroups of the population, or focus on only part of that distribution;
- were produced using directly relevant scientific data that were available or had to rely on general assumptions and models; and
- reflect the flexibility permitted in agencies’ standard procedures or guidelines to depart from past precedent and default choices to use alternative assumptions and models, when appropriate.

In our review we also found that, although the underlying statutes specified the use of particular methods or assumptions in only three instances, the legal and situational context within which an agency is conducting a chemical risk assessment has a major effect on the specific focus, scope, and level of detail of the resulting assessment. Comparison of risk assessment estimates from different agencies and programs therefore requires careful consideration of these contextual differences. Because the central purpose of our review was to describe the framework for selected agencies’ chemical risk assessments, rather than to evaluate and critique how that framework is applied in practice, we are not making any recommendations in this report. At the end of our review, we sent a draft of this report to five experts in the field of risk assessment to ensure the technical accuracy of the report.
The three experts who provided comments were (1) the Executive Director of the Presidential/Congressional Commission, (2) the individual who prepared the Survey of Methods for Chemical Health Risk Assessment Among Federal Regulatory Agencies for the Commission, and (3) an expert in risk assessment at Resources for the Future. The experts generally indicated that the report had no material weaknesses, but provided a number of technical suggestions that we incorporated as appropriate. For example, two of the reviewers suggested that the report’s discussion of the NAS four-step risk assessment paradigm, although reflecting the definitions generally relied upon by federal agencies, should also identify an updated view regarding the concept of risk characterization. The updated view is that risk characterization should be a decision-driven activity performed as part of the risk management decision making process rather than a stand-alone activity at the end of a risk assessment. We included this perspective in the report’s background section. During our review, we obtained technical comments from officials in each of the four agencies on a draft of the appendices to this report, which we incorporated as appropriate. On June 18, 2001, we sent a draft of the full report to the Secretaries of Health and Human Services, Labor, and Transportation, and the Administrator of EPA for their review and comment. None of the agencies provided formal comments on the report, but we received additional technical comments and suggestions from all four of the agencies, which we incorporated as appropriate. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. 
At that time, we will send copies of this report to the Ranking Minority Member, House Committee on Energy and Commerce; the Ranking Minority Member, Subcommittee on Environment and Hazardous Materials, House Committee on Energy and Commerce; the Secretaries of Health and Human Services, Labor, and Transportation; and the Administrator of EPA. We will also make copies available to others on request. If you have any questions concerning this report, please call me or Curtis Copeland at (202) 512-6806. Key contributors to this assignment were Timothy Bober and Aaron Shiffrin. As requested, our review focused on the chemical risk assessment procedures, assumptions, and policies of four federal agencies with responsibilities for regulating or managing risks from potential exposure to chemicals—the Environmental Protection Agency (EPA), the Food and Drug Administration (FDA) within the Department of Health and Human Services (HHS), the Occupational Safety and Health Administration (OSHA) within the Department of Labor, and the Department of Transportation’s (DOT) Research and Special Programs Administration (RSPA—in particular the Office of Hazardous Materials Safety). Our specific objectives were to identify and describe (1) the general context for the agencies’ chemical risk assessment activities; (2) what the agencies view as their primary procedures for conducting risk assessments; (3) what the agencies view as the major assumptions or methodological choices in their risk assessment procedures; and (4) the agencies’ procedures or policies for characterizing the results of risk assessments. 
To the extent feasible, we were also asked to identify for the assumptions and choices identified in the third objective (a) at what stages of the risk assessment process they are used, (b) the reasons given for their selection, (c) their likely effects on risk assessment results, and (d) how they compare to the assumptions and choices used by other agencies or programs in similar circumstances. To address our objectives, we relied primarily on a detailed review and analysis of agencies’ general guidance documents on chemical risk assessment or, if there were no guidance documents, reviews of specific examples of agency risk assessments. We supplemented that information with material from secondary source reports on risk assessment and interviews with agency officials. Among the secondary sources that we used were relevant reports by the Congressional Research Service, National Academy of Sciences (NAS), and the Presidential/Congressional Commission on Risk Assessment and Risk Management (hereinafter referred to as the Presidential/Congressional Commission). In particular, as a starting point for our review we used a report on federal agencies’ chemical risk assessment methods that was prepared by Lorenz Rhomberg for the Presidential/Congressional Commission. That report provided the baseline descriptions of some of the chemical risk assessment procedures at EPA, FDA, and OSHA. We asked officials of those agencies to review Rhomberg’s report to identify information that was still relevant to addressing the objectives of this report as well as information that they felt should be revised or added to reflect the agencies’ current procedures. There are several important limitations to our review. First, chemical risk assessment is just one of several types of risk assessment being conducted in federal agencies. Therefore, our review cannot be used to characterize other types of risk assessments (e.g., risks associated with radiation exposure). 
In fact, FDA officials considered risk assessments related to the human drug approval process to be outside the scope of our review because a completely different protocol is used in those assessments. However, limiting the scope of our review to chemical risk assessments makes comparisons among the agencies included more relevant and meaningful. Second, our review did not include all agencies or programs that conduct risk assessments involving chemicals. For example, we did not include the Consumer Product Safety Commission, which periodically assesses products with potential risks from chemicals. Nor did we include the Agency for Toxic Substances and Disease Registry, which prepares “health assessments” that closely resemble risk assessments but has no regulatory authority. We focused on the risk assessment procedures in four federal agencies that regularly conduct chemical risk assessments in support of regulatory activities and/or could illustrate the diversity of risk assessment procedures. However, the results of our review cannot be considered representative of chemical risk assessments in all federal agencies. Third, our review does not describe every chemical risk assessment procedure or assumption used by the agencies we reviewed. The material describing the agencies’ procedures is both voluminous and extremely complex. The detailed information that we provide on agency assumptions is illustrative of the assumptions included in agencies’ procedures, but not a compendium of all such assumptions. In addition, we concentrated primarily on the human health and safety risk assessment procedures of the four agencies and, to a lesser extent, on ecological risk assessment procedures. Fourth, this report describes agencies’ general procedures and policies, but it is not a compliance review of how well those procedures and policies are applied with regard to individual assessments. 
The agencies’ guidelines represent suggested procedures and are not binding, so the agencies’ practices may justifiably vary from the general frameworks we describe. In practice, risk assessments do not follow a simple recipe or formula. Each assessment has unique issues or characteristics that require case-specific resolutions. Finally, this report does not address risk management issues—e.g., using the results of a risk assessment to determine what level of exposure to a risk agent represents an acceptable or an unacceptable risk and deciding what control options should be used. We conducted this review between February 2000 and March 2001 in the Washington, D.C., headquarters offices of the selected agencies in accordance with generally accepted government auditing standards. We obtained technical comments on our descriptions of the agencies’ procedures, assumptions, and policies in the appendices from knowledgeable agency personnel. We then provided the draft report to external experts in risk assessment, including the Center for Risk Analysis at the Harvard School of Public Health in Boston, MA; Resources for the Future in Washington, D.C.; the Executive Director of the Presidential/Congressional Commission; and Lorenz Rhomberg, the analyst who surveyed federal agencies’ chemical risk assessment procedures for the Commission. After incorporating their comments, we provided a draft of this report to the Secretaries of Health and Human Services, Labor, and Transportation; and the Administrator of the Environmental Protection Agency for their review and comment. In the following appendices, we provide more detailed information regarding the framework and methods applicable to chemical risk assessment activities of EPA, FDA, OSHA, and RSPA. There is a separate technical appendix covering each of these four agencies, along with their relevant offices, programs, or centers that are involved in conducting chemical risk assessments. 
For consistency and ease of presentation, we have generally organized the appendices on each agency according to a standard format with four major sections.

1. We describe the general context for the chemical risk assessment activities of each agency. This includes a summary of the primary risk statutes, mandates, and tasks related to potential risks from exposure to chemical agents.

2. We identify and summarize the standard risk assessment procedures of each agency and, if applicable, each agency’s various offices, programs, or centers. This section is generally organized by the major analytical steps of the risk assessment process: hazard identification, dose-response assessment, and exposure assessment. These correspond to the first three steps of the four-step paradigm for risk assessment as defined by NAS and used by three of the four agencies covered by our review. (We address the fourth step of the process, risk characterization, as a separate objective in the final section of each agency appendix.) Within the descriptions of those steps, we often distinguish between the procedures used for assessing cancer and noncancer effects. Given developments in risk assessment methods, these distinctions are sometimes more artificial than real.

3. We present additional information about major assumptions and methodological choices in the agencies’ standard risk assessment procedures. For EPA, FDA, and OSHA, the primary focus of this section is a detailed table identifying some of the major agencywide or program-specific assumptions that may be used in chemical risk assessments. To the extent that such information was available, each of these tables also includes information on the agency’s reason(s) for selecting a particular assumption, when in the risk assessment process the agency would apply the assumption, and the likely effect of using the assumption on risk assessment results.
(Because agencies very rarely made direct comparisons of their choices to those of other agencies in their risk assessment guidelines or related documents, we have not included a separate column on that topic in the appendix tables. That objective is, however, addressed in the letter portion of this report.) The appendix on RSPA does not include all of these elements because of differences in its context and approach to chemical risk assessment.

4. The final section of each appendix addresses each agency’s approach or policies for characterizing the results of risk assessments for agency decision makers and other interested parties. In particular, we describe the agency’s policies or practices with regard to the transparency of risk assessment results, such as reporting the range and distribution of risks and identifying the uncertainties in the risk analysis and underlying data.

To avoid repetition in the appendices on agencies’ risk assessment procedures, our most detailed descriptions of basic methods and issues appear in the EPA appendix under the discussion of agencywide procedures. Descriptions of procedures used by other agencies or programs, including the individual program offices within EPA, then reference the EPA-wide descriptions of those particular methods, if they are similar. Although we provide much more detailed technical information in these appendices than in the main body of the report, it is still important to recognize that agencies’ risk assessment methods are more involved and complex than we have described in this report. In particular, the tables of assumptions do not represent a comprehensive listing of all assumptions and choices of the agencies. Agencies might use many different types and numbers of assumptions in any given assessment, and the assumptions are being altered over time to reflect scientific improvements and changes in risk approaches and the regulatory context.
However, the information presented is intended to illustrate the types and diversity of procedures and assumptions employed by the agencies we examined. Chemical risk assessment at the Environmental Protection Agency (EPA) is a complex and diverse undertaking. The variety and range of the relevant regulatory authorities and activities have a major effect on the organization and conduct of risk assessment at the agency. An expanding set of agency guidelines reflects the evolving nature of EPA’s risk assessment procedures. EPA generally follows the four-step risk assessment process identified by the National Academy of Sciences (NAS). Changes are occurring in EPA’s approaches to cancer, noncancer, and exposure assessments, with a general trend toward the development and application of more complex and comprehensive methodologies. To a greater extent than the other agencies we reviewed, EPA has established a set of default assumptions (often precautionary in nature) and standard data factors for use by its risk assessors. In the “tiered” risk assessment approaches commonly employed by EPA’s program offices, precautionary default assumptions are most often used during initial screening assessments, when the primary task generally is to determine whether a risk might exist and more rigorous analysis is needed. However, the information necessary for more detailed analysis is not always available, so for regulatory purposes the agency may be limited to using results from its initial tiers of risk assessments. In presenting the results of its risk assessments, it is EPA’s policy that risk characterizations should be prepared in a manner that is clear, transparent, reasonable, and consistent with other risk characterizations of similar scope prepared across the programs in the agency.
The following sections describe for EPA and its component offices, the context for chemical risk assessment, the general procedures for conducting risk assessments, major assumptions and methodological choices in those procedures, and the agency’s policy for risk characterization. Because chemical risk assessment at EPA is such a complex and diverse activity, this appendix can only summarize and illustrate the range of contexts, procedures, assumptions and methods, and policies that affect the conduct of EPA risk assessments. For example, as in our report as a whole, this appendix focuses primarily on human health and safety risk assessment and less on ecological risk assessment. However, we have included a brief section on EPA’s ecological risk assessment guidelines under our discussion of agencywide risk assessment procedures and illustrated the role played by ecological risk assessment in the risk assessment activities of some, but not all, of EPA’s program offices under our discussion of program-specific procedures. As a practical matter, this appendix reflects risk assessment topics that were addressed in agencywide or program-specific guidelines or descriptions of chemical risk assessment at EPA. To the extent that such activities were not explicitly addressed in the agency’s risk assessment guidelines and related documents, there may be little information on them in this appendix. EPA is responsible for a wide range of regulatory—and related risk assessment—activities pertaining to potential health, safety, and environmental risks associated with chemical agents. This range of activities reflects an equally broad and diverse range of underlying environmental statutes. According to EPA, close to 30 provisions within the major environmental statutes require decisions based on risk, hazard, or exposure assessment, with varying requirements regarding the scope and depth of the agency’s analyses. 
In general, EPA’s regulatory authority regarding chemical agents is compartmentalized according to the various kinds and sources of exposure—such as pesticides, drinking water systems, or airborne pollutants—and reflected in the agency’s organization into various program offices—such as the Office of Air and Radiation, Office of Solid Waste, and Office of Water. Table 2 summarizes the principal statutes, regulatory tasks, and risk mandates associated with chemical risk assessment activities of EPA’s offices. A number of other contextual factors affect the extent of involvement by EPA offices in assessing and using risk assessment information in support of the various statutes, mandates, and tasks identified in table 2. Risk assessment information may not be the only, or even the primary, basis for the ultimate risk management decision. EPA statutes vary fundamentally by whether the basis for regulation is (1) risk only (health and environmental), (2) technology, or (3) risk balancing (consideration of risks, costs, and benefits). For some chemical risk assessment activities, EPA has a secondary role. Instead, the main responsibility for determining the relative risk of a chemical, compiling and analyzing risk-related data, or completing other tasks associated with a particular statute might lie with industry, states, or local entities. In practical terms, the resources available for conducting a risk assessment for a given chemical might limit the depth and scope of EPA’s (or other parties’) analysis. Such resource limitations might include not only schedule and staffing constraints, but also the amount and quality of directly relevant scientific data available for analysis. Risk assessment activities involve both EPA’s program offices and its Office of Research and Development (ORD), which is the principal scientific and research arm of the agency.
ORD often does risk assessment work for EPA program offices that focuses on the first two steps in the four-step NAS process—hazard identification and dose-response assessment—in particular, the development of “risk per unit exposed” numbers. The exposure assessment and risk characterization steps tend to be the responsibility of the various regulatory programs at EPA. However, according to agency officials, both program offices and ORD may conduct all of the risk assessment steps in particular cases. For example, OW’s Office of Science and Technology does all of the assessments for purposes of the SDWA, and, because of their particular statutory mandates, OPP and OPPT have developed the capability to conduct all steps of a risk assessment on their own. ORD carries out all steps of highly complex, precedent-setting risk assessments of specific chemicals, such as dioxin and mercury. ORD also helps to coordinate the development of EPA’s risk assessment methods, tools, models, and policies. In particular, much of EPA’s agencywide guidance on conducting risk assessments is developed and disseminated through ORD, with input from EPA’s program offices, Science Policy Council, and Science Advisory Board, as well as other external parties. Coordination of risk assessment activities also occurs through EPA’s Risk Assessment Forum and the agency workgroups that approve information for entry into EPA’s Integrated Risk Information System (IRIS). The Risk Assessment Forum is a standing committee of senior EPA scientists that was established to promote agencywide consensus on difficult and controversial risk assessment issues and to ensure that this consensus is incorporated into appropriate EPA risk assessment guidance. Managed by ORD, IRIS is a computerized database that contains information on human health effects that may result from exposure to various chemicals in the environment. 
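The division of labor described above—dose-response work yielding “risk per unit exposed” numbers, which exposure assessments then scale—can be made concrete with a simplified arithmetic sketch. The unit risk value and exposure concentration below are hypothetical illustrations, not figures from any EPA assessment, and real assessments involve many more adjustments than this linear scaling.

```python
# Hypothetical unit risk: excess lifetime cancer risk per (ug/m3) of
# continuous exposure, as might come out of a dose-response assessment.
UNIT_RISK = 2.0e-6  # per ug/m3 (illustrative value only)

def excess_lifetime_risk(concentration_ug_m3: float) -> float:
    # Linear low-dose extrapolation: estimated risk scales directly
    # with the exposure estimate supplied by the exposure assessment.
    return UNIT_RISK * concentration_ug_m3

# The exposure assessment supplies the concentration estimate:
risk = excess_lifetime_risk(5.0)  # assumed 5 ug/m3 average exposure
print(f"estimated excess lifetime risk: {risk:.1e}")
```

The point of the sketch is structural: the dose-response step fixes the per-unit factor once per chemical, while each regulatory program can reuse it against its own exposure scenarios.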
IRIS was initially developed for EPA staff in response to a growing demand for consistent information on chemical substances for use in risk assessments, decision making, and regulatory activities. The entries in IRIS on individual chemicals represent a consensus opinion of EPA health scientists representing the program offices and ORD and have been subject to EPA’s peer review policy since its issuance in 1994. There are agencywide risk assessment procedures that EPA’s various program offices generally follow, but each office also has different statutory mandates and risk assessment tasks associated with its regulatory authority. These contextual differences contribute to some program-specific variations in the conduct of chemical risk assessments. In addition, EPA’s procedures are in transition from simpler, traditional methods for identifying and assessing risks to increasingly complex models and methods. It is particularly important to recognize that, while most EPA guidelines (and this appendix) distinguish between cancer and noncancer procedures, this distinction is becoming increasingly blurred as new scientific methods are being developed and applied. In general, EPA follows the NAS four-step process for human health risk assessments: (1) hazard identification, (2) dose-response assessment, (3) exposure assessment, and (4) risk characterization. However, for ecological risk assessment, EPA’s guidelines recommend a three-step process: (1) problem formulation, (2) analysis, and (3) risk characterization. To a much greater extent than the other agencies we reviewed, EPA has documented its risk assessment procedures and policies in a voluminous and expanding set of guidelines, policy papers, and memoranda. These documents are primarily intended as internal guidance for use by risk assessors in EPA and those consultants, contractors, or other persons who perform work under EPA contract or sponsorship.
However, the documents also make information on the principles, concepts, and methods used in EPA’s risk assessments available to other interested parties. EPA’s guidelines undergo internal and external peer review. Beginning in 1986, EPA published a series of risk assessment guidelines to set forth principles and procedures to guide EPA scientists in the conduct of agency risk assessments, and to inform agency decision makers and the public about these procedures. In general, EPA adopted the guiding principles of fundamental risk assessment works, such as the 1983 Red Book by the NAS’ National Research Council (NRC). EPA’s guidelines supplement these principles. Five sets of guidelines were finalized in 1986, including guidelines for carcinogen risk assessment, mutagenicity risk assessment, health risk assessment of chemical mixtures, health assessment of suspect developmental toxicants, and estimating exposures. In part to respond to advances and changes in risk assessment methods—but also in response to criticisms of its guidelines by NRC, among others—EPA has revised most of these guidelines, in either proposed or final form, and produced additional guidance documents. Statutory changes have also prompted revisions and expansions of EPA’s risk assessment guidelines and policy papers. In the Clean Air Act Amendments of 1990, for example, Congress directed EPA to revise its carcinogen risk assessment guidelines, taking into consideration the NAS recommendations, before making any determinations of the “residual risks” associated with emissions of hazardous air pollutants. The results of the NAS study appeared in the 1994 NRC report, Science and Judgment in Risk Assessment. Among other things, NRC recommended that EPA better identify the inference (default) assumptions in its guidelines, explain the scientific or policy bases for selecting them, and provide guidance on when it would be appropriate to depart from the assumptions.
The current set of agencywide risk assessment guidelines and policies includes the following major topics:

- carcinogen risk assessment,
- neurotoxicity risk assessment,
- reproductive toxicity risk assessment,
- developmental toxicity risk assessment,
- mutagenicity risk assessment,
- health risk assessment of chemical mixtures,
- guidelines for exposure assessment,
- guidelines for ecological risk assessment,
- other risk assessment tools and policies,
- probabilistic analysis in risk assessment,
- use of the benchmark dose approach in health risk assessment,
- reference dose (RfD) and reference concentration (RfC),
- evaluating risk to children, and
- EPA risk characterization program.

In addition to these agencywide documents, there are also numerous program-specific guidelines and policy documents. For example, the Risk Assessment Guidance for Superfund series covers various stages of human health evaluation as well as ecological risk assessment and probabilistic risk assessment. There are also guidelines and policy memoranda at the headquarters and regional office level that supplement these general Superfund guidelines. Similarly, OPP, with input from ORD, has developed a series of science policy papers specifically on issues related to pesticide risk assessments, in response to provisions of the Food Quality Protection Act of 1996. Describing EPA’s risk assessment procedures with any certainty is a difficult task, given the sheer volume of EPA guidance documents, the continuing evolution of risk assessment practices, and the extent to which many of EPA’s revisions are currently draft in nature. For example, the official guidelines for cancer risk assessment are still the 1986 version; the agency published a proposed revision of those guidelines in 1996 and continued to revise them in 1999, but the revised guidelines have not yet been made final by EPA.
Although the various revisions since 1986 do not represent official agency policy at this stage, the approaches that they describe are likely to provide a more accurate reflection of current practices and directions in EPA risk assessments. To some extent EPA is already applying these newer approaches, for example in the Office of Water’s revised methodology for deriving ambient water quality criteria for the protection of human health and the Office of Pesticide Programs’ Cancer Peer Review Committee. The following sections summarize the basic elements of EPA’s agencywide procedures for conducting risk assessments. Because most of EPA’s guidelines focus on human health risks, this section also focuses primarily on health assessments in describing EPA’s general approach. EPA generally uses the NAS four-step process for those assessments. However, a separate short section on EPA’s approach to ecological risk assessment appears at the end of this agencywide summary. Also, while this appendix (and most of the source material from which it was derived) discusses procedures for assessing cancer and noncancer effects separately, this distinction is increasingly artificial. As EPA noted in its Strategy for Research on Environmental Risks to Children, the agency is less likely to consider cancer and noncancer endpoints in isolation as it develops and incorporates more advanced scientific methods to measure and model the biological events leading to adverse effects. According to EPA, the science of risk assessment is moving toward a harmonization of the methodology for cancer and noncancer assessments. EPA’s approach to hazard identification changed significantly between the agency’s 1986 guidelines and its proposed revision. In its 1986 guidelines, EPA defined a hierarchical classification scheme for hazard identification of chemical agents (see table 3). 
In this scheme, analysis of whether an agent is a potential human carcinogen proceeds through distinct steps based on the type of human, animal, or “other” evidence available and its quality (whether such evidence is sufficient, limited, or inadequate), resulting in classification of the agent in one of six alphanumeric categories. In response to further developments in the understanding of carcinogenesis, and to address limitations of its 1986 scheme, EPA proposed a revised approach that melds the separate human-animal-other processes into a single comprehensive evaluation. In this approach, weighing the evidence and reaching conclusions about the carcinogenic potential of an agent would be accomplished in a single step after assessing all individual lines of evidence. Compared to the 1986 guidelines, the proposed revision also encourages fuller use of all biological information— instead of relying primarily on tumor findings—and emphasizes analysis of the agent’s mode of action in leading to tumor development. “Mode of action” is defined as a series of key events and processes, starting with interaction of an agent with a cell and proceeding through operational and anatomical changes resulting in cancer formation. EPA starts with a review and assessment of the toxicological database to identify the type and magnitude of possible adverse health effects associated with a chemical. Exposure to a given chemical might result in a variety of toxic effects, so EPA has produced separate guidelines for the assessment of mutagenicity, developmental toxicity, neurotoxicity, and reproductive toxicity. However, assessments for these noncancer health effects may also overlap. For example, developmental effects might be traced to exposures and factors also covered by reproductive toxicity assessments, and developmental exposures may result in genetic damage that would require evaluation of mutagenicity risks. 
The EPA guidelines for noncancer effects are not step-by-step manuals, and they do not prescribe a hazard identification classification scheme. Instead, they focus on providing general advice to risk assessors on different types of toxicity tests or data and on the appropriate toxicological interpretation of test results (e.g., which outcomes should be considered adverse effects). In addition to considering the types and severity of potential adverse effects, hazard identification would also consider and describe the nature of exposures associated with these effects. A review of the full range of possibilities would consider:
- acute effects—generally referring to effects associated with exposure to one dose or multiple doses within a short time frame (less than 24 hours, for example);
- short-term effects—associated with multiple or continuous exposure occurring within a slightly longer time frame, usually over a 14-day to 28-day time period;
- subchronic effects—associated with repeated exposure over a limited period of time, usually over 3 months; and
- chronic effects—associated with continuous or repeated exposure to a chemical over an extended period of time or a significant portion of the subject’s lifetime.

Procedurally, there is an important variation from the distinct four steps of the risk assessment paradigm. In its guidelines, EPA notes that its normal practice for assessments of noncancer health effects is to do hazard identification in conjunction with the analysis of dose-response relationships. This is because the determination of a hazard is often dependent on whether a dose-response relationship is present. According to EPA, this approach has the advantages of (1) reflecting hazards in the context of dose, route, duration, and timing of exposure; and (2) avoiding the potential to label chemicals as toxicants on a purely qualitative basis.
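As a rough illustration of these duration categories, the sketch below encodes them as a simple classifier. The numeric cutoffs are one reading of the guidelines' qualitative descriptions ("usually," "for example"), not official thresholds; real assessments apply judgment rather than hard boundaries.

```python
def classify_exposure_duration(days: float) -> str:
    """Illustrative classifier for exposure-duration categories.

    Cutoffs are assumptions drawn from the qualitative descriptions in
    EPA's guidelines, not official regulatory thresholds.
    """
    if days < 1:
        return "acute"          # one or more doses within roughly 24 hours
    elif days <= 28:
        return "short-term"     # usually a 14- to 28-day window
    elif days <= 90:
        return "subchronic"     # repeated exposure, usually about 3 months
    else:
        return "chronic"        # extended period / much of a lifetime

print(classify_exposure_duration(0.5))       # acute
print(classify_exposure_duration(21))        # short-term
print(classify_exposure_duration(60))        # subchronic
print(classify_exposure_duration(365 * 10))  # chronic
```

Note that the source text leaves the span between 1 and 14 days undefined; the sketch lumps it into "short-term" purely for completeness.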
Risk assessors conducting dose-response assessments must make basic choices regarding which data to base analyses upon and which models and assumptions to use for extrapolation of study results to the potential human exposures of regulatory interest. Data choices focus on the availability and quality of human or animal studies. Three of the more important extrapolation tasks are estimation of low-dose relationships (i.e., those that fall below the range of observation in the studies supporting the agency’s analysis), calculation of toxicologically equivalent doses when dose-response data from animal studies are applied to human exposures, and extrapolating results from data on one route of exposure to another route. The two main types of studies that provide data useful in a quantitative dose-response assessment are (1) epidemiological studies of human populations and (2) toxicological laboratory studies using animals or, sometimes, human cells. Epidemiological studies examine the occurrence of adverse health effects in human populations and attempt to identify the causes. At a minimum, such studies can establish a potential link between exposures to chemical agents and the occurrence of particular adverse effects by comparing differences in exposed and nonexposed populations. If there is adequate information on the exposure levels associated with adverse effects, these studies can also provide the basis for a dose- response assessment. Because such data obviate the need to extrapolate from animals to humans, EPA (like other agencies) prefers to use data from epidemiological studies, if available. Often, however, the available data for dose-response assessment will come from animal studies. A common assumption underlying risk assessments by EPA (and other agencies) is that an agent that produces adverse effects in animals will pose a potential hazard to humans. 
EPA’s guidelines emphasize that case-specific judgments are necessary in considering the relevance of particular studies and their data. However, in the absence of definitive information to the contrary, EPA’s guidelines establish some standard default choices to assist risk assessors in selecting which studies and data to use. (See the section on assumptions in this appendix for more information on such default choices and assumptions.) Quantifying risks engenders another set of issues and choices. In particular, some type of low-dose extrapolation is usually necessary, given that the doses observed in studies tend to be higher than the levels of exposure of regulatory concern. There are limits to the ability of both epidemiological and toxicological studies to detect changes in the likelihood of health effects with acceptable statistical precision, especially at the low-dose exposures typical of most environmental exposures and given practical limits to the sizes of research studies. A number of different models might be used for extrapolation, all giving plausible results. In its proposed revision of the carcinogen risk assessment guidelines, EPA identifies use of a biological extrapolation model as the preferred approach for quantifying risk. Such models integrate events in the carcinogenic process throughout the dose-response range from high to low doses and include physiologically based pharmacokinetic (PBPK) and biologically based dose-response models. PBPK models address the exposure-dose relationship in an organism taken as a whole, estimating the dose to a target tissue or organ by taking into account rates of absorption into the body, metabolism, distribution among target organs and tissues, storage, and elimination of an agent. Biologically based dose-response models describe specific biological processes at the cellular and molecular levels that link target-organ dose to the adverse event. 
These models are useful in extrapolation between animals and humans and between children and adults because they allow consideration of species- and age-specific data on physiological factors affecting dose levels and responses. However, biological models require substantial quantitative data and adequate understanding of the carcinogenic process for a specific agent. EPA cautions that the necessary data for using such models will not be available for most chemicals. Therefore, the agency’s guidelines describe alternative methods. Dose-response assessment is a two-step process when a biologically based model is not used. The first step is the assessment of observed data to derive a point of departure, and the second step is extrapolation from that point of departure to lower (unobserved) exposures. According to EPA guidelines, the agency’s standard point of departure for animal studies is the lower 95-percent confidence limit (the LED10) on the effective dose associated with 10-percent extra risk (the ED10) compared to the control group. The LED10 is chosen to account protectively for experimental variability and is an appropriate representative of the lower end of the observed range, because the limit of detection in studies of tumor effects is about 10 percent. EPA may use a lower point of departure for data from human studies of a large population or from animal studies when such data are available. For the extrapolation step, EPA’s proposed guidelines provide three default approaches which assume, respectively, that the dose-response relationship is linear, nonlinear, or both. The choice of which default approach to apply is to be based on the available information on the mode(s) of action of the chemical agent. EPA’s program offices usually perform the exposure assessment step, given the different exposure scenarios of interest for the separate regulatory programs.
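Under the linear default, an upper-bound potency estimate is obtained by drawing a straight line from the LED point of departure to the origin; the slope of that line is then applied at low doses. A minimal sketch, using hypothetical numbers rather than values from any actual EPA assessment:

```python
# Illustrative sketch of the linear default extrapolation.
# All numbers are hypothetical, not from any actual assessment.

led10 = 5.0  # LED10: lower 95% bound on dose giving 10% extra risk (mg/kg-day)

# Straight line from the point of departure (10% extra risk at the LED10)
# down to the origin; the slope is an upper-bound potency estimate.
slope_factor = 0.10 / led10  # extra risk per mg/kg-day

# Extra risk at an environmentally relevant (low) dose:
dose = 0.001  # mg/kg-day
extra_risk = slope_factor * dose

print(f"slope factor: {slope_factor} per mg/kg-day")       # 0.02
print(f"extra risk at {dose} mg/kg-day: {extra_risk:.1e}")  # 2.0e-05
```

The nonlinear default, by contrast, does not produce a slope at all; it typically leads to a margin-of-exposure or reference-dose style analysis instead.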
However, EPA has published agencywide guidelines for exposure assessment that describe general principles and practices for conducting such assessments. The focus of EPA’s guidelines is on human exposures to chemical substances, but the agency noted that much of the guidance also applies to wildlife exposure to chemicals or human exposure to biological, physical (e.g., noise), or radiological agents. EPA points out, though, that assessments in these other areas must consider additional factors that are beyond the scope of the exposure assessment guidelines. EPA’s guidelines establish a broad framework for agency exposure assessments by describing the general concepts of exposure assessment, standardizing the terminology (such as defining concepts of exposure, intake, uptake, and dose), and providing guidance on the planning and implementation of an exposure assessment. The guidelines are not, however, intended to serve as a detailed instructional guide. EPA’s guidance prescribes no standard format for presenting exposure assessment results, but recommends that all exposure assessments, at a minimum, contain a narrative exposure characterization section that:
- provides a statement of purpose, scope, level of detail, and approach used in the assessment, including key assumptions;
- presents the estimates of exposure and dose by pathway and route for individuals, population segments, and populations in a manner appropriate for the intended risk characterization;
- provides an evaluation of the overall quality of the assessment and the degree of confidence the authors have in the estimates of exposure and dose and the conclusions drawn;
- interprets the data and results; and
- communicates the results of the exposure assessment to the risk assessor, who can then use this information with the results from other risk assessment elements to develop the overall risk characterization.
The guidelines encourage agency staff to use multiple “descriptors” of both individual and population risks, rather than a single descriptor or risk value. The exposure guidelines also emphasize the use of more realistic estimates of high-end exposures than had been the case in some previous practices. In the past, EPA sometimes relied on exposure estimates derived from a hypothetical “maximally exposed individual” who might spend, for example, a 70-year lifetime drinking only groundwater with the highest concentrations of contaminants detected. According to the 1997 report of the Presidential/Congressional Commission, this approach was often based on such unrealistic assumptions that it impaired the scientific credibility of risk assessments. Now, however, EPA has adopted the use of distributions of individual exposures as the preferred practice. EPA’s guidance indicates that risk assessments should include both central estimates of exposure (based on either the mean or median exposure) and estimates of the exposures that are expected to occur in small, but definable, high-end segments of the population. EPA states that a high-end exposure estimate is to be a plausible estimate of the individual exposure for those persons at the upper end of an exposure distribution. The agency’s intent is to convey an estimate of exposure in the upper range of the distribution, but to avoid estimates that are beyond the true distribution. EPA has identified several new directions in its approach to exposure assessment. First is an increased emphasis on total (aggregate) exposure via all pathways. EPA policy directs all regulatory programs to consider in their risk assessments exposures to an agent from all sources, direct and indirect, and not just from the source that is subject to regulation by the office doing the analysis. Another area of growing attention is the consideration of cumulative risks, when individuals are exposed to many chemicals at the same time. 
The agency is also increasing its use of probabilistic modeling methods, such as Monte Carlo analysis, to analyze variability and uncertainty in risk assessments and provide better estimates of the range of exposure, dose, and risk in individuals in the population. EPA policy directs regulatory programs to pay special attention to the risks of children and infants. EPA has produced some reference documents for exposure assessments, such as the Exposure Factors Handbook. This handbook is intended to provide parameter values for use across the agency and to encourage use of reasonable exposure estimates by providing appropriate data sources and suggested methods. The handbook provides a summary of available statistical data on various factors used to assess human exposure to toxic chemicals. These factors include:
- drinking water consumption;
- soil ingestion;
- inhalation rates;
- dermal factors, including skin area and soil adherence factors;
- consumption/intake of fruits and vegetables, fish, meats, dairy products, homegrown foods, and breast milk;
- human activity patterns, such as time spent performing household tasks;
- consumer product use; and
- residential characteristics.

EPA provides recommended values for the general population and also for various segments of the population who may have characteristics different from the general population (e.g., by age, gender, race, or geographic location). EPA guidance cautions, though, that these general default values should not be used in the place of known, valid data that are more relevant to the assessment being done. The default values used in EPA risk assessments, however, sometimes vary slightly from the recommended values appearing in the handbook. For example, while the handbook’s mean recommended value for adult body weight is 71.8 kilograms (kg), the handbook also noted that a value of 70 kg has been commonly assumed in EPA’s risk assessments.
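To make the descriptor idea concrete, the sketch below pushes hypothetical exposure-factor distributions through the standard average-daily-dose form, ADD = (C x IR x EF x ED) / (BW x AT), and reports both a central and a high-end descriptor. The distribution parameters are placeholders, not the handbook's actual statistics.

```python
import random

random.seed(0)

# Monte Carlo sketch of a drinking-water average daily dose (ADD).
# ADD = (C * IR * EF * ED) / (BW * AT); all parameters below are
# hypothetical placeholders, not Exposure Factors Handbook statistics.
def simulate_add(n=10_000):
    doses = []
    for _ in range(n):
        c  = random.lognormvariate(0.0, 0.5)   # contaminant concentration (mg/L)
        ir = random.lognormvariate(0.7, 0.3)   # water intake (L/day)
        bw = random.normalvariate(71.8, 13.0)  # body weight (kg)
        ef, ed = 350, 30                       # exposure frequency (d/yr), duration (yr)
        at = 70 * 365                          # averaging time (days)
        doses.append(c * ir * ef * ed / (bw * at))
    return sorted(doses)

doses = simulate_add()
central  = doses[len(doses) // 2]         # median: a central descriptor
high_end = doses[int(0.95 * len(doses))]  # 95th percentile: a high-end descriptor
print(f"central (median): {central:.4f} mg/kg-day")
print(f"high end (95th):  {high_end:.4f} mg/kg-day")
```

Reporting the median alongside an upper percentile mirrors the guidance's preference for multiple descriptors drawn from a distribution of individual exposures, rather than a single "maximally exposed individual" value.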
Similarly, the recommended value to reflect average life expectancy of the general population is 75 years, but 70 years also has been commonly assumed in EPA risk assessments. Officials from EPA program offices pointed out that they may use different exposure factors in their risk assessments because they sometimes develop exposure assessment methods specific to their programs using different data sources or population characteristics than those used by ORD for the Exposure Factors Handbook. Ecological risk assessment is different from human health risk assessment in that it may examine entire populations of species and measure effects on partial or whole ecosystems. Often, the focus is on not just a single ecological entity, but on the potential adverse effects on multiple species and their interactions (for example, on the food chain). While human health risk assessment is primarily concerned with an agent’s toxicity to humans, ecological risk assessment might consider a range of adverse effects on natural resources (such as crops, livestock, commercial fisheries, and forests), wildlife (including plants), aesthetic values, materials or properties, and recreational opportunities. For example, a chemical agent could be considered a risk to wildlife if exposure to it caused death, disease, behavioral abnormalities, mutations, or deformities in the members of a species or their offspring. It could be considered a risk to aesthetic values if it affected the color, taste, or odor of a water source. By EPA’s definition, ecological risk assessment is a process that evaluates the likelihood that adverse ecological effects may occur or are occurring as a result of exposure to one or more “stressors.” In other words, ecological risk assessments may be prospective or retrospective, and, in many cases, both approaches are included in a single risk assessment. 
Chemicals are only one of the possible ecological stressors that EPA might consider, along with physical and biological ones. EPA’s guidance focuses on stressors and adverse ecological effects generated or influenced by human activity, which could be addressed by the agency’s risk management decisions. In comparison to human health risk assessment procedures, the approaches for ecological risk assessment are more recent and less well developed. However, as these methods have changed to incorporate and better characterize dynamic, interconnected ecological relationships, EPA has updated its guidance documents on the subject, with input from multiple interested internal and external parties. According to EPA, the solicitation of input from an array of sources is based, in part, on the need to establish a framework for characterizing risks based on numerous stressors, interconnected pathways of exposure, and multiple endpoints (adverse effects). The most recent version of EPA’s framework appears in Guidelines for Ecological Risk Assessment, published in 1998. EPA’s guidelines describe an iterative three-phase process consisting of problem formulation, analysis, and risk characterization. These guidelines incorporate many of the concepts and approaches called for in human health risk assessments. However, particularly in the addition of a problem formulation phase, the ecological risk assessment framework deviates from the standard four-step process used for human health risk assessments. EPA pointed out that, unlike human health assessments where the species of concern and the endpoints (e.g., cancer) have been predetermined, ecological risk assessments need a phase that focuses on the selection of ecological entities and endpoints that will be the subject of the assessment. Table 4 summarizes the activities and expected outcomes for each of the three phases of an ecological risk assessment. 
Prior to these phases, according to EPA, a planning stage occurs during which risk assessors, risk managers, and other interested parties are to have a dialogue and scope the problem. Among the things considered during problem formulation is the selection of assessment endpoints, which are “explicit expressions of the actual environmental value that is to be protected.” This is unlike human health assessments, where the species of concern and the endpoints have been predetermined. The selection of endpoints at EPA has traditionally been done internally by program offices, but more recently, affected parties or communities are assisting in the selection of endpoints, with their selection based on ecological relevance, susceptibility, and relevance to management goals. Furthermore, conceptual models are developed during the problem formulation phase. Such models contain risk hypotheses in the form of written descriptions and visual representations, outlining predicted relationships between ecological entities and the stressors to which they may be exposed. According to EPA, the hypotheses are in effect assumptions, being based on theory and logic, empirical data, mathematical models, probability models, and professional judgment. Subsequently, during the analysis phase, data are selected on the basis of their utility for evaluating the risk hypotheses. The major items considered during this phase are the sources and distribution of stressors in the environment, the extent of contact and stressor-response relationships, the evidence for causality, and the relationship between what was measured and the assessment endpoint(s). Field studies involving statistical techniques (i.e., correlation, clustering, or factor analysis), surveys, the formation of indices, and the use of models are approaches to evaluating the determined risk hypotheses.
(EPA’s guidance on the risk characterization phase of an ecological risk assessment is discussed in the final section of this appendix.) EPA’s various program offices generally follow the agencywide risk assessment procedures and guidelines described above. The major exception to this is the Chemical Emergency Preparedness and Prevention Office, which does not follow the NAS four-step process for its risk assessment procedures because of its focus on risks associated with accidental chemical releases. Overall, there is great diversity in the context for risk assessment activities across EPA’s program offices. Each program has different statutory mandates and risk assessment tasks associated with its specific regulatory authority, and these contribute to variations in the way the offices conduct risk assessments. In particular, there are differences in the exposure assessment step across, and sometimes within, EPA’s program offices. This is not surprising, given that EPA’s regulatory authorities regarding chemical agents primarily vary according to types and sources of exposure. Although there are overlaps in these various exposures to chemicals, EPA’s program offices generally assess and regulate different aspects of the risks associated with exposures to humans and/or the environment. There are also some variations in the conduct of hazard identification and dose-response analysis. The following sections summarize the risk assessment activities and procedures of those EPA program offices that are most likely to conduct assessments involving chemical risks. The descriptions highlight some of the major variations and similarities across the program offices. OPP is part of EPA’s Office of Prevention, Pesticides and Toxic Substances (OPPTS). The primary risk assessment-related activities of OPP are the registration of pesticides and the setting of tolerances for pesticide residues. 
Registration involves the licensing of pesticides for sale and use in agriculture and extermination. No chemical may be sold in the United States as a pesticide without such registration, which establishes the conditions of legal use. All uses within the scope of the registration conditions and limits are permissible, although actual practice may vary. Pesticide tolerances are the concentrations (maximum pesticide residue levels) permitted to remain in or on food, as it is available to the consumer. Registrations and tolerances are obtained through petitions to OPP. The petitioner has the primary responsibility to provide the data needed to support registration and tolerances, including information on the toxicological effects of the pesticide. There are three major risk statutes affecting EPA’s actions regarding pesticides. Registration is carried out under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). Tolerances are established under the Federal Food, Drug, and Cosmetic Act (FFDCA). In 1996, Congress amended both FIFRA and FFDCA through the FQPA, which mandated some key changes in risk assessment of pesticides. Major features and characteristics of chemical risk assessment by OPP are summarized below. OPP conducts all steps of risk assessments. Because OPP generally follows the NAS four-step process for human health risk assessment and the EPA-wide risk assessment guidelines, most of its procedures mirror those used elsewhere in the agency. OPP officials noted that, over the last three decades, their office has developed a rigorous process to support the development of chemical risk assessments. This process includes regulations to establish baseline data requirements and published guidelines for conducting required studies. OPP officials emphasized the transparency of the process used to develop EPA’s risk assessment procedures and the transparency of the procedures EPA uses to make decisions on the risk of individual pesticides. 
As an example, they noted that their program has consulted with outside experts and asked for public comment on its guidelines for reviewing studies, science policies for assessing the significance of study data, and standard operating procedures for implementing these policies in the development of a hazard identification or exposure assessment for a chemical. They also pointed out that OPP adopted a public participation process for reregistration and tolerance reassessment decisions on registered pesticides and that they publish for public comment proposed tolerances for proposed new uses of pesticides. In some circumstances, OPP consults with outside experts concerning a risk assessment of an individual pesticide. Pesticide registration decisions are based primarily on OPP’s evaluation of the test data provided by petitioners (applicants). EPA has established a number of requirements, such as the Good Laboratory Practice Standards, to ensure the quality and integrity of pesticide data. OPPTS has also developed harmonized test guidelines for use in the testing of pesticides and toxic substances and the development of test data that must be submitted to EPA for review under federal regulations. Depending on the type of pesticide, OPP can require more than 100 different tests to determine whether a pesticide has the potential to cause adverse effects to humans, wildlife, fish, and plants. The FQPA established a single, health-based standard—“reasonable certainty of no harm”—for pesticide residues in all foods. All existing tolerances that were in effect when the FQPA was passed are to be reevaluated by 2006 to ensure that they meet the new safety standard. The law requires EPA to place the highest priority for tolerance reassessment on pesticides that appear to pose the greatest risk. To make the finding of “reasonable certainty of no harm,” OPP considers:
1. the toxicity of the pesticide and its break-down products;
2. how much of the pesticide is applied and how often; and
3. how much of the pesticide remains in or on food by the time it is marketed and prepared (the residue).

Among other key changes affecting OPP’s risk assessments when setting tolerances, the FQPA requires the agency to:
1. Explicitly address risks to infants and children and publish a specific safety finding before a tolerance can be established. The act also requires an additional tenfold uncertainty factor (unless reliable data show that a different factor will be safe) to account for the possibly greater sensitivity and exposure of children to pesticides.
2. Consider aggregate exposure from a pesticide, including all anticipated dietary and all other exposures for which there is reliable information. These include exposures through food, drinking water, and nondietary exposures encountered through sources in the home, recreational areas, and schools.
3. Consider cumulative exposures to pesticides with a common mechanism of toxicity, which previously had been considered separately.

Title III of the FQPA also requires certain data collection activities of the Secretary of Agriculture, in consultation or cooperation with the Administrator of EPA and the Secretary of Health and Human Services, regarding food consumption patterns, pesticide residue levels, and pesticide use that, according to EPA, affect its risk assessments when setting tolerances. Also as a result of the FQPA, OPP uses a population adjusted dose (PAD), calculated by dividing the acute or chronic reference dose by the FQPA uncertainty factor. According to OPP officials, this allowed OPP to be consistent with the rest of the agency regarding setting RfDs, but still use the FQPA factor for regulating pesticides. OPP is concerned with both cancer and noncancer toxicity.
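The PAD computation described above is straightforward arithmetic; a hedged sketch with hypothetical values follows. The tenfold uncertainty factors shown are common defaults, but the actual factors (and whether the full tenfold FQPA factor applies) are chemical-specific determinations.

```python
# Hedged sketch of the population adjusted dose (PAD) arithmetic.
# All values are hypothetical placeholders.

noael = 10.0          # no-observed-adverse-effect level from an animal study (mg/kg-day)
uf_interspecies = 10  # animal-to-human uncertainty factor (common default)
uf_intraspecies = 10  # human-variability uncertainty factor (common default)

rfd = noael / (uf_interspecies * uf_intraspecies)  # reference dose (mg/kg-day)

fqpa_factor = 10  # additional children's safety factor, unless data support another value
pad = rfd / fqpa_factor  # population adjusted dose (mg/kg-day)

print(f"RfD: {rfd} mg/kg-day")  # RfD: 0.1 mg/kg-day
print(f"PAD: {pad} mg/kg-day")  # PAD: 0.01 mg/kg-day
```

The same arithmetic applies whether the starting point is an acute or a chronic reference dose, yielding an acute PAD (aPAD) or chronic PAD (cPAD) respectively.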
However, for noncancer effects, OPP has paid special attention to neurotoxicity (because many pesticides work through this mechanism) and, more recently, to endocrine disrupting effects (those affecting the body’s hormone system). OPP officials noted that, while their agency has made important use of “real life” monitoring or incident data, it primarily relies on studies conducted in laboratory animals and on laboratory or limited field studies. They stated that, in their experience, “real life” data have profound limitations: such data are often inconsistent, expensive, and inconclusive, and are not available for premarket decision making. They said that, most importantly, by the time there are observable health or environmental effects, it is too late to prevent the harm that could have been predicted from judicious use of animal or environmental fate studies conducted in the laboratory. During the exposure assessment step, OPP is concerned with a variety of routes, sources, and types of exposure. The three routes by which people can be exposed to pesticides are inhalation, dermal (absorbing pesticides through the skin), and oral (getting pesticides in the mouth or digestive tract). Depending on the situation, a pesticide could enter the body by any one or all of these routes. Typical sources of pesticide exposure include food, home and personal use of pesticides, drinking water, and work-related exposure to pesticides (in particular, to pesticide applicators or vegetable and fruit pickers). In its approach to exposure assessment, OPP distinguishes between residential and occupational types of exposures. OPP officials noted that their program is further developing procedures to conduct drinking water exposure assessments and residential exposure assessments and that they have new procedures for ecological risk assessments.
OPP calculates estimates of acute (i.e., short-term) pesticide exposure slightly differently from those for chronic (i.e., longer-term) exposures. This is because an acute assessment estimates how much of a pesticide residue might be consumed in a single day, while a chronic assessment estimates how much might be consumed on a daily basis over the course of a lifetime. In an important difference, acute assessments are based on high-end individual exposure assumptions, while chronic assessments use average exposure assumptions. In assessing both acute and chronic risks, OPP uses a tiered approach, starting with an initial screening tier and proceeding through progressively more elaborate risk assessments, if needed. The analytical tiers proceed from more conservative to less conservative assumptions. For the first-tier risk assessment, OPP uses “worst-case” assumptions (e.g., that pesticide residues are at tolerance levels and that 100 percent of the food crop is treated with the pesticide) that give only an upper-bound estimate of exposure. For more refined analyses, OPP officials noted that they have new procedures for conducting probabilistic dietary exposure assessments. Generally, the level of resources and the data needed to refine exposure estimates increase with each tier. Typically, if risks from pesticide residues are not of concern using lower-tier exposure estimates, OPP does not make further refinements through additional tiers. However, with the aggregate and cumulative exposure assessments now required by the FQPA, EPA notes that it is likely that higher-tier exposure estimates will be needed. The agency has developed procedures for modeling the environmental fate of pesticides. 
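A first-tier screen of the kind described, with residues assumed to be at tolerance levels and 100 percent of each crop treated, reduces to simple arithmetic. In the sketch below, the commodities, tolerances, consumption figures, and aPAD are all hypothetical illustrations, not values from any actual assessment.

```python
# Hypothetical first-tier ("worst-case") acute dietary screening sketch:
# assume residues are at tolerance and 100% of each crop is treated.

# (commodity, tolerance in mg/kg, single-day consumption in kg) -- illustrative
diet = [
    ("apples",   0.5, 0.30),
    ("lettuce",  2.0, 0.10),
    ("potatoes", 0.2, 0.25),
]
body_weight = 70.0  # kg, a commonly assumed adult default

# Upper-bound exposure: sum of (tolerance * consumption) over the diet,
# normalized by body weight.
exposure = sum(tol * cons for _, tol, cons in diet) / body_weight  # mg/kg-day

a_pad = 0.01  # hypothetical acute population adjusted dose (mg/kg-day)
percent_of_pad = 100 * exposure / a_pad
print(f"exposure: {exposure:.4f} mg/kg-day ({percent_of_pad:.0f}% of aPAD)")
# exposure: 0.0057 mg/kg-day (57% of aPAD)
```

If a screen like this shows exposure well below the aPAD, no further refinement is needed; if not, higher tiers would replace the worst-case assumptions with measured residues, actual percent of crop treated, or probabilistic consumption data.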
OPP officials said that these models use real data on the physical and chemical properties of the pesticide, information on the proposed or actual uses of the pesticide, and real data on the movement of pesticides or other materials through soil, air, water, skin, textiles, or other media to predict potential exposures to a pesticide. These models are guided by scientific judgments that are based upon data and scientists’ experience in drawing inferences from these data. OPPT (formerly the Office of Toxic Substances) is also part of OPPTS. OPPT was established to implement the Toxic Substances Control Act (TSCA), which authorizes EPA to screen existing and new chemicals used in manufacturing and commerce to identify potentially dangerous products or uses. TSCA focuses on the properties of a chemical and paths of exposure to that chemical. Risk assessment activities are primarily related to four sections of TSCA: Section 4 directs EPA to require manufacturers and processors to conduct tests for existing chemicals when: (1) their manufacture, distribution, processing, use, or disposal may present an unreasonable risk of injury to health or the environment; or (2) they are to be produced in substantial quantities and the potential for environmental release or human exposure is substantial or significant. Under either condition, EPA must issue a rule requiring testing if existing data are insufficient to predict the effects of human exposure and environmental releases and testing is necessary to develop such data. Rhomberg pointed out that these conditions require OPPT to do some preliminary risk assessment and that, unlike testing mandates under other statutes (e.g., regarding pesticides), the agency has the burden of showing that such testing is necessary. Section 5 addresses future risks through EPA’s premanufacture screening—the premanufacture notification (PMN) process. This also applies to a “significant new use” of an existing chemical. 
Section 6 directs EPA to control unreasonable risks presented or that will be presented by existing chemicals. Section 8 requires EPA to gather and disseminate information about chemical production, use, and possible adverse effects to human health and the environment. This section requires EPA to develop and maintain an inventory of all chemicals, or categories of chemicals, manufactured or processed in the United States. All chemicals not on the inventory are, by definition, “new” and subject to the notification provisions of section 5. Once a chemical enters commerce through the section 5 process, it is listed as an existing chemical. Although TSCA gives EPA general authority to seek out and regulate any “unreasonable risk” associated with new or existing chemicals, there are two major limitations on the agency’s regulatory actions. First, as implemented by EPA, regulation under TSCA involves both a balancing of risks against benefits and application of the least burdensome requirement needed to address the risk. The term “unreasonable risk” is not defined in TSCA. However, according to EPA, the legislative history indicates that unreasonable risk involves the balancing of the probability that harm will occur, and the magnitude and severity of that harm, against the effect of a proposed regulatory action on the availability to society of the expected benefits of the chemical substance. The second major limitation on EPA’s authority under TSCA is a requirement to defer to other federal laws. Generally, if a risk of injury to health or the environment could be eliminated or reduced to a sufficient extent by actions taken under another federal law, that other law must be deferred to unless it can be shown to be in the public interest to regulate under TSCA. The major distinction in the procedures that apply to OPPT risk assessments is between the evaluation of potential risks associated with exposures to new versus existing chemicals.
For EPA to control the use of a chemical listed on the inventory of existing chemicals, according to OPPT, a legal finding has to be made that the chemical will present an unreasonable risk to human health or the environment. According to OPPT, this standard requires the agency to have conclusive data on that particular chemical. The agency noted, in comparison, that newly introduced chemicals (or uses) can be regulated under TSCA based on whether they may present an unreasonable risk, and this finding of risk can be based on data for structurally similar chemicals. Because industrial chemicals in commerce in 1975-1977 were “grandfathered” into the inventory without considering whether they were hazardous, there are situations in which existing chemicals might not be controlled, while EPA would act to control a new chemical of similar or less toxicity under the PMN program. Additional information on the major features and characteristics of assessments for new versus existing chemicals is presented below.

Premanufacture notification for new chemicals or significant new uses

TSCA requires manufacturers, importers, and processors to notify EPA at least 90 days prior to introducing a new chemical into U.S. commerce or undertaking a significant new use of a chemical already listed on the TSCA inventory. If available, test data and information on the chemical’s potential adverse effects on human health or the environment are to be submitted to EPA. Much of this submission must be kept confidential by OPPT. However, there is no defined toxicity data set required before PMN, and, unless EPA promulgates a rule requiring the submission of test data, TSCA does not require prior testing of new chemicals. Consequently, according to EPA, less than half of the PMNs submitted include toxicological data. OPPT reviews approximately 1,500 PMNs annually. EPA has 90 days after notification to evaluate the potential risk posed by the chemical.
EPA must then decide whether to (1) permit manufacture and distribution (the default if EPA takes no action), (2) suspend manufacture and distribution or restrict use pending the development of further data, or (3) initiate rulemaking to regulate manufacture or distribution. OPPT typically has very limited chemical-specific data on toxic effects and exposure associated with new chemicals. When no data exist on the effects of exposure to a chemical, EPA may base its determination on what is known about the chemical’s molecular structure (called the structure-activity relationship, or SAR) and the effects of other chemicals that have similar structures and are used in similar ways. OPPT’s New Chemicals Program has issued a document entitled Chemical Categories that describes information for numerous classes of chemicals. In assessing exposures for new chemicals where exposure monitoring data are unavailable, OPPT uses several screening-level approaches, including (1) estimates based on data on analogous chemicals; (2) generic scenarios (i.e., standardized approaches for assessing exposure and release for a given use scenario); (3) mathematical models based on empirical and theoretical data and information; and (4) assumptions of compliance with regulatory limits, such as OSHA Permissible Exposure Limits (PELs). Rhomberg noted that OPPT cannot require full testing for all chemicals because of statutory limitations under TSCA. He therefore characterized OPPT’s assessments as “rough screens” designed to flag situations in which further testing should be required. Chemicals that OPPT assesses for regulation under sections 4 or 6 of TSCA are subject to a more rigorous risk assessment process. Compared to PMN reviews, such assessments are much more similar to those conducted elsewhere in EPA, so the EPA-wide guidelines generally apply. For hazard identification and dose-response assessment of carcinogens and noncancer effects, OPPT follows EPA-wide procedures.
Because TSCA focuses on the properties of a chemical, rather than on a specific pathway or mode of exposure, OPPT considers the potential hazards posed through multiple routes of exposure. In the absence of information to the contrary, OPPT typically presumes that the results for one route are applicable to other routes. Similarly, in exposure assessment OPPT considers a variety of types and routes of exposure. Unlike other programs that focus on exposure through one medium, assessments under TSCA must assess all potential exposures to a chemical that may lead to unreasonable risk, considering, for example, both residential and occupational exposures. These risks may be assessed separately for each mode of exposure, even if occurring in the same setting. Overall, OPPT aims to provide both central estimates and upper-bound estimates of exposure, and it considers population risks as well as individual risks. OPPT shares overlapping concerns about a number of different kinds of exposure with other federal regulatory agencies. However, some aspects of OPPT’s exposure assessments may differ from those of other programs or agencies concerned with similar exposures. For example, with regard to occupational exposures OPPT assumes that a working lifetime is 40 years, rather than the 45 years assumed by OSHA. Another example is the assumption of body weight; OPPT uses 70 kg, whereas ORD recommends a value of 71.8 kg in its Exposure Factors Handbook. In addition to the assessment of chemicals for regulation under sections 4 and 6 of TSCA, OPPT has recently launched a new program to voluntarily collect screening-level hazard information on approximately 2,800 high-production-volume industrial chemicals and has proposed a second new voluntary program to address the risks of certain industrial chemicals to which children may be exposed. These two new programs operate under the same risk assessment processes used in the other OPPT programs noted above.
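The effect of the differing default assumptions noted above (a 40- versus 45-year working lifetime, a 70 kg versus 71.8 kg body weight) can be made concrete with a simple lifetime average daily dose calculation. The exposure inputs below are hypothetical, and the dose equation is a generic textbook form rather than OPPT’s actual model.

```python
def ladd_mg_per_kg_day(air_conc_mg_m3, inhalation_m3_day, days_per_year,
                       exposure_years, body_weight_kg, lifetime_years=70):
    """Generic lifetime average daily dose from inhalation exposure."""
    total_intake_mg = (air_conc_mg_m3 * inhalation_m3_day *
                       days_per_year * exposure_years)
    return total_intake_mg / (body_weight_kg * lifetime_years * 365)

# Hypothetical workplace exposure scenario shared by both calculations.
common = dict(air_conc_mg_m3=0.1, inhalation_m3_day=10, days_per_year=250)

forty_year = ladd_mg_per_kg_day(**common, exposure_years=40,
                                body_weight_kg=70.0)
forty_five_year = ladd_mg_per_kg_day(**common, exposure_years=45,
                                     body_weight_kg=70.0)

print(f"40-year working lifetime: {forty_year:.2e} mg/kg/day")
print(f"45-year working lifetime: {forty_five_year:.2e} mg/kg/day")
print(f"relative difference: {forty_five_year / forty_year - 1:.1%}")
```

Even with every other input held fixed, the five-year difference in assumed working lifetime shifts the dose estimate by 12.5 percent, which is why such interagency differences in defaults matter when comparing assessments.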
OERR is part of EPA’s Office of Solid Waste and Emergency Response (OSWER). Risk assessments are a required component of a larger remediation process established by the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA or Superfund), as amended by the Superfund Amendments and Reauthorization Act of 1986 (SARA). Congress enacted CERCLA to facilitate the cleanup of hazardous waste sites. The act gave EPA broad authority to respond to releases of hazardous substances. SARA requires EPA to emphasize cleanup remedies that treat—rather than simply contain—contaminated waste to the maximum extent practicable and to use innovative waste treatment technologies. Hazardous substances are defined by CERCLA to include substances identified under the Solid Waste Disposal Act, the Clean Water Act, the Clean Air Act, and the Toxic Substances Control Act, or designated by EPA. After investigating potentially hazardous sites, EPA ranks them according to the severity of their waste problems and places the worst on its National Priorities List for Superfund cleanup. Under CERCLA section 105, EPA uses a Hazard Ranking System to decide which sites to include on the list. Section 105 states that priorities are to be based upon relative risk or danger to public health or welfare or the environment, taking into account the population at risk, the hazard potential of the hazardous substances, and the potential for contamination of air and drinking water, among other factors. OERR has developed a human health and environmental evaluation process as part of its remedial response program. Major features and characteristics of the Superfund risk assessment procedures are summarized below. Overall, the risk scenarios for Superfund can be very complex. Superfund sites are often associated with multiple potential pathways and routes of exposure, and mixtures of chemicals at Superfund sites are common. 
In addition, the Superfund program is required to consider ecological as well as human health risks. A risk assessment is performed after a particular site has been identified according to the National Contingency Plan, EPA’s regulation outlining requirements relevant to response action(s) for hazardous substances. The remedial response process under the National Contingency Plan—and the role of risk information in the process—is summarized in the following seven steps:
1. Site discovery or notification: report determinations about which substances are hazardous.
2. Preliminary assessment and site inspection: collect and review all available information to evaluate the source and nature of hazardous substances.
3. Hazard ranking system: compile data from steps one and two in a numerical scoring model to determine a relative risk measure.
4. Possible inclusion of site on the National Priorities List based on one of the following criteria: the release scores sufficiently high pursuant to the Hazard Ranking System; a state designates a release as its highest priority; or the release satisfies all of the following criteria: the Agency for Toxic Substances and Disease Registry has issued a health advisory that recommends dissociation of individuals from the release, EPA determines that the release poses a significant threat to public health, and EPA anticipates that it will be more cost-effective to use its remedial authority than to use removal authority to respond to the release.
5. Remedial investigation and feasibility study: characterize the contamination at the site, obtaining data to identify, evaluate, and select cleanup alternatives.
6. Selection of a remedy: choose a remedy that is protective of human health and the environment by eliminating, reducing, or controlling risks posed through each pathway, using risk information obtained during step five.
7. Five-year review.
One intended result of the remedial steps is the facilitation of a site-specific baseline risk assessment, designed to support risk management decision making. Human health and ecological risk assessments occur during step five, the Remedial Investigation/Feasibility Study stage. For human health risk assessments, Superfund procedures approximate the NAS paradigm, using the following four stages.
1. A data collection and evaluation stage that involves: gathering and analyzing site data relevant to the human health evaluation, and identifying substances present at the site that are the focus of the risk assessment process.
2. An exposure assessment that involves: analyzing contaminant releases, identifying exposed populations, identifying potential exposure pathways and estimating exposure concentrations for pathways, and estimating contaminant intakes for pathways.
3. A toxicity assessment stage that considers: types of adverse health effects associated with chemical exposures, relationships between magnitude of exposure and adverse effects, related uncertainties such as the weight of evidence of a particular chemical’s carcinogenicity in humans, and existing toxicity information developed through hazard identification and dose-response assessment.
4. A risk characterization that involves: characterizing the potential for adverse health effects (cancer or noncancer) to occur, evaluating uncertainty, and summarizing risk information.
For ecological risk assessments, EPA’s guidelines suggest that Superfund remedial actions generally should not be designed to protect organisms on an individual basis, but should protect local populations and communities of biota. Furthermore, except for a few very large sites, Superfund ecological risk assessments typically do not address effects on entire ecosystems.
Instead, they gather data regarding the effects on individuals in order to predict or postulate potential effects on local wildlife, fish, invertebrate, and plant populations and communities that occur or that could occur in specific habitats at sites (e.g., a wetland, floodplain, stream, estuary, or grassland). Specifically, the guidelines recommend that ecological risk assessments performed at every site follow an eight-step process:
1. Screening-level problem formulation and ecological effects evaluation: site history, site visit, problem formulation, and ecological effects evaluation.
2. Screening-level exposure estimate and risk calculation: exposure estimate, and risk calculation.
3. Baseline risk assessment problem formulation: ecotoxicity literature review, exposure pathways, assessment endpoints and conceptual model, and risk questions.
4. Measurement endpoints and study design.
5. Verification of field sampling design.
6. Site investigation and data analysis.
7. Risk characterization.
8. Risk management.
OERR uses a tiered approach for Superfund risk assessments, in which the agency employs more conservative methods and assumptions in the initial screening phases, followed by a more rigorous, multistage risk assessment if screening results indicate the need. Under Superfund, decisions generally are made on a site-by-site basis. According to agency officials, early activities at Superfund sites are often based on initial tier screening. However, they pointed out that the remedial cleanup decision is supported by a site-specific risk assessment that is usually quite detailed, with either site-specific exposure assumptions or national default assumptions appropriate to the site, which result in “high-end” reasonable risk estimates.
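The tiered logic described above, a conservative screen first with refinement only when the screen indicates potential concern, can be sketched generically. The hazard-quotient form, the intake defaults, and the concentrations below are hypothetical placeholders, not Superfund guidance values.

```python
RFD = 0.02  # hypothetical oral reference dose, mg/kg/day

def hazard_quotient(conc_mg_l, intake_l_day, body_weight_kg=70.0):
    """Ratio of estimated daily dose to the reference dose; >= 1 flags concern."""
    dose = conc_mg_l * intake_l_day / body_weight_kg
    return dose / RFD

def assess(site):
    # Tier 1 screen: maximum detected concentration, high-end water intake.
    hq = hazard_quotient(site["max_conc"], intake_l_day=2.3)
    if hq < 1.0:
        return "screened out", hq
    # Refined tier: site-specific average concentration and intake rate.
    hq = hazard_quotient(site["avg_conc"], site["intake"])
    return ("potential concern" if hq >= 1.0 else "refined out"), hq

result, hq = assess({"max_conc": 1.0, "avg_conc": 0.02, "intake": 1.4})
print(result, round(hq, 3))
```

Here the conservative screen exceeds a hazard quotient of 1, so the refined, site-specific tier is run; only if that refined estimate also indicated concern would the (more costly) next stage proceed.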
Although the Superfund program initially employed an approach of using a hypothetical “worst case” scenario for exposure assessments, EPA’s exposure assessment guidance now emphasizes use of a more realistic upper-bound exposure scenario. The EPA guidelines emphasize that this exposure scenario should be in the range of plausible real exposures, and also call for a central tendency case. In addition, guidelines put forth by the Superfund program office emphasize streamlining the process and reducing the cost and time required, focusing on providing information necessary to justify action and select the best remedy for a Superfund site. In doing so, Superfund guidelines suggest using standardized assumptions, equations, and values wherever appropriate. The Superfund program uses extensive additional program-specific guidance documents addressing human health and ecological risk assessments, as well as analytical tools, such as probabilistic analysis. These documents supplement applicable EPA-wide guidelines. The Superfund guidelines for human health risk assessment, for example, cover developing a baseline risk assessment (Part A), developing or refining preliminary remediation goals (Part B), performing a risk evaluation of remedial alternatives (Part C), and standardizing, planning, reporting, and completing a review (Part D). There are also other headquarters and regional office documents that further supplement the program-specific guidelines and manuals. The Office of Solid Waste (OSW), like OERR, is part of OSWER. OSW regulates the management of solid waste and hazardous waste through federal programs established by the Resource Conservation and Recovery Act of 1976, as amended (RCRA).
Congress enacted RCRA to protect human health and the environment from the potential hazards of waste disposal, conserve energy and natural resources, reduce the amount of waste generated, and ensure that wastes are managed in a manner that is protective of human health and the environment. The act defines solid and hazardous waste, authorizes EPA to set standards for facilities that generate or manage hazardous waste, and establishes a permit program for hazardous waste treatment, storage, and disposal facilities. The RCRA hazardous waste program has a “cradle to grave” focus, regulating facilities that generate, transport, treat, store, or dispose of hazardous waste from the moment it is generated until its ultimate disposal or destruction. RCRA regulations interact closely with other environmental statutes, especially CERCLA. EPA notes that both programs are similar in that they are designed to protect human health and the environment from the dangers of hazardous waste, but each has a different regulatory focus. RCRA mainly regulates how wastes should be managed to avoid potential threats to human health and the environment. On the other hand, according to EPA, CERCLA is relevant primarily when mismanagement occurs or has occurred, such as when there has been a release or a substantial threat of a release in the environment of a hazardous substance. Regulatory activity under RCRA focuses primarily on specifying procedures and technology to be used to ensure proper handling and disposal of wastes, but risk assessments play a role in several supporting tasks, particularly those involving hazardous waste regulation under RCRA Subtitle C. For example, risk assessment information may be used in the processes for defining (and delisting) substances as hazardous wastes, evaluating the hazards posed by waste streams, assessing the need for corrective action at disposal sites, and granting waste disposal permits (such as incinerator permits). 
In its RCRA Orientation Manual, OSW expressed an increasing emphasis on making the RCRA hazardous waste program more risk based (with the intention of ensuring that the regulations correspond to the level of risk posed by the hazardous waste being regulated). Major features and characteristics of risk assessment for hazardous waste regulation are summarized below. Making the determination of whether a substance is a hazardous waste is a central component of the waste management program. The Subtitle C program includes procedures to facilitate this identification and classification of hazardous waste. Under the RCRA framework, hazardous wastes are a subset of solid wastes. In RCRA §1004(5), Congress defined hazardous waste as a solid waste, or combination of solid wastes, which because of its quantity, concentration, or physical, chemical, or infectious characteristics may: cause, or significantly contribute to, an increase in mortality or an increase in serious irreversible, or incapacitating reversible, illness; or pose a substantial present or potential hazard to human health or the environment when improperly treated, stored, transported, or disposed of, or otherwise managed. EPA developed more specific criteria for defining hazardous waste using two different mechanisms: (1) listing certain specific solid wastes as hazardous and (2) identifying characteristics (physical or chemical properties) which, when exhibited by a solid waste, make it hazardous. The agency has done so, and risk assessment information may be used to support both mechanisms. “Listed wastes” are wastes from generic industrial processes, wastes from certain sectors of industry, and unused pure chemical products and formulations. EPA uses four criteria to decide whether or not to list a waste as hazardous. 1. 
The waste typically contains harmful chemicals (and exhibits other factors, such as risk and bioaccumulation potential) which indicate that it could pose a threat to human health and the environment in the absence of special regulation. Such wastes are known as toxic listed wastes. 2. The waste contains such dangerous chemicals that it could pose a threat to health or the environment even when properly managed. These wastes are fatal to humans and animals even in small doses and are known as acute hazardous wastes. 3. The waste typically exhibits one of the four characteristics of hazardous waste: ignitability, corrosivity, reactivity, and toxicity. 4. EPA has cause to believe that, for some other reason, the waste typically fits within the statutory definition of hazardous waste. Listed hazardous wastes can exit Subtitle C regulation through a site- specific delisting process initiated by a petition from a waste handler to an EPA region or a state. The petition must demonstrate that, even though a particular waste stream generated at a facility is a listed hazardous waste, it does not pose sufficient hazard to merit RCRA regulation. “Characteristic wastes” are wastes that exhibit measurable properties that indicate they pose enough of a threat to deserve regulation as hazardous wastes. EPA established four hazardous waste characteristics. 1. Ignitability identifies wastes that can readily catch fire and sustain combustion. 2. Corrosivity identifies wastes that are acidic or alkaline. Such wastes can readily corrode or dissolve flesh, metal, or other materials. 3. Reactivity identifies wastes that readily explode or undergo violent reactions (e.g., when exposed to water or under normal handling conditions). 4. 
Toxicity is used in a rather narrow and specific sense under this program to identify wastes that are likely to leach dangerous concentrations of chemicals into ground water if not properly managed (and thus expose users of the water to hazardous chemicals and constituents). EPA developed a specific lab procedure, known as the Toxicity Characteristic Leaching Procedure, to predict whether any particular waste is likely to leach chemicals into ground water at dangerous levels. In this procedure, liquid leachate created from hazardous waste samples is analyzed to determine whether it contains any of 40 different common toxic chemicals in amounts above specified regulatory levels. The regulatory levels are based on ground water modeling studies and toxicity data used to calculate the limit above which these toxic compounds and elements will threaten human health and the environment. For OSW, the task of identifying and assessing hazardous wastes is made more difficult because waste may be in the form of a mixture of constituents, some of which may be hazardous and some not. (This is also a common issue for the Superfund program.) The EPA-wide guidelines on assessments of chemical mixtures therefore could come into play in OSW risk assessments. For dose-response data on the toxicity and potency of hazardous substances, OSW largely relies on information from other EPA sources. For example, OSW may use the chemical-specific assessments prepared by ORD, data in EPA’s IRIS database, and regulatory standards from other EPA program offices, in particular the Office of Water. However, OSW combines this information with its own exposure analyses. Rhomberg categorized exposure assessment by OSW as either hypothetical or site specific. He noted that hypothetical exposures principally come into play when the agency is defining hazardous wastes and evaluating disposal options.
These exposure analyses cover hypothetical waste-handling and disposal practices anywhere in the nation, and OSW focuses on the question of whether such practices might cause undue risks to individuals, not on characterizing the actual distribution of exposures across the population. One of the principal concerns in OSW exposure assessments is leaching to groundwater, but OSW evaluates other exposure pathways from virtually all treatment and disposal practices, with the specific pathways for any particular analysis being decided on a case-by-case basis. Site-specific exposure assessments might be needed when OSW is making regulatory decisions regarding actual waste disposal facilities, as when assessing the need for remedial action at a given site or permitting incineration or other disposal activities. In such cases, the office can focus exposure estimates on the off-site migration of the particular toxic compounds associated with that location. In general, an important part of OSW’s exposure assessments is evaluating the “relative contribution” of hazardous wastes to the overall exposure to a hazardous chemical (which is very similar to assessments by EPA’s Office of Water). In exposure assessments, OSW’s deterministic analyses follow EPA’s risk characterization guidance by setting only two sensitive parameters at high-end values, with the rest of the parameters being set at their central tendency values. According to OSW, this approach is meant to produce a risk estimate above the 90th percentile of the risk distribution but still on the actual distribution. The Chemical Emergency Preparedness and Prevention Office (CEPPO) is also part of OSWER. It provides leadership, advocacy, and assistance to: (1) prevent and prepare for chemical emergencies; (2) respond to environmental crises; and (3) inform the public about chemical hazards in their community. To protect human health and the environment, CEPPO develops, implements, and coordinates regulatory and nonregulatory programs.
It carries out this work in partnership with EPA regions, domestic and international organizations in the public and private sectors, and the general public. CEPPO is responsible for the risks associated with accidental chemical releases. Under the Emergency Planning and Community Right-to-Know Act (EPCRA) in Title III of the Superfund Amendments and Reauthorization Act of 1986, CEPPO must evaluate, develop, and maintain a list of chemicals and threshold quantities that are subject to reporting for emergency planning. In addition, CEPPO develops the emergency reporting and planning requirements, guidance for industry, and guidance and tools for use of the reporting information by Local Emergency Planning Committees. These reporting and planning requirements serve to provide the necessary information to be used at the local level to manage the risks associated with accidental chemical releases. CEPPO is also responsible for accidental chemical release prevention. Under Section 112(r) of the Clean Air Act, as amended by the Clean Air Act Amendments of 1990, CEPPO must evaluate chemicals for acute adverse health effects, likelihood of accidental release, and magnitude of exposure to develop a list of at least 100 substances that pose the greatest risk of causing death, injury, or serious adverse effects to human health or the environment from accidental releases. Each listed substance must have a threshold quantity that takes into account the chemical’s toxicity, reactivity, volatility, dispersability, combustibility, or flammability. Facilities handling a listed substance above its threshold quantity must implement a risk management program and develop a risk management plan. The risk management program must address a hazards analysis, prevention program, and emergency response program. 
According to CEPPO officials, they scaled these regulatory requirements according to the risk posed by the wide range of facilities subject to the requirements—the greater the risk, the greater the risk management responsibilities. The facilities submit their risk management plans to EPA and to state and local officials for use in emergency planning and local risk management and reduction. CEPPO investigates chemical accidents, conducts research, and collects information about chemical and industrial process hazards to issue Chemical Safety Alerts and other publications to raise awareness about chemical accident risks. CEPPO also develops tools, methods, and guidance necessary to identify and assess the risks to human health from accidental releases. Major features and characteristics of CEPPO’s risk assessment procedures are summarized below. The chemical risk assessments conducted by CEPPO differ from the risk assessments conducted by other EPA offices. CEPPO’s procedures do not follow the NAS four-step risk assessment approach, but are similar to the chemical risk assessment approach used by the Department of Transportation’s (DOT) Research and Special Programs Administration (RSPA) in that hazards are identified and a measure of exposure (or consequence) is determined to yield a “threat” associated with an accidental release. While RSPA focuses on risks associated with accidents involving unintentional releases of hazardous materials during transportation, CEPPO focuses on risk associated with accidental releases from a fixed facility. According to CEPPO, for accidental release risks, because these events are high consequence and low probability, the hazard and exposure typically can be estimated with some degree of confidence. However, the likelihood or probability of an accidental release is very uncertain. Consequently, likelihood is addressed only in a limited way and the “threat” is judged to be a surrogate for risk.
CEPPO’s approaches with respect to chemical accident risk are published mainly in two rulemakings—“List of Regulated Substances and Thresholds for Accidental Release Prevention and Risk Management Programs for Chemical Accident Release Prevention,” 59 FR 4478 (Jan. 31, 1994) and “Accidental Release Prevention Requirements: Risk Management Programs under the Clean Air Act, Section 112(r)(7),” 61 FR 31668 (June 20, 1996)—and in guidelines, especially “Technical Guidance for Hazards Analysis, Emergency Planning for Extremely Hazardous Substances,” which was issued jointly by EPA, DOT, and the Federal Emergency Management Agency (Dec. 1987). For hazard identification, CEPPO identifies the hazards that pose a risk to human health and the environment from an analysis of chemical accidents and of the physical/chemical properties of substances that make them more likely to cause harm as a result of an accidental chemical release. For example, the catastrophic chemical release in Bhopal, India, in December 1984 involved methyl isocyanate, a chemical that is toxic when inhaled. CEPPO identified the criteria necessary to identify those substances that are so toxic that, upon exposure (i.e., inhalation, dermal contact, or ingestion) to a small amount, they cause death or serious irreversible health effects in a short time (acute toxicity). CEPPO also has developed criteria to identify other substances, such as highly flammable substances that can trigger a vapor cloud explosion harming the public and environment. CEPPO is also working to understand the long-term (chronic) effects that might be generated by a single acute exposure. As part of its identification of hazards, CEPPO also evaluates the quantity of a chemical that would need to be released and travel off-site to establish a threshold quantity. If a facility handles more than this quantity, there is a presumption of risk triggering some action by the facility’s owner(s) and operator(s). 
The hazardous chemicals and threshold quantities identified by CEPPO are published in rulemakings. According to CEPPO, the exposure assessment (or consequence analysis) phase of a chemical accident release assessment differs somewhat from classical risk assessment approaches and procedures. The actual exposure to humans after an accidental release is often not known. In addition, the amount and rate of chemical released and the precise conditions (e.g., weather) are usually not known. However, these parameters can be estimated using engineering calculations and mathematical models to generate the concentration likely to have been present or that could be present in a certain type of accidental release. Using these techniques, chemicals that possess the physical/chemical properties most likely to harm the public or the environment can be evaluated to estimate the degree of “threat” that they may pose in an accidental release. CEPPO uses these exposure assessment (consequence analysis) techniques to understand the potential magnitude of exposure associated with a variety of hazardous chemicals. In addition, CEPPO publishes the techniques in guidelines and as software to assist facilities in their assessment of accidental release risk. According to CEPPO, industry has a fundamental responsibility to understand the risks associated with chemical accidents. In addition, the Risk Management Plan requirements under section 112(r) of the Clean Air Act require that this information be made available to the public so that industry and the community can work together to manage the risks that might be present. CEPPO may characterize the risks associated with accidental releases using a number of parameters, such as the presence of a large quantity of a highly hazardous substance in proximity to a large facility that has had a number of accidental releases in the past. 
CEPPO uses these parameters to place more responsibility on such facilities (e.g., greater accidental release prevention measures under the Risk Management Program requirements), to investigate the underlying reasons for their accidental releases, or to assist in audits and inspections of their accident prevention programs. OAR oversees the air and radiation protection activities of the agency. Radiation risk assessments conducted by OAR are outside the scope of this report, but chemical risk assessments do have a part in OAR’s efforts to preserve and improve air quality in the United States. Such air quality concerns are the primary mission of OAR’s Office of Air Quality Planning and Standards (OAQPS), which, among other activities, compiles and reviews air pollution data and develops regulations to limit and reduce air pollution. The Risk and Exposure Assessment and the Health and Ecosystem Effects Groups within OAQPS provide the scientific and analytical expertise to conduct and support human health and ecological risk assessments in this area, in coordination with ORD. The Clean Air Act, as amended, provides the statutory basis for air-related risk assessments by OAR. The CAA requires EPA to establish national standards for air quality, but it gives states the primary responsibility for assuring compliance with the standards. Chemical risk assessments are primarily associated with regulation of (1) criteria air pollutants and (2) hazardous air pollutants, also referred to as “air toxics.” The CAA requires EPA to set health-based air quality standards (National Ambient Air Quality Standards, or NAAQS) for criteria pollutants, which are common throughout the United States and mostly the products of combustion. Under the CAA, EPA is also required to review the scientific data upon which the standards are based and revise the standards, if necessary, every 5 years. 
The criteria pollutants are particulate matter, carbon monoxide, sulfur oxides, nitrogen dioxide, ozone, and lead. Of these pollutants, ozone is not directly emitted by a source, but rather is the product of the interaction of nitrogen oxide, volatile organic compounds, and sunlight. Therefore, regulations targeting ozone focus on controlling emissions of nitrogen oxide and volatile organic compounds. The CAA requires EPA to set health-based standards with an “adequate margin of safety”; according to EPA, however, achieving an adequate margin of safety does not require setting air quality standards at a zero-risk level, but simply at a level that avoids unacceptable risks. EPA therefore sets the standards to protect a substantial part of the national population, including sensitive or at-risk populations, but not necessarily the most sensitive or exposed individuals. The CAA also contains provisions, first added in 1970, for the regulation of emissions to the atmosphere of hazardous air pollutants—toxic chemicals other than the six criteria pollutants. The 1970 amendments to the CAA required EPA to identify and control hazardous air pollutants so as to achieve “an ample margin of safety.” However, Congress passed another major set of amendments, the Clean Air Act Amendments of 1990 (CAAA), which revised the hazardous air pollutant provisions and substantially affected the application of risk assessment regarding air toxics. The amendments explicitly wrote into the act a list of 189 hazardous air pollutants to be regulated. In addition, the amendments replaced the former health-based criterion for standards with a criterion that is primarily technology based, mandating the maximum achievable control technology (MACT) for the specified list of chemicals. 
The act further mandates that EPA evaluate residual risks remaining after implementation of the MACT standards to determine if additional standards are needed to protect the public health with an ample margin of safety. Additional information on the major features and characteristics of chemical risk assessments related to these air quality protection activities is presented below. There are several unique features that affect risk assessments for criteria air pollutants. Compared to many other agents assessed by EPA, the agency generally has extensive human data available on health effects at relevant exposure levels. Therefore, risk assessments for criteria air pollutants require little extrapolation across species or to low doses and few default assumptions. These are the least likely of EPA’s risk assessments to use precautionary or conservative methods and assumptions, and the results are intended to be unbiased estimates without any built-in conservatism. For criteria air pollutants, “hazard identification” information on health effects appears primarily in air quality criteria documents prepared by ORD and staff papers prepared by OAQPS to support the review and development of national ambient air quality standards. These documents are intended to reflect the available scientific evidence on toxicity endpoints of concern. The definition of what responses constitute “adverse” outcomes is ultimately left to the Administrator’s judgment, informed by staff recommendations, advice from the Clean Air Scientific Advisory Committee (part of EPA’s Science Advisory Board), and public comments. EPA’s principal concern regarding criteria pollutants is for noncancer health effects. In contrast to most other EPA noncancer risk assessments, however, EPA does not apply a threshold approach in the case of criteria pollutants. 
Instead, the agency models response curves as though they have no threshold, recognizing that, as a practical matter, at least some members of the general population will have their thresholds exceeded at or near the lowest exposure levels. EPA characterizes these response relationships without any conservative upper-bound methods. However, probabilistic methods are used to characterize uncertainty in the fitted exposure-response relationships. In addition, there is temporal variation in pollution concentrations, so characterization of exposure-time relationships is also an important component of EPA’s assessments of criteria pollutants. Although EPA’s exposure assessments (and risk characterization) for criteria pollutants focus on population risks, rather than individual risks, the agency does consider effects on more sensitive or exposed populations. Exposure assessments are also affected by the need to establish air quality standards for both annual and daily concentrations for some pollutants. The annual standards are intended to provide protection against typical day-to-day exposures as well as longer-term exposures, while the daily standards are intended to provide protection against days with high peak concentrations of pollutants. EPA’s exposure assessments therefore need to address these types of variations. Rhomberg noted that, because of the long history of analysis of standard pollutants, EPA’s exposure modeling has been continually improved and expanded, resulting in sophisticated models with capabilities well beyond models used in other situations that do not have the benefit of decades of experience and application. Finally, it is important to recognize that one of the most important uses of risk assessments regarding criteria air pollutants is to characterize the population exposure levels and health effects that would be expected given various specified air quality criteria. 
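One way to make the last point concrete is a small sketch of a standards-comparison calculation. The log-linear health impact function below is a common general form for estimating excess cases from a change in ambient concentration; the baseline incidence rate, effect coefficient, concentrations, and population are hypothetical values chosen only to illustrate the arithmetic, not EPA estimates.

```python
# Hedged sketch: estimating health effects under alternative candidate
# standards, as described in the text. All parameter values are hypothetical.
import math

def excess_cases(baseline_rate: float, beta: float,
                 delta_concentration: float, population: int) -> float:
    """Annual excess cases attributable to a concentration change, using the
    common log-linear form: cases = y0 * (1 - exp(-beta * dC)) * population."""
    return baseline_rate * (1.0 - math.exp(-beta * delta_concentration)) * population

# Compare two hypothetical candidate standards against current air quality.
current_level = 18.0  # ug/m3, annual average (hypothetical)
for candidate_standard in (15.0, 12.0):
    reduction = current_level - candidate_standard
    cases_avoided = excess_cases(baseline_rate=0.008, beta=0.004,
                                 delta_concentration=reduction,
                                 population=1_000_000)
    print(f"standard {candidate_standard}: ~{cases_avoided:.0f} cases avoided per year")
```

The point of such a calculation is the comparison across candidate levels, not the absolute numbers, which depend entirely on the assumed inputs.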
In other words, one of the primary uses of risk assessment is to estimate what the effects would be if standards were set at various specified levels, rather than using the tool simply to estimate what health risks these pollutants pose.

Hazardous air pollutants (air toxics)

Although the Clean Air Act Amendments of 1990 shifted the focus in hazardous air pollutant regulation to technology-based controls, several activities may still involve risk assessments, including
- listing and delisting of hazardous air pollutants, which depends on whether a chemical may present a threat of adverse effects to humans and the environment;
- de minimis delisting of source categories, which requires that sources be listed unless they pose less than a 10^-6 (one-in-a-million) risk to the maximally exposed individual (MEI);
- triggering the consideration of further regulation to address residual risks that remain after applying MACT standards (triggered if the MEI suffers a 10^-6 or greater lifetime risk); and
- offset trading of one pollutant for another based on whether the increase in emissions is offset by an equal or greater decrease in a “more hazardous” air pollutant.

According to section 112(o) of the amended CAA, prior to the promulgation of any residual risk standard, EPA shall revise its guidelines for carcinogen risk assessment or explain why any recommendations of the NAS report required under section 112(o)(4) have not been implemented. The amended act also had a major impact on hazard identification for air toxics. The amendments defined hazardous air pollutants as air pollutants listed pursuant to section 112(b) of the act. Section 112(b) included an initial list of 189 compounds incorporated by reference into the law. Dose-response analysis for air toxics has in the past been done largely through Health Assessment Documents produced by ORD for the air office, according to the methods discussed in the earlier section on EPA-wide risk assessment procedures. 
Carcinogen potency calculations for de minimis delisting and residual risk determination will be done under the revised carcinogen assessment guidelines, once they are finalized. EPA addresses noncancer risks for hazardous air pollutants with its usual methodologies (e.g., NOAEL/LOAEL, benchmark dose, or others). With the 1990 amendments, exposure assessments for air toxics will focus on assessing the residual risk for the most exposed individual after MACT has been applied. OAR uses a population-based risk assessment to generate estimates of how risks are distributed within the population, not just for specific conservative scenarios. According to Rhomberg (and confirmed by OAR officials), OAR’s intent is to define the actual most exposed person in the population, rather than a hypothetical person with an unrealistically high estimated exposure. EPA has adopted a tiered approach to analyzing residual risk consistent with the recommendations from NAS and the Presidential/Congressional Commission. In the screening phase, default conservative assumptions are used to ensure that risks will not be underestimated. Sources and hazardous air pollutants that exceed some benchmark in the screening analysis will be evaluated further. According to OAR, the more refined assessments will utilize more site-specific information and more realistic assumptions, especially as they relate to exposure. EPA estimates exposures to air toxics using a general-purpose model largely based on fate and transport considerations for stack emissions. OAR officials noted that they are updating their modeling methodology, incorporating the current state-of-the-art dispersion model (ISCST3) into their Human Exposure Model, and will update the census data they use with the 2000 Census numbers when they become available. 
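The 10^-6 screening benchmark discussed above amounts to a simple multiplication at the first tier: a lifetime inhalation cancer risk is the ambient concentration times a chemical's inhalation unit risk, compared against the de minimis level. The sketch below is illustrative only; the concentration and unit risk values are hypothetical, and actual screening assessments use modeled exposure for the maximally exposed individual.

```python
# Illustrative first-tier screening calculation; all values are hypothetical.
DE_MINIMIS_RISK = 1e-6  # one-in-a-million lifetime excess cancer risk

def screening_cancer_risk(concentration_ug_m3: float,
                          unit_risk_per_ug_m3: float) -> float:
    """Lifetime excess cancer risk for continuous lifetime inhalation exposure."""
    return concentration_ug_m3 * unit_risk_per_ug_m3

# Hypothetical maximally exposed individual (MEI) scenario.
risk = screening_cancer_risk(concentration_ug_m3=0.05, unit_risk_per_ug_m3=1e-5)
needs_refined_assessment = risk >= DE_MINIMIS_RISK
print(f"screening risk {risk:.1e}; refined assessment needed: {needs_refined_assessment}")
```

A source passing this conservative screen can be set aside; one that fails it moves on to the more refined, site-specific tiers described above.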
OW is responsible for the agency’s water quality activities, including development of national programs, technical and science policies, regulations, and guidance relating to drinking water, water quality, ground water, pollution source standards, and the protection of wetlands, marine, and estuarine areas. Chemical risk assessments are associated, in particular, with EPA’s ambient water quality criteria, under the CWA, and drinking water quality regulations, under the SDWA. The goal of CWA is to maintain and improve the cleanliness and biological integrity of the nation’s waters, including lakes, rivers, and navigable waters. Under CWA, EPA publishes water quality criteria defining the degree of water quality that is compatible with intended uses and states of different water bodies. The criteria are health based, but they are not rules and are themselves unenforceable. States use these criteria as guidance for developing state water quality standards and setting enforceable limits in permits for facilities that discharge pollutants into surface waters. CWA distinguishes “conventional” from “toxic” pollutants. Toxic water pollutants are evaluated as exposures to toxic chemicals (similar to EPA’s treatment of hazardous air pollutants). The goal of SDWA is to protect the quality of public drinking water systems. The law focuses on all waters actually or potentially designated for drinking use, whether from above ground or underground sources. SDWA requires EPA to set drinking water standards to control the level of contaminants in drinking water provided by public water systems, which the water systems are required to meet. Congress passed extensive amendments to SDWA through the Safe Drinking Water Act Amendments of 1996 (P.L. 104-182). Among other key changes, the amendments increased regulatory flexibility, focused regulatory efforts on contaminants posing the greatest health risks, and added risk assessment and risk communication provisions to SDWA. 
There are several risk-related mandates in these acts. Under CWA, EPA is to establish criteria for water quality solely on the basis of health and ecological effects and “accurately reflecting the latest scientific knowledge… on the kind and extent of all identifiable effects on health and welfare.” CWA defines a toxic pollutant as one that after discharge and upon exposure, ingestion, inhalation, or assimilation into any organism, either directly from the environment or indirectly by ingestion through food chains, will, on the basis of information available to the Administrator, cause death, disease, behavioral abnormalities, cancer, genetic mutations, physiological malfunctions (including malfunctions in reproduction), or physical deformities in such organisms or their offspring. Federal water quality criteria are unenforceable, but states develop enforceable permit limits based on them. In contrast, CWA also provides for the promulgation of enforceable federal performance standards for sources of effluent (waste discharged into a river or other water body) that do include consideration of technological and economic feasibility. Since 1977, establishment of effluent standards for toxic pollutants has been based on the best available technology (BAT) economically achievable by a particular source category. The compounds to be regulated are specified in a list, and there are provisions for additions and deletions to the list. Standards must be at that level which the Administrator determines provides “an ample margin of safety,” so that standards more stringent than BAT may be set at EPA’s discretion. 
Under SDWA, the EPA Administrator is to “promulgate national primary drinking water regulations for each contaminant… which… may have any adverse effect on the health of persons and which is known or anticipated to occur in public water systems.” An important feature of such regulations, however, is that a standard specifies two levels of contamination. First, a maximum contaminant level goal (MCLG) is set solely on health grounds “at a level at which no known or anticipated adverse effects on the health of persons occur and which allows an adequate margin of safety.” For each such goal there is also a maximum contaminant level (MCL). This MCL is to be as close to the MCLG “as is feasible,” where feasible means “with the use of the best technology, treatment techniques and other means which… are available (taking cost into consideration).” The MCL is the enforceable standard. The 1996 amendments to SDWA added several provisions that increased the importance of risk assessment and risk communication in EPA’s regulation of drinking water quality. For example, the amendments
- require EPA, when developing regulations, to (1) use the best available, peer-reviewed science and supporting studies and data and (2) make publicly available a risk assessment document that discusses estimated risks, uncertainties, and studies used in the assessment;
- require EPA to conduct a cost-benefit analysis for every new standard to determine whether the benefits (health risk reduction) of a drinking water standard justify the costs;
- permit consideration of “risk-risk” issues by authorizing EPA to set a standard other than the feasible level if the feasible level would lead to an increase in health risks by increasing the concentration of other contaminants or by interfering with the treatment processes used to comply with other SDWA regulations; 
- require EPA to review and revise, as appropriate, each national primary drinking water regulation promulgated by the agency at least every 6 years (of particular relevance to the use of risk assessment information, any revisions must “maintain, or provide for greater, protection of the health of persons”); and
- require EPA to identify subpopulations at elevated risk of health effects from exposure to contaminants in drinking water and to conduct studies characterizing health risk to sensitive populations from contaminants in drinking water.

Additional information on major features and characteristics of chemical risk assessments related to water quality protection activities is presented below. The various offices within OW—the Office of Ground Water and Drinking Water; Office of Science and Technology; Office of Wastewater Management; and Office of Wetlands, Oceans, and Watersheds—have developed extensive technical and analytical guidance on water quality monitoring and the development of water quality criteria. One recently finalized document particularly relevant for describing OW’s current risk assessment procedures is the revision to the methodology for deriving ambient water quality criteria (AWQC) for the protection of human health. OW noted that this revised methodology, published pursuant to section 304(a)(1) of the CWA, supersedes EPA’s 1980 guidelines and methodology on this subject. In addition to describing OW’s approach to developing new and revising existing AWQC, it defines the default factors that EPA will use in evaluating and determining consistency of state water quality standards with the requirements of the CWA. Although there are different statutory bases and risk mandates for the regulation of ambient and drinking water, OW’s risk assessment procedures in support of CWA and SDWA are mostly similar. 
However, risk assessments in support of CWA consider not just human health effects but also the ecological effects associated with exposure to pollutants. With regard to human health risks, perhaps the most notable difference between the ambient water and drinking water parts of OW is the additional focus, during exposure assessments for CWA purposes, on exposures to contaminated water through consumption of contaminated fish or shellfish. (This is a primary reason for potential differences in the resulting drinking water and ambient water quality criteria or standards for the same chemical.) OW’s Office of Science and Technology does all of the risk assessments for SDWA maximum contaminant level goals and CWA’s AWQC. For cancer risk evaluation, OW has been applying the principles in EPA’s proposed revision of the carcinogen guidelines. For hazard identification purposes, SDWA originally specified a list of compounds to be regulated as toxic pollutants and required EPA to regulate an additional 25 contaminants every 3 years. However, the 1996 amendments eliminated that requirement and revised OW’s approach for listing, reviewing, and prioritizing the drinking water contaminant candidate list. The new risk-based contaminant selection process provides EPA the flexibility to decide whether or not to regulate a contaminant after completing a required review of at least five contaminants every 5 years. EPA must use three risk-related criteria to determine whether or not to regulate: (1) the contaminant adversely affects human health; (2) it is known or substantially likely to occur in public water systems with a frequency and at levels of public health concern; and (3) regulation of the contaminant presents a meaningful opportunity for health risk reduction. The 1996 amendments also included specific requirements to assess health risks and set standards for arsenic, sulfate, radon, and disinfection byproducts. 
There are a number of important features regarding OW’s exposure assessments in support of CWA and SDWA regulations. OW’s primary exposure question during the criteria/standard-setting process for drinking or ambient water is hypothetical: What health effects might be expected if people consumed water and/or finfish and shellfish contaminated at the level of a candidate standard? The main function of exposure assessment is to link criteria or water concentrations to doses of chemicals and the associated health effects that might be projected. For its exposure assessments, OW uses estimates of water and food ingestion in the United States based on a variety of surveys and studies. One of the major sources of per capita water and fish ingestion is the Department of Agriculture’s Continuing Survey of Food Intakes by Individuals (CSFII), which presents results for the general population and for certain subpopulations (e.g., pregnant and lactating women, children). For assessing standards under SDWA, the linking of water concentration to dose is conducted through standardized consumption values. For example, the default exposure scenario assumes lifetime consumption by individuals of 2 liters of water per day. However, OW uses other default values to address consumption by sensitive subpopulations, especially children and infants. For assessing AWQC under the CWA, EPA uses the same water consumption rate as under SDWA. In addition, though, the agency adds the dose resulting from the daily average consumption of 17.5 grams of fish. An important change in EPA’s approach for developing AWQC, reflected in the 2000 Human Health Methodology, has been the move toward use of a bioaccumulation factor (BAF) to estimate potential human exposure to contaminants via the consumption of contaminated fish and shellfish. BAFs reflect the accumulation of chemicals by aquatic organisms from all surrounding media (e.g., water, food, and sediment). 
EPA’s 1980 method used a bioconcentration factor that reflected only absorption directly out of the water column, and therefore tended to underestimate actual contaminant levels in fish and shellfish. EPA’s revised methodology also gives preference to the use of high-quality field data over laboratory or model-derived estimates of BAFs. OW considers indirect exposures to a substance from sources other than drinking water (e.g., food and air) when establishing AWQC. This is particularly important for noncarcinogens: the fact that exposure from each individual source might fall below the RfD level does not mean that the combined exposure from all sources is below this presumably safe level. OW has revised and expanded its policy on accounting for nonwater sources of indirect exposures known as the “relative source contribution.” The procedures for calculating the relative source contribution vary depending on the adequacy of available exposure data, levels of exposure, sources and media of exposure relevant to the pollutant of concern, and whether there are multiple health-based criteria or standards for the same pollutant. (See table 5 in the next section for a more detailed description of these assumptions.) EPA’s risk assessment guidelines and other related documents identify many default assumptions, standardized data factors, and methodological choices that may be used in chemical risk assessments. As pointed out by NAS, assumptions and professional judgment are used at every stage of a risk assessment, because there are always uncertainties in risk assessments that science cannot directly answer. For the most part, these assumptions and choices are intended to address various types of uncertainties—such as an absence or limited amount of available data, model uncertainty, and gaps in the general state of scientific knowledge—or variability in the population. They are also intended to provide some consistency and transparency to agency risk assessments. 
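The consumption defaults and relative source contribution described above combine into a single criterion calculation. The sketch below follows the general form of OW's noncarcinogen AWQC equation (an acceptable daily dose divided by an effective daily water volume); the 2 liters/day and 17.5 grams/day intakes come from the text, while the RfD, RSC, BAF, and 70-kilogram body weight are hypothetical illustrative values.

```python
# Minimal sketch of a noncarcinogen AWQC derivation; the RfD, RSC, BAF, and
# body weight here are hypothetical, not from any actual EPA assessment.
def awqc_noncarcinogen(rfd_mg_per_kg_day: float, rsc: float,
                       body_weight_kg: float, water_intake_l_day: float,
                       fish_intake_kg_day: float, baf_l_per_kg: float) -> float:
    """Ambient water quality criterion (mg/L):
        AWQC = (RfD * RSC * BW) / (DI + FI * BAF)
    The BAF term converts fish intake into an equivalent daily water volume,
    so both exposure routes are expressed against the same concentration."""
    acceptable_daily_dose = rfd_mg_per_kg_day * rsc * body_weight_kg           # mg/day
    effective_volume = water_intake_l_day + fish_intake_kg_day * baf_l_per_kg  # L/day
    return acceptable_daily_dose / effective_volume

criterion = awqc_noncarcinogen(rfd_mg_per_kg_day=0.01, rsc=0.2,
                               body_weight_kg=70.0, water_intake_l_day=2.0,
                               fish_intake_kg_day=0.0175, baf_l_per_kg=100.0)
print(f"criterion: {criterion:.4f} mg/L")
```

Because the BAF multiplies the fish-intake term, a strongly bioaccumulating chemical can drive the criterion well below the level implied by drinking water alone, which is one reason the same chemical may receive different drinking water and ambient water values.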
Defaults are generally used in the absence of definitive information to the contrary, but also reflect policy decisions. In its guidelines, EPA characterizes many of its choices as conservative or public-health protective in that they are intended to help the agency avoid underestimating possible risks. Agency guidelines often cited the scientific studies and other evidence that supported the agency’s choice and the plausibility of the resulting risk estimates. In our recent report on EPA’s use of precautionary assumptions, we identified three major factors influencing the agency’s use of such assumptions: (1) EPA’s mission to protect human health and safeguard the natural environment (including specific requirements in some of the underlying environmental statutes), (2) the nature and extent of relevant data, and (3) the nature of the risk being evaluated. EPA’s program offices commonly employ tiered risk assessment approaches that progress from rough screening assessments (for which only limited data may be available) through increasingly detailed and rigorous analyses, if needed. EPA’s guidelines and program-specific documents indicate that conservative default assumptions are most often used during initial screening assessments, when the primary task is to determine whether a risk might exist and further analysis is called for. Such screening assessments may use “worst case” assumptions to determine whether, even under those conditions, risk is low enough that a potential problem can be eliminated from further consideration. According to guidelines and related descriptive materials from the program offices, conservative assumptions are used less often in later tiers, as the agency attempts to gather and incorporate more detailed data into its analyses. Several circumstances may lead to conservative choices playing a less prominent role in EPA risk assessments. 
For example, the development of more complex and sophisticated models for cancer and noncancer effects places more emphasis on using the full range of available data and characterizing the full range of potential adverse outcomes and effects. Similarly, the increased use of probabilistic analytical methods to derive parameter values will tend to reduce the “compounding” effect of picking conservative point values for each factor. As noted above, the use of tiered risk assessment approaches may also limit the use of default assumptions if more rigorous and case-specific analysis is done beyond initial screening assessments. However, all of these developments may require substantial additional effort and the availability of considerable data, which might not be possible in many cases. Although not intended to be comprehensive, table 5 illustrates in detail some of the specific assumptions, default data values, or methodological choices that are used in EPA chemical risk assessments. The table concentrates primarily on default choices from EPA’s various agencywide risk assessment guidelines. However, to also provide a sense of how default choices are used at the program level, we have included examples of standard assumptions and values employed by two of EPA’s program offices. One set of examples illustrates assumptions and choices used by OPP. The second set presents more detailed descriptions of the standard assumptions and choices identified in OW’s risk assessment methodology for deriving AWQC for the protection of human health. OW’s policy reflects many of the same basic choices that would apply to assessments conducted across the agency, such as the use of uncertainty factors when estimating an RfD. To the extent that EPA’s documents identified for each of these assumptions or choices a reason for its selection, when it would be applied in the risk assessment process, and its likely effect on risk assessment results, we have reported that information. 
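The "compounding" effect of stacking conservative point values, and the way probabilistic methods temper it, can be shown with a small simulation. Every distribution and parameter below is invented for illustration and does not represent any EPA assessment.

```python
# Hypothetical illustration: taking the 95th-percentile value for each of
# three exposure factors yields a product far above the 95th percentile of
# the product's actual distribution, which Monte Carlo sampling estimates.
import math
import random

random.seed(0)
N = 100_000

def sample_dose() -> float:
    # Three independent lognormal exposure factors (arbitrary units).
    concentration = random.lognormvariate(0.0, 0.5)
    intake_rate = random.lognormvariate(0.0, 0.5)
    duration = random.lognormvariate(0.0, 0.5)
    return concentration * intake_rate * duration

doses = sorted(sample_dose() for _ in range(N))
monte_carlo_p95 = doses[int(0.95 * N)]

# Conservative point estimate: 95th percentile of each factor separately.
factor_p95 = math.exp(1.645 * 0.5)  # 95th percentile of lognormal(0, 0.5)
compounded_point_estimate = factor_p95 ** 3

print(f"Monte Carlo 95th percentile: {monte_carlo_p95:.2f}")
print(f"Compounded point estimate:  {compounded_point_estimate:.2f}")
```

The compounded estimate sits far out in the tail of the simulated dose distribution, which is the sense in which probabilistic analysis reduces built-in conservatism.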
However, it is important to recognize that there is no requirement that agencies provide such information in their guidelines (or even that they have guidelines). In particular with regard to the “likely effects” column, EPA officials cautioned that it is not always appropriate to characterize a single assumption separate from the rest and that it is not always possible to quantify the effect of each default assumption. They noted that, in general, their default assumptions are intended to be public-health protective. The information presented in table 5 was taken primarily from EPA risk assessment guidelines and related documents but also reflects additional comments provided by EPA officials. (GAO notes and comments appear in parentheses.) As with exposure assessment, the program offices typically are responsible for completing the risk characterization. EPA does, however, have several documents that provide agencywide guidance on how such characterization is to be done. The guidance includes a February 26, 1992, memorandum from the EPA Deputy Administrator entitled, “Guidance on Risk Characterization for Risk Managers and Risk Assessors,” and a March 21, 1995, document issued by the EPA Administrator entitled, “Policy for Risk Characterization at the U.S. Environmental Protection Agency.” EPA also has developed a Risk Characterization Handbook to provide more detailed guidance to agency staff. In the statement accompanying its 1994 report Science and Judgment in Risk Assessment, NRC said that although EPA’s overall approach for assessing risks was fundamentally sound, the agency “must more clearly establish the scientific and policy basis for risk estimates and better describe the uncertainties in its estimates of risk.” In March 1995, the EPA Administrator issued the agency’s risk characterization policy and guidance, which reaffirmed the principles and guidance in the agency’s 1992 policy. 
EPA’s guidance document defined risk characterization as the final step in the risk assessment process that (1) integrates the individual characterizations from the hazard identification, dose-response, and exposure assessments; (2) provides an evaluation of the overall quality of the assessment and the degree of confidence the authors have in the estimates of risk and conclusions drawn; (3) describes the risks to individuals and populations in terms of extent and severity of probable harm; and (4) communicates the results of the risk assessment to the risk manager. Discussing “guiding principles” for risk characterization, EPA emphasized that the integration of information from the three earlier stages of risk assessment, discussion of uncertainty and variability, and presentation of information to risk managers requires the use of both qualitative and quantitative information. For example, when assumptions are made in exposure assessment, EPA said that the source and general logic used to develop the assumptions should be described, as well as the confidence in the assumptions made and the relative likelihood of different exposure scenarios. In the 1995 policy statement, EPA said that risks should be characterized in a manner that is clear, transparent, reasonable, and consistent with other risk characterizations of similar scope. EPA said that all assessments “should identify and discuss all the major issues associated with determining the nature and extent of the risk and provide commentary on any constraints limiting fuller exposition.” The policy also said risk characterization should (1) bridge the gap between risk assessment and risk management decisions; (2) discuss confidence and uncertainties involving scientific concepts, data, and methods; and (3) present several types of risk information (i.e., a range of exposures and multiple risk descriptors such as high ends and central tendencies). 
The policy stated that each risk assessment used in support of decision making at EPA should include a risk characterization that follows the principles and reflects the values outlined in the policy. However, the policy statement went on to say that it and the associated guidance did not establish or affect legal rights or obligations. Some of EPA’s other risk assessment guidelines also discuss and recommend certain approaches to the risk characterization phase. For example, EPA’s proposed guidelines for carcinogen risk assessment call for greater emphasis on the preparation of “technical” characterizations to summarize the findings of the hazard identification, dose-response assessment, and exposure assessment steps. The agency’s risk assessors are then to use these technical characterizations to develop an integrative analysis of the whole risk case. That integrative analysis is in turn used to prepare a less extensive and nontechnical Risk Characterization Summary intended to inform the risk manager and other interested readers. EPA identified several reasons for individually characterizing the results of each analysis phase before preparing the final integrative summary. One is that the analytical assessments are often done by different people than those who do the integrative analysis. The second is that there is very often a lapse of time between the conduct of hazard and dose-response analyses and the conduct of the exposure assessment and integrative analysis. Thus, according to EPA, it is necessary to capture characterizations of assessments as the assessments are done to avoid the need to go back and reconstruct them. Finally, several programs frequently use a single hazard assessment for different exposure scenarios. 
The guidelines also point out that the objective of risk characterization is to call out any significant issues that arose within the particular assessment being characterized and inform the reader about significant uncertainties that affect conclusions, rather than to recount generic issues that are covered in agency guidance documents. In another example, EPA’s ecological risk guidelines emphasize that risk characterization is a means for clarifying relationships between stressors, adverse effects, and ecological entities. In addition, this phase of the risk assessment process is a time to reach conclusions regarding the occurrence of exposure(s) and the adversity of existing or anticipated effects. Specifically, EPA guidance describes three ecological risk characterization activities: (1) risk estimation (i.e., integrating exposure and effects data and evaluating uncertainties); (2) risk description (i.e., interpreting and discussing available information about risks to the assessment endpoints); and (3) risk reporting (i.e., estimating risks, indicating the overall degree of confidence in such estimates, citing lines of evidence to support risk estimates, and addressing assumptions and uncertainties). Similar to EPA-wide guidance on risk characterization, EPA’s ecological risk characterization guidelines emphasize open communication with risk managers and other interested parties to clearly convey information needed for decision making in a risk management context. It is also EPA’s policy that major scientifically and technically based work products related to the agency’s decisions normally should be peer reviewed to enhance the quality and credibility of the agency’s decisions. With regard to EPA’s chemical risk assessments, peer review can be used for evaluating both specific assessments and the general methods EPA uses in its risk assessments.
Peer review generally takes one of two forms: (1) internal peer review by a team of relevant experts from within EPA who have no other involvement with respect to the work product that is to be evaluated or (2) external peer review by a review team that consists primarily of independent experts from outside EPA. In December 2000, EPA released a revised edition of its Peer Review Handbook for use within the agency. The Food and Drug Administration (FDA) within the Department of Health and Human Services regulates the safety of a large number and wide variety of consumer products, including foods, cosmetics, human and animal medicines, medical devices, biologics (such as vaccines and blood products), and radiation-emitting products (such as microwave ovens). Chemical risk assessments are primarily conducted by three of FDA’s five product-oriented centers—the Center for Food Safety and Applied Nutrition (CFSAN), the Center for Veterinary Medicine (CVM), and the Center for Devices and Radiological Health (CDRH). The chemical risk assessment activities of these centers vary depending on factors such as the underlying statutory requirements, the substances being regulated, whether cancer or noncancer effects are of concern, and whether a product is under pre- or postmarket scrutiny. FDA officials said that the agency generally follows the National Academy of Sciences’ (NAS) four-step risk assessment process, although it has not developed written internal guidelines. FDA often incorporates conservative assumptions into its assessments when information essential to a risk assessment is not known, but such assumptions are supposed to be scientifically plausible and consistent with agency regulations or policies. For example, CFSAN assumptions are expected to be reasonably protective of human health. FDA does not have an official policy on how risk assessment results should be characterized and communicated to policymakers and the public.
However, FDA officials said that, in practice, they use a standard approach that typically highlights the assumptions with the greatest impact on the results of an analysis, states whether the assumptions used were conservative, and shows the implications of different choices. FDA’s regulatory authority is primarily derived from the Federal Food, Drug, and Cosmetic Act, as amended (FFDCA), although several related public health laws (e.g., the Food and Drug Administration Modernization Act of 1997, or FDAMA) provide additional authority. FDA administers its regulatory responsibilities through its five product-oriented centers: (1) CFSAN, (2) CVM, (3) CDRH, (4) the Center for Drug Evaluation and Research, and (5) the Center for Biologics Evaluation and Research. FDA officials said that, although each of these five product centers conducts some type of risk assessments, the first three primarily conduct the chemical risk assessments that are the focus of this report. Each of these centers has different responsibilities, authorities, and constraints on its regulatory and risk assessment activities. CFSAN is responsible for the regulation of food additives, color additives used in food, and cosmetic additives. Under the FFDCA, the regulation of substances intentionally added to food or used in contact with food must be based solely on the safety of the substances for their intended uses (i.e., consideration of benefits and costs is not allowed). A food containing an unapproved food or color additive is considered “unsafe” unless FDA issues a regulation approving its use or, in the case of a food contact substance, there exists an effective notification. To obtain an authorizing regulation or an effective notification, the sponsor of a food or color additive must show that it is safe for its intended use. 
FDA regulations under the FFDCA define a product as safe if there is “a reasonable certainty in the minds of competent scientists that the substance is not harmful under the intended conditions of use.” For food additives and color additives that are not themselves carcinogenic but contain carcinogenic impurities, CFSAN uses a quantitative risk assessment to determine whether the risk posed by a carcinogenic impurity is acceptable (i.e., a lifetime risk below one per million) under the FFDCA’s general safety clause of “reasonable certainty of no harm.” Nevertheless, if the food or color additive itself is a known carcinogen, under the “Delaney Clause” amendments to FFDCA, it cannot be deemed safe and is prohibited from use in food. CFSAN also undertakes substantial postmarket work on contaminants and naturally occurring toxicants. For example, in the past year, CFSAN participated in a number of major, international chemical risk assessments in the areas of dioxins and various mycotoxins. CVM’s primary role is to implement the FFDCA requirement that animal drugs and medicated feeds are safe and effective for their intended uses and that food from treated animals is safe for human consumption. Under the FFDCA, the regulation of residues of animal drugs that become a part of food because of the use of the animal drug must be based solely on health factors (i.e., consideration of benefits and costs is not allowed). A carcass or any of its parts that contain residues of an unapproved drug, or residues of an approved drug above approved levels, is considered to be unsafe and the carcass is considered adulterated. CVM uses risk assessment to help develop safe concentration levels in edible tissues, residue tolerances for postmarket monitoring, and withdrawal periods for slaughter following drug treatment.
For noncancer effects, the applicable safety standard under FFDCA is that these concentrations, tolerances, and withdrawal periods should represent a “reasonable certainty of no harm.” FFDCA includes provisions that permit FDA to authorize extralabel uses of an animal drug that would pose a “reasonable probability” of risk to human health if residues of the drug are consumed. The agency may establish a safe level for the residue and require that the drug sponsor provide an analytical method for detecting residues of such a compound. However, the act prohibits use in food-producing animals of any compound found to induce cancer when ingested by people or animals unless it can be determined that “no residue” of that compound will be found in the food produced from those animals under conditions of use reasonably certain to be followed in practice. FDA has interpreted the intention of the “no residue” language in the statute as meaning that any remaining residues should present an insignificant risk of cancer to people. As a matter of policy, FDA accepts a lifetime risk below one per million as an insignificant level. CDRH administers the medical device provisions of FFDCA, and assesses risks posed by chemicals that might leach out from medical devices (e.g., breast implants) into surrounding tissue. The center’s basic mission is to protect the public health by ensuring that there is reasonable evidence of the safety and effectiveness of medical devices intended for human use. CDRH usually evaluates risks in the context of a premarket review system, and the decision to clear or approve a product to treat a specific condition is based on a benefit-risk analysis for the intended population and use (not just on the basis of safety or human health as in the case of food regulation).
Because all medical products are associated with risks, CDRH considers a medical product to be safe if it has reasonable risks given the magnitude of the benefit expected and the alternatives available. Another unit of FDA, the National Center for Toxicological Research (NCTR), has an important supporting role in the risk-related activities of the product centers. NCTR conducts much of the agency’s methodological research on risk assessment methods and helps to develop and modify FDA’s quantitative methods, in conjunction with experts from the various product centers. NCTR also provides toxicology research supporting all components of FDA. It performs fundamental and applied research designed specifically to define biologic mechanisms of action underlying the toxicity of products regulated by FDA. Although FDA has long been a pioneer in the development of risk assessment methods, the agency has not developed written internal guidance specifically on conducting risk assessments. FDA officials noted that much of their work is done before products are placed on the market and, in those instances, the burden of proof is on sponsors seeking FDA approval for new products. FDA has issued guidance documents for such sponsors, and these documents are meant to represent the agency’s current thinking on the scientific data and studies considered appropriate for assessing the safety of a product. However, the guidance documents are not legal requirements and do not preclude the use of alternative procedures or practices by either FDA or external parties. Some of these guidelines include detailed descriptions of risk assessment methods deemed appropriate to satisfy FDA’s reviews under various statutory provisions. FDA has also adopted a number of domestic and international consensus standards that prescribe certain risk assessment methods (e.g., approaches for assessing the safety of medical devices and default consumption values for meat products). The situation is different, however, for dietary supplements.
The Dietary Supplement Health and Education Act of 1994 created a new framework for FDA’s regulation of dietary supplements, which do not have to undergo preapproval by FDA to determine their safety or efficacy. FDA officials said they currently have no standard procedures for dietary supplement risk assessment. FDA risk assessment procedures have also been described by individuals and organizations from within and outside of the agency in scientific and professional journal articles. For example, a 1997 journal article written by a panel of officials from across FDA summarized the risk assessment approaches of each of FDA’s product centers. A 1996 report on federal agencies’ chemical risk assessment methods described CFSAN’s methods, but did not describe the approaches used by the other centers within FDA. FDA’s food safety risk assessment procedures were also described in “Precaution in U.S. Food Safety Decisionmaking: Annex II to the United States’ National Food Safety System Paper,” which was prepared for the Organization for Economic Cooperation and Development in March 2000. FDA officials said that the agency generally follows the four-step risk assessment process identified by NAS: hazard identification, dose-response assessment (which FDA prefers to call “hazard characterization”), exposure assessment, and risk characterization. They said that they also rely on past precedent and other seminal works on risk assessment, such as the 1985 Office of Science and Technology Policy guidance document on cancer risk assessment. However, they emphasized that FDA does not presume there is a “best way” of doing a risk assessment and is continually updating its procedures and techniques with the goal of using the “best available science.” FDA officials also said that there are variations in the risk assessment approaches used among the agency’s different product centers and, in some cases, within those centers. 
In general, those variations are traceable to differences in the following factors: the substances being regulated; the nature of the health risks involved (particularly carcinogens versus noncarcinogens); statutory and regulatory requirements; whether the risk assessment is part of the process to review and approve a product before it can be marketed and used (premarket) or whether the assessment is for risks that might arise during monitoring of a product once it is being used (postmarket); and the nature and extent of the scientific information available. The nature and extent of scientific information varies on a case-by-case basis. The other factors, however, are more generic, and table 6 illustrates how they are similar or different across CFSAN, CVM, and CDRH. The subsections following the table describe more specifically how CFSAN, CVM, and CDRH conduct the first three stages of risk assessment. CFSAN’s procedures for hazard identification and dose-response assessment vary depending on whether noncancer or cancer risks are at issue. For noncancer effects, CFSAN starts with the largest dose in a chronic animal study that did not appear to lead to an increase in toxic effects above the level measured in unexposed control animals, the “no observed adverse effect level” or NOAEL. CFSAN then divides this NOAEL by one or more safety factors to arrive at an “acceptable daily intake” (ADI) intended to be an amount that can be ingested daily for a lifetime without harm. For example, CFSAN typically divides the NOAEL by 10 to allow for the possibility that humans might be more sensitive to a chemical than the experimental animals and then by another 10 to account for the possibility that some individuals might have greater sensitivity than others. Therefore, for ADIs derived from long-term animal studies, CFSAN commonly uses a combined safety factor of 100.
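The NOAEL-to-ADI arithmetic just described can be sketched in a few lines of Python. The NOAEL and the choice of factors below are illustrative assumptions, not values from any actual CFSAN review.

```python
# Sketch of CFSAN's acceptable daily intake (ADI) arithmetic for
# noncancer effects. The NOAEL below is a made-up illustrative value.

def acceptable_daily_intake(noael_mg_per_kg_day, safety_factors):
    """Divide the NOAEL by each safety factor in turn."""
    adi = noael_mg_per_kg_day
    for factor in safety_factors:
        adi /= factor
    return adi

# Hypothetical NOAEL of 50 mg/kg/day from a chronic animal study,
# with a 10x factor for animal-to-human extrapolation and another
# 10x factor for variation in sensitivity among people (100x combined).
adi = acceptable_daily_intake(50.0, [10, 10])
print(adi)  # 0.5 mg/kg/day
```

Additional factors, when warranted, simply extend the list of divisors.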
Additional safety factors may also be applied to account for long-term effects versus short-term experiments, inadequacies of the experimental data, or other factors. For cancer effects, CFSAN uses two different hazard assessment/dose-response approaches, depending on the nature of the products being regulated. For food and color additives that are themselves known carcinogens, the Delaney provisions in FFDCA make risk assessment rather straightforward. If a petition to market a food ingredient contains an adequately conducted animal cancer study, and if results of that study indicate that the food ingredient produces cancer in animals, CFSAN identifies the substance as a carcinogen under the conditions of the study. No further corroboration or weight-of-evidence analysis is required, and there is no need for a detailed dose-response assessment, exposure assessment, or risk characterization for the purpose of determining a specific level of the carcinogenic substance in food that may be considered to be safe. CFSAN uses more elaborate procedures for known or suspected carcinogenic impurities in food additives. The center’s method for low-dose cancer risk estimation is similar to EPA’s method (presented in app. II) on extrapolation for carcinogens (see fig. 3). On the dose-response curve of tumor incidence versus dose for a chemical, CFSAN chooses a point below which the data are no longer considered reliable, usually in the range of a tumor incidence of 1 percent to 10 percent. A straight line is drawn from the upper-confidence limit on the estimated risk at that point to the origin (i.e., zero incremental dose/zero incremental response). This provides the slope of the line used to provide upper-bound estimates of cancer risk at low doses. CFSAN does not specify a particular mathematical form for the dose-response relationship in the experimental dose range; the only requirement is an adequate fit to the data.
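The straight-line low-dose extrapolation described above reduces to simple slope arithmetic. The point of departure and risk figures in this sketch are invented for illustration and do not come from any actual assessment.

```python
# Sketch of straight-line extrapolation from a point of departure
# (POD) to the origin, as described for CFSAN impurity assessments.

def linear_slope(pod_dose, upper_bound_risk_at_pod):
    """Slope of the line from the upper confidence limit on risk at
    the point of departure down to the origin (zero dose, zero risk)."""
    return upper_bound_risk_at_pod / pod_dose

def upper_bound_low_dose_risk(dose, slope):
    """Upper-bound excess cancer risk at a low dose."""
    return slope * dose

# Hypothetical: 10% upper-bound tumor incidence at 5 mg/kg/day.
slope = linear_slope(5.0, 0.10)           # 0.02 risk per mg/kg/day
risk = upper_bound_low_dose_risk(1e-4, slope)
print(f"{risk:.1e}")  # 2.0e-06, i.e. about 2 in a million
```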
According to FDA officials, CFSAN risk assessors use one of two different methods in animal-to-human scaling when extrapolating this dose-response curve to the estimation of upper bounds on human risk. In one of the methods, CFSAN assumes that cancer risks are equal in animals and humans when doses are similar on a lifetime-averaged milligram/kilogram/day basis (i.e., body weight scaling). In the other method, CFSAN bases its interspecies dose scaling on body weight to the ¾ power (in the absence of information to the contrary). Although the literature suggests that scaling methods can have a significant impact on risk assessment results, FDA officials said that using one approach versus the other makes relatively little difference. Also, because tumor rates can be biased by intercurrent mortality in animal studies (i.e., some animals die during the study from causes other than the tumor type being investigated), CFSAN uses a statistical procedure to make adjustments for intercurrent mortality in testing and estimating tumor rates. CFSAN procedures for exposure assessments to food and color additives are largely driven by the FFDCA requirement that the safety of a chemical compound be assessed in terms of the total amount of the compound in the diet. Therefore, to determine exposure, CFSAN risk assessors must consider all potential uses of the compound being reviewed. Similarly, in defining the allowable limits, the assessors must conclude that the sum total of all of these uses is within safe limits. CFSAN generally assumes in its exposure assessments that the compound is present at its maximum proposed use level in all foods in which it may be used, that any contaminants are present at residue levels established through chemical analysis, and that consumers are exposed to the additive every day. 
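As a rough illustration of the two animal-to-human scaling conventions mentioned above, the sketch below compares direct mg/kg/day scaling with scaling on body weight to the 3/4 power. The species weights and the dose are hypothetical.

```python
# Sketch of two interspecies dose-scaling conventions. If total dose
# scales with body weight (BW) to the 3/4 power, the per-kg dose
# scales as (BW_animal / BW_human) ** (1/4).

def hed_body_weight(animal_dose_mg_per_kg):
    """Direct mg/kg/day scaling: human dose equals animal dose."""
    return animal_dose_mg_per_kg

def hed_three_quarter_power(animal_dose_mg_per_kg, animal_bw_kg, human_bw_kg):
    """Human-equivalent dose under BW^(3/4) scaling of total dose."""
    return animal_dose_mg_per_kg * (animal_bw_kg / human_bw_kg) ** 0.25

# Hypothetical 10 mg/kg/day dose in a 0.025 kg mouse, 60 kg human:
print(hed_body_weight(10.0))                       # 10.0 mg/kg/day
print(hed_three_quarter_power(10.0, 0.025, 60.0))  # ~1.43 mg/kg/day
```

As the numbers suggest, the 3/4-power convention yields a markedly lower human-equivalent dose when extrapolating from small animals.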
Although most of the agency’s focus is on chronic (long-term) exposures, the agency must also sometimes focus on very short-term, or even single, exposures, especially for contaminants associated with acute toxic effects. The first component in CFSAN’s exposure assessment for food safety is the determination of the concentrations (i.e., use levels or residue levels, in the case of a chemical contaminant) of a chemical in foods. In the premarket approval process, the sponsor of the petition or notification provides this information. For postmarket assessments, information may come from focused field surveys or from established monitoring programs such as the Total Diet Study, which has provided data since 1961 on dietary intakes of a variety of food contaminants, including pesticides, industrial chemicals, toxic and nutritional elements, and vitamins and radionuclides. Analyses are performed on foods prepared for consumption in order to provide a realistic measure of human intake. The second component of CFSAN’s exposure assessment is determining the extent of consumption of different foods. In this process, CFSAN primarily relies on multiple-day national food consumption surveys, and focuses on the upper end of the food intake distribution (i.e., the heaviest consumers of particular foods). CFSAN assumes that, within demographic subgroups, all variation in the survey data represents variation among individuals. That is, the average daily consumption of a food during the survey period is assumed to apply to that person for his or her whole life, and the intakes for different survey participants are assumed to reflect differences from one person to the next in each person’s lifetime consumption. This default assumption has acknowledged biases that result in both overestimating high-end chronic exposures and underestimating the proportion of the population ever consuming particular foods.
To complete the exposure assessment, levels of an additive or contaminant in each food type are combined with estimates of daily consumption of each food type to give a total estimated daily intake. FDA may calculate exposures for various demographic groups, attempting to characterize both a mean exposure and an exposure for the heavy consumer (typically consumers at the 90th percentile of the intake distribution). FDA officials also pointed out that the exposure models they use for direct food additives are very different from those for food-contact substances (e.g., packaging). For the latter, they said that the bottom line is usually a mean exposure. FDA officials said that for risk management purposes they may attempt to show the implications of different scenarios used to estimate risk. FDA noted that a computer program that employs Monte Carlo techniques has been developed to study the effects of variability and uncertainty of potency and exposure estimates on estimates of risk. Such complex analyses have been applied principally to contaminants rather than in the premarket evaluations for food and color additives. CVM uses risk assessment in both the premarket approval process and postmarket surveillance. Risk assessments support risk management decisions such as the development of safe concentration values and residue tolerances for these drugs in foods. The primary human health concern in chemical risk assessment for CVM is animal drug residues in food. Residue is defined as any compound present in edible tissues (including milk and eggs) of the food-producing animal that results from the use of the chemical compound, including the compound, its metabolites, or other substances formed in or on food because of the use of the compound. Like CFSAN, CVM’s risk assessment procedures vary based on whether noncancer or cancer risks are at issue. 
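CFSAN’s combination of food-specific levels with consumption estimates into a total estimated daily intake, described earlier, reduces to a sum of products. The foods, additive levels, and consumption figures in this sketch are invented for illustration.

```python
# Sketch of CFSAN-style intake estimation: sum over food types of
# (additive level in food) x (daily consumption of that food).
# All concentrations and consumption figures are hypothetical.

def estimated_daily_intake(levels_mg_per_kg_food, consumption_kg_per_day):
    """Total estimated daily intake in mg/day across all food types."""
    return sum(levels_mg_per_kg_food[food] * consumption_kg_per_day[food]
               for food in levels_mg_per_kg_food)

levels = {"beverage": 0.5, "baked_goods": 2.0}        # mg additive per kg food
mean_consumption = {"beverage": 1.0, "baked_goods": 0.25}   # kg/day, mean
heavy_consumption = {"beverage": 2.0, "baked_goods": 0.5}   # kg/day, ~90th pct

print(estimated_daily_intake(levels, mean_consumption))   # 1.0 mg/day
print(estimated_daily_intake(levels, heavy_consumption))  # 2.0 mg/day
```

Running the same calculation at mean and high-end consumption, as here, parallels FDA’s practice of reporting both a mean exposure and a heavy-consumer exposure.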
According to FDA officials, the center’s risk assessment procedures for noncarcinogens are similar to those used by the rest of FDA, and are based on laboratory animal data, estimated daily food consumption, drug and metabolite residue data, and appropriate safety factors. CVM’s guidelines for industry note that the agency will calculate the ADI from the results of the most sensitive study in the most sensitive species. The center will normally use different safety factors depending on the type of study supporting the ADI calculation. When using the ADI to calculate the “safe concentration” for an animal drug product, CVM uses standard values for residues of veterinary drugs in edible tissues for the weight of an average adult and the amount and proportion of meat products, milk, and eggs consumed per day. CVM officials pointed out that the consumption values in their guidelines for industry are standard values used by the Joint Expert Committee on Food Additives, sponsored by the World Health Organization and Food and Agriculture Organization, which provides food safety recommendations to the Codex Committee on Residues of Veterinary Drugs in Foods. For carcinogen risk assessments, CVM uses a nonthreshold, conservative, linear-at-low-dose extrapolation procedure to estimate an upper limit of low-dose risk (as described under CFSAN). Cancer risk estimates are generally based on animal bioassays, and upper 95-percent confidence limits of carcinogenic potency are used to account for inherent experimental variability. FDA officials noted that some elements and assumptions of its dose-response analysis procedures are likely to overestimate risk by an unknown amount. Similarly, some of its assumptions on exposure may also overestimate cancer risks. For example, CVM’s risk assessment procedures assume that the concentration of residue in the edible product is at the permitted concentration and that consumption is equal to that of the 90th percentile consumer.
In addition, the agency assumes that all marketed animals are treated with the carcinogen. While acknowledging that all of these assumptions result in multiple conservatisms, FDA also states that they are prudent because of the uncertainties involved. Medical devices, supplies, and implants may contain chemicals that can leach out of the devices into surrounding tissues. Risks from these types of chemical contaminants are considered during the premarket review of the material safety of a device, but concerns may also arise during CDRH’s postmarket surveillance activities. According to FDA officials, the concentrations of such leachants in human tissues are generally small and amenable to typical safety risk assessment procedures. CDRH has issued guidance for the preclinical (premarket) biological safety evaluation of medical devices. In that guidance, CDRH recognizes and uses a number of domestic and international consensus standards that have been developed to address aspects of medical device safety, including risks posed by exposure to compounds released from medical devices. However, CDRH officials pointed out that they and medical device approval applicants may use approaches other than those described in the consensus standards to conduct risk assessments. They said the standard that comes closest to describing CDRH’s approach for chemical risk assessment is International Organization for Standardization (ISO)/FDIS 10993-17. CDRH officials noted that, although this international standard is still in draft and has not been formally recognized by the center, the methods that it describes represent the primary procedures used by CDRH to assess the risk posed by patient exposure to compounds released from medical devices.
They also pointed out that this standard is unique among risk assessment guidelines in that it provides methods to derive health-based exposure levels for local effects such as irritation, which often “drive” the risk assessment for compounds released from implanted devices. According to CDRH, hazards posed by patient exposure to a device are typically determined after subjecting the device to a series of tests defined by the preclinical evaluation guidance. Evaluation of potential toxicity is supposed to cover a number of adverse effects, including local or systemic effects, cancer, and reproductive and developmental effects. Unless justification is otherwise provided, CDRH assumes that the results obtained in animal studies are relevant for humans. One notable exception for medical device risk assessment, according to CDRH, is that implantation-site sarcomas (malignant tumors) found in rodents are not assumed to be relevant for humans. One option available to applicants is to use a risk assessment approach involving (1) characterization of the chemical constituents released from a device; (2) derivation of a tolerable intake (TI) value for each compound; and (3) comparison of the dose of each constituent received by a patient to its respective TI value. A TI value is a dose of a compound that is not expected to produce adverse effects in patients following exposure to the compound for a defined period. According to CDRH, it is conceptually similar to EPA’s reference dose, but different TI values can be derived for a compound depending on the route and duration of exposure to the medical device. CDRH’s procedures recommend establishing TI values for noncancer adverse effects using standard uncertainty factors in order to account for interspecies and inter-individual differences in sensitivity. However, CDRH permits flexibility in the event that data are available to characterize these uncertainties more accurately.
CDRH also uses a lumped uncertainty factor to adjust for limitations in data quality such as (1) the use of short-term studies in the absence of long-term studies, (2) the absence of supporting studies, and (3) use of studies involving different routes or rates of exposure. According to CDRH, this lumped uncertainty value typically does not exceed 100, but can exceed 100 when acute (short-term) toxicity data are the only basis of the calculation of a TI value for permanent exposure. CDRH considers this provision especially important for medical device risk assessment because of the paucity of long-term toxicity data for many of the compounds released from medical devices. For carcinogenic leachants, FDA often uses low-dose linear extrapolation techniques. For a device-released compound that has been determined to be a carcinogen, CDRH uses a weight-of-the-evidence approach to determine the likelihood that it exerts its carcinogenic effect via a genotoxic mechanism. If the evidence suggests that the compound is genotoxic, then CDRH uses quantitative risk assessment to estimate a TI consistent with a risk level of 1 per 10,000. No specific quantitative risk assessment approaches have been identified as better than others for conducting the cancer risk assessment. If, however, the weight-of-the-evidence test suggests that the compound is a nongenotoxic carcinogen, the uncertainty factor approach described above should be employed to derive the TI. Once the TI is derived for each compound released from a device, it is then converted to a tolerable exposure value by taking into account the body weight of the patient and the usage patterns of the device that releases the compound. Overall, the agency noted that one of the most challenging problems in risk assessments for devices is determining the level of exposure to leached chemicals. As previously noted, FDA does not require the use of a specific risk assessment protocol or of specific default assumptions.
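The TI derivation and its conversion to a tolerable exposure, as described above, amount to a short chain of arithmetic. The NOAEL and uncertainty factor values in this sketch are hypothetical and not drawn from any actual CDRH assessment.

```python
# Sketch of the TI-style calculation CDRH describes: a tolerable intake from
# a no-effect dose divided by uncertainty factors (interspecies,
# inter-individual, and a lumped data-quality factor), then a tolerable
# exposure scaled to patient body weight. All factor values are hypothetical.

def tolerable_intake(noael_mg_per_kg_day: float, uf_interspecies: float,
                     uf_interindividual: float, uf_data_quality: float) -> float:
    """TI (mg/kg/day) = NOAEL / (product of uncertainty factors)."""
    return noael_mg_per_kg_day / (uf_interspecies * uf_interindividual * uf_data_quality)

def tolerable_exposure(ti_mg_per_kg_day: float, body_weight_kg: float) -> float:
    """Tolerable exposure (mg/day) for a patient of the given body weight."""
    return ti_mg_per_kg_day * body_weight_kg

# Hypothetical: 5.0 mg/kg/day NOAEL; 10 x 10 for interspecies and
# inter-individual variability, plus a lumped data-quality factor of 10
# (per CDRH, the lumped factor typically does not exceed 100).
ti = tolerable_intake(5.0, 10.0, 10.0, 10.0)       # 0.005 mg/kg/day
print(round(tolerable_exposure(ti, 70.0), 4))      # 0.35 mg/day
```

A device's usage pattern (e.g., duration of implantation) would then determine which TI applies and how the tolerable exposure compares to the measured dose of each released constituent.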
However, the summary of FDA procedures also demonstrated that assumptions and methodological choices are an integral part of a risk assessment. FDA officials noted that they employ many default assumptions or choices by precedent. In particular, FDA officials and several reference documents on FDA risk assessment procedures pointed out that the agency routinely incorporates conservative assumptions into its assessments in the face of uncertainty. The report on the U.S. food safety system emphasized that precaution is embedded in the underlying statutes and the actions of regulatory agencies to ensure acceptable levels of consumer protection. Therefore, precautionary approaches are very much a part of the agency’s risk analysis policies and procedures. Although not intended to be comprehensive, the following table illustrates in detail some of the specific assumptions or methodological choices that are used in FDA as a whole and within particular FDA product centers. The information in the table was taken primarily from FDA documents, but also reflects additional comments provided by FDA officials. (GAO notes and comments appear in parentheses.) Unlike EPA, FDA does not have an official policy on how the results of the agency’s risk assessments should be characterized to decision makers and the public. However, FDA officials said that, in practice, the agency uses a standard approach for risk characterization that is similar to EPA’s official policy. They said that FDA’s general policy is to reveal the risk assessment assumptions that have the greatest impact on the results of the analysis, and to state whether the assumptions used in the assessment were conservative. FDA officials also said that their risk assessors attempt to show the implications of different distributions and choices (e.g., the results expected at different levels of regulatory intervention). 
As noted earlier, FDA may employ methods such as Monte Carlo techniques to provide additional information on the effects of variability and uncertainty on estimates of risk. There are some differences in FDA risk characterization procedures depending on the products being regulated and the nature of the risks involved. For food ingredients (direct and indirect food additives, color additives used in food, and substances generally recognized as safe) and animal drug residues that are not carcinogenic, risk characterization under the FFDCA focuses on whether the mandate of reasonable certainty of no harm will be achieved given the proposed limits on use and permissible residues. The main issue is whether the higher end (the 90th percentile) of the distribution of estimated daily intakes is below the ADI calculated from toxicity data. The statutory mandate is interpreted as requiring that, for a food additive to be declared safe, heavy consumers of particular foods should be reasonably assured of protection even if residues were at the maximum level allowed. For carcinogenic impurities, FDA’s focus is also on characterizing whether there is reasonable certainty of no harm. However, because of the Delaney clause, risk characterization is not needed for carcinogenic food ingredients. Residues of carcinogenic animal drugs are also evaluated separately under the DES proviso. CDRH officials pointed out that the draft ISO/FDIS 10993-17 international standard explicitly addresses one risk characterization issue—how sensitive subpopulations should be taken into account when setting allowable limits for compounds released from devices. Although it states that “idiosyncratic hypersusceptibility” should not normally be the basis of the tolerable exposure or allowable limit, the ISO standard does not preclude setting standards in this manner.
Furthermore, the standard says that limits should be based on the use of the device by the broadest segment of the anticipated user population. Therefore, if a device is intended for a specific population, such as pregnant women, estimates should be based on that population. Although the Occupational Safety and Health Administration (OSHA) generally follows the standard four-step National Academy of Sciences’ (NAS) paradigm for risk assessment, there are several distinguishing characteristics of its assessments. Under its statutory mandate, OSHA has a specific and narrow focus on the potential risks to workers in an occupational setting. Further, the underlying statute and court decisions interpreting the statute have required the agency to focus on demonstrating, with substantial evidence, that significant risks to workers exist before it can regulate. In addition to presenting its own best estimates of risk, OSHA may present estimates based on alternative methods and assumptions. Much of what is distinct about risk assessment at OSHA can be traced to statutory provisions, court decisions, and the nature of workplace exposures to chemicals. OSHA, an agency within the Department of Labor, was created by the Occupational Safety and Health Act of 1970 (the OSH Act). The central purpose of the act is to ensure safe and healthful working conditions. As one of the primary means of achieving this goal, the act authorizes the Secretary of Labor to promulgate and enforce mandatory occupational safety and health standards. Certain provisions in the act stipulate both the nature and the manner in which these standards should be established. For example: Under section 3(8) of the OSH Act, a safety or health standard is defined as a standard that requires conditions, or the adoption or use of one or more practices, means, methods, operations, or processes, reasonably necessary or appropriate to provide safe or healthful employment or places of employment. 
According to OSHA, a standard is reasonably necessary or appropriate within the meaning of section 3(8) if it eliminates or substantially reduces significant risk and is economically feasible, technologically feasible, cost effective, consistent with prior OSHA action or supported by a reasoned justification for departing from prior OSHA actions, supported by substantial evidence on the record as a whole, and is better able to effectuate the act’s purposes than any national consensus standard it supersedes. Section 6(b)(5) of the act states that “The Secretary, in promulgating standards dealing with toxic materials or harmful physical agents… shall set the standard which most adequately assures, to the extent feasible, on the basis of the best available evidence, that no employee will suffer material impairment of health or functional capacity even if such employee has regular exposure to the hazard dealt with by such standard for the period of his working life.” A significant factor influencing the interpretation of the OSH Act provisions and OSHA’s approach to risk assessment is the Supreme Court ruling in its 1980 “Benzene” decision that, before issuing a standard, OSHA must demonstrate that the chemical involved poses a “significant risk” under workplace conditions permitted by current regulations and that the new limit OSHA proposes will substantially reduce that risk. This decision effectively requires OSHA to evaluate the risks associated with exposure to a chemical and to determine that these risks are “significant” before issuing a standard. However, the court provided only general guidance on what level of risk should be considered significant. The court noted that a reasonable person might consider a fatality risk of 1 in 1,000 (10⁻³) to be a significant risk and a risk of 1 in 1 billion (10⁻⁹) to be insignificant. Thus, OSHA considers a lifetime risk of 1 death per 1,000 workers to represent a level of risk that is clearly significant.
The court also stated that “while the Agency must support its findings that a certain level of risk exists with substantial evidence, we recognize that its determination that a particular level of risk is significant will be based largely on policy considerations.” Later Court of Appeals decisions have interpreted the Supreme Court’s “Benzene” decision to mean that OSHA must quantify or explain the risk for each substance that it seeks to regulate unless it can demonstrate that a group of substances share common properties and pose similar risks. Although this decision does not require OSHA to quantitatively estimate the risk to workers in every case, it does preclude OSHA from setting new standards without explaining how it arrives at a determination that the standard will substantially reduce a significant risk. According to OSHA officials, the other important contextual influence on OSHA risk assessment is the very nature of workplace exposures to chemicals. Generally, workplace exposures to chemicals are at higher levels than most environmental exposures to chemicals experienced by the general public. Workers are often exposed to many chemical agents at levels not much lower than those used in experimental animal studies. According to agency officials, this is one of the unique features of OSHA’s chemical risk assessments. Also, OSHA frequently has relevant human data available on current exposures, in contrast to most other agencies regulating toxic substances. OSHA currently has no formal internal risk assessment guidance. Instead, OSHA has primarily described its general risk assessment methods, as well as the rationale for specific models and assumptions selected, in the record of each risk assessment and regulatory action. One reason for this, according to agency officials, is that OSHA performs risk assessments only for its standards. 
Overall, they said the agency only publishes two or three proposed or final rules per year, and not all of these rules involve a chemical risk assessment. The officials also emphasized the incremental nature of advances in risk assessment methods and science, with successive assessments establishing precedents for methods that may be used for succeeding analyses. Like EPA and FDA, OSHA uses the basic NAS four-step process for risk assessment. Another fundamental source for OSHA’s (and EPA’s and FDA’s) methods was the 1985 document on chemical carcinogens produced by the Office of Science and Technology Policy. OSHA often refers to the reference sources of other entities, including other federal agencies, in both specific rulemakings and as general technical links to its on-line information on occupational risks. Despite these common elements and procedures, several features of OSHA’s approach differ from those of other federal agencies. Because OSHA does not currently have written internal guidance on its risk assessment procedures, the information in the following sections is derived primarily from an examination of OSHA’s chemical risk assessments. We also relied on secondary sources, such as Lorenz Rhomberg’s report on federal agencies’ risk assessment methods. In OSHA’s risk assessments, the hazard identification step results in a determination that an exposure to a toxic substance causes, is likely to cause, or is unlikely or unable to cause, one or more specific adverse health effects in workers. According to OSHA, this step also shows which studies have data that would allow a quantitative estimation of risk. OSHA defines hazardous and toxic substances as those chemicals present in the workplace that are capable of causing harm. In this definition, the term chemicals includes dusts, mixtures, and common materials such as paints, fuels, and solvents. OSHA currently regulates exposure to approximately 400 such substances. 
In the workplace environment, chemicals pose a wide range of health hazards (e.g., irritation, sensitization, carcinogenicity, and noncancer acute and chronic toxic effects) and physical hazards (e.g., ionizing and nonionizing radiation, noise, and vibration). Most of OSHA’s chemical risk assessments have addressed occupational carcinogens. In assessing potential carcinogens, OSHA may consider the formal hazard classification or ranking schemes of other entities as part of the available evidence on a particular chemical. Ultimately, though, OSHA makes its own determinations on the risk posed by particular compounds and their classification as potential occupational carcinogens. OSHA’s chemical risk assessments may also discuss noncancer hazards. For example, in the final rule on methylene chloride the agency discussed the evidence regarding central nervous system, cardiac, hepatic (liver), and reproductive toxicity, as well as carcinogenicity. Similarly, the agency’s rulemaking on 1,3-butadiene addressed adverse health effects such as developmental and reproductive toxicity and bone marrow effects in addition to the evidence on carcinogenicity. OSHA quantifies the risks of noncancer effects if it determines that there are adequate data on exposure and response for the substance of interest. OSHA officials also noted that OSHA has a hazard communication standard, which requires manufacturers, shippers, importers, and employers to inform their employees of any potential health hazard when handling these chemicals. This is usually done through container labeling and material safety data sheets. Although this standard does not address specific risks posed by individual chemicals, it is a comprehensive hazard information standard for nearly all chemicals in commerce. OSHA’s general procedures for dose-response assessment are similar to those of EPA and FDA, especially in the choice of data sets to use for quantitative assessments.
However, OSHA probably uses a linear low-dose extrapolation model less often than is the case for other agencies. OSHA differs from the other federal regulatory agencies also in being less conservative in setting its target risk levels when conducting low-dose extrapolation. As previously noted, the main points of OSHA’s risk assessments for regulatory purposes are to determine whether significant risks exist and to demonstrate in a broad sense the degree to which the standard would reduce significant risk. The specific choice of where to set the standard is tempered by the statutory mandate that standards must be technologically and economically feasible. Like other agencies, OSHA states that, all things being equal, epidemiological data are preferred over data from animal studies whenever good data on human cancer risks exist. More often than some other agencies regulating exposures to toxic substances, OSHA may have relevant human data on adverse health effects available for consideration in its risk assessments. However, the rulemaking examples we reviewed also illustrate that these epidemiological data may be considered inadequate for quantitative dose-response assessment, while animal data may provide more precise and useful dose-response information. In both the methylene chloride and 1,3-butadiene dose-response assessments, for example, OSHA had both epidemiological and animal data available, but based its quantitative estimates on data from rodent models. However, OSHA did use its analysis of the epidemiological data when examining the consistency of the results derived from animal studies. When faced with the choice of several animal data sets, OSHA tends not to combine tumor sites but to choose the data set showing the highest sensitivity (i.e., most sensitive sex, species, and tumor site). The agency will, however, frequently present information from alternative data sets and analyses. 
The agency is likely to include benign tumors with the potential to progress to malignancy along with malignant tumors in the data set used for its quantitative assessments. OSHA cited the Office of Science and Technology Policy’s views on chemical carcinogens in support of this practice, as well as noting that other federal agencies, including EPA and FDA, have also included benign responses in their assessments. Because occupational exposures tend to be closer to the range of experimentally tested doses in animal studies, extrapolation may pose less of a challenge for OSHA than for other regulatory agencies. OSHA’s preferred model for quantitative analysis of animal cancer dose-response data and for extrapolation of these data to low doses is the “multistage model,” which is based on the biological assumption that carcinogens induce cancer through a series of independent ordered viable mutations, and that cancer develops through stages. Unlike EPA and FDA, however, OSHA tends to focus on the maximum likelihood estimate (MLE) of the fitted dose-response curve rather than on an upper bound, although the agency also provides estimates for the 95-percent upper confidence limit (UCL) of the dose-response function. This procedure generally leads to a less conservative risk estimate than the procedures used by EPA or FDA. Like EPA and FDA, OSHA generally assumes no threshold for carcinogenesis. In contrast to the other agencies, OSHA’s default dose-metric for interspecies extrapolation is body weight scaling (mg/kg/day, i.e., risks equivalent at equivalent body weights). According to OSHA, this default is used to be consistent with prior chemical risk assessments, but it also reflects a conscious policy decision that its methodology should not be overly conservative. OSHA says it may in the future move to ¾-power scaling, as agreed to by EPA, FDA, and the Consumer Product Safety Commission some years ago.
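The multistage model form and the two interspecies scaling conventions mentioned above can be sketched as follows. The fitted coefficients (q0, q1, q2) and the animal body weight are hypothetical stand-ins for MLE values from an animal bioassay, not figures from any OSHA rulemaking.

```python
# Sketch of the multistage dose-response form and interspecies scaling.
import math

def multistage_prob(dose: float, q0: float, q1: float, q2: float) -> float:
    """P(response at dose d) = 1 - exp(-(q0 + q1*d + q2*d^2))."""
    return 1.0 - math.exp(-(q0 + q1 * dose + q2 * dose * dose))

def extra_risk(dose: float, q0: float, q1: float, q2: float) -> float:
    """Extra risk over background: (P(d) - P(0)) / (1 - P(0))."""
    p0 = multistage_prob(0.0, q0, q1, q2)
    return (multistage_prob(dose, q0, q1, q2) - p0) / (1.0 - p0)

# Hypothetical MLE coefficients; at low doses the extra risk is
# approximately linear with slope q1, consistent with the no-threshold
# assumption described in the text.
q0, q1, q2 = 0.02, 0.003, 0.0001
print(extra_risk(1.0, q0, q1, q2))  # small extra risk, roughly q1 at low dose

# Interspecies scaling of a 10 mg/kg/day rodent dose to a human equivalent:
# OSHA's body-weight default (mg/kg/day) leaves the per-kg dose unchanged,
# while 3/4-power scaling multiplies it by (animal weight / human weight)^(1/4).
animal_kg, human_kg = 0.35, 70.0
print(10.0)                                    # body-weight scaling: unchanged
print(10.0 * (animal_kg / human_kg) ** 0.25)   # roughly 2.66 under 3/4-power scaling
```

The contrast between the two scaling outputs illustrates why OSHA's body-weight default is the less conservative choice: it assigns humans the same per-kilogram potency observed in the rodent, while ¾-power scaling assigns a lower human-equivalent dose for the same risk.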
OSHA also says it is currently considering developing a different form of the multistage model, which will provide more stable MLE estimates than does the current form. OSHA also considered data from physiologically based pharmacokinetic (PBPK) models in the risk assessment examples we reviewed. PBPK models provide information on target organ dose by estimating the time distribution of a chemical or its metabolites through an exposed subject’s system. OSHA noted that PBPK modeling can be a useful tool for describing the distribution, metabolism, and elimination of a compound of interest under conditions of actual exposure and, if data are adequate, can allow extrapolation across dose levels, routes of exposure, and species. In particular, pharmacokinetic information is useful in modeling the relationship between administered doses and effective doses as a function of the exposure level. However, PBPK models are complicated and require substantial data, which may not be available for most chemicals. OSHA pointed out in the methylene chloride rule that differences in the risk estimates from alternative assessments (including those submitted by outside parties) were not generally due to the dose-response model used, but to whether the risk assessor used pharmacokinetic modeling to estimate target tissue doses and what assumptions were used in that modeling. In the methylene chloride standard, OSHA developed a set of 11 criteria to judge whether available data are adequate to permit the agency to rely on PBPK analysis in place of administered exposure levels when estimating human equivalent doses. Although it is beyond the scope of this appendix to provide a full technical explanation of the following criteria, they do illustrate the complex nature of PBPK analysis and, more generally, the types of issues that risk assessors consider in weighing the available data.
1. The predominant as well as all relevant minor metabolic pathways must be well described in several species, including humans.
2. The metabolism must be adequately modeled.
3. There must be strong empirical support for the putative mechanism of carcinogenesis.
4. The kinetics for the putative carcinogenic metabolic pathway must have been measured in test animals in vivo and in vitro and in corresponding human tissues at least in vitro.
5. The putative carcinogenic metabolic pathway must contain metabolites that are plausible proximate carcinogens.
6. The contribution to carcinogenesis via other pathways must be adequately modeled or ruled out as a factor.
7. The dose surrogate in target tissues used in PBPK modeling must correlate with tumor responses experienced by test animals.
8. All biochemical parameters specific to the compound, such as blood:air partition coefficients, must have been experimentally and reproducibly measured. This must especially be true for those parameters to which the PBPK model is sensitive.
9. The model must adequately describe experimentally measured physiological and biochemical phenomena.
10. The PBPK models must have been validated with other data (including human data) that were not used to construct the models.
11. There must be sufficient data, especially data from a broadly representative sample of humans, to assess uncertainty and variability in the PBPK modeling.
In the 1,3-butadiene standard, which came out after the methylene chloride standard, OSHA used these same 11 criteria to judge the adequacy of the 1,3-butadiene PBPK models for dose-response assessment. In the butadiene case, the PBPK models did not meet all of these criteria. For dose-response analyses from human cancer data, OSHA tends to use similar methodologies to the other regulatory agencies. Mostly these are simple linear models, such as relative risk models, and estimates of risk are based on the MLE.
No specific approach or procedure for the assessment of noncancer effects was evident in the examples of OSHA rulemakings we reviewed. However, OSHA clearly considered a range of noncancer toxic effects in its analyses. In its rulemakings, OSHA focused on describing and analyzing a variety of relevant studies, case reports, and other information found in the scientific literature. Rhomberg noted that, in the past, OSHA used methods that were comparable to those of other agencies. However, the federal court in the AFL-CIO v. OSHA case questioned the use of standard safety factors, noting that “application of such factors without explaining the method by which they were determined… is clearly not permitted.” OSHA has produced quantitative risk estimates for reproductive and developmental effects (glycol ethers, 1993), heart disease and asthma (environmental tobacco smoke, 1994), Hepatitis B virus infection (bloodborne pathogens, 1992), tuberculosis, and kidney toxicity from cadmium exposure. OSHA is currently working on quantitative risk assessments for such adverse health effects as cardiovascular disease mortality, neural effects, asthma, and respiratory tract irritation for a number of substances. OSHA states that new methodology is being used for these assessments, but review drafts were not yet available, so we cannot comment further. Under the OSH Act, OSHA has a relatively specific and narrow focus on exposure assessment. OSHA’s primary focus is estimating the risk to workers exposed to an agent for a working lifetime. This risk is calculated in terms of a person exposed at a constant daily exposure level for 45 years at 5 days per workweek and 8 hours per workday. The goal is to set standards, in the form of permissible exposure limits (PELs), so that workers would suffer no impairment during the course of their lifetime under a continuous exposure scenario.
Although this is a hypothetical exposure scenario, Rhomberg observed that it is not conservative compared with the actual distribution of exposures in the workplace. He also noted that, in assessing the exposures and risks associated with the new proposed standard, OSHA assumes that the standard is applied to newly exposed workers who will work under the new standard for their entire working lives. No allowance is made for the fact that current workers may already have had exposures higher than the new standard. Despite the primary focus on long-term working lifetime exposures, there may also be some risks posed by acute, short-term exposures. Therefore, although part of OSHA’s risk assessment could focus on longer-term risks and deal with 8-hour time-weighted average (TWA) exposure, the agency’s analysis may also cover short-term exposure effects. In the methylene chloride rule, for example, OSHA set the 8-hour TWA PEL primarily to reduce the risk of employees developing cancer, while the 15-minute short-term exposure limit (STEL) was primarily designed to protect against noncancer risks, such as negative effects on the central nervous system. Finally, Rhomberg pointed out the following distinct features of occupational exposure assessments: Compared to environmental exposures, exposures in the workplace tend to be much better defined. The workplace is a confined setting within which practices and behaviors tend to be standardized. Exposure levels are often high enough to be easily measured, and many workplaces have ongoing monitoring of environmental levels of compounds. As previously noted, OSHA’s risk assessment procedures, including its default assumptions and methodological preferences, tend to be established through the precedents of prior rulemakings.
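The 8-hour TWA computation implied by a TWA PEL can be sketched as follows. The limit values and shift measurements are hypothetical, not OSHA's actual PELs for any substance, and the STEL check is simplified: in practice, short-term limits are judged against 15-minute sampling periods rather than whole measured intervals.

```python
# Sketch of an 8-hour time-weighted average exposure computation, with a
# simplified short-term limit check. All values are hypothetical.

def twa_8hr(samples: list[tuple[float, float]]) -> float:
    """8-hour TWA = sum(concentration_i * hours_i) / 8, over measured
    (concentration_ppm, duration_hours) intervals in the shift."""
    return sum(c * t for c, t in samples) / 8.0

# A worker's shift: 4 h at 20 ppm, 2 h at 50 ppm, 2 h at 10 ppm.
shift = [(20.0, 4.0), (50.0, 2.0), (10.0, 2.0)]
pel_twa, stel = 25.0, 75.0  # hypothetical 8-hr TWA PEL and 15-min STEL (ppm)

print(twa_8hr(shift))                    # 25.0 ppm, right at the TWA PEL
print(any(c > stel for c, _ in shift))   # False: no interval exceeds the STEL
```

The example illustrates why a standard may pair both limits: a shift can comply with a short-term limit at every moment yet still sit at or above the 8-hour average limit, and vice versa.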
In contrast to EPA and FDA, OSHA also appears to choose somewhat less conservative options, even though the agency notes that Congress and the courts have permitted and even encouraged it to consider “conservative” responses to both uncertainty and human variability. The Supreme Court’s Benzene decision, in particular, affirmed that “the Agency is free to use conservative assumptions in interpreting the data with respect to carcinogens, risking error on the side of over-protection rather than under protection.” On the other hand, OSHA explicitly stated in rulemakings that it takes various steps to be confident that its risk assessment methodology is not designed to be overly conservative (in the sense of erring on the side of overprotection). Although not intended to be comprehensive, table 8 illustrates some of the specific assumptions or methodological choices used by OSHA. It also illustrates the overt balancing of more and less conservative choices that characterizes OSHA’s approach to risk assessment. The information presented in table 8 was taken primarily from OSHA risk assessment documents but also reflects additional comments provided by OSHA officials. (GAO notes and comments appear in parentheses.) Although OSHA does not have written risk characterization policies, recent OSHA rulemakings showed that the agency emphasized (1) comprehensive characterizations of risk assessment results; (2) discussions of assumptions, limitations, and uncertainties; and (3) disclosure of the data and analytic methodologies on which the agency relied. Rhomberg noted that OSHA’s usual practice is to present the results and methodological bases of outside parties’ risk assessments for a chemical in addition to OSHA’s own assessment, and to feature several possible bases for risk calculation in its characterization of risks. 
In checking examples of recent OSHA rulemakings, we also observed this emphasis on showing a range of alternative assessments, both those of external parties and OSHA’s own sensitivity analyses. At least three factors help to explain this proclivity to characterize risks using different data sets, assumptions, and analytical approaches, all of which are rooted in the statutory context for OSHA standards setting. First, the agency’s statutory mandate, reinforced by the Supreme Court’s Benzene decision, is that it must demonstrate “significant” risk from workplace exposure to a chemical with “substantial evidence.” Second, the OSH Act directs OSHA to base health standards on the “best available evidence” and consider the “latest scientific data.” The third factor is that the standard selected will be limited by consideration of its technological and economic feasibility and cost effectiveness. Together, these provisions provide ample incentive to show that a compound presents a significant risk even when using a range of alternative estimates and scientific evidence. (This does not preclude the agency from focusing on one analysis as the most appropriate to support its final estimate of risk at a particular level of exposure.) The bottom line is that OSHA uses risk assessment to justify a standard by showing, in general, that significant risks exist and that reducing exposure as proposed in the agency’s standard will reduce those risks. In recent OSHA rulemakings, the agency devoted considerable effort to addressing uncertainty and variability in its risk estimates. Such efforts included performing sensitivity analyses, providing the results produced by alternative analyses and assumptions, and using techniques such as Monte Carlo and Bayesian statistical analyses. In its risk characterizations, OSHA provided both estimates of central tendency (such as the mean) and upper limits (such as the 95th percentile of a distribution). 
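The kind of Monte Carlo characterization described above, which reports both a central tendency and an upper percentile, can be sketched as follows. The dose-response slope and the internal-dose distribution are synthetic illustrations, not values from any OSHA assessment.

```python
# Illustrative Monte Carlo sketch: a central-tendency (MLE-style) risk slope
# applied to a sampled distribution of internal doses, reporting risk at
# both the mean and the 95th percentile dose. All parameters are synthetic.
import random

random.seed(1)
mle_slope = 2.0e-4  # hypothetical risk per unit internal dose (MLE-style point estimate)

# Synthetic right-skewed internal-dose distribution standing in for
# PBPK-modeled human internal doses.
internal_doses = sorted(random.lognormvariate(1.0, 0.5) for _ in range(10_000))

mean_dose = sum(internal_doses) / len(internal_doses)
p95_dose = internal_doses[int(0.95 * len(internal_doses))]

print(f"risk at mean internal dose: {mle_slope * mean_dose:.2e}")
print(f"risk at 95th percentile internal dose: {mle_slope * p95_dose:.2e}")
```

Reporting both figures mirrors OSHA's practice of presenting central estimates alongside upper limits, so that readers can see how much of the final risk number is driven by variability in the exposed population.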
In the methylene chloride rule, OSHA noted that, in its past rulemakings, it had frequently estimated carcinogenic potencies via the MLE of the multistage model parameters. However, in this particular rule it chose for its final risk estimate to couple one measure of central tendency (the MLE of the dose-response parameters) with a somewhat conservative measure of its PBPK output (the 95th percentile of the distribution of human internal dose). OSHA concluded that this combination represented “a reasonable attempt to account for uncertainty and variability.” The chemical risk assessments conducted by the Department of Transportation’s (DOT) Research and Special Programs Administration (RSPA) focus primarily on acute (short-term) risks associated with potential accidents involving unintentional releases of hazardous materials (HAZMAT) during transportation. As such, they are very different from risk assessments that focus on chronic health risks. According to agency officials, RSPA’s assessments are done using a flexible, criteria-based system. RSPA’s HAZMAT transportation safety program begins with a hazard analysis that results in material classification. There are international standards on the transportation and labeling of dangerous goods that classify the type of hazard associated with a given substance (e.g., whether it is flammable, explosive, or toxic) and the appropriate type of packaging. Once a hazard is classified, RSPA’s analysis focuses on identifying the potential circumstances, probability, and consequences of unintentional releases of hazardous material during its transportation. DOT has written principles on how the results of its risk or safety assessments should be presented.
Those principles emphasize transparency regarding the methods, data, and assumptions used for risk assessments and encourage DOT personnel to not only characterize the range and distribution of risk estimates, but also to put the risk estimates into a context understandable by the general public. According to DOT officials, chemical risks may be an element of almost any departmental risk assessment. For example, they said that one of the alternatives they explored regarding air bags involved potential exposure to chemicals used in the inflation mechanism. They also noted that Federal Aviation Administration (FAA) safety analyses include some elements related to potential exposures to the chemicals that are always found in aircraft mechanisms. However, DOT’s risk assessments most commonly focus on chemical risks when considering the transportation of hazardous materials. Unintentional releases of hazardous materials during transportation, whether due to packaging leaks or transportation accidents, may pose risks to human health and safety, the environment, and property. The potential consequences of such incidents include deaths or injuries caused by an explosion, fire, or release of gases that are toxic when inhaled. Under the Federal Hazardous Materials Transportation Act, as amended, the Secretary of Transportation has the regulatory authority to provide adequate protection against risks to life and property inherent in transporting hazardous materials in commerce. DOT officials pointed out that, because this act tends to be more general than those relevant to other agencies’ regulation of risks from chemicals, it gives DOT more flexibility to define what is “adequate” to address potential risks. The statute directs the DOT Secretary to designate a material or group or class of materials as hazardous when he or she decides that transporting the material in commerce in a particular amount and form may pose an unreasonable risk to health and safety or property.
The Secretary is also directed to issue regulations for the safe transportation of such materials. The hazardous materials regulations apply to interstate, intrastate, and foreign transportation in commerce by aircraft, railcars, vessels (except most bulk carriage), and motor vehicles. The Secretary has delegated authority for implementing these hazardous materials responsibilities to various components within DOT. In particular, RSPA issues the Hazardous Materials Regulations and carries out related regulatory functions, such as issuing, renewing, modifying, and terminating exemptions from the regulations. The Superfund Amendments and Reauthorization Act of 1986 mandated that RSPA also list and regulate under the Hazardous Materials Regulations all hazardous substances designated by EPA. According to DOT officials, RSPA conducts most of the department’s risk assessments regarding the transportation of chemical hazardous materials. RSPA and the modal administrations in DOT—FAA, the United States Coast Guard, the Federal Motor Carrier Safety Administration, and the Federal Railroad Administration—share enforcement authority for hazardous materials transportation. RSPA’s Office of Hazardous Materials Safety (OHMS) has the primary responsibility for managing the risks of hazardous materials transportation within the boundaries of the United States, unless such materials are being transported via bulk marine mode (in which case the Coast Guard is responsible). Overall, OHMS notes that its Hazardous Materials Safety Program and resulting regulations (1) are risk based; (2) use data, information, and experience to define hazardous materials and manage the risk hazardous materials present in transportation; and (3) are prevention oriented. Therefore, the analysis of risk is an important element of OHMS’ responsibilities. Within OHMS, the Office of Hazardous Materials Technology (OHMT) provides scientific, engineering, radiological, and risk analysis expertise. 
Other entities may also be involved in conducting transportation-related chemical risk and safety assessments. For example, OHMS sponsored a quantitative threat assessment by the John A. Volpe National Transportation Systems Center (the Volpe Center), which is operated by RSPA, to determine the probability that a life-threatening incident would occur as a result of transporting hazardous materials in aircraft cargo compartments. OHMS also sponsored a multiyear research effort by the Argonne National Laboratory to characterize the risk associated with transportation of selected hazardous materials on a national basis. One of the most distinctive aspects regarding the regulation of hazardous materials transportation is the role that is played by international agreements and definitions. Criteria for classifying and labeling dangerous chemicals being transported have been internationally harmonized through the United Nations Recommendations on the Transport of Dangerous Goods. This UN classification system is internationally recognized, and RSPA has essentially adopted the UN recommendations into the domestic hazardous materials regulations. (A more detailed description of this classification system appears in the following section.) Because of the particular regulatory context in which it operates—in particular, its focus on acute (short-term) risks associated with transportation accidents—RSPA does not follow the four-step risk assessment paradigm identified by NAS and used by EPA, FDA, and OSHA. However, RSPA’s procedures do address similar generic questions, such as whether a particular material or activity poses a threat and the likelihood and consequences of potential accidents. The agency uses a criteria-based system to assess the hazards to human health and safety, property, and the environment that are associated with potential accidents during hazardous materials transportation. 
Chemicals are identified and classified as hazards according to a classification system in the Hazardous Materials Regulations that is largely harmonized with internationally recognized criteria. The risk analyses by RSPA then focus on assessing the potential circumstances under which exposure could occur during transportation, their causes, consequences, and probability of occurrence. The general risk assessment procedures applicable to RSPA are found within DOT-wide policies on conducting regulatory analyses and also in descriptive materials about the agency’s Hazardous Materials Safety Program. DOT included general guidelines for conducting a risk assessment as part of its broader Methods for Economic Assessment of Transportation Industry Regulations (Office of the Assistant Secretary for Policy and International Affairs, June 1982). The DOT guidelines for risk assessment are grouped under three major topics: procedural guidelines that recommend formats for presentation of risk analyses, formats for conducting risk analyses, and reporting of assumptions and limits of analyses; methodology guidelines that discuss some of the more frequently used risk methods and their applicability; and data guidelines that discuss data sources, collection and presentation of data, and raw and derived statistics. The primary focus of the DOT-wide risk assessment methodology and guidelines is on estimating the risk reduction attributable to proposed transportation safety regulations. DOT’s guidelines are intended to be applicable to risk assessment of hazardous material transport by any mode as well as assessment of other types of transportation risk. However, DOT stated that the guidelines are not intended to be a “cookbook,” or a prescriptive methodology, specifying each step in a risk assessment. DOT pointed out in the guidelines that such an approach is not desirable, because there is no single “correct” set of methods for assessing transportation risk. 
In addition to the DOT-wide guidelines, RSPA has produced written materials specifically on the Hazardous Materials Safety Program. These materials describe the role of risk assessment in the management of risks associated with transportation of hazardous materials and the general process used for analysis of risks, and they define risk assessment and management terms for purposes of hazardous materials safety. There also are a number of general guidance documents and reports on various aspects of hazardous materials transportation safety that provide additional insights into the identification and assessment of risks. RSPA does not apply the same NAS four-step paradigm for risk assessment as generally used by EPA, FDA, and OSHA. According to RSPA officials, the main reason for this difference between their risk assessments and most of those conducted by the other three agencies is the focus of RSPA’s assessments. RSPA’s concerns relative to hazardous materials transportation are primarily directed at short-term or acute health risks due to relatively high exposures from the unintentional release of hazardous materials. The officials said that, in contrast, the four basic steps of the NAS paradigm were intended to focus on chronic health risks due to long-term, low-level background chemical exposure. The main exceptions to this difference in general risk assessment procedures occur when other agencies’ assessments are similarly directed at risks associated with unintentional releases of chemicals. In particular, RSPA officials said that there are parallels between their risk assessment and management efforts and those of EPA and OSHA programs that are directed at chemical accidents. (See, for example, the description of the risk assessment procedures for EPA’s Chemical Emergency Preparedness and Prevention Office in app. II.)
In sharp contrast to most of the risk assessment procedures we described for EPA, FDA, and OSHA, toxicity is simply one of several potentially dangerous properties of a hazardous material of concern to RSPA. Where toxicity is a factor, RSPA’s risk assessments tend to center on exposure levels that pose an immediate health hazard. This focus is reflected in the types of chemical toxicity information that RSPA helps develop. For example, RSPA actively participates on a National Advisory Committee developing Acute Exposure Guideline Levels for chemicals. In specific cases where chronic toxicity or environmental values play a role in RSPA analyses, agency officials said that they rely on what EPA, FDA, OSHA, and other agencies have developed. Despite such differences, RSPA’s risk assessments address similar basic issues as the chemical risk assessments of the other three agencies (e.g., whether a particular material or activity poses a threat and the severity and likelihood of potential exposures). The DOT-wide risk assessment guidelines primarily discuss “consequence” and “probability” analyses, but also describe a preliminary step for defining scenarios of concern (essentially part of a hazard identification step) and a final step to summarize results and conclusions from the preceding analyses (essentially a risk characterization). The Hazardous Materials Safety Program materials outline a similar risk assessment process that progresses from the identification of hazards to an evaluation of incident causes, frequencies, and consequences. RSPA begins with a hazard analysis that results in material classification. In RSPA risk assessments, hazardous materials are chemical, radioactive, or infectious substances or articles containing hazardous materials that can pose a threat to public safety or the environment during transport. 
Hazardous materials pose this threat through chemical, physical, nuclear, or infectious properties that can make them dangerous to transport workers or the public. For example, RSPA is concerned with the potential for the unintentional release of hazardous materials to lead to adverse outcomes such as explosions, fires, or severely enhanced fires that can cause deaths, injuries, or property damage. The agency is also concerned with the potential toxic, corrosive, or infectious effects of released materials on humans and the environment. According to DOT officials, their hazard classification approach is a criteria-based system that provides them considerable flexibility in their analysis and regulation of potential hazards. They noted that their criteria are geared more toward the hazard a material may pose in an accident scenario than toward a chronic health risk. The Director of the Office of Hazardous Materials Technology characterized this hazard classification approach as a more open system than used in other agencies (e.g., EPA). He explained that, in this system, any new chemical or substance that fits within RSPA’s matrix of hazard criteria falls under the hazardous materials transportation regulations. Hazard identification for these assessments is based largely on international agreements regarding transportation of dangerous goods. Of particular importance, there is an internationally recognized system for the classification, identification, and ranking of all types of hazardous materials that was created by the UN Committee of Experts on the Transport of Dangerous Goods. This system is revised biennially and published as the “United Nations Recommendations on the Transport of Dangerous Goods.” Under this classification system, all hazardous materials are divided into nine general classes according to physical, chemical, and nuclear properties. 
The system also specifies subdivisions and packing group designations (that indicate a relative level of hazard) for some classes. (See table 9.) These are broad categories that may include large numbers of diverse materials. For example, the air cargo threat assessment noted that there were 535 different flammable liquid entries in the hazardous materials table and more than 700 toxic material entries. Because there are hazardous materials with multiple dangerous properties, these classes and subdivisions are not mutually exclusive. Compressed or liquefied gases, for example, also may be toxic or flammable. The UN Committee of Experts created more than 3,400 possible identification numbers, proper shipping descriptions, and hazard classes to be assigned to various hazardous material compounds, mixtures, solutions, and devices. There are also generic “not otherwise specified” identification numbers and shipping descriptions that allow the material to be classed by its defined properties. RSPA uses essentially the same framework as the UN recommendations for the hazard classes and packing requirements of its Hazardous Materials Regulations. Table 10 shows the hazard classification system in the regulations. The classification system in these regulations can be very detailed for some subjects. For example, the regulations specifically identify the types of toxicity tests and data that should be used to determine whether something would be classified as poisonous material (class 6, division 6.1). 
The regulations define poisonous material as a material, other than a gas, which is known to be so toxic to humans as to afford a hazard to health during transportation, or which, in the absence of adequate data on human toxicity, is presumed to be toxic to humans because it falls within one of several specified categories for oral, dermal, or inhalation toxicity when tested on laboratory animals; or is an irritating material, with properties similar to tear gas, which causes extreme irritation, especially in confined spaces. Of particular relevance to comparisons with chemical risk assessments of other agencies, the regulations contain precise definitions of what constitutes oral, dermal, or inhalation toxicity for purposes of the Hazardous Materials Regulations. For example, one threshold for inhalation toxicity is defined as a dust or mist with an LC50 for acute toxicity on inhalation of not more than 10 mg per liter of air. (A different definition applies to the inhalation toxicity of a vapor.) The regulations also address other testing requirements and conversion factors. The regulations state that, whenever possible, animal test data that have been reported in the chemical literature should be used. The Hazardous Materials Regulations include an extensive Hazardous Material Table with itemized information about specific hazardous materials. The number of HAZMAT table entries corresponds closely with the number created by the UN. RSPA officials noted that the number of specific chemicals covered by the regulations is many multiples of the more than 3,400 entries, though, because of the generic nature of the “not otherwise specified” descriptions. The table includes, but is not limited to, information such as the material’s description, hazard class or division, identification number, packing group, label codes, limits to the quantity of the material permitted in a single package, and special provisions concerning its transportation.
Allyl chloride, for example, is identified as a class 3 material (flammable and combustible liquid), is in packing group I (indicating great danger), is forbidden on passenger aircraft and rail, and has two special provisions regarding the tanks used for transporting this substance. A material that meets the definition of more than one hazard class or division, but is not specifically listed in the table, is to be classed according to the highest applicable hazard class or division according to a descending order of hazard. For example, the division of poisonous gases is ranked as a greater hazard than the division of flammable gases. According to OHMS, the process of classifying a material in accordance with these hazard classes and packing groups is itself a form of hazard analysis. Another important feature of this process is that the regulations require the shipper to communicate the material’s hazards through the use of the hazard class, packing group, and proper shipping name on the shipping paper and the use of labels on packages and placards on the transport vehicle. Therefore, the shipping paper, labels, and placards communicate the most significant findings of the shipper’s hazard analysis to other parties. This communication aspect is particularly important in emergency response situations if an accident occurs during transport of these materials. The classification system, by itself, is not sufficient for all risk assessment purposes. For example, RSPA and OHMS still need to identify potential scenarios in which transportation accidents, spills, and leaks could occur. As evidenced by the air cargo threat assessment, such scenarios include the possibility that hazardous materials might be transported in a manner not in compliance with current regulations. Also, as emphasized in a November 2000 report for RSPA, the hazardous materials transport system is highly heterogeneous and complex. 
The report pointed out that this system involves not only many different materials posing a variety of hazards (as reflected in the classification system outlined in table 9) but also: a chain of events involving multiple players having different roles in the process of moving hazardous materials (such as shippers, carriers, packaging manufacturers, freight forwarders, and receivers of shipments) and the possibility of multiple handoffs of a material from one party to another during transport; several different modes of transport (principally highway, rail, waterway, and air), with some shipments that switch from one mode to another during transit; and multiple possible routes of transit. All of these complex features might need to be considered in identifying hazard scenarios. However, in identifying (and analyzing) potential hazard scenarios, RSPA and OHMS benefit from being able to use data, information, and experience on hazardous materials transportation incidents. For example, risk assessors can review data from sources such as the DOT Hazardous Materials Information System (HMIS) that catalogues transportation-related incidents that involve a release of hazardous materials. An OHMS official pointed out that the agency also uses fairly sophisticated models in analyzing various scenarios. He said that such models were used, for example, to provide a scientific basis for determining evacuation zones when developing the 2000 Emergency Response Guidebook. In contrast to the other agencies covered in this report, determining the toxicity of a particular chemical (dose-response assessment) is not a central focus of risk assessment in RSPA and OHMS. Toxicity is only one of many risk factors under consideration (and should already be addressed through the hazard classification system). 
Instead, the primary focus of analysis is on the potential for hazardous materials to (1) spill or leak while in transit or (2) cause, contribute to, or multiply the consequences of a transportation-related accident. Analysis regarding the first item is primarily concerned with the packaging and containers used for transportation of hazardous materials, while analysis of the second item also considers other elements, such as the modes and routes of transportation for these materials. As the DOT risk assessment guidelines state, “Hazardous materials accidents generally are transportation accidents in which hazardous materials happen to be present.” DOT documents use a variety of terms to describe and refer to the analysis of hazards or risks of concern to the department and its component offices (e.g., hazard analysis, risk analysis, threat assessment). However, the core of the analysis remains the same—an evaluation of the causes, consequences, and likelihood of transportation incidents involving hazardous materials. The general model in DOT’s guidelines for risk assessment of transportation activities or operations partitions the analysis of risk into two main parts: prediction of possible consequences in terms of loss from accidents (or, more broadly, incidents) while transporting materials in a specified way; and estimation of the probabilities or frequencies of occurrence of the consequences of such accidents (e.g., the likelihood or expected number of accidents occurring that would result in the above loss). For purposes of estimating the risk reduction attributable to transportation safety regulations, the expected loss or “risk” is computed by summing the products of each possible loss multiplied by its probability. (In other words, risk in this context is the probability-weighted average loss.) 
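The probability-weighted computation described in the guidelines can be illustrated with a short sketch. The scenario losses and probabilities below are hypothetical placeholders, not values drawn from any DOT analysis:

```python
# Expected loss ("risk") as the probability-weighted average of possible losses,
# per the general model in DOT's risk assessment guidelines.
# The scenarios and their loss/probability values are illustrative only.
scenarios = [
    {"loss": 1_000_000, "probability": 0.001},  # severe accident with release
    {"loss": 50_000, "probability": 0.01},      # minor release, cleanup costs only
    {"loss": 0, "probability": 0.989},          # incident-free transport
]

def expected_loss(scenarios):
    """Sum the products of each possible loss multiplied by its probability."""
    return sum(s["loss"] * s["probability"] for s in scenarios)

print(expected_loss(scenarios))  # 1000 + 500 + 0 = 1500.0
```

A "with and without" regulatory assessment of the kind the guidelines describe would compute this quantity twice, once for present conditions and once assuming the proposed controls, and compare the two.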
According to DOT definitions, consequence analysis is the evaluation of the severity and magnitude of impacts associated with the occurrence of postulated accident scenarios. For purposes of analysis, the DOT guidelines recommend partitioning this evaluation into three segments: (1) initiating events (i.e., causes of an accident that can result in loss), (2) effects (i.e., the possible mechanisms by which an initiating event might result in injury or damage); and (3) consequences (i.e., the loss of life, injuries, property damage, or other losses expected from the effects). The evaluation of consequences reflects many factors, including the characteristics of the agent involved, the type of packaging or container used, the amount of material being transported, and the particular modes and routes of transportation (which also affect the extent of potential exposure by the public and environment). DOT defines probability analysis as the evaluation of the likelihood of individual accident scenarios and outcomes of adverse events. The likelihood of a particular hazard might be expressed either as a frequency or probability. The analyses of consequences and probabilities are based on a variety of data sources, including, to the extent possible, “experience” data. Among the sources of information identified in OHMS materials to address consequences and probabilities are: data from the Hazardous Materials Information System (HMIS); commodity flow surveys; chemical substance manufacturing, use, and transportation studies; special analyses (such as the National Transportation Risk Analysis and Air Cargo reports mentioned earlier in this appendix, as well as shipment counts); and public comments on rulemakings. Such sources can provide valuable information for risk assessment in general and the statistical analysis of hazardous material transportation incidents in particular. The HMIS database provides a good illustration of the types of baseline data available. 
This database provides incident counts according to time, transportation phase (i.e., en route from origin to destination, loading or unloading, and temporary storage), and transportation mode (e.g., air, highway, and rail). For each incident, the database includes information on the hazardous materials involved, including the name of the chemical shipped, container type and capacity, number of containers shipped, number of containers that fail, and the amount of material released. The database also contains information concerning the occurrence of fire, explosion, water immersion, environmental damage, and the numbers of deaths, major and minor injuries, and persons evacuated. However, because DOT’s risk assessments are often used to estimate the “risk impact” of proposed regulations, the DOT guidelines caution that lack of directly applicable experience data for assessing the impacts is probably the rule rather than the exception. This is because the controls provided by the proposed regulations constitute changes from present conditions, and experience data, by definition, relate to present conditions. The guidelines also emphasize that, to evaluate the impact on risk of a proposed regulation or its alternatives, it is necessary to perform a “with and without” type of assessment, considering the potential effects on any or all of the elements of the risk model. As was the case with the classification of hazardous materials and packaging, the agency may employ criteria-based classifications of the consequences of potential adverse events and their likelihood of occurrence. A 1995 guidance document illustrates how consequence and frequency categories were combined into a “risk assessment matrix” to assist decision makers in their risk management decisions. (See table 11 below.) As was the case with the three other agencies covered by our review, some of the chemical risk assessments produced by or for DOT have begun using more sophisticated methods and models. 
For example, the Director of OHMT characterized the National Transportation Risk Assessment study prepared for OHMS by the Argonne National Laboratory as using state-of-the-art risk assessment techniques to characterize risks associated with the transportation of selected hazardous materials on a national basis. The consequence assessments in this study employed the Chemical Accident Statistical Risk Assessment Model that predicts distributions of hazard zones (i.e., areas in which a threshold chemical concentration is exceeded) resulting from hazardous material release. That model, in turn, reflected the input of other physical models on subjects such as hazardous material release rates of toxic-by-inhalation materials. The Director noted that his office believed this study to be the first comprehensive application of these techniques in this arena for this purpose. Although generally very structured and criteria based, RSPA’s risk assessments for hazardous materials transportation also use assumptions. DOT-wide guidance documents provide a general framework for the use of assumptions. In general, DOT guidance recognizes that assumptions may be made when data are lacking or uncertain, or when it is necessary to limit the scope of an analysis. However, the assumptions, while not empirically verifiable, are supposed to be reasonable, logically credible, and supportable in comparison with alternative assumptions. The DOT risk assessment methodology guidance specifically states that every assessment should include a list of the major assumptions, conditions, and limitations of the risk analysis, as well as the reasons why the assumptions were made. As noted earlier in this appendix, RSPA has access to a number of sources of directly relevant data and statistics on the transportation of hazardous materials. However, there are limitations to these systems and data.
For example, the authors of the national transportation risk assessment for selected hazardous materials cautioned that the information in DOT’s data systems was not always sufficient or detailed enough to directly support a quantitative risk assessment. For example, incidents involving most hazardous materials (other than gasoline-truck accidents) typically occur too infrequently to provide statistically reliable data for directly projecting future risks. In his introduction to the study, the Director of OHMT also stated that the quantitative results of this study should be used with caution. Specifically, he noted, “While the model of the hazardous materials transportation system employed in this study is sophisticated, the accuracy of the data used in the model is often less precise. Estimates, assumptions, and aggregate numbers have been used in many cases.” Some of the topics that might require assumptions or choices during a hazardous materials transportation assessment include: the probability of the release of a hazardous material, depending on the nature of the accident, type of material being transported, and the containers used; the amount of material released in an accident, depending again on factors such as the severity of the accident, nature of the material, and type of container, but also depending on assumptions about the size of holes in containers; commodity flows of the materials (e.g., modes of transportation used, classes of rail tracks, types of highways, routing through urban and rural areas and related population density); the dispersion of released hazardous material, including assumptions about climate and meteorological conditions and the type of surface that a liquid might “pool” on if spilled; the probability of a fire or explosion being ignited (both as a consequence of a release or as a cause of a release); and the extent to which humans potentially exposed to released materials would be sheltered or protected (both within a given 
mode of transportation, such as an aircraft, or external to the carrier). In addition to these topics, RSPA sometimes uses a factor to adjust data in the HMIS database to address underreporting. However, RSPA officials noted that, for certain purposes, it might be inappropriate to extrapolate information in the database. Although assumptions may be needed in RSPA assessments, RSPA officials said that they do not have default assumptions for their risk assessments. According to the officials, assumptions must be developed and described as part of each risk assessment and are specific to the risk assessment. RSPA officials also noted that they do not use “safety factors” in risk assessments, but rather base their assessments on expected levels or ranges of performance. Therefore, unlike in the appendices on EPA, FDA, and OSHA, we have not included a table in this DOT appendix to identify major default choices, the reasons for their selection, when they would be used in the process, and their likely effects on risk assessment results. However, with regard to some of the case-specific assumptions or choices we identified during our review, we did observe that DOT’s assessments typically discussed the reasons for particular choices (as with the other agencies, often citing an interpretation of related research studies). In some instances, information was also provided on the likely effect (e.g., that a particular value represented a conservative estimate or an upper limit) or level of uncertainty (e.g., that a particular parameter value might be high by a factor of 3 to 10 times the results from another study) associated with choices made by the analysts. DOT has explicit, written principles regarding how the results of its risk or safety assessments should be presented. 
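The kinds of assumptions listed above can be made concrete with a purely illustrative calculation. The sketch below is not DOT's Chemical Accident Statistical Risk Assessment Model; it is a toy expected-exposure model in which every parameter value is hypothetical, included only to show how individual assumptions (release probability, hazard area, population density, sheltering) propagate into a single risk estimate, and how a one-at-a-time sensitivity scan can show which assumptions most affect the result.

```python
# Purely illustrative toy model; none of these parameter values comes from DOT.

def expected_exposure(shipments, p_accident, p_release, hazard_area_km2,
                      pop_density_per_km2, shelter_factor):
    """Expected number of people exposed per year under the stated assumptions."""
    exposed_per_release = hazard_area_km2 * pop_density_per_km2 * (1 - shelter_factor)
    return shipments * p_accident * p_release * exposed_per_release

base = dict(shipments=10_000,          # annual shipments (hypothetical)
            p_accident=1e-5,           # accident probability per shipment
            p_release=0.3,             # probability an accident causes a release
            hazard_area_km2=2.0,       # area where a threshold concentration is exceeded
            pop_density_per_km2=500,   # people per square kilometer along the route
            shelter_factor=0.5)        # fraction of the population effectively sheltered

print(f"base case: {expected_exposure(**base):.2f} persons exposed per year")

# One-at-a-time sensitivity scan: vary each assumption by +/-50 percent
# and report the resulting range of the estimate.
for name in base:
    lo = expected_exposure(**{**base, name: base[name] * 0.5})
    hi = expected_exposure(**{**base, name: base[name] * 1.5})
    print(f"{name:20s} range: {min(lo, hi):5.2f} to {max(lo, hi):5.2f}")
```

This kind of one-at-a-time scan mirrors, in miniature, the sensitivity analysis that DOT's guidance identifies as its preferred method for treating uncertainty.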
The department’s policies emphasize the principle of transparency and encourage agency personnel to not only characterize the range and distribution of risk assessment estimates, but also to put risk estimates into a context understandable by the general public. For example, DOT’s “risk assessment principles” state that the risk assessment should: make available to the public data and analytic methodology on which the agency relied in order to permit interested entities to replicate and comment on the agency’s assessment; state explicitly the scientific basis for the significant assumptions, models, and inferences underlying the risk assessment, and explain the rationale for these judgments and their influence on the risk assessment; provide the range and distribution of risks for both the full population at risk and for highly exposed or sensitive subpopulations, and encompass all appropriate risks, such as acute and chronic risks, and cancer and noncancer risks, to health, safety, and the environment; place the nature and magnitude of risks being analyzed in context, including appropriate comparisons with other risks that are regulated by the agency as well as risks that are familiar to, and routinely encountered by, the general public, taking into account, for example, public attitudes with respect to voluntary versus involuntary risks, well-understood versus newly discovered risks, and reversible versus irreversible risks; and use peer review where there are issues with respect to which there is significant scientific dispute to ensure that the highest professional standards are maintained. 
The DOT risk assessment guidelines also state that every risk analysis should present information on (1) quantitative estimates of risk (over the entire range of plausible values of the developed variables, and with a “base case” loss to provide a point of reference); (2) insights gained from performing the analysis into the factors that most affect risk assessment results; and (3) assumptions, conditions, and limitations of the analysis. With regard to the third item, the guidelines specifically state that reasons why the assumptions were made, and why the limitations of the analysis do not significantly impact the risk estimate, should be provided. The guidelines also suggest two methods for treating uncertainty in a risk analysis: sensitivity analysis (DOT’s preferred method for treating and reporting the impact of uncertainty), which should be conducted for each scenario in a risk analysis; and bounding analysis involving error propagation (requiring that each model parameter be expressed as a distribution, or at least a variance, to trace the implication of uncertainty for the risk estimate).

The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013
Orders by visiting: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC
Orders by phone: (202) 512-6000; fax: (202) 512-6061; TDD: (202) 512-2537

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. 
Web site: http://www.gao.gov/fraudnet/fraudnet.htm; e-mail: [email protected]; 1-800-424-5454 (automated answering system)

As used in public health and environmental regulations, risk assessment is the systematic, scientific description of potential harmful effects of exposures to hazardous substances or situations. It is a complex but valuable set of tools for federal regulatory agencies to identify issues of potential concern, select regulatory options, and estimate the range of a forthcoming regulation's benefits. However, given the significant yet controversial nature of risk assessments, it is important that policymakers understand how they are conducted, the extent to which risk estimates produced by different agencies and programs are comparable, and the reasons for differences in agencies' risk assessment approaches and results. GAO studied the human health and safety risk assessment procedures of the Environmental Protection Agency, the Food and Drug Administration, the Occupational Safety and Health Administration, and the Department of Transportation's Research and Special Programs Administration. This report describes (1) the agencies' chemical risk assessment activities, (2) the agencies' primary procedures for conducting risk assessments, (3) major assumptions or methodological choices in their risk assessment procedures, and (4) the agencies' procedures or policies for characterizing the results of risk assessments.
Both NTIA’s BTOP and RUS’s BIP programs focus primarily on broadband infrastructure deployment, but the programs have some differences based on provisions in the Recovery Act (see table 1). BTOP funds are intended to expand broadband access to unserved and underserved areas. BTOP funds can also be awarded to projects that promote broadband demand and adoption and provide equipment, training, and access for higher education, job creation, public safety and health, and other facilities that serve vulnerable populations. BIP focuses on rural areas and its funds can be used solely for broadband infrastructure deployment. The agencies also have different project eligibility requirements. For example, the Recovery Act requires that BTOP applicants demonstrate that a project would not have been implemented during the grant period without federal grant assistance; the Recovery Act does not include a similar requirement for RUS broadband applicants. To implement the Recovery Act, NTIA and RUS will fund several types of projects. The agencies will fund last-mile and middle-mile network infrastructure projects to extend broadband service in unserved or underserved areas. According to NTIA and RUS, last-mile projects are those infrastructure projects whose predominant purpose is to provide broadband service to end users or end-user devices. Middle-mile projects mostly do not provide broadband service to end users or end-user devices, but instead provide relatively fast, large-capacity connections between backbone facilities—long-distance, high-speed transmission paths for transporting massive quantities of data—and last-mile projects. NTIA is also funding public computer centers and sustainable adoption projects. Both NTIA and RUS have experience with similar broadband grant or loan programs. 
Before receiving Recovery Act funding, NTIA implemented the Technology Opportunities Program; this program promoted the innovative use of information and communication technologies, primarily in underserved population segments, to promote public benefits. Additionally, NTIA implemented other, nonbroadband programs, such as the Public Safety Interoperability Communications (PSIC) program, which provides funding to public safety organizations; the digital television transition coupon program; and the Public Telecommunications Facilities Program, which provides funding to public broadcasters. RUS has prior and ongoing experience with several broadband-specific programs—including the Rural Broadband Access Loan and Loan Guarantee (Broadband Access Loan) Program, which funds the construction, improvement, and acquisition of facilities and equipment for broadband service in eligible rural communities, and the Community Connect broadband grant program, which funds broadband on a “community-oriented connectivity” basis to currently unserved rural areas for the purpose of fostering economic growth and delivering enhanced health care, education, and public safety services. To implement the broadband provisions in the Recovery Act, NTIA and RUS coordinated their efforts and developed program milestones (see fig. 1). OMB tasked agencies implementing Recovery Act programs to engage in aggressive outreach with potential applicants. NTIA and RUS, with FCC coordination, held a series of public meetings in March 2009, explaining the overall goals of the new broadband programs. NTIA and RUS also sought public comments from interested stakeholders on various challenges that the agencies would face in implementing the broadband programs through these meetings and by issuing a Request for Information. NTIA and RUS received over 1,500 comments. FCC, in a consultative role, provided support in developing technical definitions and participated in the first kick-off meeting. 
On July 1, 2009, Vice President Joe Biden, Secretary of Commerce Gary Locke, and Secretary of Agriculture Tom Vilsack announced the release of the first joint Notice of Funds Availability (NOFA) detailing the requirements, rules, and procedures for applying for BTOP grants and BIP grants, loans, and loan-grant combinations. Subsequently, the agencies held 10 joint informational workshops throughout the country for potential applicants to explain the programs, the application process, and the evaluation and compliance procedures, and to answer stakeholder questions. NTIA and RUS coordinated and developed a single online intake system whereby applicants could apply for either BTOP or BIP funding. NTIA and RUS initially indicated that they would award Recovery Act broadband program funds in three jointly-conducted rounds and expected to issue the NOFA for a second funding round before the end of calendar year 2009 and for a third round in 2010. In a draft version of this report, we recommended that the agencies combine the second and third funding rounds. Subsequently, on November 10, 2009, the agencies announced that they would award the remaining program funds in one round, instead of two. Both BTOP and BIP projects must be substantially complete within 2 years and fully complete no later than 3 years following the date of issuance of their award. NTIA and RUS face scheduling, staffing, and data challenges in evaluating applications and awarding funds. The agencies have taken steps to meet these challenges, including the adoption of a two-step evaluation process, utilization of nongovernmental personnel, and publication of information on the applicant’s proposed service area. While these steps address some challenges, the agencies’ remaining schedule may pose risks to the review of applications. 
In particular, the agencies may lack the needed time to apply lessons learned from the first funding round and may face a compressed schedule to review new applications, thereby increasing the risk of awarding funds to projects that may not be sustainable or do not meet the priorities of the Recovery Act. Scheduling challenges. Under the provisions of the Recovery Act, NTIA and RUS must award all funds by September 30, 2010. Thus, the agencies have 18 months to establish their respective programs, solicit and evaluate applications, and award funds. While in some instances a compressed schedule does not pose a challenge, two factors increase the challenges associated with the 18-month schedule. First, while RUS has existing broadband programs, albeit on a much smaller scale than BIP, NTIA must establish the BTOP program from scratch. Second, the agencies face an unprecedented volume of funds and anticipated number of applications compared to their previous experiences. The volume of funds to be awarded exceeds previous broadband-related programs implemented by NTIA and RUS. While NTIA and RUS have prior experience in administering grant or loan programs, these programs had less budgetary authority than the programs in the Recovery Act (see fig. 2). Of the $7.2 billion appropriated in the Recovery Act, NTIA received $4.7 billion for BTOP. In comparison, NTIA administered the PSIC program, a one-time grant program with an appropriation of about $1 billion for a single year, in close coordination with the Department of Homeland Security (DHS). Additionally, NTIA’s Public Telecommunications Facilities Program received an average of $23 million annually and its Technology Opportunities Program received $24 million annually. RUS received $2.5 billion from the Recovery Act for BIP. In comparison, RUS’s Community Connect program’s average annual appropriation was $12 million and its Broadband Access Loan Program’s average annual appropriation was $15 million. 
According to preliminary information from the agencies, they received approximately 2,200 applications requesting $28 billion in grants and loans in the first funding round. Based on the number of applications received and the funds requested, the average amount an applicant sought was $12.7 million, or almost half the size of the total average annual appropriation for NTIA’s Technology Opportunities Program. NTIA and RUS also face an increase in the number of applications that they must review and evaluate in comparison to similar programs (see fig. 3). As mentioned previously, the agencies indicated that for the first round of funding alone, they received 2,200 applications. Of these 2,200 applications, NTIA received 940 applications exclusively for BTOP, RUS received 400 applications exclusively for BIP, and 830 were dual applications for both programs. Both NTIA and RUS will review the dual applications; if RUS does not fund the project through the BIP program, NTIA can consider funding the project through the BTOP program. By comparison, NTIA received an average of 838 applications annually for the Technology Opportunities Program; for PSIC, NTIA and DHS received 56 applications from state and territorial governments containing a total of 301 proposed projects. RUS received an average of 35 applications annually for the Broadband Access Loan Program and an average of 105 applications annually for the Community Connect program. In addition, since BTOP and BIP will not carry over applications between rounds, applicants who do not receive funding in the first round must reapply to be eligible for consideration for funding in the second round. Therefore, a single proposed project may be evaluated multiple times by BTOP and BIP reviewers. 
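The per-application arithmetic implied by these first-round figures can be checked directly; the short sketch below uses only the totals reported above.

```python
# Quick check of the first-round arithmetic; all figures come from the report.
total_requested = 28e9   # $28 billion requested across all applications
applications = 2_200     # approximate number of first-round applications

avg_request = total_requested / applications
print(f"average request: ${avg_request / 1e6:.1f} million")  # roughly $12.7 million

# Reported breakdown of the applications by program
btop_only, bip_only, dual = 940, 400, 830
print(f"accounted for: {btop_only + bip_only + dual} of ~{applications}")
```

The breakdown sums to 2,170 applications, consistent with the report's rounded total of "approximately 2,200."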
Fourteen of 15 stakeholders with whom we spoke expressed concern that the agencies will face challenges in adequately reviewing the large number of expected applications in the time frame allotted. Staffing challenges. NTIA and RUS will need additional personnel to administer BTOP and BIP. NTIA is establishing a new program with BTOP and will for the first time award grants to commercial entities. NTIA’s initial risk assessment indicated that a lack of experienced and knowledgeable staff was a key risk to properly implementing the program in accordance with the priorities of the Recovery Act. In its fiscal year 2010 budget request to Congress, NTIA estimated that it will need 30 full-time-equivalent staff in fiscal year 2009 and 40 more full-time-equivalent staff for fiscal year 2010. While RUS already has some broadband loan and grant programs in place and staff to administer them, it also faces a shortage of personnel. RUS’s staffing assessments indicated that the agency will need 47 additional full-time-equivalents to administer BIP. Prior to the Recovery Act, RUS had 23 full-time-equivalent staff in fiscal year 2008 for its Broadband Access Loan Program and no full-time-equivalent staff dedicated to the Community Connect program; RUS utilized personnel from the Broadband Access Loan Program for the Community Connect program. RUS indicated that it would have Broadband Access Loan Program staff also assist with BIP. Data challenges. NTIA and RUS lack detailed data on the availability of broadband service throughout the country, which may limit their ability to target funds to priority areas. According to NTIA and RUS, priority areas include unserved and underserved areas. NTIA and RUS require applicants to assemble their proposed service areas from contiguous census blocks and to identify the proposed service area as unserved or underserved. 
However, RUS and NTIA will be awarding loans and grants before the national broadband plan or broadband mapping is complete. NTIA does not expect to have complete, national data on broadband service levels at the census block level until at least March 2010. Eight of 15 stakeholders with whom we spoke said that the agencies face challenges determining whether proposed service areas meet the requirements for underserved and unserved in order to effectively award funds. To work around this problem, the agencies plan to use existing FCC data on broadband service levels (Form 477 data) and state broadband service maps where available. However, the data collected by FCC are at the census tract level, not the census block level. In addition, although FCC and NTIA have discussed NTIA’s access to and use of the Form 477 data, the agencies have not developed formal procedures; RUS has not discussed use of the Form 477 data with FCC. Finally, not all states have broadband maps. Two-step evaluation process. To address the scheduling and staffing challenges, NTIA and RUS are using a two-step process to screen applications, conserving scarce staff resources by reducing the number of applications subject to a comprehensive review. In the first step, the agencies will evaluate and score applications based on the criteria delineated in the NOFA, such as project purpose and project viability. During this step, the agencies will select which applications proceed to the second step. After the first step is complete and the pool of potential projects is reduced, the agencies intend to conduct the second step—due diligence, which involves requesting extra documentation to confirm and verify information contained in an application. Since not all applications will proceed to the second step, not all applicants will be required to submit extra documentation. This will reduce the amount of information the agencies must review. 
In the NOFA, the agencies indicated that using this two-step process balances the burdens on applicants with the needs of the agencies to efficiently evaluate applications. Use of nongovernmental personnel. Both NTIA and RUS are using nongovernmental personnel to address anticipated staffing needs associated with evaluating applications and awarding funds. To evaluate applications, NTIA is using a review system in which three unpaid, independent expert reviewers examine and score applications. To be considered an expert reviewer, the individual must have significant expertise and experience in at least one of the following areas: (1) the design, funding, construction, and operation of broadband networks or public computer centers; (2) broadband-related outreach, training, or education; or (3) innovative programs to increase the demand for broadband services. In addition, NTIA will use contractors in an administrative role to assist the expert reviewers. NTIA officials said that the agency issued three guides to be used by the reviewers for each of the three project categories—broadband infrastructure, public computer centers, and sustainable adoption—and conducted more than 15 Web-based training seminars. RUS will use contractors to evaluate and score applications. Both NTIA and RUS said that they are confident that an expert would be able to draw conclusions on the technical feasibility or the financial sustainability of a project based on information provided in the application. Regardless of who reviews the application, the final selection and funding decisions are to be formally made by a selecting official in each agency. Publish applicant information. To address the challenge of incomplete data on broadband service, NTIA and RUS require applicants to identify and attest to the service availability—either unserved or underserved—in their proposed service area. 
In order to verify these self-attestations, NTIA and RUS will post a public notice identifying the proposed funded service area of each broadband infrastructure applicant. The agencies intend to allow existing service providers in the proposed service area to question an applicant’s characterization of broadband service in that area. According to the NOFA, existing service providers will have 30 days to submit information regarding their service offerings. If this information raises eligibility issues, RUS may send field staff to the proposed service area to conduct a market survey. RUS will resolve eligibility issues by determining the actual availability of broadband service in the proposed service area. Currently, NTIA has no procedures in place for resolving these types of issues, but said that it is developing these procedures using its contractors and other means. During the first funding round, the compressed schedule posed a challenge for both applicants and the agencies. As mentioned previously, NTIA and RUS initially proposed to utilize three separate funding rounds during the 18-month window to award the entire $7.2 billion. As such, each funding round would operate under a compressed schedule. Eight of the 15 industry stakeholders with whom we spoke expressed concern that a small entity would have difficulties completing an application in a timely manner. Specifically, some stakeholders said that small entities were having trouble locating the professional staff needed to assemble an application. The compressed schedule also posed challenges for the agencies. During the first funding round, the agencies missed several milestones. For example, RUS originally intended to select a contractor on June 12, 2009, and NTIA intended to select a contractor on June 30, 2009; however, both agencies missed their target dates, with RUS selecting its contractor on July 31, 2009, and NTIA selecting its contractor on August 3, 2009. 
Also, the agencies intended to begin awarding the first-round grants and loans on November 7, 2009, but the agencies now expect to begin awarding funds in December 2009. Because of the compressed schedule within the individual funding rounds, NTIA and RUS have less time to review applications than similar grant and loan programs. In the first funding round, the agencies have approximately 2 months to review 2,200 applications. With other telecommunications grant and loan programs, agencies have taken longer to evaluate applications and award funds. For example, from fiscal year 2005 through 2008, RUS took from 4 to 7 months to receive and review an average of 26 applications per year for its Broadband Access Loan Program. NTIA officials acknowledged that the BTOP timeline is compressed compared with the timeline for the Public Telecommunications Facilities Program, which operated on a year-long grant award cycle. For the PSIC program, NTIA and DHS closed the application period in August 2007 and completed application reviews in February 2008, a period of roughly 6 months. In California, the Public Utilities Commission took 4 to 6 months to review 54 applications and award funds for 25 projects in the first year of the California Advanced Services Fund, a $100 million broadband program. Based on their experience with the first funding round, on November 10, 2009, NTIA and RUS reported that they will reduce the number of funding rounds from three to two. In the second and final funding round, the agencies anticipate extending the window for entities to submit applications. This change will help mitigate the challenges the compressed schedule posed for applicants in the first funding round. However, it is unclear whether the agencies will similarly extend the amount of time to review the applications and thereby bring the review time more in line with the experiences of other broadband grant and loan programs. 
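Expressed as a rough review pace, the comparisons above are stark. The sketch below simply divides each program's application count by the approximate length of its review period; the counts come from the report, while the month values are approximations (midpoints are assumed where the report gives a range).

```python
# Rough comparison of review pace across the programs discussed above.
# Application counts are from the report; review periods are approximate.
programs = {
    "BTOP/BIP round one": (2_200, 2.0),              # ~2,200 applications, ~2 months
    "RUS Broadband Access Loan Program": (26, 5.5),  # avg 26 applications, 4-7 months
    "PSIC": (56, 6.0),                               # 56 applications, ~6 months
    "California Advanced Services Fund": (54, 5.0),  # 54 applications, 4-6 months
}
for name, (apps, months) in programs.items():
    print(f"{name:35s} ~{apps / months:6.1f} applications per review month")
```

Even on these rough figures, the first BTOP/BIP round implies a review pace on the order of 1,100 applications per month, roughly two orders of magnitude faster than the comparison programs.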
NTIA officials indicated that the agency would like to award all $4.7 billion by summer 2010, to promote the stimulative effect of the BTOP program. RUS officials indicated that the agency will award all $2.5 billion by September 30, 2010, as required by the Recovery Act, indicating a potentially longer review process. Depending on the time frames NTIA and RUS select, the risks for both applicants and the agencies may persist with two funding rounds. In particular, these risks include: Limited opportunity for “lessons learned.” Based on the current schedule, NTIA and RUS will have limited time between the completion of the first funding round and the beginning of the second funding round. NTIA and RUS recently announced that the agencies will begin awarding funds for the first funding round in December 2009. On November 10, 2009, the agencies sought public comment on approaches to improve the application experience and strengthen BTOP and BIP; the public has 14 days to respond with comments following publication of the notice in the Federal Register. Because of this compressed time frame, applicants might not have sufficient time to analyze their experiences with the first funding round to provide constructive comments to the agencies. Further, the agencies might not have sufficient time to analyze the outcomes of the first round and the comments from potential applicants. As such, a compressed schedule limits the opportunity to apply lessons learned from the first funding round to improve the second round. Compressed schedule to review applications. Due to the complex nature of many projects, NTIA and RUS need adequate time to evaluate the wide range of applications and verify the information contained in the applications. NTIA is soliciting applications for infrastructure, public computer center, and sustainable adoption projects. 
Therefore, NTIA will receive applications containing information responding to different criteria and it will evaluate the applications with different standards. Even among infrastructure applications, a wide variability exists in the estimates, projections, and performance measures considered reasonable for a project. For example, in RUS’s Broadband Access Loan Program, approved broadband loans for the highest-cost projects, on a cost-per-subscriber basis, ranged as much as 15, 18, and 70 times as high as the lowest-cost project, even among projects using the same technology to deploy broadband. Previous experience with broadband loan programs also reveals the challenges inherent in evaluating an application based on estimates provided by the applicant. For example, as of fiscal year 2008, 55 percent of RUS broadband loan borrowers were meeting their forecasted number of subscribers. Nine of the 15 stakeholders that we interviewed expressed concerns that NTIA and RUS lack staffing expertise to determine whether project proposals will generate sufficient numbers of subscribers and revenues to cover operating costs and be sustainable on a long-term basis. Continued lack of broadband data and plan. According to NTIA, national broadband data provide critical information for grant making. Additionally, some stakeholders, including members of Congress, have expressed concern about awarding broadband grants and loans without a national broadband plan. Under the Recovery Act, up to $350 million was available pursuant to the Broadband Data Improvement Act to fund the development and maintenance of a nationwide broadband map for use by policymakers and consumers. NTIA solicited grant applications to help develop the national broadband map, and grant applicants must complete their data collection by March 1, 2010. Additionally, based on provisions in the Recovery Act, FCC must deliver to Congress a national broadband plan by February 17, 2010. 
To prepare the plan, FCC sought comment on a variety of topics, including the most effective and efficient ways to ensure broadband service for all Americans. By operating on a compressed schedule, NTIA and RUS will complete the first funding round before the agencies have the data needed to target funds to unserved and underserved areas and before FCC completes the national broadband plan. Depending on the time frames the agencies select for the second funding round, they may again review applications without the benefit of national broadband data and a national broadband plan. NTIA and RUS will need to oversee a far greater number of projects than in the past, including projects with large budgets and diverse purposes and locations. In doing so, the agencies face the challenge of monitoring these projects with far fewer staff per project than were available in similar grant and loan programs they have managed. To address this challenge, NTIA and RUS procured contractors to assist with oversight activities and will require funding recipients to complete quarterly reports and, in some cases, obtain annual audits. Despite the steps taken, several risks to adequate oversight remain. These risks include insufficient resources to actively monitor funded projects beyond fiscal year 2010 and a lack of updated performance goals for NTIA and RUS. In addition, NTIA has yet to define annual audit requirements for commercial entities funded under BTOP. NTIA and RUS will need to oversee a far greater number of projects than in the past. Although the exact number of funded projects is unknown, both agencies have estimated for planning purposes that they could fund as many as 1,000 projects each—or 2,000 projects in total—before September 30, 2010. In comparison, from fiscal year 1994 through fiscal year 2004, NTIA awarded a total of 610 grants through its Technology Opportunities Program—or an average of 55 grants per year. 
From fiscal year 2005 through fiscal year 2008, RUS awarded a total of 84 Community Connect grants, averaging 21 grants per year; and through its Broadband Access Loan Program, RUS approved 92 loans from fiscal year 2003 through fiscal year 2008, or about 15 loans per year. In addition to overseeing a large number of projects, the scale and diversity of BTOP- and BIP-funded projects are likely to be much greater than projects funded under the agencies’ prior grant programs. Based on NTIA’s estimated funding authority of $4.35 billion for BTOP grants and RUS’s estimated potential total funding of approximately $9 billion for BIP grants, loans, and loan-grant combinations, if the agencies fund 1,000 projects each, as they have estimated, the average funded amount for BTOP and BIP projects would be about $4.35 million and $9 million, respectively. In comparison, from fiscal year 1994 to fiscal year 2004, NTIA’s average grant award for its Technology Opportunities Program was about $382,000, and from fiscal year 2005 to fiscal year 2008, RUS awarded, on average, about $521,000 per Community Connect grant award. Further, NTIA and RUS expect to fund several different types of projects that will be dispersed nationwide, with at least one project in every state. NTIA is funding several different types of broadband projects, including last- and middle-mile broadband infrastructure projects for unserved and underserved areas, and public computer center and sustainable broadband adoption projects. BIP can fund last- and middle-mile infrastructure projects in rural areas across the country. Because of the volume of expected projects, NTIA and RUS plan to oversee and monitor BTOP- and BIP-funded projects with fewer staff resources per project than the agencies used in similar grant and loan programs (see table 2). 
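The average award sizes cited above follow directly from the agencies' planning estimate of 1,000 projects each. The back-of-the-envelope sketch below uses only figures reported above; the scale multipliers at the end are our own arithmetic, not GAO's.

```python
# Back-of-the-envelope comparison of award sizes implied by the agencies'
# planning estimates; all dollar figures come from the report.
btop_funds = 4.35e9      # NTIA's estimated BTOP grant funding authority
bip_funds = 9e9          # RUS's estimated potential total BIP funding
projects_each = 1_000    # planning estimate of funded projects per agency

avg_btop = btop_funds / projects_each
avg_bip = bip_funds / projects_each
print(f"average BTOP award: ${avg_btop / 1e6:.2f} million")  # about $4.35 million
print(f"average BIP award:  ${avg_bip / 1e6:.2f} million")   # about $9 million

# Scale relative to the agencies' prior programs
print(f"BTOP vs. Technology Opportunities Program ($382,000 avg): {avg_btop / 382_000:.0f}x")
print(f"BIP vs. Community Connect ($521,000 avg): {avg_bip / 521_000:.0f}x")
```

On these planning assumptions, the average BTOP award would be roughly 11 times NTIA's prior average grant, and the average BIP award roughly 17 times RUS's average Community Connect grant.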
In its fiscal year 2010 budget request to Congress, NTIA estimated that it would need a total of 70 full-time-equivalent staff for fiscal year 2010 to manage BTOP, which includes overseeing funded projects. After refining its spending and budget plans, NTIA said that it will need 41 full-time-equivalent staff for BTOP; at the time of our review, it had filled 33 of these positions. Based on NTIA’s estimate of funding 1,000 projects and its estimate that it will need 41 full-time-equivalent staff, NTIA will have about 1 full-time-equivalent staff available for every 24 projects. Under the Technology Opportunities Program, NTIA had an average of 1 full-time-equivalent staff in any capacity for every three projects funded annually from fiscal year 1994 through fiscal year 2004. NTIA reported that it is continually assessing its resources and is considering additional staff hires. Similarly, RUS reported that it will need 47 full-time-equivalent staff to administer all aspects of BIP, and the majority of these positions were to be filled by the end of September 2009. These 47 staff members are in addition to the 114 full-time-equivalent staff in the Rural Development Telecommunications program, who support four existing loan or grant programs, including the Telecommunications Infrastructure loan program, the Distance Learning and Telemedicine loan and grant program, the Broadband Access Loan Program, and the Community Connect grant program. If RUS funds a total of 1,000 projects, as estimated, based on the 47 staff assigned to BIP, it would have 1 staff member of any capacity available for every 21 funded projects. Under its Broadband Access Loan Program, RUS had more than 1 full-time-equivalent staff for every loan made annually from fiscal year 2003 through fiscal year 2008. RUS reported that it could use other staff in the Rural Development Telecommunications program to address BIP staffing needs, if necessary. Contractor services.
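The staffing ratios cited above can be checked the same way; the helper below is illustrative, using the full-time-equivalent estimates and the 1,000-project planning figure from this report:

```python
def projects_per_fte(projects: int, fte: int) -> float:
    """Number of funded projects each full-time-equivalent staff
    member would need to cover, on average."""
    return projects / fte

# Planning figures quoted in this report.
print(round(projects_per_fte(1000, 41)))  # NTIA/BTOP: ~24 projects per FTE
print(round(projects_per_fte(1000, 47)))  # RUS/BIP: ~21 projects per FTE
```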
NTIA and RUS will use contractors to help monitor and provide technical assistance for BTOP and BIP projects, in addition to evaluating applications as discussed earlier. On August 3, 2009, NTIA procured contractor services to assist in a range of tasks, including tracking and summarizing grantees’ performance, developing grant-monitoring guidance, and assisting with site visits and responses to audits of BTOP-funded projects. Through its statement of work for contracted services, NTIA estimated that its contractor will provide about 35,000 hours of support for grants administration and postaward support in 2010 and about 55,000 hours of support for additional optional years. On July 31, 2009, RUS awarded a contract to a separate contractor for a wide range of program management activities for BIP. RUS’s contractor will be responsible for a number of grant-monitoring activities, including developing a workflow system to track grants and loans; assisting RUS in developing project monitoring guidance and policies; and assisting in site visits to monitor projects and guard against waste, fraud, and abuse. In addition to its contractor, RUS intends to use existing field staff for program oversight. RUS reported that it currently has 30 general field representatives in the telecommunications program and 31 field accountants in USDA’s Rural Development mission area that may be available to monitor broadband programs. RUS field accountants conduct financial audits primarily within its telecommunications and electric utility loan programs. Two of the 30 general field representatives are dedicated to RUS’s broadband grant and loan programs, and RUS reported that the other general field representatives would be available to assist with BIP oversight if needed. Of the 47 full-time-equivalent staff that RUS has estimated needing to implement BIP, it plans to hire a total of 10 general field representatives and 10 field accountants on a temporary basis.
In addition, RUS officials told us that Rural Development has an estimated 5,000 field staff available across the country that support a variety of Rural Development loan and grant programs. Although these individuals do not have specific experience with telecommunications or broadband projects, according to RUS, these staff members have experience supporting RUS’s business and community development loan programs, and this workforce could be used for project monitoring activities if an acute need arose. Recipient reports and audits. To help address the challenge of monitoring a large number of diverse projects, NTIA and RUS have developed program-specific reporting requirements that are intended to provide transparency on the progress of funded projects. Based on our review of the requirements, if NTIA and RUS have sufficient capacity to review and verify that information provided by funding recipients is accurate and reliable, these requirements could provide the agencies with useful information to help them monitor projects. The following reporting requirements apply to BTOP and BIP funding recipients: General Recovery Act reports. Section 1512 of the Recovery Act and related OMB guidance require all funding recipients to report quarterly to a centralized reporting system on, among other things, the amount of funding received or obligated, the project completion status, and an estimate of the number of jobs created or retained through the funded project. Under OMB guidance, awarding agencies are responsible for ensuring that funding recipients submit reports to a central, online portal no later than 10 calendar days after each calendar quarter in which the recipient receives assistance. Awarding agencies must also perform their own data quality review and request further information or corrections by funding recipients, if necessary.
No later than 30 days following the end of the quarter, OMB requires that detailed recipient reports be made available to the public on the Recovery.gov Web site. BTOP-specific reports. The Recovery Act requires BTOP funding recipients to report quarterly on their use of funds and NTIA to make these reports available to the public. NTIA also requires that funding recipients report quarterly on their broadband equipment purchases and progress made in achieving goals, objectives, and milestones identified in the recipient’s application, including whether the recipient is on schedule to substantially complete its project no later than 2 years after the award and complete its project no later than 3 years after the award. Recipients of funding for last- and middle-mile infrastructure projects must report on a number of metrics, including the number of households and businesses receiving new or improved access to broadband as a result of the project, the advertised and averaged broadband speeds and the price of the broadband services provided, and the total and peak utilization of network access links. BIP-specific reports. RUS requires BIP funding recipients to submit quarterly balance sheets, income and cash-flow statements, and the number of customers taking broadband service on a per community basis, among other information. In addition, RUS requires funding recipients to specifically state in the applicable quarter when they have received 67 percent of the award funds, which is RUS’s measure for “substantially complete.” BIP funding recipients must also report annually on the number of households; businesses; and educational, library, health care, and public safety providers subscribing to new or improved access to broadband. RUS officials reported that the agency plans to use quarterly reports to identify specific projects for on-site monitoring and to determine when that monitoring should take place.
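The Section 1512 reporting timeline described above amounts to two calendar-day offsets from the end of each quarter; a minimal sketch (the function name is illustrative, and the 10- and 30-day deadlines are those quoted in this report):

```python
from datetime import date, timedelta

def recovery_act_deadlines(quarter_end: date) -> dict:
    """Section 1512 timeline as described in this report: recipients
    report within 10 calendar days of quarter end, and OMB requires
    public release of detailed reports no later than 30 days after."""
    return {
        "recipient_report_due": quarter_end + timedelta(days=10),
        "public_release_due": quarter_end + timedelta(days=30),
    }

deadlines = recovery_act_deadlines(date(2010, 3, 31))
print(deadlines["recipient_report_due"])  # 2010-04-10
print(deadlines["public_release_due"])    # 2010-04-30
```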
NTIA and RUS also require some funding recipients to obtain annual, independent audits of their projects. The primary tool for monitoring federal awards through annual audits is the Single Audit report required under the Single Audit Act, as amended. We recently reported that the Single Audit is a valuable source of information on internal control and compliance for use in management’s risk assessment and monitoring processes—and with some adjustments, we said, the Single Audit process could be improved for Recovery Act oversight. The Single Audit report is prepared in accordance with OMB’s implementing guidance in OMB Circular No. A-133. OMB’s Recovery Act guidance directed federal agencies to review Single Audit reports and provide a synopsis of audit findings to OMB relating to obligations and expenditures of Recovery Act funding. All states, local governments, and nonprofit organizations that expend over $500,000 in federal awards per year must obtain an annual Single Audit or, in some cases, a program-specific audit (referred to collectively in this report as a Single Audit). Commercial (for profit) entities awarded federal funding of any amount are not covered by the Single Audit Act, and states, local governments, and nonprofit organizations expending less than $500,000 in federal awards per year are also not required to obtain an annual Single Audit under the Single Audit Act. RUS, however, requires all commercial recipients of BIP funds to obtain an annual, independent audit of their financial statements under requirements that also apply to RUS’s existing broadband grant and loan programs. RUS’s existing audit requirements are different from the Single Audit requirements.
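The Single Audit applicability rules in this passage reduce to a simple decision rule; the function below is an illustrative sketch of those rules as described here (the entity labels are invented for the example), not agency guidance:

```python
# Threshold from the Single Audit Act requirements described in this report.
SINGLE_AUDIT_THRESHOLD = 500_000  # annual federal award expenditures

def single_audit_required(entity_type: str, annual_expenditures: int) -> bool:
    """States, local governments, and nonprofits expending over the
    threshold must obtain a Single Audit; commercial (for-profit)
    entities are not covered by the Single Audit Act."""
    if entity_type == "commercial":
        return False
    return annual_expenditures > SINGLE_AUDIT_THRESHOLD

print(single_audit_required("nonprofit", 600_000))     # True
print(single_audit_required("nonprofit", 400_000))     # False
print(single_audit_required("commercial", 5_000_000))  # False
```

Note that, as the passage goes on to say, RUS layers its own annual audit requirement on commercial BIP recipients even though the Single Audit Act does not cover them.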
NTIA has yet to determine what annual audit requirements will apply to commercial grantees; NTIA reported that it intends to develop program-specific audit requirements and guidelines that will apply to commercial recipients that receive broadband grants and plans to have those guidelines in place by December 2009. See table 3 for a description of BTOP and BIP audit requirements. Lack of sufficient resources beyond fiscal year 2010. Both NTIA and RUS face the risk of having insufficient resources to actively monitor BTOP- and BIP-funded projects after September 30, 2010, which could result in insufficient oversight of projects not yet completed by that date. As required by the Recovery Act, NTIA and RUS must ensure that all awards are made before the end of fiscal year 2010. Under the current timeline, the agencies do not anticipate completing the award of funds until that date. Funded projects must be substantially complete no later than 2 years, and complete no later than 3 years following the date of issuance of the award. Yet, the Recovery Act provides funding through September 30, 2010. The DOC Inspector General has expressed concerns that “without sufficient funding for a BTOP program office, funded projects that are still underway at September 30, 2010, will no longer be actively managed, monitored, and closed.” NTIA officials told us that NTIA has consulted with OMB about seeking BTOP funding after September 30, 2010, to allow it to close grants. RUS officials reported that given the large increase in its project portfolio from BIP, RUS’s capacity to actively monitor these projects after its BIP funding expires may be stressed. Without sufficient resources to actively monitor and close BTOP grants and BIP grants and loans by the required completion dates, NTIA and RUS may be unable to ensure that all recipients have expended their funding and completed projects as required. Lack of updated performance goals. 
The Government Performance and Results Act of 1993 (GPRA) directs federal agencies to establish objective, quantifiable, and measurable goals within annual performance plans. GPRA stresses the importance of having clearly stated objectives, strategic and performance plans, goals, performance targets, and measures in order to improve a program’s effectiveness, accountability, and service delivery. Specifically, performance measures allow an agency to track its progress in achieving intended results. Performance measures also can help inform management decisions about such issues as the need to redirect resources or shift priorities. NTIA has established preliminary program performance measures for BTOP, including creating jobs, increasing broadband access, stimulating private sector investment, and spurring broadband demand. However, NTIA has not established quantitative, outcome-based goals for those measures. NTIA officials reported that the agency lacks sufficient data to develop such goals and is using applications for the first round of funding to gather data, such as the expected number of households that will receive new or improved broadband service. According to NTIA officials, data collected from applications for the first funding round could be used to develop program goals for future funding rounds. RUS has established quantifiable program goals for its existing broadband grant and loan programs, including a measure for the number of subscribers receiving new or improved broadband service as a result of the programs. However, according to USDA’s fiscal year 2010 annual performance plan, RUS has not updated its goals to reflect the large increase in funding it received for broadband programs under the Recovery Act.
In addition, RUS officials told us that the agency’s existing measure for the number of subscribers receiving new or improved broadband access as a result of its programs is based on the estimates provided by RUS borrowers in their applications. Consequently, these program goals do not reflect actual program outcomes, but rather the estimates of applicants prior to the execution of their funded projects. Undefined audit requirements for commercial recipients. At the time of our review, NTIA did not have audit requirements or guidelines in place for annual audits of commercial entities receiving BTOP grants. NTIA officials reported that because BTOP is the first program managed by NTIA to make grants to commercial entities, the agency does not have existing audit guidelines for commercial entities. However, NTIA reported that it intends to develop program-specific audit requirements and guidelines that will apply to commercial recipients that receive broadband grants, and it plans to have those guidelines in place by December 2009. Although award recipients that do not expend more than $500,000 per year in federal awards may not be subject to an annual audit requirement, NTIA officials reported that they do not yet know the extent to which they will make awards in this range. In the absence of clear audit requirements and guidelines for commercial recipients of BTOP funding, NTIA will lack an important oversight tool to identify risks and monitor BTOP grant expenditures. The Recovery Act established an ambitious schedule for NTIA and RUS to implement the broadband provisions. In particular, the agencies have 18 months to establish their respective programs, solicit and evaluate applications, and award funds. Compounding the challenge, NTIA must establish the BTOP program from scratch, and the agencies face an unprecedented volume of funds and anticipated number of applications. 
The agencies initially indicated that they would award Recovery Act funds in three rounds; but, on November 10, 2009, the agencies announced that they would consolidate the second and third funding rounds and award the remaining funds in a single, second funding round. However, the schedule of the new, second funding round is unclear. Based on the experience in the first funding round and their legacy grant and loan programs, the agencies might have little time to thoroughly review applications to ensure that funded projects meet the objectives of the Recovery Act. Without adequate time to gather lessons learned from the first funding round and to thoroughly review applications, the agencies risk funding projects that might not meet the objectives of the Recovery Act. In addition to reviewing an unprecedented number of applications, NTIA and RUS must oversee funded projects to ensure the projects meet the objectives of the Recovery Act and to guard against waste, fraud, and abuse. All funded projects must be complete no later than 3 years following the award of funds; therefore, some funded projects might not be complete until September 30, 2013. However, the Recovery Act provided funding only through September 30, 2010. Without adequate resources beyond fiscal year 2010, the agencies may not be able to ensure that all projects are completed as intended and to guard against waste, fraud, and abuse. Due to the compressed schedule and limited staff resources, NTIA and RUS have had limited time to develop outcome-based performance goals for their programs. However, the agencies’ use of sequential funding rounds provides them with an opportunity to collect important data from funding applicants early in the program that could be used to develop meaningful performance goals.
For example, because applicants must provide estimates for and reports on the number of households and other entities that will receive new or improved broadband service as a result of the projects, NTIA and RUS should have a good basis to establish program goals for BTOP and BIP for the second funding round and to evaluate the effectiveness of federal spending for broadband deployment. Without such goals, future efforts to expand broadband deployment and adoption may lack important information on the types of projects that were most effective at meeting subscriber goals and other targets, thereby limiting the ability to apply federal resources to programs with the best likelihood of success. Finally, although NTIA and RUS have established a range of reporting requirements for funding recipients, NTIA has yet to define what annual auditing requirements, if any, will apply to commercial funding recipients under BTOP. Although we have previously reported that the Single Audit Act’s annual audit requirement is not a perfect tool to oversee Recovery Act funding, the absence of an annual audit requirement for commercial entities would hamper NTIA’s oversight of its Recovery Act funding. For example, NTIA would lack independent auditors’ assurances that its funding recipients have important internal controls in place to fully track expenditures and guard against fraud, waste, and abuse. We recommend that the Secretaries of Commerce and Agriculture take the following three actions: 1. To reduce the risk of awarding funds to projects that may not be sustainable or do not meet the priorities of the Recovery Act, delay the issuance of the second NOFA in order to provide time to analyze application and evaluation processes and apply lessons learned from the first funding round, and provide review time in the second funding round comparable with other broadband grant and loan programs. 2.
To ensure that all funded projects receive sufficient oversight and technical support beyond September 30, 2010, and through their required completion dates, develop contingency plans to ensure sufficient resources for oversight of funded projects beyond fiscal year 2010. 3. To ensure that management has appropriate tools in place to evaluate the effectiveness of BTOP and BIP and to apply limited resources to achieve desired program outcomes, use information provided by program applicants in the first funding round to establish quantifiable, outcome-based performance goals by which to measure program effectiveness. We also recommend that the Secretary of Commerce take the following step: To ensure that NTIA has sufficient insight into the expenditure of federal funding by commercial entities that may receive BTOP grants, determine whether commercial entities should be subject to an annual audit requirement. We provided a draft of this report to the departments of Commerce and Agriculture, to OMB, and to FCC for review and comment. In the draft report, we recommended that NTIA and RUS combine the second and third planned funding rounds into one extended funding round. The departments of Commerce and Agriculture agreed with our recommendations; FCC and OMB did not comment on our recommendations. Subsequently, on November 10, 2009, NTIA and RUS announced that they would award the remaining program funds in one round, instead of two. Therefore, we removed this recommendation from the final report. In its comments, NTIA noted that the agency will take all appropriate additional steps to apply the lessons learned and address GAO’s concerns, including utilizing experiences from the first round of funding to improve the program, establishing outcome-based performance measures, and implementing reasonable audit requirements for commercial grantees. NTIA’s full comments appear in appendix III. 
For the recommendations directed to RUS, RUS described steps it is exploring that are consistent with our first recommendation. RUS agreed with the second and third recommendations. FCC, NTIA, OMB, and RUS provided technical comments that we incorporated, as appropriate. In its comments, RUS noted that it has extensive experience awarding and managing grants and loans for rural America, including grants and loans for electric and telecommunications projects. RUS noted that by focusing on budget authority, our report does not reflect the true scope of its telecommunications programs. In particular, RUS noted that the Broadband Access Loan Program operated with a program level of $300 to $400 million. We chose to report the budget authority for the various programs to provide comparability between the grant and loan programs operated by NTIA and RUS. We acknowledge that RUS’s legacy programs operate at the program level exceeding the budget authority; however, the BIP program will also operate at a program level exceeding the $2.5 billion budget authority. RUS also noted that our report does not reflect the full scale of its existing staffing levels. In particular, RUS noted that it has 114 full-time staff dedicated solely to telecommunications programs and 30 General Field Representatives who can assist with oversight of the BIP program. In our report, we note the number of staff dedicated to RUS’s broadband programs, and we also note that RUS has additional staff, including 30 General Field Representatives, that the agency can draw upon for the BIP program. RUS’s full comments appear in appendix II. We are sending copies of this report to the Secretary of Agriculture, the Secretary of Commerce, the Director of the Office of Management and Budget, the Chairman of the Federal Communications Commission, and interested congressional committees. The report also is available at no charge on the GAO Web site at http://www.gao.gov. 
If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. In addition to the contact named above, Michael Clements, Assistant Director; Eli Albagli; Matt Barranca; Elizabeth Eisenstadt; Dean Gudicello; Tom James; Kim McGatlin; Sara Ann Moessbauer; Josh Ormond; and Mindi Weisenbloom made key contributions to this report.

Access to broadband service is seen as vital to economic, social, and educational development, yet many areas of the country lack access to, or their residents do not use, broadband. To expand broadband deployment and adoption, the American Recovery and Reinvestment Act (Recovery Act) provided $7.2 billion to the Department of Commerce’s National Telecommunications and Information Administration (NTIA) and the Department of Agriculture’s Rural Utilities Service (RUS) for grants or loans to a variety of program applicants. The agencies must award all funds by September 30, 2010. This report addresses the challenges NTIA and RUS face; steps taken to address challenges; and remaining risks in (1) evaluating applications and awarding funds and (2) overseeing funded projects. The Government Accountability Office (GAO) reviewed relevant laws and program documents and interviewed agency officials and industry stakeholders. NTIA and RUS face scheduling, staffing, and data challenges in evaluating applications and awarding funds. NTIA, through its new Broadband Technology Opportunities Program, and RUS, through its new Broadband Initiatives Program, must review more applications and award far more funds than the agencies formerly handled through their legacy telecommunications grant or loan programs, including NTIA’s largest legacy grant program, Public Safety Interoperable Communications.
NTIA and RUS initially proposed distributing these funds in three rounds, but recently adopted two rounds. To meet these challenges, the agencies have established a two-step application evaluation process that uses contractors or unpaid, independent experts for application reviews and plan to publish information on applicants’ proposed service areas to help ensure the eligibility of proposed projects. While these steps address some challenges, the upcoming deadline for awarding funds may pose risks to the thoroughness of the application evaluation process. In particular, the agencies may lack time to apply lessons learned from the first funding round and to thoroughly evaluate applications for the remaining rounds. NTIA and RUS will oversee a significant number of projects, including projects with large budgets and diverse purposes and locations. In doing so, the agencies face the challenge of monitoring these projects with far fewer staff per project than were available for their legacy grant and loan programs. To address this challenge, NTIA and RUS have hired contractors to assist with oversight activities and plan to require funding recipients to complete quarterly reports and, in some cases, obtain annual audits. Despite these steps, several risks remain, including a lack of funding for oversight beyond fiscal year 2010 and a lack of updated performance goals to ensure accountability for NTIA and RUS. In addition, NTIA has yet to define annual audit requirements for commercial entities funded under the Broadband Technology Opportunities Program.
Internal control is not one event, but a series of activities that occur throughout an entity’s operations and on an ongoing basis. Internal control should be an integral part of each system that management uses to regulate and guide its operations rather than as a separate system within an agency. In this sense, internal control is management control that is built into the entity as a part of its infrastructure to help managers run the entity and achieve their goals on an ongoing basis. Section 3512 (c), (d) of Title 31, U.S. Code, commonly known as the Federal Managers’ Financial Integrity Act of 1982 (FMFIA), requires agencies to establish and maintain effective internal control. The agency head must annually evaluate and report on the control and financial systems that protect the integrity of its federal programs. The requirements of FMFIA serve as an umbrella under which other reviews, evaluations, and audits should be coordinated and considered to support management’s assertion about the effectiveness of internal control over operations, financial reporting, and compliance with laws and regulations. Office of Management and Budget (OMB) Circular No. A-123, Management’s Responsibility for Internal Control, provides the implementing guidance for FMFIA, and prescribes the specific requirements for assessing and reporting on internal controls consistent with the Standards for Internal Control in the Federal Government (internal control standards) issued by the Comptroller General of the United States. The circular defines management’s responsibilities related to internal control and the process for assessing internal control effectiveness, and provides specific requirements for conducting management’s assessment of the effectiveness of internal control over financial reporting. 
The circular requires management to annually provide assurances on internal control and emphasizes the need for integrated and coordinated internal control assessments that synchronize all internal control–related activities. FMFIA requires GAO to issue standards for internal control in the federal government. The internal control standards provide the overall framework for establishing and maintaining effective internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. As summarized in the internal control standards, internal control in the government is defined by the following five elements, which also provide the basis against which internal controls are to be evaluated: Control environment: Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. Risk assessment: Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Control activities: Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives. Information and communication: Information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. Monitoring: Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. A key objective in our annual audits of IRS’s financial statements is to obtain reasonable assurance that IRS maintained effective internal control with respect to financial reporting. 
While we use all five elements of internal control as a basis for evaluating the effectiveness of IRS’s internal controls, our ongoing evaluations and tests have focused heavily on control activities, where we have identified numerous internal control weaknesses and have provided recommendations for corrective action. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. In other words, they are the activities conducted in the everyday course of business that are intended to accomplish a control objective, such as ensuring IRS employees successfully complete background checks prior to being granted access to taxpayer information and receipts. Control activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achievement of effective results. To accomplish our objectives, we evaluated the effectiveness of corrective actions IRS implemented during fiscal year 2010 in response to open recommendations as part of our fiscal years 2010 and 2009 financial audits. To determine the current status of the recommendations, we (1) obtained IRS’s reported status of each recommendation and corrective action taken or planned as of March 2011, (2) compared IRS’s reported status to our fiscal year 2010 audit findings to identify any differences between IRS’s and our conclusions regarding the status of each recommendation, and (3) performed additional follow-up work to assess IRS’s actions taken to address the open recommendations. For our recommendations to IRS regarding information security, this report includes only summary data on the number of those recommendations and their general makeup. Because of the sensitive nature of many of the issues related to our recommendations regarding information security, we have reported our recommendations for corrective action to IRS separately. 
In order to determine how IRS’s open recommendations, including those identified in our June 2011 management report, fit within the agency’s management and internal control structure, we compared the open recommendations and the issues that gave rise to them to the (1) control activities listed in the internal control standards, (2) list of major factors and examples outlined in our Internal Control Management and Evaluation Tool, and (3) criteria and objectives for federal financial management as discussed in the Chief Financial Officers Act of 1990 (CFO Act) and the Federal Accounting Standards Advisory Board’s (FASAB) Statement of Federal Financial Accounting Concepts No. 1, Objectives of Federal Financial Reporting. We also considered whether IRS had addressed, in whole or in part, the underlying control issues that gave rise to the recommendations; and other legal requirements and implementing guidance, such as OMB Circular No. A-123 and FMFIA. Our work was performed from December 2010 through April 2011 in accordance with generally accepted government auditing standards. IRS continues to make progress in resolving its internal control weaknesses and addressing outstanding recommendations, but it still faces significant financial management challenges. Since we first began auditing IRS’s financial statements in fiscal year 1992, IRS has taken a significant number of actions that enabled us to conclude that it had effectively resolved several material weaknesses and significant deficiencies and to close almost 300 of our previously reported financial management–related recommendations. This includes 37 recommendations we are closing with this report based on actions IRS took through March 2011. Nevertheless, IRS continues to face challenges in improving the effectiveness of its financial and operational management.
Specifically, IRS continues to face management challenges in (1) resolving its two material weaknesses and one significant deficiency in internal control, (2) developing performance measures and managing for outcomes, and (3) addressing its remaining internal control issues, particularly those dealing with safeguarding of taxpayer receipts and information. Further, as in previous years’ audits, our fiscal year 2010 audit continued to identify additional internal control issues, resulting in 29 new recommendations for corrective action. These issues are discussed in detail in our June 2011 management report to IRS. In addition, as noted earlier, we also identified numerous issues related to information security during our fiscal year 2010 audit that we reported separately because of the sensitive nature of many of those issues. We have made numerous recommendations to IRS over the years—including new recommendations resulting from our fiscal year 2010 financial audit—to address the issues that constitute these weaknesses in internal control. Successfully implementing these recommendations would assist IRS in fully resolving these weaknesses. To its credit, IRS continues to work to address the issues underlying these and other internal control weaknesses. As we reported in our audit of IRS’s fiscal year 2010 financial statements, IRS continues to face significant challenges in resolving its two remaining long-standing material weaknesses in internal control concerning (1) unpaid tax assessments and (2) information security. 
IRS’s continuing challenge in addressing its material weakness in internal control over unpaid tax assessments results from its (1) inability to use its general ledger and underlying subsidiary records to report federal taxes receivable, compliance assessments, and write-offs in accordance with federal accounting standards without significant compensating procedures; (2) lack of both transaction traceability for the reported balance in taxes receivable, which makes up over 80 percent of IRS’s total assets as of September 30, 2010, and an effective transaction-based subledger for unpaid tax assessment transactions; and (3) inability to effectively prevent or timely detect and correct errors in taxpayer accounts. These control deficiencies are caused primarily by IRS’s continued reliance on software applications that were not designed to provide the accurate, complete, and timely transaction-level financial information that management needs to make well-informed decisions or to accumulate and report financial information in accordance with federal accounting standards. These problems are likely to persist until these software applications are either significantly enhanced or replaced. Successfully addressing these issues is vital and is one of the goals of IRS’s ongoing systems-modernization effort. IRS’s continuing challenge in addressing its material weakness in internal control over the management of information systems security is primarily due to IRS not having fully implemented key components of its information security program. Although IRS has processes in place intended to monitor and assess its internal controls, these processes were not always effective. For example, (1) IRS’s testing did not detect many of the vulnerabilities we identified and did not assess a key application in its current environment, and (2) IRS had not effectively validated corrective actions reported to resolve previously identified weaknesses. 
As we reported in our audit of IRS’s fiscal year 2010 financial statements, IRS has made progress in addressing numerous weaknesses in information security internal control. However, many of the weaknesses we reported in previous years remain unresolved and continue to place IRS systems at risk. For example, IRS (1) continued to allow individuals more access to sensitive information contained on its network than needed to perform their assigned duties, (2) had not completed actions to address a vulnerability in its procurement system that allowed users to enter commands that bypassed normal application security controls, and (3) continued to allow visitors unnecessary access to secured areas at one data center. In addition to unresolved issues, we identified additional internal control deficiencies that, along with the unresolved deficiencies, continued to jeopardize the confidentiality, integrity, and availability of information processed by IRS’s key systems and increased the risk of material misstatement for financial reporting. For example, IRS had not (1) appropriately secured the database associated with the online system IRS used to support and manage its computer access request, approval, and review process; (2) appropriately restricted permissions on the database that supported an application used for cost allocation of rent-related data, allowing database users to run operating system commands; (3) tested the Redesigned Revenue Accounting Control System (RRACS) application security in its current production environment, which would have enabled IRS to identify weaknesses that compromise IRS’s ability to segregate incompatible duties and jeopardize the integrity of the application’s data; and (4) used encrypted protocols on a server supporting the Electronic Federal Tax Payment System and several internal routers, potentially exposing user IDs and passwords transmitted in clear text across the network to inappropriate disclosure and unauthorized use. 
Until IRS takes additional steps to implement more comprehensive testing and effective validation processes, its facilities, computing resources, and information will remain vulnerable to inappropriate use, modification, or disclosure, and agency management will have limited assurance of the integrity and reliability of its financial and taxpayer information. In addition to the continuing challenges posed by the two long-standing material weaknesses concerning unpaid tax assessments and information security, our audit of IRS’s fiscal year 2010 financial statements also identified a significant deficiency in IRS’s internal control over tax refund disbursements. This significant deficiency, which is the collective result of (1) a multiyear pattern of our identifying and reporting deficiencies in IRS’s internal control over the processing of manual refunds; (2) the increasing magnitude of manual refunds disbursed; and (3) new deficiencies associated with the internal controls over the First-Time Home Buyer Credit (FTHBC), increases the risk that IRS may pay out duplicate or otherwise erroneous tax refunds to which individuals or businesses are not entitled and which IRS must then expend resources attempting to recover. This new significant deficiency illustrates the danger of not effectively addressing control deficiencies as soon as they are identified, before they grow into more serious problems. We have reported numerous control deficiencies associated with manual refund processing since 1999. Nine of those deficiencies and their associated recommendations remain open, two of which have been open since 2005. As we reported in our audit of IRS’s fiscal year 2010 financial statements, IRS continues to face challenges in developing and institutionalizing the use of financial management information to assist it in making operational decisions and in measuring the effectiveness of its programs. 
IRS has not developed cost-based (and, when appropriate, revenue-based) outcome-oriented performance measures that would enhance its ability to manage for outcomes, nor has it integrated such measures into its routine management and decision-making processes or its externally reported performance metrics. Although IRS has developed projected direct tax return on investment estimates for new enforcement (tax collection) initiatives in its annual budget submissions, it has not developed similar direct tax return on investment outcome-oriented performance metrics to determine whether funded initiatives achieve their originally projected outcomes. Lacking such performance metrics inhibits IRS’s ability to more fully assess and monitor the relative merits of its existing programs, to evaluate new initiatives, or to consider alternatives and adjust its strategies as needed. Outcome-oriented performance metrics based on specific enforcement programs’ costs and revenues would assist IRS in improving its ability to (1) establish measurable outcome goals, (2) evaluate the relative merits of various program options, and (3) highlight opportunities for optimizing the allocation of resources. They could also assist IRS in more credibly demonstrating to Congress and the public that it is using its appropriations wisely. IRS’s existing metrics focus on process-oriented workload measures of program outputs rather than on measuring program outcomes. For example, for its enforcement programs, IRS focuses on measuring discrete activities within its overall tax collection efforts, such as the percentage of various types of tax returns examined, criminal investigations completed, and the number of tax returns examined and closed. While such output measures can be useful elements in assessing performance, they are not designed to measure the contribution each of these activities makes to the collection of unpaid taxes, nor do they compare the cost of collection activities to the tax revenue generated. 
IRS’s enforcement metrics do not include revenue collected—a measure of outcome—compared to the cost of collection, which could provide useful information on the benefits of the enforcement programs. In addition, IRS’s publicly available performance metrics do not measure the internal cost of IRS’s programs either in the aggregate or per service or activity performed. As we report in the “Status per IRS” section of appendix I in this report, IRS has reported that it considers our recommendation to develop outcome-oriented performance measures and related performance goals for IRS’s enforcement programs and activities to be closed. We do not agree. Part of IRS’s justification for closing the recommendation is that it uses direct tax return on investment estimates in cost-benefit analyses to evaluate future scenarios and to support funding requests for new initiatives in its annual budget submissions. Using such estimates of prospective return on investment is useful for budgetary decision making, but our recommendation is for IRS to develop outcome data on the actual results of its programs and activities. We have also previously recommended that IRS extend the use of return on investment in future budget proposals to include major enforcement programs; develop return on investment data for its enforcement programs using actual revenue and full cost data and compare actual results to the projected return on investment data included in its budget request; and provide Congress with information comparing projected savings to actual savings in the year following the budget’s implementation. The intent of our recommendations is to encourage IRS to develop outcome-oriented performance metrics and to use them, along with other metrics, in resource-allocation decisions. 
While IRS has not developed or deployed such metrics for either funded initiatives or ongoing enforcement programs and activities, IRS officials informed us they are considering options to collect direct tax return on investment data for newly funded enforcement initiatives. IRS officials also contended that it is not prudent to rely exclusively on direct tax return on investment as the sole determinant of resource allocation. As we have reported previously, we acknowledge that IRS must consider other factors besides maximizing revenue collection and least-cost operations. The fairness of IRS’s implementation of the tax code and treatment of all taxpayers are important, and we are cognizant of the many factors, such as coverage, that are important considerations when making resource-allocation decisions. These factors, and the decisions IRS makes about how to respond to them, have a significant effect on taxpayers, as well as on tax collections. We also acknowledge that IRS faces challenges in developing outcome-oriented performance metrics, such as return on investment, and integrating them into its resource-allocation decision-making process. Nevertheless, developing such an approach is important in order for IRS to make optimum use of its available resources and to be able to credibly demonstrate it is doing so to Congress and the public. IRS’s lack of outcome-oriented performance metrics is inconsistent with federal financial management concepts as embodied in the Federal Accounting Standards Advisory Board’s Statement of Federal Financial Accounting Concepts No. 1, Objectives of Federal Financial Reporting. In its discussion of financial reporting concepts, FASAB notes that federal financial data should provide accountability and decision-useful information on the costs of programs and the outputs and outcomes achieved, and it should provide data for evaluating service efforts, costs, and accomplishments. 
The absence of outcome metrics is also inconsistent with the objectives of the CFO Act of 1990. A key objective of the act was for agencies to routinely develop and use appropriate financial management information to evaluate program effectiveness, make fully informed operational decisions, and ensure accountability. While obtaining a clean audit opinion on its financial statements is important in itself, it is not the CFO Act’s end goal. Rather, the act’s end goal is modern financial management systems that provide reliable, timely, and useful financial information to support day-to-day decision making and oversight. Such systems and practices should also provide for the systematic measurement of both outputs and outcomes. We have made several recommendations to IRS over the years to address its financial management challenges in developing internal full cost data for its programs and activities and in developing outcome-oriented performance measures. Successfully addressing the remaining open recommendations would enhance IRS’s ability to effectively manage for outcomes. IRS’s actions over the years to resolve internal control weaknesses enabled us to close nearly 300 internal control–related recommendations. However, IRS also continues to face a challenge in addressing numerous other unresolved internal control issues in several aspects of its operations that, while neither individually nor collectively representing a material weakness or significant deficiency, nonetheless merit management attention to ensure they are fully and effectively addressed. IRS now has a total of 57 open audit recommendations resulting from internal control issues that we report as “other control issues” in appendix II of this report. While most were identified during our recent financial audits, some were identified in our audits from 2005 or earlier. 
It is incumbent upon IRS to effectively address these open recommendations and to improve its system of internal controls so that it can identify and correct potential weaknesses before they grow into more serious problems. Forty-four—77 percent—of the 57 “other” open recommendations address issues related either directly or indirectly to the physical safeguarding of tax receipts and taxpayer information, a critical element of IRS’s responsibilities. IRS processes billions of dollars annually in checks, currency, and other valuable assets, and it must safeguard and account for them to prevent theft, fraud, and misuse. To do so, IRS has established physical security, accountability, and accounting policies, processes, and procedures to manage its activities involving transporting and accounting for tax receipts and for handling and storing taxpayer information. Although IRS has made substantial improvements in safeguarding taxpayer receipts and information since our financial audits first began identifying serious internal control issues in this area, the task of ensuring ongoing control over such critical responsibilities is a difficult one and requires constant vigilance. Each year, we continue to identify control issues related to IRS’s safeguarding of taxpayer receipts and information. For example, our fiscal year 2010 audit identified new internal control issues and made 18 additional recommendations that related either directly or indirectly to physically safeguarding taxpayer receipts and information. 
The internal control issues encompassed in our open recommendations cover critical physical security functions, such as transporting taxpayer receipts and sensitive taxpayer information among IRS facilities and lockbox banks and maintaining physical security at IRS facilities to prevent loss, theft, or the potential for fraud regarding tax receipts and taxpayer information; conducting inspections and audits of the design and operation of IRS’s physical security processes and controls designed to safeguard tax receipts and taxpayer information; conducting appropriate background investigations and screening of personnel, including contractors, with access to taxpayer information; and ensuring the proper destruction of documents and equipment to prevent the inappropriate release of sensitive taxpayer information. In light of the volume of taxpayer receipts and sensitive taxpayer files that IRS is responsible for safeguarding, and the implications for IRS’s mission if they are lost, stolen, or the subject of fraud or misuse, it is critical that IRS fully and effectively resolve the internal control issues we have identified and work toward continually improving its internal controls to prevent new issues from arising. In June 2010, we issued a report on the status of IRS’s efforts to implement corrective actions to address financial management recommendations stemming from our fiscal year 2009 and prior year financial audits and other financial management–related work. In that report, we identified 85 audit recommendations that remained open and thus required corrective action by IRS. A significant number of these recommendations have been open for several years, either because IRS had not taken corrective action or because the actions taken had not yet effectively resolved the issues that gave rise to the recommendations. IRS has continued to work to address many of the internal control issues to which these open recommendations relate. 
In the course of performing our fiscal year 2010 financial audit, we identified numerous actions IRS took to address many of its internal control issues. On the basis of IRS’s actions, which we were able to substantiate through our audit, we have closed 37 of our prior years’ recommendations. However, a total of 48 recommendations from prior years remain open, a significant number of which have been outstanding for several years. IRS considers another 13 of the prior years’ recommendations to be effectively addressed and therefore closed. However, we consider them to remain open. For 9 of the 13, in our view, IRS’s actions did not fully address the issues that gave rise to the recommendations. For the remaining 4, we have not yet been able to verify the effectiveness of IRS’s actions because IRS’s corrective actions are ongoing. (The “Status per IRS” and “Status per GAO” sections of app. I provide a summary of both IRS’s and our assessment of IRS’s actions on each recommendation.) During our audit of IRS’s fiscal year 2010 financial statements, we identified additional issues that require corrective action. In our June 2011 management report to IRS, we discussed these issues and made 29 new recommendations to address them. Consequently, a total of 77 financial management–related recommendations need to be addressed—48 from prior years and 29 new recommendations resulting from our fiscal year 2010 audit. We consider all of the new recommendations to be short-term. We also consider the majority of the recommendations outstanding from prior years to be short-term; however, a few, particularly those concerning the functionality of IRS’s automated systems, are complex and will require several more years to fully and effectively address. 
In addition to the 77 open recommendations from our financial audits and other financial management–related work, there are 105 additional open recommendations stemming from our assessment of IRS’s information security controls over key financial systems, information, and interconnected networks conducted as an integral part of our annual financial audits. The issues that led to our previously reported and newly identified recommendations related to information security increase the risk of unauthorized disclosure, modification, or destruction of financial and sensitive taxpayer data. Collectively, they constitute IRS’s material weakness in internal control over information security for its financial and tax processing systems. As discussed earlier in this report, recommendations resulting from the information security issues identified in our annual audits of IRS’s financial statements are reported separately because of the sensitive nature of many of these issues. Appendix I presents a summary listing of (1) the 85 non-information-systems security–related recommendations based on our financial statement audits and other financial management–related work that we had not previously reported as closed and the 29 new recommendations based on our fiscal year 2010 financial audit, (2) IRS-reported corrective actions taken or planned as of March 2011, and (3) our analysis of whether the issues that gave rise to the recommendations have been effectively addressed, based primarily on the work performed during our fiscal year 2010 financial statement audit. The appendix lists the recommendations by the year in which the recommendation was made and by report number. Appendix II presents the 77 open recommendations that remain after closing the aforementioned 37 recommendations and adding the 29 new recommendations from our fiscal year 2010 audit. 
The recommendations have been arranged by related material weakness, significant deficiency, and compliance issue as described in our opinion report on IRS’s financial statements, as well as other control issues we have identified and discussed in our annual management report to IRS. Linking the open recommendations from our financial audits and other financial management–related work, and the issues that gave rise to them, to internal control activities that are central to IRS’s tax administration responsibilities provides insight regarding their significance. Internal control standards consist of five elements—control environment, risk assessment, control activities, information and communication, and monitoring. For the control activities element, the internal control standards explain that an agency’s system of internal control should provide for an assessment of the risks the agency faces from both external and internal sources and that internal control activities should help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives. The control activities element defines 11 specific control activities, which we have grouped into three categories, as shown in table 1. Each of the unresolved recommendations from our financial audits and financial management–related work, and the underlying issues that gave rise to them, can be traced to one of the 11 specific control activities as shown in table 1. As table 1 indicates, 29 (38 percent) of the unresolved recommendations relate to IRS’s controls over safeguarding of assets and security activities, 30 (39 percent) relate to issues associated with IRS’s ability to properly record and document transactions, and 18 (23 percent) relate to issues associated with IRS’s management review and oversight. 
In the following section, we group the 77 open recommendations under the specific control activity to which the conditions that gave rise to them most appropriately fit. We define each control activity as presented in the internal control standards and briefly identify some of the key IRS operations that fall under that control activity. Although not comprehensive, the descriptions are intended to help explain why actions to strengthen these control activities are important for IRS to efficiently and effectively carry out its overall mission. Each control activity description includes a table of the related open recommendations. The tables list the recommendations by the year in which we made them (ID no.). For each recommendation, we also indicate whether it is a short-term or long-term recommendation. We characterized a recommendation as short-term when we believed that IRS had the capability to implement solutions within 2 years of the year in which we first reported the recommendation. Note that for the internal control activity “top-level reviews of actual performance,” IRS addressed the outstanding recommendation from prior years that related to this control activity, and we identified no new issues during our fiscal year 2010 financial audit that relate to this control activity. Given IRS’s mission, the sensitivity of the data it maintains, and its processing of trillions of dollars of tax receipts each year, one of the most important control activities at IRS is the safeguarding of assets. Internal control in this important area should be designed to provide reasonable assurance regarding prevention or prompt detection of unauthorized acquisition, use, or disposition of an agency’s assets. 
IRS has outstanding recommendations in the following three control activities in the internal control standards that relate to safeguarding of assets (including buildings and equipment as well as tax receipts) and security activities (such as limiting access to only authorized personnel): (1) physical control over vulnerable assets, (2) segregation of duties, and (3) access restrictions to, and accountability for, resources and records. Internal control standard: An agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for and limited access to assets such as cash, securities, inventories, and equipment, which might be vulnerable to risk of loss or unauthorized use. Such assets should be periodically counted and compared to control records. Of the trillions of dollars in taxes that IRS collects each year, hundreds of billions are collected in the form of checks and cash accompanied by tax returns and related information. IRS collects taxes both at its own facilities and at lockbox banks. IRS acts as custodian for (1) the tax payments it receives until they are deposited in the General Fund of the U.S. Treasury and (2) the tax returns and related information it receives until they are either sent to the Federal Records Center or destroyed. IRS is also charged with controlling many other assets, such as computers and other equipment, but it is IRS’s legal responsibility to safeguard tax returns and the confidential information taxpayers provide on those returns that makes the effectiveness of IRS’s internal controls over physical security essential to accomplishing its mission. While effective physical safeguards over receipts should exist throughout the year, such safeguards are especially important during the peak tax filing season. 
Each year during the weeks preceding and shortly after April 15, an IRS service center or lockbox bank may receive and process daily over 100,000 pieces of mail containing returns, receipts, or both. The dollar value of receipts each service center and lockbox bank processes increases to hundreds of millions of dollars a day during the April 15 time frame. The following 22 open recommendations in table 2 are designed to improve IRS’s physical controls over vulnerable assets. They include recommendations for IRS to improve controls over (1) physical security at its Taxpayer Assistance Centers (TAC), (2) courier activities, (3) lockbox banks’ handling of unprocessable items, (4) the handling of hardcopy cash receipts, and (5) property and equipment disposal procedures. We consider all of these recommendations to be correctable on a short-term basis. Internal control standard: Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling any related assets. No one individual should control all key aspects of a transaction or event. As noted in the previous section, IRS employees process hundreds of billions of dollars in tax receipts in the form of cash and checks. Consequently, it is critical that IRS maintain appropriate separation of duties to allow for adequate oversight of staff and protection of these vulnerable resources so that no single individual would be in a position of causing an error or irregularity, or potentially converting the asset to personal use, and then concealing it. 
For example, when an IRS field office receives taxpayer receipts and returns, it is responsible for depositing the cash and checks in a depository institution and forwarding the related taxpayer information received, such as tax returns, to an IRS service center for further processing. In order to adequately safeguard receipts from theft, the person responsible for recording the information from the taxpayer receipts on a voucher should be different from the individual who prepares those receipts for transmittal to the service center for further processing. Implementing the following recommendation in table 3 would help IRS improve its separation of duties, which will in turn strengthen controls over tax receipts. This recommendation is short-term in nature. Internal control standard: Access to resources and records should be limited to authorized individuals, and accountability for their custody and use should be assigned and maintained. Periodic comparison of resources with the recorded accountability should be made to help reduce the risk of errors, fraud, misuse, or unauthorized alteration. Because IRS is responsible for maintaining accountability over a large volume of cash and checks, it is imperative that it maintain strong controls to appropriately restrict access to those assets, the records relied on to track those assets, and sensitive taxpayer information. Although IRS has a number of both physical and information systems controls in place, some of the issues we have identified in our financial audits over the years pertain to ensuring that (1) those individuals who have direct access to cash and checks are appropriately vetted, such as through appropriate background investigations, before being granted access to taxpayer receipts and information and (2) IRS maintains effective access security control. The following six short-term recommendations in table 4 are intended to help IRS improve its access restrictions to assets and records. 
IRS has a number of internal control issues related to recording transactions, documenting events, and tracking the processing of taxpayer receipts or information. IRS has outstanding recommendations in the following three control activities related to proper recording and documenting of transactions: (1) appropriate documentation of transactions and internal controls, (2) accurate and timely recording of transactions and events, and (3) proper execution of transactions and events. Internal control standard: Internal control and all transactions and other significant events need to be clearly documented, and the documentation should be readily available for examination. The documentation should appear in management directives, administrative policies, or operating manuals and may be in paper or electronic form. All documentation and records should be properly managed and maintained. IRS collects and processes trillions of dollars in taxpayer receipts annually both at its own facilities and at lockbox banks under contract to process taxpayer receipts for the federal government. Therefore, it is important that IRS maintain effective controls to ensure that all documents and records are properly and timely recorded, managed, and maintained both at its facilities and at the lockbox banks. In this regard, it is critical that IRS adequately document and disseminate its procedures to ensure that they are available for IRS employees. IRS must also document its management reviews of controls, such as those regarding refunds and returned checks, document transmittals, and reviews of TAC operations. To ensure future availability of adequate documentation, IRS must ensure that (1) its systems, particularly those now being developed and implemented, have appropriate capability to identify and trace individual transactions and (2) all critical steps in its accounting processes are adequately documented. 
Resolving the following 13 recommendations in table 5 would assist IRS in improving its documentation of transactions and related internal control procedures. All of these recommendations have been classified as short-term.

Internal control standard: Transactions should be promptly recorded to maintain their relevance and value to management in controlling operations and making decisions. This applies to the entire process or life cycle of a transaction or event from the initiation and authorization through its final classification in summary records. In addition, control activities help to ensure that all transactions are completely and accurately recorded.

IRS maintains sensitive records for tens of millions of taxpayers in addition to maintaining its own financial records. To maintain these records, IRS often has to rely on outdated computer systems or manual work-arounds. Unfortunately, some of IRS’s recordkeeping difficulties we have reported on over the years will not be fully addressed until it can replace its aging systems, an effort that is long-term and, in part, dependent on obtaining future funding. Implementation of the following 15 recommendations in table 6 would strengthen IRS’s recordkeeping abilities. Of these recommendations, 13 are short-term and 2 are long-term, addressing requirements for new systems for maintaining taxpayer records. Several of the recommendations listed deal with financial reporting processes, such as maintaining subsidiary records, recording budgetary transactions, and tracking program costs. Some of the issues that gave rise to several of our recommendations directly affect taxpayers, such as those involving duplicate assessments, errors in calculating and reporting manual interest, errors in calculating penalties, and collection of trust fund recovery penalty assessments.
Two of these recommendations have remained open for 10 years or more, reflecting the complex nature of the underlying systems issues that must be resolved to fully address some of these control deficiencies.

Internal control standard: Transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. This is the principal means of ensuring that only valid transactions to exchange, transfer, use, or commit resources and other events are initiated or entered into. Authorizations should be clearly communicated to managers and employees.

Each year, IRS spends approximately $250 million to cover the cost of its employees’ travel in addition to entering into agreements with, and receiving services from, vendors. Failure to ensure that employees obtain appropriate authorizations for their travel or approval for procurements leaves IRS open to fraud, waste, or abuse. IRS’s actions to address the following two short-term recommendations in table 7 would improve IRS’s controls over travel costs and approvals for the procurement of goods and services.

All personnel within IRS have an important role in establishing and maintaining effective internal controls, but IRS’s managers have additional review and oversight responsibilities. Management must set the objectives, put control activities in place, and monitor and evaluate controls to ensure that they are followed. Without adequate monitoring by managers, there is a risk that internal control activities may not be carried out effectively and in a timely manner. IRS has outstanding recommendations in the following four control activities related to effective management review and oversight: (1) reviews by management at the functional or activity level, (2) establishment and review of performance measures and indicators, (3) management of human capital, and (4) top-level reviews of actual performance.
Internal control standard: Managers need to compare actual performance to planned or expected results throughout the organization and analyze significant differences.

IRS employs over 100,000 full-time and seasonal employees. In addition, IRS is responsible for overseeing lockbox banks processing tens of thousands of individual receipts, totaling hundreds of billions of dollars. Effective management oversight of operations is important at any organization, but is imperative at IRS given its mission. Implementing the following 13 recommendations in table 8 (12 short-term and 1 long-term) would improve IRS’s management oversight of several areas of its operations, including monitoring of contractor and off-site processing facilities, release of tax liens, and issuance of manual refunds.

Internal control standard: Activities need to be established to monitor performance measures and indicators. These controls could call for comparisons and assessments relating different sets of data to one another so that analyses of the relationships can be made and appropriate actions taken. Controls should also be aimed at validating the propriety and integrity of both organizational and individual performance measures and indicators.

IRS’s operations include a wide range of activities, including educating taxpayers, processing taxpayer receipts and data, disbursing hundreds of billions of dollars in refunds to millions of taxpayers, maintaining extensive information on tens of millions of taxpayers, and seeking collection from individuals and businesses that fail to comply with the nation’s tax laws. Within its compliance function, IRS has numerous activities, including identifying businesses and individuals that underreport income, collecting from taxpayers who do not pay taxes, and collecting from those receiving refunds to which they are not entitled. Although IRS has over 100,000 employees at its peak, it still faces resource constraints in attempting to fulfill its duties.
It is vitally important for IRS to have sound performance measures to assist it in assessing its performance and targeting its resources to maximize the government’s return on investment. The following long-term recommendation in table 9 is designed to assist IRS in (1) evaluating its operations and (2) determining which activities are the most beneficial. This recommendation is directed at improving IRS’s ability to measure and evaluate the internal costs, direct benefits, and outcomes of its operations—particularly with regard to identifying its most cost-effective tax collection activities.

Internal control standard: Effective management of an organization’s workforce—its human capital—is essential to achieving results and an important part of internal control. Management should view human capital as an asset rather than a cost. Only when the right personnel for the job are on board and are provided the right training, tools, structure, incentives, and responsibilities is operational success possible. Management should ensure that skill needs are continually assessed and that the organization is able to obtain a workforce that has the required skills that match those necessary to achieve organizational goals. Training should be aimed at developing and retaining employee skill levels to meet changing organizational needs. Qualified and continuous supervision should be provided to ensure that internal control objectives are achieved. Performance evaluation and feedback, supplemented by an effective reward system, should be designed to help employees understand the connection between their performance and the organization’s success. As a part of its human capital planning, management should also consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities.
IRS’s operations cover a wide range of technical activities requiring specific expertise in tax-related matters; financial management; and systems design, development, and maintenance. Because IRS has tens of thousands of employees spread throughout the country, it is imperative that management establish and maintain up-to-date guidance and provide appropriate training for its staff. Taking action to implement the following four short-term recommendations in table 10 would assist IRS in its management of human capital.

Increased budgetary pressures and an increased public awareness of the importance of internal control have served to provide additional pressure on IRS to carry out its mission more efficiently and effectively while continuing to protect taxpayers’ information. Sound financial management and effective internal controls are essential if IRS is to efficiently and effectively achieve its goals. IRS has made substantial progress in improving its financial management and internal control since its first financial audit, as evidenced by unqualified audit opinions on its financial statements for the past 11 years; the resolution of several material internal control weaknesses, significant deficiencies, and other control issues; and actions resulting in the closure of hundreds of financial management recommendations. This progress has been the result of hard work by many individuals throughout IRS and sustained commitment of IRS leadership. Nonetheless, more needs to be done to fully address the agency’s continuing financial management challenges: resolving material weaknesses and significant deficiencies in internal control; developing outcome-oriented performance metrics that can facilitate managing operations for outcomes; and correcting numerous other internal control issues.
Effective implementation of the recommendations we have made through our financial audits and related work could greatly assist IRS in improving its internal controls and achieving sound financial management. In commenting on a draft of this report, IRS expressed its appreciation for our acknowledgment of the agency’s progress in addressing its financial management challenges as evidenced by our closure of 37 open financial management recommendations from prior GAO reports. IRS also commented that it is committed to implementing appropriate improvements to ensure that it maintains sound financial management practices. We will review the effectiveness of further corrective actions IRS has taken or will take to address all open recommendations as part of our audit of IRS’s fiscal year 2011 financial statements.

We are sending copies of this report to the Chairmen and Ranking Members of the Senate Committee on Appropriations; Senate Committee on Finance; Senate Committee on Homeland Security and Governmental Affairs; and Subcommittee on Taxation, IRS Oversight and Long-Term Growth, Senate Committee on Finance. We are also sending copies to the Chairmen and Ranking Members of the House Committee on Appropriations; House Committee on Ways and Means; the Chairman and Vice Chairman of the Joint Committee on Taxation; the Secretary of the Treasury; the Director of OMB; the Chairman of the IRS Oversight Board; and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov.

If you or your staffs have any questions concerning this report, please contact me at (202) 512-3406 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
This appendix presents a list of (1) the 85 recommendations that we had not previously reported as closed, (2) Internal Revenue Service (IRS) reported corrective actions taken or planned as of March 2011, and (3) our analysis of whether the issues that gave rise to the recommendations have been effectively addressed. It also includes 29 recommendations based on our fiscal year 2010 financial statement audit. Table 11 lists the recommendations by the year and recommendation number (ID no.) and also identifies the report in which the recommendation was made.

For several years, we have reported material weaknesses, significant deficiencies, noncompliance with laws and regulations, and other control issues in our annual financial statement audits and related management reports. Appendix II provides summary information regarding the primary issue to which each open recommendation is most closely related. To compile this summary, we analyzed the nature of the open recommendations to relate them to the material weaknesses, significant deficiency, compliance issue, or other control issues (not associated with a material weakness, significant deficiency, or compliance issue) identified as part of our financial statement audit.

The Internal Revenue Service (IRS) has weaknesses in its internal control over the management of unpaid tax assessments resulting from the agency’s (1) inability to use its general ledger and underlying subsidiary records to report federal taxes receivable, compliance assessments, and writeoffs in accordance with federal accounting standards without significant compensating procedures, (2) lack of both transaction traceability for the reported balance in taxes receivable, which comprises over 80 percent of IRS’s total assets as of September 30, 2010, and an effective transaction-based subledger for unpaid tax assessment transactions, and (3) inability to effectively prevent or timely detect and correct errors in taxpayer accounts.
The recommendations in table 12 address these weaknesses.

IRS has serious internal control weaknesses over information security that result primarily from IRS not having fully implemented key components of its information security program. These weaknesses, collectively, represent a material weakness. For example, (1) IRS’s testing did not detect many of the vulnerabilities we identified and did not assess a key application in its current environment, and (2) IRS did not effectively validate corrective actions reported to resolve previously identified weaknesses. Although IRS has made some progress in addressing previous weaknesses we identified in its information systems and physical security controls, as of March 2011, there were 105 open recommendations designed to help IRS improve its information systems security controls. Those recommendations are reported separately and are not included in this report primarily because of the sensitive nature of some of the issues.

IRS has significant internal control weaknesses over its tax refund disbursements. In our audit of IRS’s fiscal year 2010 financial statements, we reported a significant deficiency in IRS’s internal control over tax refund disbursements that resulted from (1) a multiyear pattern of our identifying deficiencies in IRS’s internal control over the processing of manual refunds, which we have reported for several years; (2) the increasing magnitude of manual tax refunds disbursed; and (3) new deficiencies associated with the First-Time Home Buyer Credit (FTHBC). This significant deficiency increases the risk that IRS may pay out duplicate or otherwise erroneous tax refunds to which individuals or businesses are not entitled and for which IRS must spend resources attempting to recover. The recommendations in table 13 address our findings.

IRS continues to be noncompliant with the laws and regulations governing the release of federal tax liens.
We found IRS did not always release applicable federal tax liens within 30 days of tax liabilities being either paid off or abated, as required by the Internal Revenue Code (section 6325). The Internal Revenue Code grants IRS the power to file a lien against the property of any taxpayer who neglects or refuses to pay all assessed federal taxes. The lien serves to protect the interest of the federal government and as a public notice to current and potential creditors of the government’s interest in the taxpayer’s property. The recommendation in table 14 addresses our finding.

The 57 recommendations listed in table 15 pertain to issues that do not rise individually or in the aggregate to the level of a material weakness or significant deficiency in internal control, or to a reportable noncompliance with laws and regulations. However, these issues do represent weaknesses in various aspects of IRS’s internal controls that should be addressed.

In addition to the contact named above, the following individuals made major contributions to this report: William J. Cordrey, Assistant Director; Crystal Alfred; Russell Brown; Ray B. Bush; Stephanie Chen; Jeremy Choi; Oliver Culley; Charles Ego; Doreen Eng; Charles Fox; Valerie Freeman; Ryan Guthrie; Ted Hu; Richard Larsen; Tuan Lam; Delores Lee; Jenny Li; Cynthia Ma; Joshua Marcus; Julie Phillips; John Sawyer; Christopher Spain; Cynthia Teddleton; Lien To; LaDonna Towler; Cherry Vasquez; Gary Wiggins; and Ting-Ting Wu.

In its role as the nation's tax collector, the Internal Revenue Service (IRS) has a demanding responsibility to annually collect trillions of dollars in taxes, process hundreds of millions of tax and information returns, and enforce the nation's tax laws. Since its first audit of IRS's financial statements in fiscal year 1992, GAO has identified a number of weaknesses in IRS's financial management operations. In related reports, GAO has recommended corrective actions to address those weaknesses.
Each year, as part of the annual audit of IRS's financial statements, GAO makes recommendations to address any new weaknesses identified and follows up on the status of IRS's efforts to address the weaknesses GAO identified in previous years' audits. The purpose of this report is to (1) provide an overview of the financial management challenges still facing IRS, (2) provide the status of financial audit and financial management-related recommendations and the actions needed to address them, and (3) highlight the relationship between GAO's recommendations and internal control activities central to IRS's mission and goals. IRS has made progress in improving its internal controls and financial management since its first financial statement audit in 1992, as evidenced by 11 consecutive years of clean audit opinions on its financial statements, the resolution of several material internal control weaknesses, and actions resulting in the closure of nearly 300 financial management recommendations. This progress has been the result of hard work throughout IRS and sustained commitment at the top levels of the agency. However, IRS still faces significant financial management challenges in (1) resolving its remaining material weaknesses and significant deficiency in internal control, (2) developing outcome-oriented performance metrics, and (3) correcting numerous other internal control issues, especially those relating to safeguarding tax receipts and taxpayer information. At the beginning of GAO's audit of IRS's fiscal year 2010 financial statements, 85 financial management-related recommendations from prior audits remained open because IRS had not fully addressed the underlying issues. During the fiscal year 2010 financial audit, IRS took actions that GAO considered sufficient to close 37 recommendations. At the same time, GAO identified additional internal control issues resulting in 29 new recommendations. In total, 77 recommendations remain open. 
To assist IRS in evaluating and improving internal controls, GAO categorized the 77 open recommendations by various internal control activities, which, in turn, were grouped into three broad control categories. The continued existence of internal control weaknesses that gave rise to these recommendations represents a serious obstacle for IRS. Effective implementation of GAO's recommendations can greatly assist IRS in improving its internal controls and achieving sound financial management, which are integral to effectively carrying out its tax administration responsibilities. Most recommendations can be addressed within the next year or two. However, a few recommendations, particularly those concerning the functionality of IRS's automated systems, are complex and will require several more years to effectively address. GAO is not making any recommendations in this report. In commenting on a draft of this report, IRS stated that it is committed to implementing appropriate improvements to maintain sound financial management practices. |
Treasury has issued savings bonds since 1935. Savings bonds offer investors the ability to purchase securities with lower minimum denominations than those for marketable Treasury securities. When individuals purchase savings bonds, they loan the amount they paid for the bonds to the U.S. government. Over a period of time (up to 30 years), the savings bonds earn interest and, 12 months after their original purchase, can be cashed in for their purchase price, plus the interest they have earned, subject to a 3-month interest penalty during the first 5 years. Over the years, Treasury has offered a number of savings bonds with different terms and interest rates. Currently, Treasury offers Series EE bonds, which have a fixed interest rate, and Series I bonds, which pay an interest rate that is tied to inflation. Savings bonds do not represent a major source of funds for the Treasury.

The Bureau of the Fiscal Service, one of Treasury’s 10 bureaus, helps to fund the federal government by selling Treasury securities, including savings bonds. Treasury Securities Services within the bureau operates Treasury’s Retail Securities program, which allows retail investors to purchase savings bonds and marketable securities in electronic form directly from Treasury. The office’s flagship system is TreasuryDirect, an online proprietary system created in 2002 that allows customers to buy and hold savings bonds and marketable securities, and to manage their accounts without assistance from a customer service representative. TreasuryDirect customers can purchase securities at any time, direct electronic payments to bank accounts, and convert paper savings bonds to electronic savings bonds in the same series and with the same issue date. TreasuryDirect customers also can set up payroll deductions and automatically recurring purchases. As of March 2015, TreasuryDirect had around 580,700 accounts that were funded and held nearly $27 billion.
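The redemption rules described above (no redemption during the first 12 months, a 3-month interest penalty during the first 5 years) can be illustrated with a short sketch. This is a simplified model under a hypothetical fixed annual rate with semiannual compounding, not Treasury's actual accrual method, which follows published redemption value tables.

```python
def redemption_value(purchase_price, annual_rate, months_held):
    """Approximate redemption value of a fixed-rate savings bond.

    Simplified sketch: assumes a fixed annual rate compounded
    semiannually; actual Treasury accrual rules differ.
    """
    # Savings bonds cannot be cashed in during the first 12 months.
    if months_held < 12:
        return None
    # A 3-month interest penalty applies during the first 5 years (60 months).
    accrual_months = months_held - 3 if months_held < 60 else months_held
    semiannual_periods = accrual_months / 6
    return purchase_price * (1 + annual_rate / 2) ** semiannual_periods

# A $100 bond at a hypothetical 4 percent fixed rate, redeemed after
# 2 years, earns only 21 months of interest because of the penalty.
value = redemption_value(100, 0.04, 24)
```

The penalty is modeled here as forfeiting the most recent 3 months of accrual, consistent with the report's description; the rate and compounding convention are illustrative assumptions only.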
The elimination of paper savings bonds reduced program costs but made purchasing bonds more difficult for some savers. However, our analysis of Treasury’s bond data showed that the drop in bond purchases after the elimination of paper savings bonds was not statistically significant. As shown in figure 1, annual purchases of U.S. savings bonds declined significantly from 2001 through 2013, falling from around $14.6 billion to less than $1 billion, or by more than 90 percent. Savings bond purchases declined every year, except from 2002 to 2003. Likewise, the role of savings bonds in helping to fund the federal debt also declined over the period, accounting for about 3.2 percent of the federal debt in 2001 and about 1.0 percent in 2013.

Following the long-term decline in savings bond purchases, Treasury stopped selling paper savings bonds through over-the-counter channels, including through financial institutions and mail-in orders, on January 1, 2012, as part of its agency-wide electronic initiative to reduce program costs and improve customer service. According to Treasury officials, the agency phased out the issuance of paper savings bonds through employer-sponsored payroll savings plans in 2010, and the ending of savings bond sales through over-the-counter channels was the last step of discontinuing paper savings bonds. Treasury estimated that the elimination of over-the-counter sales of paper savings bonds would save nearly $70 million in program costs from 2012 through 2016. Treasury calculated these savings by estimating how much it would save in costs associated with issuing new paper bonds and servicing and redeeming existing paper bonds, which include fees paid to banks, postage, and printing. For example, Treasury estimated that the change would eliminate around $14.5 million in fees paid to financial institutions for issuing and redeeming savings bonds and around $12.7 million in postage expenses for mailing paper bonds to customers over the 5-year period.
Additionally, Treasury estimated that it would save in personnel costs because fewer employees would be needed to process customer service transactions. According to Treasury’s estimates, the change would save around $4.9 million in compensation and benefit costs for Treasury staff and $28.5 million in Federal Reserve Bank personnel costs over the 5-year period. Finally, Treasury estimated $9 million in savings from reductions in paper stock, overhead, forms, and other costs. In addition to the cost savings, Treasury expected the change to provide customer benefits, such as increased security and convenience. Although paper bonds allowed buyers to purchase savings bonds at financial institutions, Treasury’s online system for purchasing savings bonds and other Treasury securities—TreasuryDirect—allows customers to buy, manage, and redeem savings bonds electronically at any time. Treasury officials told us that electronic bonds are safer and more secure, because paper bonds could be lost, stolen, altered, or fraudulently redeemed. Treasury officials also added that electronic bonds provide the agency with both operational advantages and enhanced customer experience, since Treasury can automatically track bond purchases, redemptions, and values for the customer.

When Treasury eliminated paper savings bonds, it created access challenges for bond buyers who do not have a bank account or Internet access. Customers now must use TreasuryDirect to purchase electronic savings bonds, although some can purchase paper savings bonds through the Tax Time program, which we discuss later in this report. To open a TreasuryDirect account, a customer generally must have both Internet access and a bank account. While TreasuryDirect can be accessed through cellular phones and other mobile devices, the website is not optimized for such use.
According to representatives from a nonprofit organization that focuses on savings for lower-income households, mobile access is the primary means of Internet access for some lower-income consumers. According to 2011 Census Bureau data, around 50 percent of households with less than $25,000 in income did not have computer-based Internet access from some location. Further, according to the 2013 Federal Deposit Insurance Corporation’s (FDIC) National Survey of Unbanked and Underbanked Households, 7.7 percent of U.S. households, or nearly 9.6 million households, were unbanked—that is, they did not have a bank account at an insured institution. As a result, such households or individuals may not be able to access TreasuryDirect or complete a transaction if they wanted to buy savings bonds.

Treasury officials recognized the access challenges related to TreasuryDirect that some potential users might face, but told us such challenges could be mitigated. Treasury officials said that they worked with organizations that provided Internet access to the public, such as libraries and community centers, and determined that such organizations provide the level of Internet access required for potential TreasuryDirect users. The officials also told us that in lieu of a bank account, individuals could use reloadable debit cards to purchase and redeem savings bonds through TreasuryDirect. While the use of such cards provides an avenue for those without a traditional bank account to purchase savings bonds, Treasury estimated that few savings bonds, approximately 1,426, had been purchased using prepaid debit cards from mid-April 2005 through mid-November 2014. Further, Treasury officials told us that unbanked individuals could use the Tax Time program to purchase paper savings bonds.
Our analysis of IRS data on the Tax Time program indicates that around 91 percent of tax filers who used part of their tax refund to purchase paper savings bonds had part of their refund directly deposited into a bank account. Similarly, based on data from SCF surveys from 2001 through 2010, over 90 percent of households that owned savings bonds had bank accounts. Additionally, according to FDIC’s survey, more than 90 percent of all households the agency surveyed had a bank account.

According to Treasury officials and representatives from several nonprofit organizations that we interviewed, TreasuryDirect also poses some usability challenges. For example, Treasury officials and nonprofit representatives told us that giving savings bonds as a gift through TreasuryDirect can be a cumbersome process. They explained that TreasuryDirect requires the individual buying the savings bond to have the Social Security number and TreasuryDirect account number of the recipient of the gift bond, information the individual may not know. The gifting process also requires the recipients or their parents or guardians to set up a TreasuryDirect account, if they do not have one. Treasury officials told us that issues associated with the process of buying bonds as gifts were the source of the most common complaints from customers about savings bond transactions through TreasuryDirect. In addition, representatives from nonprofit organizations and an academic we interviewed told us that TreasuryDirect generally was not a user-friendly system, even for individuals who were comfortable using the Internet for their financial transactions. They told us that navigating the system was not easy and could pose challenges to potential customers who were not familiar with online financial transactions.
Similarly, Treasury officials told us that customers anecdotally had expressed concerns about difficult navigation, lengthy application pages, organization of information, security features, complicated linked accounts processes, and difficulty locating tax reporting information. When Treasury eliminated paper savings bonds in January 2012, there were nearly 379,000 total funded TreasuryDirect accounts. As of March 2015, there were around 580,000 total funded TreasuryDirect accounts, but the extent to which the increase resulted from savings bond investors has not been determined.

Our analyses of Treasury savings bond data indicated that the decline in savings bond purchases after Treasury discontinued the sale of paper savings bonds in January 2012 was consistent with the overall long-term decline in savings bond purchases. In addition, the decline since January 2012 generally was not statistically significant based on models we estimated. While there was a large decline in purchases in 2012 and 2013 when sales of paper savings bonds were discontinued, there are a number of factors that could account for this decline. For example, savings bond purchases declined in 9 out of 10 years from 2002 to 2011, and some declines were quite large; hence, recent declines in purchases may be reflective of long-term trends. In addition, we found that savings bond purchases have been sensitive to interest rate changes, with savers typically purchasing more when interest rates are higher and purchasing less when they are lower. The low interest rates in recent years may account for some of the decline in savings bond purchases. Although lower-income households that do not have bank accounts or Internet access could face challenges accessing or using TreasuryDirect, this challenge may only affect a small percentage of such households. Our analyses indicate that a small percentage of such households buy savings bonds in general, even when they were available in paper form.
According to data from the 2013 SCF survey, 4.6 percent of lower-income households held savings bonds in 2013, and this percentage had declined from 7.7 percent in 2001.

In a July 2014 Federal Register release, and in support of its strategy to reach new customers, develop new product delivery streams, and increase the number of available product offerings, Treasury released its plans to introduce the Treasury Retail Investment Manager (TRIM), which will replace TreasuryDirect. According to Treasury officials, TRIM will be more flexible and responsive to changing business and digital investing needs. Treasury officials told us that they plan to offer mobile phone access through TRIM, which could improve access for households that do not have computer-based Internet access at home. Treasury officials also told us that TRIM would attempt to address a number of TreasuryDirect’s usability challenges. For example, Treasury officials told us that the TRIM system should be more user friendly for customers, because it will have an online interface that is similar to the online interfaces that banks and stock brokers offer and with which most customers are likely familiar. The system also is expected to streamline various steps for customers navigating the system—for example when they open or sign into accounts—to improve usability and potentially save Treasury money by reducing calls to customer service. According to Treasury officials, they also are exploring ways for TRIM to simplify the process for buying savings bonds as gifts and to allow for multiple funding options. One option under consideration is for a customer to buy a savings bond gift certificate that can be given to another individual, who can go online to open a TRIM account and use the certificate to buy the savings bond directly. Treasury also is exploring multiple funding options for customer accounts to provide options to savers who do not have bank accounts.
As of May 2015, TRIM was under development, and Treasury officials told us that its release date had not been set. According to Treasury officials, TRIM is being developed in four phases—initiation, planning, execution, and closing. Treasury officials told us that TRIM was in the planning phase and that the system's design was being developed. Specifically, Treasury officials were working on defining technical requirements for the system. Before TRIM can be implemented, Treasury will need to complete the execution and closing phases, which include technical design, system coding, various tests, consumer education, and system documentation. Treasury officials told us that they did not have a specific release date for TRIM, which will depend on the time needed to complete the next steps in the project plan. According to a Treasury estimate issued in 2013, TRIM was expected to cost around $18 million to develop and implement. Treasury officials told us that, as of May 2015, they did not have any changes to this estimate and that the costs they had incurred thus far had been consistent with it. They also told us that Treasury had tentative plans to develop an implementation plan for TRIM by April 2016. Since 2010, U.S. tax filers have used the Tax Time program to save by using their tax refund to purchase paper savings bonds. For example, about 55,000 tax filers with adjusted gross incomes of $25,000 or less participated in the program for tax years 2010 through 2013 and bought about $13.7 million in savings bonds. Treasury has been extending the program annually in consideration of some of the program's benefits, but not its costs. Since 2010, U.S. tax filers have been able to use their tax refund to purchase paper savings bonds through the Tax Time Savings Bond program. In 2009, President Obama proposed a package of initiatives to spur increased savings that included a provision for purchasing savings bonds with tax refunds.
Under the Tax Time program, tax filers receiving a tax refund may use an IRS form to allocate their refund among several options, such as purchasing paper savings bonds or depositing the refund directly into their bank account. As shown in table 1, in tax years 2010 through 2013, about 142,000 total tax filers used the Tax Time program to buy a total of about $72.5 million in paper savings bonds. (According to data provided by Treasury, about 20 percent of these 142,000 tax filers were repeat participants in the program.) These filers purchased, on average, approximately $500 in paper savings bonds each year. Table 1 also shows that about 55,000 tax filers with an adjusted gross income of $25,000 or less collectively bought about $13.7 million in paper savings bonds. These filers purchased, on average, approximately $250 in paper savings bonds each year. At the same time, the number of tax filers participating in the Tax Time program and the amount of savings bonds purchased under the program were relatively small. The total number of tax filers receiving a refund for tax years 2010 through 2013 was more than 100 million in each year, and Tax Time participants made up less than 1 percent of this group. Similarly, the amount of savings bonds purchased through the program from 2010 through 2013 accounted for about 1 percent of the total amount of all savings bonds purchased during those years. About 30 percent of Tax Time program participants also were tax filers who received the Earned Income Tax Credit. Enacted by Congress in 1975, the Earned Income Tax Credit is one of the largest antipoverty programs. Generally, income and family size determine a taxpayer's eligibility, and the credit is a refundable tax credit for low- to moderate-income working individuals and couples, particularly those with children.
As shown in table 2, about 30 percent of tax filers participating in the program from 2010 through 2013 received the Earned Income Tax Credit. According to representatives from three nonprofit organizations and two academics we interviewed, tax season provides an opportunity for tax filers receiving a refund to set aside an amount of money specifically for savings. They told us that tax season was often the one time during the year that tax filers—particularly those with low incomes—had a relatively large lump sum of money available to save. However, in some instances, tax filers receiving a refund may already know how they plan to use their refunds, and those plans may not include any savings. Treasury has been extending the Tax Time program on an annual basis and plans to continue extending it in the short term. According to Treasury officials, the program was scheduled to expire after the 2015 tax season, in which case tax filers would no longer have had the option to use the IRS form to purchase paper savings bonds. However, Treasury officials told us that the agency decided in December 2014 to extend the program through the 2016 tax season. The decision was made by the Fiscal Assistant Secretary of the Treasury based on an internal recommendation from the Commissioner of the Bureau of the Fiscal Service, which oversees the savings bond program. Treasury officials said that they intended to continue recommending extension of the paper Tax Time bond option until a suitable electronic alternative is implemented. However, Treasury officials did not provide us with any additional information on how an electronic alternative would replace the option of purchasing paper savings bonds. For participants who do not have Internet access or do not want to buy bonds electronically, it is not clear what a suitable electronic alternative would be.
Although Treasury has been extending the Tax Time program on an annual basis, it has not assessed the program's costs along with its benefits. In deciding to extend the program in the last 2 years, Treasury officials told us that they considered participation levels and the amount of savings bonds purchased through the program. Such data indicate some of the program's benefits, namely its ability to promote savings by lower-income and other households. While the amount of bonds purchased and program participation levels can be quantified, other benefits of the program, such as providing a savings opportunity for lower-income households that may not be able to access TreasuryDirect to purchase savings bonds online, are more difficult to quantify. Although Treasury officials considered some of the Tax Time program's benefits in deciding to extend it, they generally did not consider the program's costs in their decision-making process. According to Treasury and IRS officials, Treasury has not conducted an analysis of the current costs of the program or determined how much Treasury would save if the program were allowed to expire after the 2016 tax season. IRS officials told us that IRS's current costs to administer the program were minimal, because IRS largely processes the forms electronically. Treasury officials told us that the current cost of printing and mailing a paper savings bond was approximately 17 cents, but this estimate did not include the share of the overhead, system, and other costs attributable to paper savings bonds. Moreover, the 17-cent estimate also did not include any cost that IRS incurred for its role in implementing the program. In prior work on agency stewardship of public funds, we reported that properly estimating program costs is necessary for several reasons and that comparing these costs to the program's benefits to evaluate alternatives related to program decisions is a best practice.
Producing cost estimates is important for evaluating resources and making decisions about programs at key decision points. Credible cost estimates also help support funding decisions for an agency's programs. Comparing these costs to the benefits in order to consider all alternatives for a program ensures linkage among the alternatives. In deciding to extend the Tax Time program, Treasury has considered some of the program's benefits but generally not the program's costs, both of which are needed to evaluate program performance and alternatives. As discussed, Treasury has previously considered levels of program participation and amounts of savings bonds purchased by participants in its decisions, and most recently has extended the program until a suitable electronic alternative is available. Considering the Tax Time program's costs as well as its benefits would provide Treasury with important information for evaluating both the resource requirements of continuing the program and the program's performance in relation to its benefits and costs, when deciding whether to allow the program to expire. For example, if the program's operating costs are minimal, then the program's benefits, such as providing opportunities for lower-income households to save, may outweigh its costs. Conversely, if program costs are significant, those costs might outweigh the program's benefits in light of the number of tax filers using the program and the availability of an electronic alternative. However, without full, reliable estimates of the Tax Time program's costs to compare to its benefits, Treasury's ability to make a fully informed decision is limited. GAO found that lower-income households save relatively small amounts and face a number of savings challenges that result, in part, from limited access to financial institutions and products.
According to several academics and nonprofits we interviewed, savings and other asset-building programs are fundamental building blocks for helping lower-income households achieve economic mobility and security. Savings provide a buffer against unexpected events and a means to move up the economic ladder through investments, such as buying a home, paying for college, starting a business, or saving for retirement. In addition to the Tax Time program, discussed above, federal, state, and local agencies as well as nonprofits have developed a number of programs aimed at assisting lower-income households to save and build assets. These programs include financial literacy and education services, and range from promoting short-term financial goals, such as emergency savings, to long-term financial goals, such as saving for retirement. According to 2013 SCF data, lower-income households have limited savings in bank accounts and other financial assets. Households in the lowest income quintile (or bottom fifth) had a median income of around $14,200 in 2013, and households in the next income quintile had a median income of around $28,400. As shown in table 3, 82 percent and 93 percent of the U.S. households in the bottom two income quintiles, respectively, had financial assets, but the median values of these financial assets were $550 and $3,064, respectively. In other words, half of the households in the lowest income quintile held $550 or less in financial assets. In comparison, the median value of financial assets for all surveyed households in 2013 was $17,580. Bank accounts are the most widely held financial asset among lower-income households, according to 2013 SCF data. However, separate from bank accounts, a significant majority of lower-income households hold few or no other financial assets, such as stocks, bonds, or mutual funds. For example, 9 percent of U.S.
households in the bottom income quintile have retirement accounts, compared with around 28 percent of households in the next lowest income quintile. (Financial assets in SCF include bank accounts, certificates of deposit, savings bonds, bonds, stocks, mutual funds, retirement accounts, and cash value life insurance.) As shown in figure 3, median household financial assets, excluding retirement accounts, dropped in the wake of the 2001 and 2008 recessions and have not recovered to pre-recession levels. Median holdings in 2013 were down by 40 percent or more in comparison to median holdings in 2001, both for the population as a whole and for lower-income households. The median value of such assets for the two lowest income quintiles was $1,000 in 2013, reflecting the relatively low level of short-term savings for these households. Since at least 2003, the federal government has played a broad role in promoting financial literacy, which encompasses financial education—the process by which individuals improve their knowledge and understanding of financial products, services, and concepts. Financial literacy plays an important role in helping to promote the financial health and stability of individuals and families. In prior work on financial literacy, we reported that federal agencies have made progress in recent years in coordinating their financial literacy activities and collaborating with nonfederal entities, in large part due to the efforts of the federal multiagency Financial Literacy and Education Commission (FLEC). In addition to their financial literacy efforts, some federal agencies have developed savings programs involving financial assets. These programs are aimed at helping households and individuals that may not have access to traditional savings vehicles, such as employer-sponsored retirement plans.
According to a Treasury official, Treasury launched the myRA program, which is in a soft-launch phase, to promote retirement savings among individuals without access to employer-sponsored retirement plans. According to Treasury, the program offers a retirement savings account that is a Roth IRA, so it follows the same rules that apply generally to Roth IRAs and receives the same tax treatment. A myRA has no fees and no minimum-amount requirement, has a maximum balance of $15,000, and can be funded through payroll direct deposit. The account holds a savings bond that will never go down in value (except from withdrawals), and the security in the account, like other Treasury securities, is backed by the U.S. Treasury. Participating employers make myRA information available to their employees. Employees are able to enroll in the program and then elect to have a portion of each paycheck directly deposited into their myRA automatically. Treasury officials stated that they worked to develop the framework for this program in 2014, including issuing a new Treasury security to serve as the investment option for these accounts and designing easy-to-understand materials for savers. Treasury continued to build on the development process by making myRA available to a small group of employers, including federal agencies. Presently, Treasury is working closely with this small group of participants to get feedback and better ensure that the user experience is as simple and straightforward as possible–both for employers and employees–before myRA becomes more broadly available later this year. Treasury has indicated that it is too early to begin evaluating the impact of the myRA program. However, Treasury officials told us that they will continue to monitor the progress of the program as it moves through its soft-launch phase.
Given the challenges low- and moderate-income households face in obtaining financial or banking services, FDIC has created a number of initiatives to help such individuals improve their financial skills and use financial institutions, according to FDIC officials. For example, FDIC officials stated that, in 2001, FDIC developed the Money Smart program, a comprehensive financial education curriculum designed to help consumers, especially low- and moderate-income consumers and entrepreneurs, enhance their financial skills and create positive banking relationships. Officials added that FDIC provides the curriculum free of charge in formats for consumers to complete on their own or through instructor-led classes. According to FDIC, the program has reached over 2.75 million consumers since 2001. In April 2007, FDIC used a three-part survey to determine the effectiveness of its Money Smart financial education curriculum and found that the program positively influenced how course participants managed their finances and their financial confidence. The study also found that these positive changes were sustained months after participants had completed Money Smart training. Specifically, the study found that participants were more likely to open deposit accounts, save money in a mainstream deposit product, use and adhere to a budget, and demonstrate increased confidence in their financial abilities when they were contacted 6 to 12 months after completing the Money Smart course than before beginning the course. To further promote low- and moderate-income consumers' access to financial services, FDIC developed the Model Safe Accounts Pilot in January 2011. The pilot was designed to evaluate the feasibility of having financial institutions offer safe, low-cost transaction and savings accounts (Safe Accounts) that are responsive to the needs of underserved consumers, including those with low and moderate incomes.
Nine financial institutions participated in the pilot by offering Safe Accounts, which are checkless, card-based electronic accounts that limit acquisition and maintenance costs. These accounts allow withdrawals only through automated teller machines, point-of-sale terminals, automated clearinghouse preauthorizations, and other automated means. Overdraft and nonsufficient funds fees are prohibited with the transaction accounts. According to FDIC, the nine banks opened more than 3,500 Safe Accounts during the pilot. Retention of these accounts exceeded expectations—more than 80 percent of transaction accounts and 95 percent of savings accounts remained open at the end of the 1-year pilot period. According to FDIC, Safe Accounts performed on par with or better than other transaction and savings accounts, and several of the banks plan to continue to offer Safe Accounts; some banks also are considering the possibility of graduating pilot accountholders to traditional deposit accounts. Although the Safe Accounts program was only a 1-year pilot, FDIC officials told us that the agency provides interested FDIC-insured institutions with a Safe Accounts template that includes guidelines for offering cost-effective transactional and savings accounts to underserved consumers. This template was based, in part, on lessons learned during the pilot phase. FDIC announced its Youth Savings Pilot Program on August 4, 2014. According to FDIC, this pilot program seeks to identify and highlight promising approaches to offering financial education tied to the opening of safe, low-cost savings accounts for school-aged children. The pilot has two phases. According to FDIC officials, Phase I includes FDIC-insured institutions currently working with schools or nonprofit organizations that help students open savings accounts in conjunction with financial education programs during the 2014-2015 and 2015-2016 school years.
Nine banks differing in size, location, and business models were selected for the first phase. The officials added that Phase II will include FDIC-insured institutions beginning or expanding youth savings account programs during the 2015-2016 school year. FDIC is collecting summary information—including data on the number of accounts opened and financial education approaches used—from pilot participants. When the pilot is complete, FDIC intends to publish a report to provide financial institutions with promising approaches to working with schools and other organizations to combine financial education with access to a savings account. The Office of Community Services at the Department of Health and Human Services' Administration for Children and Families administers the Assets for Independence program. Started in 1998, the Assets for Independence program awards grants to community-based entities (nonprofits, as well as state, local, and tribal government agencies that partner with nonprofits) to implement an asset-based approach for assisting low-income families to become economically self-sufficient, according to the Administration for Children and Families. According to agency officials, entities receiving these grants enroll participants in Assets for Independence projects to save earned income in special-purpose, matched savings accounts, also called individual development accounts. According to agency officials, every dollar that a participant deposits into an Assets for Independence individual development account is matched by the Assets for Independence project. Match rates can vary from $1 in match funds for every $1 the participant deposits in his or her individual development account to as much as $8 in match funds for every $1 saved. Participants generally must use their individual development accounts and matching funds for a qualified expense: the purchase of a home, the capitalization or expansion of a business, or postsecondary educational expenses.
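The match-rate rules above reduce to simple arithmetic. The following Python sketch is illustrative only: the $1-to-$8 match-rate range comes from the program description, while the function name and the optional per-project cap are assumptions, not Assets for Independence program rules.

```python
# Illustrative sketch of an individual development account (IDA) match under
# an Assets for Independence project. Only the $1-$8 match-rate range comes
# from the program description; the function name and optional cap are
# hypothetical.

def ida_match(total_deposits, match_rate, match_cap=None):
    """Return the match funds earned on a participant's IDA deposits."""
    if not 1 <= match_rate <= 8:
        raise ValueError("match rates range from $1 to $8 per $1 saved")
    match = total_deposits * match_rate
    if match_cap is not None:  # some projects may cap total match funds
        match = min(match, match_cap)
    return match
```

Under this sketch, for example, a participant who saved $500 in a project with a $3 match rate would earn $1,500 in match funds.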
According to agency officials, under the program, grantees are required to assist participants in the demonstration project in obtaining the skills necessary to achieve economic self-sufficiency. Examples of such activities include providing financial education and credit counseling. As illustrated in table 4, from 2010 through 2014, according to agency officials, the Administration for Children and Families awarded 269 Assets for Independence grants and over $62 million to a range of organizations, including nonprofits, state or local governments, tribal governments, and community development financial institutions. Table 4 also shows the program budget for the Administration for Children and Families since fiscal year 2010. According to Administration for Children and Families data through fiscal year 2010, more than 90 percent of Assets for Independence projects allowed participants to pursue homeownership as an asset goal, while more than 80 percent allowed participants to pursue postsecondary education or training and business capitalization as asset goals. Nearly one-third of projects allowed participants to transfer account savings to the individual development account of a spouse or dependent. In 2011, the Administration for Children and Families began a random assignment evaluation of the Assets for Independence program at two grantee sites. This evaluation will assess the impact of Assets for Independence program participation on savings, savings patterns, and asset purchases by lower-income individuals and families. It builds on a previous quasi-experimental evaluation and studies of other individual development account projects not funded by Assets for Independence.
The 2008 evaluation used data from the early to mid-2000s and found that Assets for Independence program participants were 35 percent more likely to become homeowners, 84 percent more likely to become business owners, and nearly twice as likely to pursue postsecondary education or training compared with a corresponding national sample of nonparticipants eligible for the program. According to the Administration for Children and Families, the random assignment evaluation will further understanding of the program's overall impact on early participant outcomes. The evaluation team completed participant enrollment and baseline data collection in July 2014 and expects to release its final report in early 2016. The Department of Housing and Urban Development (HUD) awards competitive grants to public housing agencies for the administration of programs that encourage residents of public housing to attain self-sufficiency, such as the Family Self Sufficiency program. The program funds coordinators who help participants achieve employment goals and accumulate assets. Through coordination and linkage with local service providers, program participants receive training and counseling that enable them to increase their earned income and decrease their need for rental assistance. Under the Family Self Sufficiency program, escrow accounts are used as incentives to increase work effort and earnings. Specifically, when participants have to pay a higher rent after their earned income increases, the public housing agency calculates an escrow credit that is deposited each month into an interest-bearing account (see fig. 4). Families that successfully complete their Family Self Sufficiency contract receive their accrued escrow funds. According to HUD officials, over 72,000 households participated in the program in fiscal year 2014, and 4,382 families successfully completed their Family Self Sufficiency contracts.
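The escrow mechanic described above can be sketched in simplified form. The calculation below is a hypothetical illustration, not HUD's actual formula (which involves adjusted-income rules and caps); it shows only the basic idea that the rent increase attributable to higher earned income is credited to an interest-bearing escrow account each month.

```python
# Simplified illustration of a Family Self Sufficiency escrow credit: when a
# participant's rent rises because earned income increased, the increase is
# credited to an interest-bearing escrow account each month. This is an
# assumed sketch, not HUD's escrow formula.

def monthly_escrow_credit(baseline_rent, current_rent):
    """Monthly escrow credit as the rent increase over the baseline."""
    return max(current_rent - baseline_rent, 0)

def escrow_balance(baseline_rent, monthly_rents, rate_per_month=0.0):
    """Accumulate monthly credits with simple interest on the balance."""
    balance = 0.0
    for rent in monthly_rents:
        balance = balance * (1 + rate_per_month)
        balance += monthly_escrow_credit(baseline_rent, rent)
    return balance
```

For instance, a participant whose rent rose from $300 to $450 after an earnings increase would accrue $150 per month under this sketch, or $1,800 over a year before interest.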
The appropriation for the Family Self Sufficiency program was $75 million in each of fiscal years 2013, 2014, and 2015, and HUD is requesting $85 million for 2016. In September 2004, HUD commissioned a 5-year prospective study of the Family Self Sufficiency program, focusing on programs serving Housing Choice Voucher recipients. The study provided a final assessment of the experiences of a representative sample of Family Self Sufficiency participants who enrolled in 2005 and 2006. The study also examined the relationship between participants' characteristics, Family Self Sufficiency programmatic features, and program outcomes. The study found that after 4 years in the Family Self Sufficiency program, 24 percent of the study participants completed program requirements and graduated. When the study ended, 37 percent had left the program without graduating and 39 percent were still enrolled. Program graduates were more likely to be employed than participants who did not graduate or who were still enrolled in the program. Program graduates also had higher incomes, both when they enrolled in the Family Self Sufficiency program and when they completed it, than participants with other outcomes. Staying employed and increasing their earned incomes helped graduates accumulate substantial savings in the Family Self Sufficiency escrow account. The average escrow account balance was $5,294 for program graduates, representing about 27 percent of their average household income at the time of program enrollment. Recognizing that financial literacy or education is only part of the solution to helping lower-income households achieve financial security, state and local government agencies and nonprofits have developed a variety of programs targeting specific populations or serving a specific savings purpose.
These include retirement savings programs, prize-linked savings programs, short-term emergency savings programs, and various asset-building (or asset accumulation) programs that promote savings for specific goals (e.g., postsecondary education, home ownership, or business ownership). Several states have created prize-linked savings programs to offer a new way to help lower-income and other individuals save. As of 2015, Michigan, Nebraska, North Carolina, and Washington had created Save to Win programs, in which participating credit unions offer their members the opportunity to open prize-linked savings accounts. A Save to Win account is designed as a 12-month share certificate that allows for unlimited deposits throughout the year. Savers are required to deposit only $25 to open an account and earn raffle tickets for every additional $25 deposited in the account, with a cap on the number of entries per month. The cap helps ensure that individuals who cannot save as much still have opportunities to win. Raffle tickets qualify participants for the chance to win monthly cash prizes and grand prizes at the end of the year. According to the Doorways to Dreams Fund, since the launch of Save to Win in 2009, over 50,000 accounts have been opened with over $94 million in savings in Michigan. Moreover, the nonprofit reported that among surveyed Save to Win accountholders, between 62 percent and 81 percent were financially vulnerable. Michigan passed a law in 2003 to allow credit unions to offer "savings promotion raffles." The other states also have modified their laws to allow credit unions to offer prize-linked accounts, savings promotion raffles, or other promotional contests of chance. On the federal level, in 2014, Congress passed the American Savings Promotion Act to provide for the use of savings promotion raffle products by financial institutions to encourage savings.
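The raffle-entry rule described above is simple arithmetic: one entry per $25 deposited in a month, up to a monthly cap. In the sketch below, the cap value of 10 entries is an assumed placeholder, since the description above does not state the actual cap, and the names are illustrative.

```python
# Sketch of the Save to Win entry rule: one raffle ticket per $25 deposited
# in a month, subject to a monthly cap. The cap of 10 is an assumed
# placeholder, not the program's actual limit.

TICKET_INCREMENT = 25    # dollars deposited per raffle ticket
MONTHLY_ENTRY_CAP = 10   # assumed cap on entries per month

def monthly_raffle_entries(deposits):
    """Raffle entries earned for one month of deposits."""
    return min(int(deposits // TICKET_INCREMENT), MONTHLY_ENTRY_CAP)
```

Under this sketch, a saver depositing $100 in a month would earn 4 entries, while one depositing $1,000 would be held to the 10-entry cap, illustrating how the cap keeps smaller savers competitive for prizes.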
According to some nonprofit officials and academics we interviewed, federal and state savings programs primarily promote and provide tax incentives for retirement savings, which tend to benefit higher-income households more than lower-income households. At the same time, they told us that short-term or emergency saving tends to be more important for lower-income households because it helps them meet immediate needs—for example, to cover unexpected car repairs, medical expenses, or temporary unemployment. Some government entities and nonprofit organizations have developed pilot and other programs to promote short-term emergency savings. According to program officials, the AutoSave Pilot was a joint initiative of two nonprofits—New America and MDRC. Program officials told us that the pilot tested the feasibility of establishing automatic savings programs that use direct deposit to divert a small amount of after-tax wages into savings accounts. Automatic savings programs would be especially valuable for individuals who have few liquid assets and limited access to low-cost credit products, because these savings can be used as a personal safety net in the event of unanticipated expenses or a sudden decrease in income, according to New America and MDRC. AutoSave investigated two different program designs. The first design, implemented in fall 2009, was the "opt-in program," in which employees signed up for the AutoSave savings program through their employer. Employees who did not have a savings account were able to open one through a bank or credit union that partnered with the workplace site. With this version of the program design, only the savings deposits were automatic. The opt-in AutoSave program design had been offered to employees at eight workplace sites, ranging in size from 13 to 25,000 employees.
The pilot had a special focus on generating participation among low- to moderate-income workers, although all employees were eligible to sign up. Overall participation rates ranged between 2 percent and 62 percent of all employees at these targeted workplaces, with most sites ranging between 9 percent and 25 percent. In sites where wages were tracked, the majority of participants had wage levels within the lower three-fifths of the wage distribution in their workplace. These participation results were consistent with expectations for the opt-in program design. The second program design investigated was an "opt-out program," in which all employees would have been automatically enrolled in the AutoSave savings program unless they elected not to participate. With this design, both enrollment and deposits would have been automatic. Opt-out enrollment was not actually piloted because MDRC's assessment of the legal and operational risks concluded that, while this approach would presumably be legal in some states, a lack of regulations or case law addressing the model meant that employers would be taking undue risks to implement it. In the absence of such guidance or precedent, MDRC determined that it was not currently feasible to implement the opt-out enrollment program design (even by using a payroll card with an attached savings product). According to an official at the City of San Francisco, the EARN Starter Account program, developed by the California nonprofit EARN, seeks to increase the supply of starter account products that allow unbanked lower-income households to begin saving. Program participants must earn at or below 50 percent of their area median income. The EARN Starter Account is an online program that rewards participants for consistently saving at least $20 each month for 6 months, and participants can earn a maximum of $55 in matched funds over the 6-month period, according to the nonprofit.
Participants link their existing savings accounts to the EARN Starter Account platform to facilitate savings. If participants make any withdrawals over the 6 months, the matched funds earned will be forfeited and the account may be closed. At the end of 6 months, participants can claim the funds. Participants can continue using the EARN website for another 6 months. Since 2002, 6,000 EARN clients have saved $6.8 million, and 83 percent of participants have continued to save after their formal program ended, according to a qualitative study by the nonprofit. The study found that consistent savers also demonstrated a shift toward future orientation. More specifically, these program participants were planning to acquire more assets (such as further education, the purchase of a home, or founding or developing a small business). EARN is partnering with the City and County of San Francisco to bring the Starter Account platform to low-income San Franciscans, beginning with a pilot program for public housing residents. Some government entities and nonprofit organizations have developed programs to encourage lower-income households to save part of their income tax refund. According to officials at the Center for Social Development at Washington University in St. Louis, Refund to Savings is a pilot program intended to help lower-income households build savings and increase financial security. Launched in 2012, the pilot is a collaboration among Washington University in St. Louis, Duke University, and Intuit Inc. According to program officials, the program is implemented through a version of Intuit’s tax preparation software that is available for free to lower-income taxpayers and reaches approximately 1.2 million households. The goal of the initiative is to design and test a low-cost scalable intervention that can lead tax filers to save part of their tax refund. Under the pilot, Intuit users are assigned randomly to a treatment or control group.
The treatment group uses a version of the software in which they receive prompts to motivate them to save part of their tax refund as emergency savings. In 2013, the pilot tested automatic refund splitting in which the software automatically put part of the tax filer’s refund in a savings account or savings bond. According to officials at the Center for Social Development, tax filers who did not want to split their refund had to select an “I don’t need to save” button to opt out. In 2013, almost 900,000 low- and moderate-income tax filers participated in the pilot, depositing approximately $5.9 million more in savings accounts than they would have without the intervention, according to the Center for Social Development officials. Data generated by program use and refund allocation behavior will be evaluated to determine whether the prompts, saving opportunity, or both increased saving levels compared with the control group, according to the Center for Social Development at Washington University. According to officials at MDRC and New York City’s Office of Financial Empowerment, the SaveUSA program (formerly $aveNYC) is administered by the Mayor’s Fund to Advance New York City and the New York City Center for Economic Opportunity and offers lower-income households an incentive to save a portion of their tax refund. According to program officials, SaveUSA was launched in 2011 in four cities (New York City, Tulsa, San Antonio, and Newark). Participants open a SaveUSA account when they file their taxes. They are required to save at least $200 of their refund for a year, and earn 50 cents for every dollar saved, with a maximum match of $500. According to an April 2014 study of the program by MDRC, nearly two-thirds of SaveUSA participants in 2011 (the program’s first year) qualified for the savings match and received, on average, $191 in savings match dollars.
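The match rules just described (save at least $200 of the refund for a year, 50 cents for every dollar saved, $500 maximum match) reduce to simple arithmetic. The following sketch is illustrative only; the function name and structure are ours, not part of the program's materials:

```python
def saveusa_match(amount_saved):
    """Illustrative SaveUSA match: 50 cents per dollar saved for a
    year, with a $200 minimum savings requirement and a $500 cap."""
    MIN_SAVINGS = 200  # must save at least $200 of the refund
    MATCH_RATE = 0.50  # 50 cents for every dollar saved
    MAX_MATCH = 500    # maximum match of $500

    if amount_saved < MIN_SAVINGS:
        return 0.0     # savings below the minimum earn no match
    return min(amount_saved * MATCH_RATE, MAX_MATCH)

# A filer who saves the $200 minimum earns $100; the $500 cap binds
# once savings reach $1,000 (the maximum pledge noted in the study).
print(saveusa_match(200))   # 100.0
print(saveusa_match(1000))  # 500.0
print(saveusa_match(1500))  # 500.0 (capped)
```

This also shows why, as the MDRC study notes below, participants who pledged the maximum $1,000 were positioned to receive the full $500 match, while those pledging the $200 minimum could earn at most $100.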
In the second program year, 39 percent of the 2011 SaveUSA sample participated again, and about 27 percent received a savings match according to the MDRC study. The MDRC study found that on average, SaveUSA group members received $96 in savings match dollars in the program’s second year. According to the MDRC study, those who received a savings match in both years appear to have been in a better position to save—they tended to be older, were more likely to have more income, and were more likely to have pledged the maximum amount allowed of $1,000, compared with other SaveUSA group members. In contrast, SaveUSA group members who had especially low incomes or who pledged the minimum amount of $200 were the least likely to ever receive a savings match. Asset building is based on strategies that help households build financial or tangible assets, such as savings, a home, or a business. A number of nonprofits, states, and municipalities have developed programs to help lower-income households build assets through the use of individual development accounts or child development accounts. As discussed, the Office of Community Services at the Administration for Children and Families administers the Assets for Independence program, which awards grants to community-based entities, nonprofits, and government agencies to implement special-purpose, matched savings accounts or individual development accounts. The length of the program, amount of matching dollars provided, allowable uses for savings, and other rules may be different from one program to the next. An example of an individual development account is the Assets for All Alliance program. 
According to officials at the Opportunity Fund, this individual development account was launched in 1999 by the Opportunity Fund (formerly Lenders for Community Development) in collaboration with the Silicon Valley Community Foundation Center for Venture Philanthropy and several community partners, including a number of nonprofit social service agencies. According to a study published by the Silicon Valley Community Foundation and Lenders for Community Development, the Assets for All Alliance individual development account program is intended to help lower-income families “learn financial management skills and build assets that would help them permanently improve their economic situation.” Savings by program participants are “matched by philanthropic and government dollars on a two-to-one basis” according to the study. According to the Opportunity Fund, this program has resulted in 1,028 individual development accounts and $2.77 million in total savings towards asset goals. According to officials at the Center for Social Development at Washington University in St. Louis, child development accounts are savings or investment accounts opened as early as birth. The goal of child development accounts is to promote saving and asset building for lifelong development. Child development accounts assets may be used for postsecondary education, homeownership, or enterprise development. In many cases, public and private entities deposit funds into these accounts to supplement savings for the child. Although the goal of child development accounts is long-term savings accumulation, programs differ in design and features. According to the Center for Social Development, enrollment in some states, including Maine and Nevada, is automatic unless parents opt out (opt-out programs). Some other child development accounts are voluntary or opt-in, meaning that parents must enroll their children, often by opening a 529 or bank savings account.
For example, the Nevada College Kick Start program automatically deposits $50 into a 529 account for every public school kindergartner in the state according to officials at the Center for Social Development. By 2014, 70,000 students had been enrolled in Kick Start. Officials told us that Maine’s College Challenge is the only statewide universal child development account program in the nation, benefiting all children born in Maine (more than 40,000 children in 2014). The program automatically deposits $500 into a 529 account on the child’s behalf. Both Nevada and Maine’s 529 plans offer savings matches to state residents according to officials at the Center for Social Development. Other examples of child development accounts include those developed by national nonprofits including the Corporation for Enterprise Development and New America. According to New America, some municipalities also have launched their own child development account programs. For example, as New America reports, in San Francisco the Kindergarten to College program was launched in 2011 and opens accounts for every kindergartner in the city’s public schools. Lower-income households face a variety of challenges to saving. U.S. savings bonds continue to provide Americans, including those with lower incomes, with an affordable, safe, and convenient way to save and invest. However, when Treasury ended the over-the-counter sale of paper savings bonds through financial institutions in January 2012, it created challenges for some bond buyers who had to rely on accessing TreasuryDirect to purchase savings bonds online. Treasury has taken steps to develop a more flexible and responsive Internet-based system than TreasuryDirect, but the TRIM system is in the early stages of development. Treasury intends for these changes to address some of the existing access and other challenges associated with TreasuryDirect.
Currently, the Tax Time Savings Bond program provides the only means by which individuals can purchase paper savings bonds, but the program’s future is uncertain, because Treasury may discontinue the program when TRIM is implemented. However, the TRIM system still will require Internet access by computer or mobile device, and Tax Time program users who lack Internet access may not be able to save by buying savings bonds at tax time if the program is discontinued. How the benefits and costs of the Tax Time program would compare when Treasury implements TRIM is not known—in part because Treasury generally has considered the program’s benefits but not the program’s costs. Without considering both, Treasury cannot make a fully informed decision on whether to discontinue the Tax Time program when an electronic alternative is available. To help ensure that Treasury can make a fully informed decision on whether to discontinue the Tax Time Savings Bond program as it implements the TRIM system, GAO recommends that the Secretary of the Treasury consider the benefits and costs of the Tax Time program in future decisions on whether to extend the program. We provided a draft of this report to Treasury and IRS for review and comment. In their comment letter, which is reprinted in appendix II, Treasury agreed with GAO’s recommendation and stated that it would conduct a cost-benefit analysis of the Tax Time Savings Bonds program. Treasury also provided technical comments, which we incorporated, as appropriate. We also provided draft excerpts for technical comment to federal and other agencies—including the Departments of Health and Human Services and Housing and Urban Development, FDIC, New York City’s Office of Financial Empowerment, and San Francisco Office of Financial Empowerment—and nonprofit organizations, including the Center for Social Development at Washington University, Doorways to Dreams Fund, MDRC, and Opportunity Fund. 
These third parties provided technical comments, which we have incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to Treasury, IRS, FDIC, HUD, and the Department of Health and Human Services, interested congressional committees, members, and others. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Cindy Brown Barnes at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our review examines (1) the effect of Treasury’s elimination of paper U.S. savings bonds, including on the savings bond program and bond purchases; (2) the extent to which Treasury’s Tax Time Savings Bond program has promoted savings, particularly by lower-income households, and Treasury’s plans for the program’s future; and (3) the extent to which lower-income households are saving using financial products, and some of the government and nonprofit programs developed to promote savings by lower-income households. For all three objectives, we analyzed various data. First, we used data issued by the Department of the Treasury (Treasury) on the amount of U.S. savings bonds purchased from 2001 through 2013 to analyze trends in savings bond purchases over this period, including the effect of the Treasury’s elimination of paper savings bonds on savings bond purchases. Second, we used data from the triennial Survey of Consumer Finances (SCF) issued by the Board of Governors of the Federal Reserve System for survey years 2001, 2004, 2007, 2010, and 2013 to estimate the percentage of U.S. 
households holding financial assets, including U.S. savings bonds; the median value of such financial assets held by U.S. households; and the median income of households. The survey data include information on families’ balance sheets, pensions, income, investments, and demographic characteristics. We analyzed the U.S. population data as a whole and also considered the bottom two income quintiles separately. We chose these survey years because they provide a period of about 10 years prior to and 1 year after the discontinuation of the sale of paper savings bonds at financial institutions. SCF data are based on probability samples and estimates are formed using the appropriate estimation weights provided with the survey’s data. Because each of these samples follows a probability procedure based on random selections, they represent only one of a large number of samples that could have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (i.e., plus or minus 2.5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, all percentage estimates have 95 percent confidence intervals that are within 5 percentage points of the estimate itself, and all numerical estimates other than percentages have 95 percent confidence intervals that are within 5 percent of the estimate itself. We also reviewed documentation on the SCF, such as codebooks and Federal Reserve bulletins.
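The confidence-interval concept described above can be illustrated with a simplified sketch. Note the caveat: the SCF relies on estimation weights and replicate-based variance estimation, so the simple-random-sample formula below (and the example sample size and estimate) are illustrative assumptions only, not the survey's actual method:

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Approximate 95 percent confidence interval for an estimated
    proportion, assuming a simple random sample of size n. (The SCF
    itself uses weights and replicate-based variance estimation, so
    this only illustrates the interval concept.)"""
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of p_hat
    return p_hat - z * se, p_hat + z * se

# Example: a hypothetical 15 percent asset-holding rate estimated
# from 4,500 interviews.
low, high = proportion_ci(0.15, 4500)
print(f"{low:.3f} to {high:.3f}")
```

An interval constructed this way would contain the true population proportion in roughly 95 percent of repeated samples, which is the interpretation given in the text.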
Third, we used aggregated data provided by the Internal Revenue Service (IRS) on income tax filers who used at least part of their tax refunds to buy paper savings bonds from 2010 through 2013 to analyze the number of tax filers who bought paper savings bonds, including those with adjusted gross incomes of $25,000 or below—the lowest income category reported in the data—and the amount of savings bonds they purchased. We also used the aggregated data to analyze refund options used by the tax filers (such as paper check and paper savings bond, direct deposit and paper savings bond, or paper savings bond only) and demographic information about the filers, such as their age. We assessed the reliability of the data we used by interviewing knowledgeable officials, and conducting manual testing on relevant data fields, such as the number of tax filers who participated in the program and amounts of savings bonds purchased. We found the data we reviewed to be sufficiently reliable for the purposes of our analyses. To examine the effect of Treasury’s elimination of paper U.S. savings bonds, including on the savings bond program and bond purchases, we reviewed data on savings bond purchases from 2001 through 2013, and analyzed trends in purchases for this time period, including before and after paper savings bonds were discontinued in January 2012. Specifically, to analyze long-term trends in savings bond purchases and more recent trends since the end of paper sales, we estimated two econometric models. The first model was based on a portfolio choice model, and modeled purchases as a function of interest rates, inflation, and economy-wide risk (using the Chicago Board Options Exchange’s Volatility Index). In other words, consumers may make savings bond purchase decisions the same way they make other decisions about financial portfolio allocation, based on risk and return considerations.
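The first model amounts to an ordinary least squares regression of monthly purchases on the three explanatory variables. The sketch below is purely illustrative: the data are synthetic placeholders with made-up coefficients, not Treasury's series or the actual estimates from our analysis:

```python
import numpy as np

# Sketch of the portfolio-choice specification: monthly savings bond
# purchases regressed on interest rates, inflation, and a market
# volatility index. All values below are synthetic placeholders.
rng = np.random.default_rng(0)
months = 120
rates = rng.uniform(0.5, 5.0, months)      # interest rate, percent
inflation = rng.uniform(0.0, 4.0, months)  # inflation, percent
vix = rng.uniform(10, 40, months)          # volatility index level
purchases = (100 + 8 * rates - 3 * inflation - 0.5 * vix
             + rng.normal(0, 2, months))   # assumed "true" relationship

# Ordinary least squares: stack a constant and the three regressors.
X = np.column_stack([np.ones(months), rates, inflation, vix])
beta, *_ = np.linalg.lstsq(X, purchases, rcond=None)
print(beta)  # estimated [intercept, rate, inflation, vix] coefficients
```

With enough observations, the estimated coefficients recover the assumed relationship; the actual models also included monthly seasonal effects (dummy variables), which are omitted here for brevity.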
The second model was based on linear and quadratic time trends to capture the long-term reduction in purchases. We included monthly seasonal effects in both models. The drop in savings bond purchases after the end of paper sales was consistent with long-term trends and generally not statistically significant. The drop in purchases after the end of paper sales also was consistent with the reduction in interest rates at the time (the coefficient on interest rates was highly statistically significant). As with any econometric model, our approach is imperfect and is unlikely to include all factors that influence savings bond purchases. Additional data over time might provide different or more definitive estimates of the change in purchases associated with the end of paper sales. We reviewed Federal Register releases on TreasuryDirect and its replacement system, the Treasury Retail Investment Manager; Treasury documentation, including a description of data in the monthly statement of public debt, estimates of cost savings from eliminating paper savings bonds, press releases, Bureau of the Fiscal Service’s President’s budgets and capital investment plans; and TreasuryDirect materials. To assess the reliability of Treasury’s cost estimates, we interviewed Treasury officials on how the estimates were determined and reported. We also interviewed Treasury officials to discuss a range of issues related to its savings bond program, including the benefits and costs of eliminating paper savings bonds, concerns raised about TreasuryDirect, and plans for replacing TreasuryDirect. To determine the extent to which Treasury’s Tax Time Savings Bond program has promoted savings, we analyzed IRS data on the use of the program by tax filers for tax years 2010 through 2013 (as discussed in detail above). 
We also reviewed IRS documentation on the program, such as descriptions on how the program operates and answers to common questions about the program, and studies on the Tax Time program published by academics and nonprofit organizations focusing on social or economic policy. We interviewed Treasury and IRS officials about the Tax Time program’s operations, benefits, costs, and future in terms of its expiration. To better understand the extent to which this program can help lower-income households to save, we interviewed nonprofit organizations focusing on social or economic policy, including Doorways to Dreams Fund, New America, Corporation for Enterprise Development, and MDRC. To examine the extent to which lower-income households are saving using financial products, we examined SCF data for survey years 2001, 2004, 2007, 2010, and 2013 (as described in greater detail above). Based on these data, we defined lower-income households as those in the lower two quintiles of the income distribution of households in the United States. To describe some of the government and nonprofit programs developed to promote savings by lower-income households, we conducted Internet and literature searches for research, initiatives, testimonies, and studies on savings programs targeting lower-income households and reviewed materials on such programs. We specifically reviewed select federal, state, local, and nonprofit programs targeting either long-term (such as retirement or asset accumulation) or short-term savings goals for lower-income households. For the purposes of this report, we focused on programs designed to promote savings using financial assets, such as bank accounts, bonds, mutual funds, and retirement accounts. We generally excluded programs designed to promote savings through home ownership or other nonfinancial assets.
For federal programs, we focused our review on federal agencies involved in promoting financial literacy that are members of the multiagency Financial Literacy and Education Commission (FLEC). We interviewed six FLEC member agencies—the Departments of the Treasury, Housing and Urban Development, Health and Human Services, and Education; the Federal Deposit Insurance Corporation; and the Bureau of Consumer Financial Protection, also known as the Consumer Financial Protection Bureau—about their savings programs and reviewed related documentation. We also reviewed select state, local, and nonprofit programs targeting lower-income households. We selected these programs based on our research of savings programs for lower-income households and interviews with FLEC members and other stakeholders. For the programs we selected, we interviewed relevant program officials and reviewed documentation on the programs, including information on participation in the programs where available. Finally, we interviewed other relevant stakeholders, including Doorways to Dreams Fund, New America, Corporation for Enterprise Development, MDRC, Consumers for Paper Options, and academics. We conducted this performance audit from August 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Richard Tsuhara (Assistant Director), Tarek Mahmassani (Analyst-in-Charge), Emily R. Chalmers, Michael Gitner, Michael Hoffman, Wati Kadzai, Robert Letzler, Marc Molino, Patricia Moye, and Andrew Stavisky made significant contributions to this report. U.S.
savings bonds provide Americans with an affordable way to save. In 2012, Treasury stopped selling paper savings bonds at banks as part of its broader electronic initiative. As a result, savings bonds generally must be purchased through TreasuryDirect®. The one exception is the Tax Time Savings Bond program, established in 2010 to enable taxpayers to use their tax refund to buy paper savings bonds. The program is one way for lower-income families to save. You requested that GAO examine Treasury's savings bond program, including the accessibility of TreasuryDirect, and other savings programs. This report examines (1) the effect of Treasury's elimination of paper U.S. savings bonds on the program and bond purchases, (2) the extent to which the Tax Time Savings Bond program has promoted savings by lower-income households and Treasury's future plans for the program, and (3) the extent to which lower-income households are saving and programs developed by federal agencies and others. GAO reviewed agency rules and other documents; analyzed Treasury, Internal Revenue Service, and other data, in part using economic models; and interviewed federal, state, and nonprofit entities and experts involved in savings programs. The Department of the Treasury's (Treasury) elimination of paper savings bonds made buying bonds more difficult for some customers, but GAO's analyses generally indicated that the decline in bond purchases after the change was not statistically significant. Treasury eliminated paper savings bonds in January 2012, after a long-term decline in savings bond purchases. It estimated the change would save about $70 million in program costs from 2012 through 2016. Except for the Tax Time Savings Bond program, customers who want to buy savings bonds must use TreasuryDirect—an online system that requires users to have Internet access and a bank account. Customers without both, which likely includes lower-income households, face challenges accessing TreasuryDirect. 
Treasury is in the early stages of developing a new system, the Treasury Retail Investment Manager (TRIM), to make it easier to buy savings bonds, such as by using a mobile device, which often is the primary means of accessing the Internet for many lower-income households. A little more than one-third of the users of Treasury's Tax Time Savings Bond program—the only way to purchase paper bonds—were lower-income tax filers (filers with an adjusted gross income of $25,000 or less), but the program's future is uncertain. Since 2010, tax filers have been able to use a tax form to buy paper savings bonds with their tax refund. For tax years 2010 through 2013, about 142,000 tax filers (less than 1 percent of tax filers receiving refunds) used at least part of their tax refund to buy nearly $72.5 million in savings bonds. Of these filers, about 55,000 had incomes of $25,000 or less and bought about $13.7 million in savings bonds, or about $250, on average, per filer each year. Treasury has been extending the program partly because the amount of bonds purchased and participation levels indicate that the program is providing benefits, but it generally has not considered the program's costs. In May 2015, Treasury officials told GAO that they plan to continue to extend the program until TRIM can provide a suitable electronic alternative. Because TRIM will require Internet access by computer or mobile device, Tax Time program users without such access may no longer be able to save by buying bonds with their refunds after TRIM is implemented. In prior work on agency stewardship of public funds, GAO reported that agencies, as a best practice, should consider both benefits and costs in considering alternatives related to program decisions. Without considering both, Treasury cannot make a fully informed decision on whether to discontinue the Tax Time program when an electronic alternative is available. 
On the basis of GAO's analysis of data from the most recent Survey of Consumer Finances conducted in 2013, the median value of financial assets held by the bottom fifth of income earners (whose median annual income was $14,200) was $550. Given the limited savings of lower-income households and savings challenges faced by such households, a number of federal agencies have developed programs to promote savings. For example, Treasury's myRA®, which is in a soft-launch phase, promotes retirement savings for individuals without access to employer-sponsored retirement plans. State, local, and nonprofit agencies also have initiated programs that promote savings for retirement, child development, or emergencies and generally target lower-income households. Eligibility requirements and participation vary by program. GAO recommends that as Treasury implements the TRIM system, it consider the benefits and costs of the Tax Time program in future decisions on whether to extend the program. Treasury agreed with GAO's recommendation.
Cable television service emerged in the late 1940s to fill a need for television service in areas with poor over-the-air reception, such as mountainous or remote areas. At that time, cable operators simply retransmitted the signals of local broadcast stations. By the late 1970s, cable operators began to provide new cable networks, such as HBO, Showtime, and ESPN, and the number of cable subscribers increased rapidly. Two significant changes occurred in the 1990s and early 2000s. First, the Congress passed the Cable Television Consumer Protection and Competition Act of 1992 that, among other things, prohibited local franchising authorities from awarding exclusive (or monopoly) franchises to cable operators, thereby opening the door to wire-based competition. Second, cable operators began offering new services, such as digital cable, cable modem Internet access, and telephone, in addition to their basic video service. Today, many cable operators offer these advanced services in bundles with their basic video service. Since its introduction in 1994, direct broadcast satellite (DBS) service has grown dramatically and is now the primary competitor to cable operators. Subscribers to DBS service use a small reception dish to receive signals beamed down from satellites. Because DBS satellites orbit above the equator, a reception dish must point toward the southern sky, and households located in the northern part of the United States need to angle the dish more toward the horizon than households in the southern part of the United States. Unlike cable, which upgraded to digital service in recent years, DBS service has been a digital-based service since its inception. DBS providers generally offer most of the same cable networks as cable operators. However, for many years DBS providers did not offer local broadcast stations to their subscribers in most instances because of copyright obstacles that cable operators did not face.
After the Congress passed the Satellite Home Viewer Improvement Act of 1999, which altered the copyright rules that applied to DBS providers, cable and DBS companies were placed on a more equal competitive footing. From 2001 to 2004, the aggregate number of U.S. households that subscribe to DBS television service grew rapidly. Figure 1 illustrates the growth in total DBS subscription and penetration rates for 2001 through 2004. In July 2001, about 15.5 million households were served by DBS. By January 2004, about 21.3 million households were served by DBS—an increase of 37.8 percent in 2-1/2 years. Similarly, over the same period of time, the overall penetration rate of DBS rose from 13 percent in 2001 to 17.4 percent in 2004—a 33.5 percent increase. DBS penetration rates have been higher in rural areas than in suburban and urban areas throughout the last several years, as shown in figure 2. From July 2001 to January 2004, DBS penetration has grown steadily in all three types of geographic areas. In 2001, penetration rates were highest in rural areas at 25.6 percent, followed by 13.9 percent in suburban areas and 8.6 percent in urban areas. As of January 2004, DBS penetration remained the highest in rural areas, growing to about 29 percent, while it grew to 18 percent of suburban households and 13 percent of urban households. Although the DBS penetration rate in rural areas has been and remains higher than it is in other geographic areas, subscribership has grown more rapidly in suburban and urban areas than in rural areas from 2001 to 2004. In fact, urban areas have experienced the highest growth in overall DBS subscribership. Figure 3 displays the percentage growth in total DBS subscribers and the percentage growth in DBS penetration rates in urban, suburban, and rural areas. From 2001 to 2004, DBS subscribership grew 55 percent in urban areas, 37 percent in suburban areas, and 17 percent in rural areas. 
In the same time period, the growth in penetration rates was also highest in urban areas, at 50.4 percent, followed by suburban penetration growth at 32 percent, and rural penetration growth of 15 percent. Less than 9 percent of American households do not have the opportunity to purchase cable television service because it is not available where they live. However, in these areas, the DBS penetration rate is about 53 percentage points greater than in areas where cable television service is available. Where cable television service is available, cable operators are increasingly providing advanced services, such as digital cable, cable modem, and telephone service. In 2004, the DBS penetration rate was over 20 percentage points greater in areas where cable operators did not provide advanced services, compared with areas where these services were available. Finally, in some limited areas, cable companies compete with other wire-based competitors, and where there is more than one wire-based cable competitor, the DBS penetration rate was 8 percentage points lower than in areas without such an additional competitor. Most households in the United States have access to cable television service. Using Knowledge Network’s 2004 survey, we found that less than 9 percent of responding households reported that cable television service was not available. According to FCC, households without access to cable television service generally reside in smaller and rural markets. Where cable television service is not available, households are far more likely to purchase DBS service. In figure 4, we illustrate the percentage of households receiving television service through four different modes (over-the-air, cable, DBS, and other) for areas where households report that cable television service is available and where it is not available.
In areas where cable television service is available, 65 percent purchase cable service, 16 percent use free over-the-air television, and about 15 percent purchase DBS service. When cable television service is not available, a significant percentage of households—nearly 68 percent—purchase DBS service, while nearly all of the remainder—31 percent—rely on over-the-air television. Since 2001, the percentage of cable operators providing advanced services (digital cable, cable modem, and telephone services) has increased. In figure 5, we illustrate the percentage of cable operators providing no advanced services; one or more, but not all, advanced services; and all three advanced services based on FCC’s annual survey of cable franchises. In 2001, over 18 percent of cable operators did not provide advanced services, while less than 3 percent did not provide advanced services by 2004. At the same time, the percentage of cable operators providing all three advanced services increased from 16 percent in 2001 to 26 percent in 2004. In 2004, most cable operators (about 66 percent) provided both digital cable and cable modem services, but not telephone service. In areas where cable operators do not provide advanced services, the DBS penetration rate is significantly greater than in areas where cable operators provide advanced services. In figure 6, we illustrate the DBS penetration rate for 2001, 2002, and 2004 based on the availability of advanced services from cable operators. In 2004, the DBS penetration rate was over 36 percent in areas where cable operators did not provide advanced services, compared with approximately 16 percent in areas where cable operators provided one or more, but not all, advanced services, and only 14 percent in areas where cable operators provided all three advanced services. In fact, the DBS penetration rate increased modestly since 2001 in areas where cable operators provide one or more advanced services. 
However, the DBS penetration rate increased 12 percentage points since 2001 in areas where cable operators do not provide advanced services. Although the Telecommunications Act of 1996 sought to increase wire-based competition, few American households have a choice among companies providing television service via wire-based facilities. In a 2005 report, FCC noted that few franchise areas—about 1 percent—have effective competition based on the presence of a wire-based competitor. These competitors include telephone companies, electric and gas utilities, and broadband service providers. In areas with more than one wire-based cable provider, the DBS penetration rate is lower than in areas with only one wire-based provider. In figure 7, we illustrate the DBS penetration rate for 2004 in cable franchise areas with and without wire-based cable competition. The DBS penetration rate is 18 percent in areas without wire-based competition and 10 percent in areas with wire-based competition. We found that three key geographic factors and three key competitive factors influence DBS penetration rates in cable franchise areas throughout the United States. Regarding geographic factors, we found that (1) the DBS penetration rate is lower in areas with a high prevalence of multiple dwelling units, such as apartments and condominiums; (2) the DBS penetration rate is lower in areas where the angle at which the satellite dish must be installed is relatively low, such that the dish points more toward the horizon than toward the sky; and (3) the DBS penetration rate is higher in nonmetropolitan areas. 
In terms of competitive factors, we found that (1) the DBS penetration rate is lower in areas where the cable operator’s system has greater system capacity; (2) the DBS penetration rate is lower in areas where there is more than one wire-based cable provider; and (3) the DBS penetration rate is higher in areas where DBS providers carry local broadcast stations, such as an ABC affiliate. Using an econometric model to control for the many factors that influence the DBS penetration rate, we identified three geographic factors that influenced the DBS penetration rate in cable franchise areas in 2004; see appendix III for a full explanation of, and results from, our econometric model. The DBS penetration rate is lower in areas with a relatively large number of housing units represented by multiple dwelling units (such as apartments and condominiums). A 10 percent increase in the percentage of housing units represented by multiple dwelling units is associated with a 2.5 percent decrease in the DBS penetration rate. One possible explanation for this result is that residents of multiple dwelling units are more likely to encounter greater difficulty installing a DBS satellite dish, since the dish requires a clear line of sight to the southern sky. The DBS penetration rate is lower in areas where, to see the southern sky, the satellite dish must be pointed more toward the horizon than up at the sky. In general, the farther north one is within the United States, the more the dish must be angled toward the horizon to see the satellite over the equator. We found that a 1 percent decrease in the angle at which the DBS satellite dish must be set is associated with a 1 percent decrease in the DBS penetration rate. A possible explanation for this result is that a satellite dish facing the horizon is less likely to have a clear line of sight to the southern sky because of interference from surrounding buildings or trees. 
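The link between latitude and dish angle can be made concrete with the standard look-angle geometry for a geostationary satellite. This sketch uses the textbook spherical-Earth formula rather than the DBS providers' own lookup tools, and the coordinates are illustrative:

```python
import math

EARTH_RADIUS_KM = 6378.0   # Earth's equatorial radius
GEO_RADIUS_KM = 42164.0    # geostationary orbit radius from Earth's center

def dish_elevation(lat_deg, lon_deg, sat_lon_deg):
    """Elevation angle (degrees above the horizon) at which a dish must
    point to see a geostationary satellite at longitude sat_lon_deg."""
    phi = math.radians(lat_deg)
    dlon = math.radians(lon_deg - sat_lon_deg)
    cos_gamma = math.cos(phi) * math.cos(dlon)  # central angle to sub-satellite point
    sin_gamma = math.sqrt(1.0 - cos_gamma ** 2)
    ratio = EARTH_RADIUS_KM / GEO_RADIUS_KM
    return math.degrees(math.atan2(cos_gamma - ratio, sin_gamma))

# The farther north the household, the lower the dish sits toward the horizon:
south = dish_elevation(30.0, -95.0, -101.0)   # roughly Gulf Coast latitude
north = dish_elevation(48.0, -95.0, -101.0)   # roughly the Canadian border
print(f"{south:.1f} vs {north:.1f} degrees")
```

Consistent with the discussion above, a dish near the Canadian border must point roughly 20 degrees closer to the horizon than one on the Gulf Coast, making obstruction by buildings or trees more likely.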
The DBS penetration rate is generally higher in nonmetropolitan areas. The DBS penetration rate is about 41 percent greater in cable franchise areas outside metropolitan areas compared with cable franchise areas within metropolitan areas. This result is consistent with the results discussed above for 2001 to 2004 and may be attributed to the early popularity of satellite service in rural areas. Using the same econometric model, we also identified three competitive factors that influence the DBS penetration rate in cable franchise areas in 2004. The DBS penetration rate is lower in areas where the cable operator’s system has greater capacity. A 10 percent increase in the cable operator’s system capacity is associated with a 2.4 percent decrease in the DBS penetration rate. With greater system capacity, a cable operator can provide more channels and advanced services, such as digital cable, cable modem, and telephone services. Thus, greater system capacity allows the cable operator to provide a compelling alternative to DBS service that can contribute to lower DBS penetration rates. This result is consistent with the lower DBS penetration rate in areas where cable operators provided advanced cable services for 2001 to 2004 that we discussed above. The DBS penetration rate is lower in areas with wire-based cable competition, compared with areas without wire-based competition. In particular, we found that DBS penetration rates are about 37 percent lower in areas with wire-based cable competition compared with areas without wire-based competition. Again, this result is consistent with the results discussed above. With wire-based competition, additional companies are competing for customers. The addition of a second cable operator can attract some customers who might otherwise have purchased DBS service, thereby reducing the DBS penetration rate. The DBS penetration rate is higher in areas where DBS customers can receive local-into-local service. 
Local-into-local service allows DBS subscribers to receive the local broadcast stations in their area (e.g., the ABC, CBS, Fox, and NBC affiliates) from the DBS provider, just as cable subscribers receive local broadcast stations from their cable operator. Since individual programming appearing on broadcast stations generally has higher ratings than individual programming appearing on cable channels, the ability of DBS providers to offer local broadcast stations to their customers remains an important competitive factor. We found that where local-into-local service is available, the DBS penetration rate is about 12 percent higher than in areas where local-into-local service is not available. We provided a draft of this report to the Federal Communications Commission (FCC) for its review and comment. FCC staff provided technical comments that we incorporated, where appropriate. We provided a draft of this report to the National Cable and Telecommunications Association (NCTA) and the Satellite Broadcasting and Communications Association (SBCA) for their review and comment. NCTA provided no comments. SBCA officials noted that, in addition to the factors we discuss in the report, the inability of DBS providers to carry certain programming developed by cable operators also influences the DBS penetration rate in certain markets. In particular, SBCA noted that FCC’s program access rules require that vertically integrated cable operators make satellite-delivered programming available to competing subscription video providers, such as DBS providers, but that the program access rules do not apply to terrestrially delivered programming. SBCA officials noted that the ability of cable operators to deliver programming terrestrially, especially popular programming such as regional sports networks, and thereby deny DBS providers access to this programming, negatively affects the DBS penetration rate in certain markets. 
As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to interested congressional committees; the Chairman, FCC; and other interested parties. We will also make copies available to others upon request. In addition, this report will be available at no cost on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or at [email protected]. Major contributors to this report include Amy Abramowitz, Stephen Brown, Michael Clements, Simon Galed, and Bert Japikse. To respond to the first and second objectives—to provide information on how direct broadcast satellite (DBS) subscribership has changed since 2001 and how the DBS penetration rate differs across urban, suburban, and rural areas—we gathered data on DBS subscribers from the Satellite Broadcasting and Communications Association (SBCA). SBCA provided us with the number of DBS subscribers by ZIP Code™ 1 for the two DBS providers, DIRECTV® and EchoStar. Using information from the Census Bureau and a private vendor, we matched the zip codes to counties and calculated the number of DBS subscribers in each county throughout the United States. We also gathered data on housing unit projections from the Census Bureau, which, when combined with the number of DBS subscribers, allowed us to calculate the DBS penetration rate by county for July 2001 to January 2004. This allowed us to examine changes in the DBS penetration rate for that period of time. Further, using data from the Office of Management and Budget, we classified counties as urban, suburban, and rural, based on the location of central cities and designations of metropolitan statistical areas (MSA). This allowed us to calculate the DBS penetration rate for each of these geographic categories. 
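The county-level penetration calculation described above can be sketched with pandas; the county records and field names here are illustrative stand-ins for the SBCA subscriber counts, Census housing-unit projections, and OMB area designations:

```python
import pandas as pd

# Illustrative county-level inputs (not the actual data).
counties = pd.DataFrame({
    "county": ["A", "B", "C", "D"],
    "dbs_subs": [3000, 9000, 20000, 40000],
    "housing_units": [12000, 50000, 160000, 400000],
    "area_type": ["rural", "rural", "suburban", "urban"],  # from OMB MSA designations
})

# Penetration rate per geographic category: DBS subscribers / housing units.
by_type = counties.groupby("area_type")[["dbs_subs", "housing_units"]].sum()
by_type["penetration"] = by_type["dbs_subs"] / by_type["housing_units"]
print(by_type["penetration"])
```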
ZIP Code™ is a registered trademark of the United States Postal Service. For simplicity, we refer to these as zip codes. To respond to the third objective—to provide information on how DBS penetration rates differ across markets based on the degree and type of competition provided by cable operators—we used data from FCC’s Cable Price Survey regarding the availability of digital cable, cable modem, and telephone service and the presence of wire-based competition. We matched individual zip codes to the cable franchise areas that formed the unit of analysis in FCC’s survey. When combined with the count of DBS subscribers by zip code from SBCA, we calculated the DBS penetration rate for each cable franchise area in FCC’s survey. We used these data, combined with cable operators’ responses to FCC’s survey regarding advanced services and wire-based competition, to calculate the DBS penetration rate under these various scenarios. To respond to the fourth objective—to provide information on the factors that appear to influence the DBS penetration rate in cable franchise areas—we used an econometric model we previously developed that examines the effect of competition on cable rates and service and the DBS penetration rate. Using data from FCC’s 2004 Cable Price Survey, the model considered the effect of various factors on cable rates, the number of cable subscribers, the number of channels that cable operators provide to subscribers, and the DBS penetration rate for areas throughout the United States. See appendix III for a more detailed explanation of, and results from, our econometric model. To respond to the objectives of this report, we relied extensively on three data sets and took steps to ensure the reliability of these data. The data sets we relied on include the Federal Communications Commission’s (FCC) 2002 and 2004 Cable Price surveys, direct broadcast satellite (DBS) subscriber counts by zip code from the Satellite Broadcasting and Communications Association (SBCA), and Knowledge Network’s 2004 The Home Technology Monitor survey. In this appendix, we explain the steps we took to ensure that these data were sufficiently reliable for the purposes of our work. 
FCC annually surveys approximately 700 cable franchises to fulfill a congressional mandate to report on average cable rates for cable operators found to be subject to “effective competition”—a legally defined term— compared with operators not subject to effective competition. In previous testimonies and a report, we have noted weaknesses with FCC’s survey, including insufficient instructions and inaccuracies in the classification of the competitive status of cable operators. In response to our recommendations, FCC has taken several steps to improve the reliability of its survey, including editing the survey document and correcting inaccurate classifications of the competitive status of cable franchises. Additionally, FCC conducts follow-ups with survey respondents and edits survey data when inaccuracies are apparent. We used FCC’s 2002 and 2004 Cable Price surveys to identify areas where cable operators provided advanced services and also for information on price, number of channels, and other operating data necessary for our cable-satellite econometric model. Because our use of data from FCC’s surveys was important in a comparative manner, rather than an absolute sense—that is, our primary concern with cable rates was the relative level of rates between cable franchises, rather than the absolute rate in a particular cable franchise—it is not important for our use that the data be precise. We conducted logic tests to identify any observations with apparent inaccuracies in the variables of interest for our work. We determined that the data were sufficiently reliable for our analysis. SBCA possesses data on the number of DBS subscribers by zip code. To respond to the objectives of this report, we sent SBCA a letter identifying the specific data elements we required. SBCA officials prepared a set of data sets consistent with our needs. We conducted logic tests on SBCA’s data and identified some inconsistencies, which we discussed with SBCA officials. 
SBCA officials subsequently took steps to resolve these inconsistencies. Based on the revised data we received from SBCA and our subsequent tests, we determined that the data were sufficiently reliable for our analysis. To obtain information on the availability of cable service and types of television service used by U.S. households, we purchased existing survey data from Knowledge Networks Statistical Research. This survey was completed with 2,375 of the estimated 5,075 eligible sampled individuals for a response rate of 47 percent; partial interviews were conducted with an additional 96 people, for a total of 2,471 individuals completing some of the survey questions. The survey was conducted between February 23 and April 25, 2004. Because we did not have information on those contacted who chose not to participate in the survey, we could not estimate the impact of the nonresponse. Our findings will be biased to the extent that the people at the 53 percent of the telephone numbers that did not yield an interview have experiences with television service or equipment that are different from the 47 percent of our sample who responded. However, distributions of selected household characteristics (including presence of children, race, and household income) for the sample and the U.S. Census estimate of households show a similar pattern. To assess the reliability of these survey data, we reviewed documentation of survey procedures provided by Knowledge Networks and questioned knowledgeable officials about the survey process and resulting data. We determined that the data were sufficiently reliable for the purposes of this report. This appendix describes our econometric model of cable-satellite competition. 
In particular, we discuss (1) the specification of the model, (2) the data sources used for the model, (3) the merger of various data sources into a single data set, (4) the descriptive statistics for variables included in the model, (5) the estimation methodology and results, and (6) alternative specifications. We developed an econometric model to examine the influence of various factors, including those describing aspects of cable competition at the local level, on local DBS penetration rates. Estimating the importance of various factors on the DBS penetration rate is complicated by the possibility that the DBS penetration rate in an area may help determine, but also be determined by, in part, the local cable price in that area. One statistical method applicable in this situation is to estimate a system of structural equations in which certain variables that may be simultaneously determined are estimated jointly. In our previous reports, we estimated a four-equation structural model in which cable prices, the number of cable subscribers, the number of cable channels, and the DBS penetration rate were jointly determined. We use this same general structure again, this time using the most recent information available from FCC’s 2004 Cable Price Survey and contemporaneous satellite subscriber information provided by the Satellite Broadcasting and Communications Association. We made some minor modifications because of, for example, changes in the subscription video market. 
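The overall four-equation structure described here can be written, in log-linear form, roughly as follows. This is a sketch for orientation only; the x-vectors stand in for each equation's hypothesized exogenous factors, and the actual specification and estimates appear in table 3:

```latex
\begin{align*}
\ln(\text{DBS pen.})_i    &= \alpha_0 + \alpha_1 \ln(\text{price/channel})_i + \boldsymbol{\alpha}'\mathbf{x}^{D}_i + u^{D}_i \\
\ln(\text{cable price})_i &= \beta_0 + \beta_1 \ln(\text{channels})_i + \beta_2 \ln(\text{subscribers})_i + \beta_3 \ln(\text{DBS pen.})_i + \boldsymbol{\beta}'\mathbf{x}^{P}_i + u^{P}_i \\
\ln(\text{subscribers})_i &= \gamma_0 + \gamma_1 \ln(\text{price/channel})_i + \gamma_2 \ln(\text{DBS pen.})_i + \boldsymbol{\gamma}'\mathbf{x}^{S}_i + u^{S}_i \\
\ln(\text{channels})_i    &= \delta_0 + \delta_1 \ln(\text{subscribers})_i + \delta_2 \ln(\text{DBS pen.})_i + \boldsymbol{\delta}'\mathbf{x}^{C}_i + u^{C}_i
\end{align*}
```

where i indexes cable franchise areas. The endogenous variables on the left-hand sides appear as explanatory variables in one another's equations, which is what motivates joint estimation of the system.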
We estimated the following four-equation structural model of the subscription video market: DBS penetration rate in a local market is hypothesized to be related to (1) cable prices per channel; (2) the DBS companies’ provision of local stations in the franchise area; (3) the size of the television market as measured by the number of television households; (4) the age of the cable franchise; (5) the median household income of the local area; (6) cable system capacity in terms of megahertz; (7) a dummy variable for areas outside metropolitan areas; (8) the percentage of multiple dwelling units; (9) the angle, or elevation, at which a satellite dish must be fixed to receive a satellite signal in that area; and (10) the presence of a nonsatellite competitor. The DBS penetration rate variable is defined as the number of DBS subscribers in a franchise area expressed as a proportion of the total number of housing units in the area. As hypothesized, the DBS penetration rate is expected to depend on the prices set by the cable provider as well as on the demand, cost, and regulatory conditions in the subscription video market that directly affect DBS. Cable prices are hypothesized to be related to (1) the number of channels, (2) the number of cable subscribers, (3) the DBS penetration rate, (4) the DBS companies’ provision of local stations in the franchise area, (5) the size of the television market as measured by the number of television households, (6) horizontal concentration, (7) vertical relationships, (8) the presence of a nonsatellite competitor, (9) regulation, (10) average wages, and (11) population density. The cable price variable used in the model is intended to reflect the total monthly rate charged by a cable franchise to the typical subscriber. The explanatory variables in the cable price relationship are essentially cost and market structure variables. 
Number of cable subscribers is hypothesized to be related to (1) cable prices per channel, (2) the DBS penetration rate, (3) the number of broadcast stations, (4) urbanization, (5) the age of the cable franchise, (6) the number of homes passed by the cable system, (7) the median household income of the local area, and (8) the presence of a nonsatellite competitor. The number of cable subscribers is defined as the number of households in a franchise area that subscribe to the most commonly purchased programming tier. This represents the demand equation for cable services, which depends on rates and other demand- related factors. Number of channels is hypothesized to be related to (1) the number of cable subscribers, (2) the DBS penetration rate, (3) the size of the television market as measured by the number of television households, (4) the median household income of the local area, (5) cable system capacity in terms of megahertz, (6) the percentage of multiple dwelling units, (7) vertical relationships, and (8) the presence of a nonsatellite competitor. The number of channels is defined as the number of channels included in the most commonly purchased programming tier. The number of channels can be thought of as a measure of cable programming quality and is explained by a number of factors that influence the willingness and ability of cable operators to provide high- quality service and consumers’ preference for quality. Table 1 presents the explanatory variables in the structural model on cable prices and DBS penetration rates. We required several data elements to build the data set used to estimate this model. The following is a list of our primary data sources. We obtained data on cable prices and service characteristics from the 2004 Cable Price Survey that FCC conducted as part of its mandate to report annually on cable prices. 
FCC’s survey asked a sample of cable franchises to provide information, as of January 1, 2004, about a variety of items pertaining to cable prices, service offerings, subscribership, franchise area reach, franchise ownership, and system capacity. We used the survey to define measures of each franchise area’s cable prices, number of subscribers, and number of channels as described above. In addition, we used the survey to define variables measuring (1) system megahertz (the capacity of the cable system in megahertz), (2) homes passed by the cable system serving the franchise area and perhaps other franchises in the same area, (3) regulation—a dummy variable equal to 1 if the franchise is subject to rate regulation of its Basic Service Tier, (4) horizontal concentration—a dummy variable equal to 1 if the franchise area is affiliated with one of the largest MSOs with at least 1 million subscribers nationally, and (5) the status of nonsatellite competition—a dummy variable equal to 1 if the franchise faced competition from a second wireline company that provides cable service. From the Satellite Broadcasting and Communications Association, we obtained DBS subscriber counts as of January 2004 for each zip code in the United States. We used this information to calculate the number of DBS subscribers in a cable franchise area, which, when divided by the number of housing units, was used to define the DBS penetration rate. We used the most recent data from the Census Bureau to obtain the following demographic information for each franchise area: housing units, median household income, proportions of urban and rural populations, housing units accounted for by structures with more than five units (multiple dwelling units), population density, and nonmetropolitan statistical areas. For average wage, we used May 2003 estimates for Installation, Maintenance, and Repair Occupations from the Bureau of Labor Statistics’ (BLS) National Occupational Employment and Wage Estimates. 
We used metropolitan area data for most franchise areas, and state-level data for those franchise areas located outside of metropolitan areas. We used data from BIA MEDIA AccessPro™ to determine the number of broadcast television stations in each television market. To define the dummy variable indicator of vertical integration, we used information on the corporate affiliations of the franchise operators provided in FCC’s survey. We used this information in conjunction with industrywide information on vertical relationships between cable operators and suppliers of program content gathered by FCC in its Tenth Annual Report on the status of competition in the market for delivery of video programming. From Nielsen Media Research, we acquired information to determine the number of television households in each designated market area (DMA), or television market, and the DMA in which each cable franchise was located. We used information from the two DBS companies (DIRECTV® and EchoStar) to identify DMAs in which these companies provide local stations and, if local stations are available, when the companies initiated this service. We used this to construct a measure of local station availability, as well as alternative specifications presented in the final section. Based on a zip code associated with each cable franchise area, we determined the necessary satellite dish elevation for each area based on information available from the Web pages of the two DBS companies. The level of observation in our model is the local cable franchise. Many of the variables we used to estimate our model, such as each cable franchise’s price, come directly from FCC’s Cable Price Survey. However, we also created variables describing competitive, geographic, and economic conditions in each franchise area. For these variables, we used information from other sources. 
For example, we obtained median household income and the extent of multiple dwelling units from Census Bureau data, and derived the DBS penetration rate from information provided by the Satellite Broadcasting and Communications Association. Generally, these data are reported at other geographic levels, and we describe briefly the process by which we merged these different data sources. Cable franchise areas take a variety of jurisdictional forms, such as city or town, or unnamed, unincorporated area. As a consequence, they do not correspond in many cases to well-recognized geographical units, such as Census places, for which other data are readily available. Our approach to identifying the geographic extent of each franchise area and relating information processed at different geographic levels to each franchise area is similar to that we have used and described in detail in our previous reports. In general, we used information in FCC’s survey identifying franchise community name and type (such as city or town) to match to Census geographic identification codes for particular places or county subdivisions that do correspond to Census geography. In particular, we used 2000 Census information on the number of housing units in these jurisdictions as the basis for our measure of DBS penetration. For other franchises, however, the link to Census records was not as direct. For franchises in unincorporated unnamed areas, for example, and those whose franchise areas represent a section of the associated community (which occurs in some large cities), we acquired additional information on the geographic boundaries of the franchise areas. The satellite subscriber information we obtained was organized by zip code. In order to link these subscriber counts to franchise area geographies, we determined the zip code or zip codes associated with each franchise. 
Because zip codes often do not share boundaries with other geographies, one zip code can be associated with more than one cable franchise area. Also, many franchises, particularly larger ones, span many zip codes. Therefore, we needed to identify the zip code or codes in each franchise area as well as the degree to which each of those zip codes is contained in each franchise area to calculate the degree of satellite penetration for each franchise area. We accomplished this by using software designed to relate various levels of census geography to one another. For most franchise areas—that is, those that correspond to census places, county subdivisions, or entire counties—we were able to use this software to relate census places, county subdivisions, or other census geographies directly to the zip codes that corresponded to those areas and to calculate the share of each zip code’s population according to the 2000 Census that was contained in that area. We used these population shares to allocate shares of each zip code’s total DBS subscribers to the relevant franchise area, and then summed the resulting subscribers across all zip codes in that franchise area. We defined the penetration by dividing this subscriber total by an estimate of the housing units in that franchise area in January 2004. As part of the process of identifying the zip codes associated with each franchise area, we identified a key zip code that we used for linking other data items. We used Census data organized at the zip code level to assign demographic data, such as income and the extent of multiple dwelling units, to each franchise area. We also used this key zip code to attach information concerning the proper satellite dish elevation. We assigned other information to each franchise on the basis of the franchise’s county, state, or metropolitan area. 
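The population-share allocation described above can be sketched with pandas. The zip codes, shares, and subscriber counts below are illustrative, not the actual crosswalk produced by the census-geography software:

```python
import pandas as pd

# Share of each zip code's population falling inside each franchise area.
shares = pd.DataFrame({
    "franchise": ["A", "A", "B", "B"],
    "zip": ["10001", "10002", "10002", "10003"],
    "pop_share": [1.00, 0.40, 0.60, 1.00],
})
zip_subs = pd.DataFrame({"zip": ["10001", "10002", "10003"],
                         "dbs_subs": [1000, 500, 2000]})
housing = pd.DataFrame({"franchise": ["A", "B"],
                        "housing_units": [20000, 30000]})

# Allocate each zip's subscribers to franchises in proportion to population,
# sum within each franchise, then divide by housing units for penetration.
alloc = shares.merge(zip_subs, on="zip")
alloc["alloc_subs"] = alloc["pop_share"] * alloc["dbs_subs"]
pen = (alloc.groupby("franchise", as_index=False)["alloc_subs"].sum()
            .merge(housing, on="franchise"))
pen["dbs_penetration"] = pen["alloc_subs"] / pen["housing_units"]
print(pen)
```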
We assigned wage data from BLS at the metropolitan or state level and we assigned nonmetropolitan status, percentage of urban population, and the Nielsen television market of each franchise at the county level. Information on the provision of local stations by DBS companies, which occurs at the television market level, was then assigned to each franchise. Table 2 provides basic statistical information on all of the variables included in the cable-satellite competition model. We calculated these statistics using 624 observations in our data set. We excluded those franchises sampled by FCC that were municipally operated or that competed directly with municipally operated franchises because we believe that these cable franchises are likely to be operated differently from the majority of other franchises. We employed the Three-Stage Least Squares (3SLS) method to estimate our model. Table 3 includes the estimation results for each of the four structural equations. All of the variables, except dummy variables, are expressed in natural logarithmic form. This means that coefficients can be interpreted as “elasticities”—the percentage change in the value of the dependent variable associated with a 1 percent change in the value of an independent, or explanatory, variable. The coefficients on the dummy variables are elasticities in decimal form. We found that several factors related to the geographical conditions influence the DBS penetration rate. Specifically, as shown in table 3, DBS penetration rates are likely to be significantly higher in nonmetropolitan areas. This could be associated with the historical development of satellite service, which had been marketed for many years in smaller and more rural areas. Additionally, the DBS penetration rate is higher in areas that require a relatively higher angle or elevation at which the satellite dish is mounted and is lower in areas where there are more multiple dwelling units. 
These two factors can be associated with the need of DBS satellite dishes to “see” the satellite: A dish aimed more toward the horizon (as opposed to aimed higher in the sky) is more likely to be blocked by a building or foliage, and people in multiple dwelling units often have fewer available locations to mount a satellite dish. Additionally, we found that several factors related to competitive conditions influence the DBS penetration rate. As shown in table 3, our model results indicate that in cable franchise areas where local broadcast stations are available from one or both DBS providers, the DBS penetration rate is approximately 12 percent higher than in areas where local stations are not available via satellite. This finding suggests that in areas where local stations are available from one or both DBS providers, consumers are more likely to subscribe to DBS service and, therefore, DBS appears to be more competitive with cable than in areas where local stations are not available from a DBS provider. We did not find that DBS companies’ provision of local broadcast stations is associated with lower cable prices. In table 3, the estimate is, in fact, positive, although not statistically significant, and we therefore cannot reject the hypothesis that provision of local broadcast stations has no impact on cable prices. However, we found that cable prices were approximately 16 percent lower in areas where a second cable company— known as an overbuilder—provides service. Finally, cable prices are higher in areas where the cable company provides more channels, indicating that consumers are generally willing to pay for additional channels and that providing additional channels raises a cable company’s costs. 
Additionally, we found that DBS penetration rates are lower in cable franchise areas where a second wire-based competitor is present; in these areas, the DBS penetration rate is 37 percent lower compared with similar areas where a second wire-based competitor is not present. We considered alternative specifications under which we expanded the definition of local broadcast stations to account for (1) whether one or both DBS companies offer local stations and (2) the length of time that DBS companies have provided local stations. To conduct this analysis, we included several additional variables: “Both DBS companies provide” equals 1 if both DBS companies offer local stations in the cable franchise area, “One DBS company provides” equals 1 if only one DBS company offers local stations, “Long-term” equals 1 if either or both DBS companies have offered local stations in the cable franchise area for more than 3 years as of January 2004, “Short-term” equals 1 if local stations have been available for less than 3 years, “Both long-term” equals 1 if both DBS companies have offered local stations in the cable franchise area for more than 3 years as of January 2004, and “Both otherwise” equals 1 if local stations have otherwise been available from both DBS companies. We report the results of these alternative specifications only for the DBS penetration equation because we are primarily interested in their effects on DBS penetration and we found little impact on the other equations in the model. We present the results for four different specifications in table 4. In general, there is evidence that the longer local stations have been available in a local area, the larger the increase in the local DBS penetration rate, and that the increase in the local DBS penetration rate is greater in those areas in which both DBS companies provide local stations.
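The indicator variables defined above can be sketched in code. The input fields (years each DBS company has offered local stations in a franchise area) are hypothetical stand-ins for however a franchise-level data set might record local-station availability; this is a minimal sketch, not the construction actually used in the analysis:

```python
# Construct the local-station dummy variables described in the text.
# Inputs are hypothetical: years each DBS company has offered local
# stations in the franchise area (0 if not offered), as of January 2004.

def local_station_dummies(years_a: float, years_b: float) -> dict:
    providing = (years_a > 0) + (years_b > 0)       # 0, 1, or 2 companies
    both = providing == 2
    long_term = max(years_a, years_b) > 3           # either or both > 3 years
    both_long_term = both and min(years_a, years_b) > 3
    return {
        "both_dbs_companies_provide": int(both),
        "one_dbs_company_provides": int(providing == 1),
        "long_term": int(long_term),
        "short_term": int(providing > 0 and not long_term),
        "both_long_term": int(both_long_term),
        "both_otherwise": int(both and not both_long_term),
    }

# One company has offered local stations for 4.5 years, the other for 2:
d = local_station_dummies(4.5, 2.0)
print(d["both_dbs_companies_provide"], d["long_term"], d["both_long_term"])
```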
Since its introduction in 1994, direct broadcast satellite (DBS) service has grown dramatically, and this service is now the principal competitor to cable television service. Although DBS service has traditionally been a rural service, passage of the Satellite Home Viewer Improvement Act of 1999 enhanced the competitiveness of DBS service in suburban and urban markets. GAO agreed to examine (1) how DBS subscribership changed since 2001; (2) how DBS penetration rates differ across urban, suburban, and rural areas; (3) how DBS penetration rates differ across markets based on the degree and type of competition provided by cable operators; and (4) the factors that appear to influence DBS penetration rates across cable franchise areas. To complete this report, GAO prepared descriptive statistics and an econometric model using data from the Federal Communications Commission's annual Cable Price Survey and the Satellite Broadcasting and Communications Association's subscriber count database. Since 2001, the number of households subscribing to DBS service has grown rapidly; thus the percentage of households subscribing to DBS service, the DBS penetration rate, has grown to over 17 percent of American households. The DBS penetration rate is highest in rural areas, but growing most rapidly in suburban and urban areas. Between 2001 and 2004, the DBS penetration rate grew 15 percent in rural areas to 29 percent of rural households, 32 percent in suburban areas to 18 percent of suburban households, and 50 percent in urban areas to 13 percent of urban households. The degree and type of competition influences the DBS penetration rate. In areas with no cable service, the DBS penetration rate is about 53 percentage points greater than in areas where cable service is available. Where cable service is available, cable operators increasingly offer advanced services.
The DBS penetration rate is approximately 20 percentage points greater in areas where cable operators are not providing advanced services, compared with areas where these services are available. While relatively few areas have more than one wire-based cable operator, in these areas the DBS penetration rate is 8 percentage points lower than in areas with only one cable operator. In addition to the differences in DBS penetration rates across rural, suburban, and urban areas, and differences associated with the degree and type of cable competition, additional geographic and competitive factors also influence the DBS penetration rate. For example, the DBS penetration rate is lower in areas with a high prevalence of multiple-dwelling units, such as apartments. Additionally, the DBS penetration rate is higher in areas where DBS providers offer local broadcast stations (such as ABC and NBC affiliates) directly to their subscribers. The Federal Communications Commission provided technical comments on a draft of this report that we incorporated where appropriate.
LSC was established in 1974 as a private, nonprofit, federally funded corporation to provide legal assistance to low-income people in civil matters. LSC provides the assistance indirectly through grants to competitively selected local programs. LSC distributes funds to the grantees on the basis of the number of low-income persons living within a service area. Grantees may receive additional funding from non-LSC sources. During 1998, LSC funded 262 local grantees that operated through approximately 900 neighborhood law offices employing about 3,600 attorneys and 1,400 paralegals. Each local program is governed by its own board of directors and is required to spend at least 12.5 percent of its LSC grant to encourage private attorney involvement (PAI) in delivering legal services to low-income clients. In fiscal years 1998 and 1999, LSC received appropriations of $283 million and $300 million, respectively. LSC’s authorizing legislation restricts it from engaging in lobbying; political activities; class actions except under certain conditions; and cases involving abortion, school desegregation, and draft registration or desertion from the military. Annual appropriations laws have placed additional restrictions on the activities in which LSC grantees can engage, even with non-LSC funds. In 1996, for example, grantees were prohibited from engaging in challenges to welfare reform, litigation on behalf of prisoners, representation in drug-related public housing evictions, and representation of certain categories of aliens. Grantees must serve only those clients who meet financial and citizenship/alien eligibility requirements. With respect to client financial eligibility, local programs are to establish their own criteria, which, in general, require that clients’ income not exceed 125 percent of the federal poverty guidelines. 
With appropriate documentation of the grantee’s decision, clients who are between 125 percent and 187.5 percent of the federal poverty level may be found eligible. LSC regulations require that grantees (1) adopt a form and procedure to obtain eligibility information and (2) preserve that information for audit by LSC. With regard to citizenship/alien eligibility, only citizens and certain categories of aliens are eligible for services. For clients who are provided services in person, a citizen attestation form or documentation of eligible alien status is required. For clients who are provided services on the telephone, grantees must document that they made inquiries regarding the individuals’ citizenship/alien eligibility. LSC uses a Case Service Reporting (CSR) system to gather quantifiable information from grantees on the services they provide that meet LSC’s definition of a case. The CSR Handbook is LSC’s primary official guidance to grantees on how to record and report cases. According to the 1999 CSR Handbook, which revised and expanded the guidance in LSC’s 1993 Handbook, information about cases is an important indicator of the number of legal problems that programs address each year, and LSC relies on such case information in its annual request for federal funding for legal services. Audit reports on five grantees issued by LSC’s OIG between October 1998 and July 1999 reported that all five grantees misreported the number of cases they had closed during calendar year 1997 and the number of cases that remained open at the end of that year. The OIG found that all five grantees overstated the number of closed cases, while four overstated and one understated open cases. 
The OIG attributed the overreporting to such factors as (1) counting as cases telephone calls in which individuals were not provided any legal assistance and only partial eligibility determinations were made; (2) counting wholly non-LSC funded cases as LSC cases; (3) double counting of the same cases; (4) reporting cases as closed during 1997, or as still open at the end of 1997, when service ceased in prior years; and (5) counting as cases the provision of services to over-income clients. In June 1999, in response to Congress’ request for information on whether the 1997 case data of other LSC programs had problems similar to those reported by LSC’s OIG, we issued a report on our audit of five of LSC’s largest grantees: Baltimore, Chicago, Los Angeles, New York City, and Puerto Rico. We found similar types of reporting errors at the five grantees and estimated that, overall, 75,000 of the 221,000 open and closed cases that the five grantees reported to LSC were questionable. Interviews that we conducted with LSC officials and executive directors of the 5 audited grantees indicated that they had taken or were planning to take steps to correct the causes of these case-reporting problems. Our objectives were to determine (1) what efforts LSC and its grantees have made to correct problems with case service reporting, and (2) whether these efforts are likely to resolve the case reporting problems that occurred in 1997. To address the objectives, we reviewed documents that contained LSC case reporting guidance, interviewed cognizant officials at LSC headquarters, and conducted structured telephone interviews with a random sample of grantee executive directors. Specifically, we reviewed LSC regulations and LSC’s 1993 and 1999 CSR Handbook, as well as supplemental guidance that LSC distributed to grantees in the form of program letters and frequently asked questions and answers about case reporting. 
We also collected documentation and interviewed LSC officials and grantee executive directors to gather information about a self-inspection process that LSC required all grantees to undertake in order to determine the accuracy of their 1998 case data. In our telephone interviews with executive directors, we asked if the directors viewed LSC’s case reporting requirements as being clear, what changes they have made or planned to make as a result of the requirements, and the results of their self-inspection of 1998 case data. We also interviewed LSC officials and grantee executive directors about areas of case reporting that they felt needed further clarification. To identify the universe of LSC grantees, an LSC official referred us to the list of LSC programs on the corporation’s Internet Web site. As of July 15, 1999, the Internet site listed 256 programs. From this list, we randomly selected 80 programs for our sample. We developed the interview instrument, pretested it with executive directors from two grantees, revised the instrument based on the pretest results, and completed approximately 1-hour-long structured interviews by telephone with the executive directors of 79 of the 80 programs, for a 99 percent response rate. (After numerous attempts over several days, we were unable to contact the executive director of one LSC grantee.) Our sample was designed so that we could generalize our findings with 95-percent confidence and a maximum 10-percent margin of error to the universe of LSC grantees. We performed our work from July through September 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the President of LSC. LSC’s comments are discussed at the end of this letter and included as appendix I. LSC issued a new CSR Handbook and distributed other written communications intended to clarify reporting requirements to its grantees.
Most grantees indicated that the new guidance helped clarify LSC’s reporting requirements, and virtually all of them indicated that they had or planned to make program changes as a result of the requirements. Many grantees, however, identified areas of case reporting that remained unclear to them. To address specific problems identified by the OIG and LSC’s own internal program reviews, LSC has issued revised reporting guidance. In November 1998, LSC issued the 1999 CSR Handbook, which replaced the 1993 CSR Handbook. The 1999 handbook instituted changes to some of LSC’s reporting requirements and provided more detailed information on other requirements than previously existed. LSC also distributed program letters and a list of frequently asked CSR questions and answers that further elaborated on points made in the 1999 handbook. The 1999 CSR Handbook instituted several notable changes to case reporting requirements. These included (1) procedures for timely closing of cases; (2) procedures for management review of case service reports; (3) procedures for ensuring single recording of cases; (4) requirements to report LSC-eligible cases, regardless of funding source; and (5) requirements for reporting PAI cases separately. In addition, grantees were required to use automated case management systems and procedures that would ensure that program managers had timely access to accurate information on cases and the capacity to meet their reporting requirements. On November 24, 1998, LSC informed its grantees that two of the changes in the 1999 CSR Handbook were to be applied to the 1998 case data. The two changes pertained to timely closing of cases and management review of case service reports. The timely closing provision required grantees to ensure that cases in which legal assistance had ceased in 1998, and was not likely to resume, would be closed prior to grantees’ submission of case service reports to LSC in March 1999. 
To the extent practicable, cases in which the only assistance provided to the client was counsel and advice, brief service, or referral after legal assessment were to be closed in the year in which these types of service were provided. Cases involving other types of service were to be closed in the year in which program staff determined that further legal assistance was unnecessary, not possible, or inadvisable and a closing memorandum or other case-closing notation was prepared. The management review provision required the executive director, or a designee, to review the program’s case service reports prior to their submission to LSC in order to ensure their accuracy and completeness. The remaining new provisions of the 1999 CSR Handbook were not applicable to 1998 cases. For example, for 1998, there was no requirement for grantees to ensure that cases were not double counted. For 1999, LSC is requiring the use of automated case management systems and procedures to ensure that cases involving the same client and specific legal problem are not reported to LSC more than once. For 1998, grantees could report only those cases that were at least partially supported by LSC funds. For 1999, LSC is requiring grantees to report all LSC-eligible cases, regardless of funding source. LSC intends to estimate the percentage of activity spent on LSC service by applying a formula that incorporates the amount of funds grantees receive from other funding sources compared with the amount they receive from LSC. For 1998, grantees were required to report their LSC-funded PAI cases together with non-PAI cases. For 1999, PAI cases are to be reported separately. In addition to changing certain reporting requirements, the 1999 handbook also provides more detailed guidance to grantees than the 1993 handbook. For example, the 1999 handbook provides more specific definitions of what constitutes a “case” and a “client” for CSR purposes. 
The 1999 handbook also addresses documentation requirements that were not discussed in the 1993 handbook. For example, the 1999 handbook indicates that, except for telephone service cases, the client’s file must contain an attestation of citizenship or documentation of alien eligibility in order for the case to be reported to LSC. The 1999 handbook also imposes requirements for documentation of information on client income and assets. The handbook states that, for all cases reported to LSC, the eligibility documentation must include specific information about income and assets. The 1999 handbook also contains a requirement that legal assistance, to be counted as a case, must be provided by an attorney or paralegal. On the basis of our survey, we estimate that over 90 percent of grantee executive directors viewed the changes in the 1999 CSR Handbook as being clear overall, and virtually all of them planned to or had made at least one change to their program operations as a result of the revised case reporting requirements. Program changes that were cited included revising policies and procedures, providing staff training, modifying forms and/or procedures used during client intake, implementing computer hardware and software changes, and increasing review of cases. Nearly half of the grantees indicated that they planned to or had revised their policies and/or procedures to comply with the 1999 handbook requirements, and slightly less than a third planned to or had conducted staff training on the requirements. Respondents told us that the focus of their training was on such issues as the current definition of a case, how to determine and document clients’ financial eligibility, how to determine and document citizenship/alien eligibility, timely closing of cases, and prevention of duplicate case reporting. More than half of the grantees indicated that they planned to or had changed their intake forms and/or procedures. 
Of the 41 respondents who made this comment, 26 said they were making these changes in order to document client income and assets. Slightly over 40 percent of the grantees reported that they planned to or had made computer changes. The computer changes included such actions as adding new fields to their automated case management systems (e.g., to ensure that client eligibility and acceptance information is recorded), making programming changes (e.g., to identify duplicate cases, ensure that cases are not assigned separate numbers for the same client with the same legal problem, and better document client assets), and installing new software (e.g., to generate reports so that they can track how long cases are open). About 10 percent of grantee executive directors indicated that they could better comply with CSR requirements if they had uniform case management software. LSC, too, believes that grantees’ case management systems should provide for more consistency in the collection and processing of CSR information. According to an LSC official, LSC is developing a strategy for modifying grantees’ case management systems so that data errors can be detected and prevented. As part of the strategy, LSC has hired an expert in case management systems and is working to develop standard input requirements for these systems. LSC intends to work with case management system vendors to modify the systems so that they prevent cases from being accepted if grantees do not record the required compliance information. LSC expects modifications to the case management systems to be implemented in calendar year 2000 and available for application to grantees’ 2000 CSR data. In the nearer term, LSC plans to work with a contractor to develop customized case management queries that would enable grantees to detect and remedy errors in their 1999 case data. In 1999, LSC plans to pilot test the computer queries at five programs that have the most commonly used case management systems.
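One check such queries might perform — flagging cases that involve the same client and the same specific legal problem, which the 1999 CSR Handbook requires be reported only once — can be sketched as follows. The record fields are hypothetical; actual case management systems would use their own schemas:

```python
from collections import Counter

# Flag potential duplicate cases: the same client with the same
# specific legal problem may be reported to LSC only once.
# Field names (client_id, problem_code) are hypothetical.

def find_duplicates(cases):
    """Return (client_id, problem_code) keys that appear more than once."""
    counts = Counter((c["client_id"], c["problem_code"]) for c in cases)
    return [key for key, n in counts.items() if n > 1]

cases = [
    {"case_id": 1, "client_id": "A17", "problem_code": "family/custody"},
    {"case_id": 2, "client_id": "A17", "problem_code": "family/custody"},
    {"case_id": 3, "client_id": "B02", "problem_code": "housing/eviction"},
]
print(find_duplicates(cases))  # [('A17', 'family/custody')]
```

A real query would also have to decide when legally related issues count as separate problems — exactly the ambiguity several executive directors raised.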
Nearly three-fourths of the grantees indicated that they planned to or had increased their review of cases to comply with LSC requirements. Respondents cited such review activities as more intense monitoring of open cases, to ensure that they are closed in a timely manner, and more thorough monitoring of PAI cases. Some grantees said they have directed increased attention to reporting requirements during routine reviews of cases, while others said they instituted more frequent reviews of cases. One executive director reported to LSC that managing attorneys would meet with all case-handling staff and reemphasize the importance of keeping accurate activity records. These managing attorneys are to monitor compliance with the requirement initially on a weekly basis and then on a random basis. If deficiencies are discovered, the managing attorney is to direct the case handler to correct them immediately, and the managing attorney is to recheck the file within 48 hours to ensure compliance with the directive. Another executive director told us that his program had not previously reviewed CSR data. He said that they now are reviewing reports, along with listings of open and closed cases, to ensure their compliance with LSC guidelines. Some executive directors said that they planned to become more involved in reviewing case files and case management reports. Many grantees reported making other efforts to comply with reporting requirements, such as disseminating the new handbook to case handlers, developing compliance checklists, emphasizing the importance of compliance to staff, sending written instructions or memos to staff, holding meetings and discussions, providing more feedback to case handlers, and more stringently enforcing requirements. 
Although most of the grantee executive directors reported that the new LSC guidance helped clarify requirements, many of them also indicated that they were still unclear about certain requirements and that additional clarification was needed. Among the areas of confusion or uncertainty that executive directors identified were requirements pertaining to asset and citizenship/alien eligibility documentation, single recording of cases, and who can provide legal services. Asset documentation: The 1999 CSR Handbook states that, for all cases to be reported to LSC, the eligibility documentation should include specific information about income and assets. About 30 percent of the executive directors indicated that LSC’s requirements for documenting client assets were clear only to some, to little, or to no extent. Of the 24 respondents who gave this response, 23 made comments to the effect that LSC should clarify what it means by assets and asset limits and/or clarify its documentation requirements for client assets. In a July 14, 1999, program letter to grantees, LSC noted that although many grantees inquire about applicants’ assets, they do not consistently document either the inquiries or the applicants’ responses. In its program letter, LSC sought to clarify its requirements for documenting and preserving the asset information obtained from each applicant. Because we conducted our telephone survey in late July and early August, we do not know how many of the executive directors with whom we spoke had reviewed LSC’s July 14 guidance and felt that it sufficiently clarified their questions concerning asset documentation. However, two respondents told us that LSC’s July 14 guidance was still unclear. One respondent said, for example, that the program letter left unclear the meaning of the term “household goods,” whether exempted assets had to be documented, who was to value the goods, and who was to determine what to count and what not to count as part of assets. 
Our own analysis of LSC guidance on the asset issue indicated that LSC has not been consistent in its directives to grantees about the specificity of asset information they must have in order to comply with CSR reporting requirements. For example, in its March 24, 1999, communication on frequently asked CSR questions and answers, LSC instructed grantees to document, at minimum, the total amount of household assets accessible to the client. In its July 14, 1999, program letter, LSC instructed grantees to identify and document all of the liquid and nonliquid assets of all persons who are resident members of a family unit. Citizenship/alien eligibility documentation requirements for telephone cases: The 1999 CSR Handbook does not explicitly address the citizenship or eligible alien status documentation requirements for situations where assistance is provided only over the telephone. Nearly one-fourth of the executive directors indicated that LSC’s documentation requirements in this area were clear only to some, to little, or to no extent. Of 19 respondents who gave this response in our survey, 15 said that more clarification was needed on the grantees’ documentation responsibilities. Respondents said, for example, that they were confused about exactly when they needed to obtain a written attestation, whether it was sufficient simply to record that questions about citizenship/alien eligibility had been asked, whether certain types of service required documentation while others did not, and whether the requirement changed if an individual receiving assistance over the telephone came into the office to drop off documents. In its July 14, 1999, program letter to grantees, LSC sought to clarify its requirements for documenting citizenship/alien eligibility information in telephone cases. LSC stated that it requires recipients to make appropriate inquiry of every telephone applicant and record the inquiry and response. 
All such documentation is to be maintained in the client file. We do not know how many of the respondents to our survey were familiar with LSC’s July 14 guidance when we interviewed them in late July and early August. Single recording of cases: The 1999 CSR Handbook requires that programs ensure that cases involving the same client and specific legal problem are not recorded and reported to LSC more than once. Over one-fourth of the executive directors indicated that the requirement for preventing duplicate case reporting was clear only to some, to little, or to no extent. Of the 22 respondents who gave this response in our survey, 17 said that the distinction between specific and related legal problems is difficult to determine. Several respondents cited examples in family law cases where more than one problem can arise within one family case. Depending on the situation, it was not clear to them at what point legally related issues become separate enough to count as separate cases. One respondent indicated that attorneys in his program have a hard time interpreting this requirement, and that he got calls every week about this issue. Provider of legal assistance: The CSR handbook requires that legal assistance be provided by an attorney or paralegal in order for a service to be called a case. Slightly over 40 percent of executive directors indicated that LSC was clear only to some, to little, or to no extent about who can provide such legal assistance. Respondents noted in our telephone survey, for example, that the terms “case handler” and “paralegal” are not clearly defined, and that they did not know whether nonlawyers (e.g., intake workers) who are supervised by lawyers can provide legal assistance. Comments made to us by the executive directors revealed that grantees held varying views about who can provide legal assistance to clients. Several respondents specifically stated that this is an important issue that LSC should address in its handbook.
Our own analysis of LSC guidance on this issue indicates that LSC has not been consistent in its advice to grantees. For example, in its communication with grantees on frequently asked CSR questions and answers, LSC stated that a telephone conversation between an intake specialist and a caller who was accepted for service can be counted as a case if the caller received some advice that addressed a specific legal problem. LSC officials told us that LSC’s current position on this issue is that the person giving legal advice must be someone who is authorized to practice law or is under the supervision of an attorney in accordance with local rules of practice. Therefore, legal advice can be given by a (1) lawyer, (2) paralegal, or (3) intake specialist or law student under the supervision of a lawyer, as long as the assistance does not violate local rules of practice. LSC sought to determine the accuracy of grantees’ case data by requiring that grantees do self-inspections of their open and closed caseload data for 1998. Grantees were required to determine whether the error rate in their data exceeded 5 percent. If the error rate was 5 percent or less, they could certify that their data were substantially correct. If the error rate was higher than 5 percent, they were to determine how the problems identified could be addressed. LSC found that about three-fourths of the grantees were able to certify to the substantial accuracy of their data. LSC used the results of the self-inspections to estimate the total number of case closings in 1998. Our review of LSC’s self-inspection process raised concerns about the accuracy and interpretation of the results, and what the correct number of certifying programs should be. On May 14, 1999, LSC issued a memo to all grantees instructing them to complete a self-inspection procedure by July 1, 1999. 
The purpose of the self-inspection was to ensure that (1) grantees were properly applying instructions in the 1999 edition of the CSR Handbook that were applicable to the 1998 data, and (2) LSC had accurate case statistical information to report to Congress for calendar year 1998. LSC provided detailed guidance to grantees on the procedures for the self-inspection. Each grantee was to select and separately test random samples of open and closed cases to determine whether the number of cases it reported to LSC earlier in the year was correct. Grantees were to verify that the case file contained a notation of the type of assistance provided, the date on which the assistance was provided, and the name of the case handler providing the assistance. Grantees were also to determine whether assistance had ceased prior to January 1, 1998; was within certain service categories as defined by the 1999 handbook; was provided by an attorney or paralegal; and was not prohibited or restricted. Finally, grantees were to verify that each case had eligibility information on household income, household size, household assets, citizenship attestation for in-person cases, and indication of citizenship/alien status for telephone-only cases. According to LSC officials, the self-inspection was a single procedure that was undertaken within a limited time period, and LSC did not expect the self-inspection to resolve all case-reporting problems. In requiring grantees to verify that their 1998 case files contained information on client assets and an indication of citizenship/alien status for telephone-only cases, LSC imposed stringent criteria on the self-inspections. The criteria were stringent in that LSC had not promulgated explicit documentation requirements related to these issues until it released the 1999 CSR Handbook and July 14 program letter. Grantees were not required to apply these new documentation requirements to their 1998 case data.
Recognizing that in 1998 many grantees did not keep sufficient documentation on assets or citizenship/alien eligibility to comply with the stringent self-inspection requirements, LSC allowed grantees some latitude in determining whether they were in fundamental compliance with the requirement that legal assistance only be provided to eligible individuals. In conversations with grantee executive directors during the self-inspection, LSC officials advised them that if they had a level of certainty that their program staff had asked questions about assets, and if their certainty was sufficient for them to sign a form attesting to this, that was an acceptable basis for asserting compliance with the reporting requirement for assets. Similarly, if grantees were sufficiently certain that, for telephone cases, their program staff had asked questions about citizenship/alien status but had not documented the inquiry, that too was acceptable. Finally, LSC allowed grantees some latitude with respect to the requirement that an attorney or paralegal must be the provider of legal assistance. LSC advised grantees during the self-inspection that it was acceptable for them to count as valid cases instances in which legal advice was given by a (1) lawyer, (2) paralegal, or (3) intake specialist or law student under the supervision of a lawyer, as long as the assistance did not violate local rules of practice. If any single aspect of a case failed to meet LSC’s requirements, the case was to be classified as an error for reporting purposes. If the grantees found that their CSR case sampling had an error rate of 5 percent or less, the program directors and policy board chairs were to sign a certification form and return it to LSC. 
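The certification rule described above reduces to a simple threshold test on the sampled error rate. A minimal sketch, assuming (as the text states) that a case counts as an error if any single aspect fails LSC's requirements:

```python
# Self-inspection certification decision: a grantee may certify its
# 1998 CSR data only if the error rate in its random sample of cases
# is 5 percent or less.

def may_certify(errors: int, sample_size: int, threshold: float = 0.05) -> bool:
    """Return True if the sampled error rate is at or below the threshold."""
    if sample_size <= 0:
        raise ValueError("sample size must be positive")
    return errors / sample_size <= threshold

print(may_certify(errors=4, sample_size=100))   # True  (4 percent error rate)
print(may_certify(errors=12, sample_size=150))  # False (8 percent error rate)
```

A grantee above the threshold could correct its full database and resubmit; if the corrected error rate fell to 5 percent or less, it could then certify.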
Grantees who could not certify to the correctness of their 1998 CSR data were to submit a letter to LSC describing (1) the problems they had identified during the self-inspection process and (2) the corrective actions they had instituted to address the problems. Grantees could resubmit their 1998 CSR data to LSC if they identified one or more problems in the random sample and corrected their entire 1998 database so that the problems no longer appeared. If, by correcting the problems, the error rate in the data was reduced to 5 percent or less, the grantees could resubmit their 1998 data along with a signed certification attesting to the substantial accuracy of the resubmitted data. In this way, grantees who were unable to certify at one point in time could certify at a later point in time. As of July 29, 1999, 26 grantees had resubmitted their 1998 CSR data after having made corrections to the data, and 20 of them had certified their data. According to LSC officials, about three-fourths of the grantees certified the accuracy of their 1998 case data. As of August 26, 1999, LSC documents indicated that 199 of 261 grantees (76 percent) reported substantially correct CSR data to LSC. The remaining 62 grantees (24 percent) did not certify to LSC that their CSR data were substantially correct. On the basis of the self-inspection results, LSC estimated that grantees closed 1.1 million cases in 1998. LSC officials told us that they were surprised that such a large number of grantees were able to certify their 1998 CSR data. They attributed the lower-than-expected error rates to the following factors: The self-inspection did not attempt to identify duplicate cases. LSC officials explained that, during most of 1998, LSC did not have a standard regarding duplicate cases, and that they believed it would have been too burdensome on grantees for LSC to require them to apply the new standard retroactively. 
According to an LSC official, duplicate cases are best caught at the time of intake. The official also noted that duplicate cases had not in the past been found to be a major problem in comparison with other identified problems. Grantees received the new 1999 CSR Handbook in November 1998, and a number of them had implemented the requirements that applied to their 1998 data by the time they submitted their 1998 case statistics. Grantees were aware of the problem that the OIG had identified with cases coded as "referral after legal assessment." That is, some programs had inappropriately reported as CSR cases numerous instances of making a telephone referral without providing legal advice and/or without documenting an individual's eligibility. Because grantees had been sensitized to this issue, LSC officials believed that they were less likely to count these referrals as cases in 1998. According to LSC officials, CSR problems were more common at larger grantees that had heavier caseloads and multiple branches. They noted that approximately 30 of the 50 grantees with the largest caseloads were unable to certify their 1998 data. They hypothesized that larger programs may have difficulty certifying if such programs have (1) numerous branch offices, one or more of which are out of compliance with regulations and causing reporting problems for the entire program, and/or (2) numerous sources of funding and more compliance requirements, which could add to the complexity of their reporting process. The officials acknowledged that these hypotheses needed further exploration. LSC officials also reported that noncertifying grantees identified two principal problems. One pertained to the lack of citizenship attestations in case files. The second pertained to the lack of information on client assets.
LSC officials also noted that some grantees had reported matters, such as referrals, as cases, and some had reported cases as open when they did not meet the timely closing requirement. In our telephone interviews with 79 executive directors, 24 reported that they did not certify their 1998 data to LSC. Factors that the executive directors of noncertifying programs believed greatly affected the errors in their 1998 data included lack of clear guidance from LSC (cited by 11 respondents), computer problems (cited by 7 respondents), and insufficient attention by their programs to administrative matters (cited by 5 respondents). Computer problems that were noted included difficulties merging databases and using several different software packages, and problems with upgrading the computer system. Based on the self-inspection results, LSC estimated that its grantees closed about 1.1 million cases in 1998. LSC arrived at this figure by subtracting the total number of closed cases estimated to be in error (135,498) from the total number of cases that grantees reported to LSC (1,260,351). As of July 29, 1999, LSC estimated that its grantees’ total closed caseload for 1998 was 1,124,853. LSC intends to report only the estimated number of cases closed in 1998, even though the self-inspections were to encompass both open and closed cases. According to LSC officials, LSC has less confidence in the open-case numbers than the closed-case numbers. One reason for this is that they believe open cases are more likely than closed cases to have timely closing problems. A second reason is that some grantees experienced problems when converting to a new computer system. System conversions sometimes caused dates to be lost or cases to be mistakenly coded as being open. Our review of LSC’s self-inspection results raised some concerns about LSC’s interpretation of the results and about the accuracy of the data provided to LSC by grantees. 
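LSC's closed-case estimate described above reduces to a single subtraction; the same figures also imply that roughly 11 percent of the closed cases grantees originally reported were judged to be in error. The numbers below are from the report; the computation is ours.

```python
reported_closed = 1_260_351   # closed cases grantees reported to LSC for 1998
estimated_errors = 135_498    # closed cases LSC estimated to be in error

adjusted_total = reported_closed - estimated_errors
print(adjusted_total)  # 1124853 -- LSC's July 29, 1999, estimate

# Implied overall error rate in the reported closed-case data.
print(f"{estimated_errors / reported_closed:.1%}")  # 10.8%
```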
As a result, we could not assess whether the number of certified programs and case closures that LSC estimated for 1998 is correct, lower, or higher than it should be. Although LSC provided instructions to grantees on how they should select test samples and what case information they should verify, LSC did not issue standardized procedures for grantees to use in reporting the results of their self-inspections. Grantees that could not certify their data wrote letters to LSC in which they described the errors they uncovered. The letters contained varying degrees of detail about the errors. Some programs provided an overall error rate and did not separately report how much error they found in open and closed cases. Others provided detailed information on both the number of errors found in open and closed cases, respectively, as well as the number of errors broken out by type. Since LSC did not have a standard protocol for collecting the results of the self-inspections, in some cases LSC officials had to rely on interpreting grantee letters that described the problems that were discovered. LSC officials believe that their numerous contacts with grantees who called them to ask questions about the self-inspections, combined with their analysis of the grantee letters, enabled them to correctly determine the number of certifying programs and estimate the number of closed cases. We are uncertain how many programs should have been counted as certified because we are uncertain whether LSC applied a consistent definition of "certification." Most programs that were on LSC's certification list determined that they had error rates of 5 percent or less for both open and closed cases. However, LSC placed some programs on the certified list if the program's error rate for closed cases was 5 percent or less, even if its overall error rate was higher than 5 percent.
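The "partial certification" ambiguity can be made concrete with hypothetical numbers (the sample sizes and error counts below are illustrative, not taken from any grantee's data):

```python
closed_errors, closed_sampled = 12, 300   # 4.0 percent: at or under the threshold
open_errors, open_sampled = 27, 300       # 9.0 percent: over the threshold

closed_rate = closed_errors / closed_sampled
overall_rate = (closed_errors + open_errors) / (closed_sampled + open_sampled)

# Under the "partial certification" reading, this program would be counted
# as certified because closed_rate <= 0.05, even though overall_rate
# (6.5 percent) exceeds the 5-percent threshold.
print(f"closed {closed_rate:.1%}, overall {overall_rate:.1%}")
```

A count of certified programs built from such letters depends on which of these rates the reviewer applies, which is the inconsistency at issue.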
We encountered this situation in two instances in which executive directors told us in telephone interviews that they did not certify their CSR data because their overall error rate exceeded 5 percent. However, these programs appeared on LSC’s list of certified programs. When we asked an LSC official about this, he told us that they advised grantees that if their closed case error rate did not exceed 5 percent, they should “partially certify” their data. In response to our inquiry, the official reviewed the certification letters submitted by nearly 200 grantees, and he identified 5 certified programs whose error rates for open cases exceeded 5 percent. Given that some grantees submitted only an overall estimate of data error, we do not know how many programs qualified to be certified overall, just for closed cases, or just for open cases. In another instance, an executive director told us that she did not certify her program because 8 of the 14 case-handling offices had error rates exceeding 5 percent. Nonetheless, this program appeared on LSC’s certified list. An LSC official explained that, after reviewing the information provided by this program, the official agreed that the program should not have been classified as certified. In a fourth instance, in which an executive director reported to us that he did not certify his program, an LSC official said that, although they thought the grantee’s data had been corrected, the program had not yet certified its data and might need to do additional sampling. We are also concerned that LSC’s instructions to grantees on how to conduct the self-inspections may have led some of the smaller grantees to select too few test cases to make a reliable assessment of the proportion of error in their case data. For example, LSC instructed grantees to select every tenth case for review if the program handled less than 1,000 cases in 1998. This was to be done separately for open and closed cases. 
Based on 1998 case statistics that grantees submitted to LSC between January and March 1999, several programs would have based their certification determinations on reviews of a relatively small number of cases. Seven grantees had fewer than 300 closed cases, and 43 grantees had fewer than 300 open cases. Three programs had fewer than 300 cases combined. The smallest program would have based its self-inspection on 3 closed cases and 1 open case. We believe that, in general, samples of 30 or fewer cases are too small to provide reliable estimates of the total number of case data errors. Because these were smaller grantees, this limitation would have had little effect on LSC's estimate of total closed caseload. However, it could have affected, by either overstating or understating, LSC's count of the number of certified programs. LSC does not know how well grantees conducted the self-inspection process, nor how accurate the results are. We spoke with several executive directors who did not correctly follow LSC's reporting requirements. In one case, the executive director sought clarification from LSC headquarters about the use of a CSR closure code and was given incorrect verbal guidance. The situation concerned whether applicant files could be closed as "client withdrew" cases when individuals who were accepted for service and assigned to staff attorneys did not subsequently appear for a meeting with the attorney. According to both the executive director and an LSC official, the executive director was told that if the program attempted to contact these individuals, by telephone or letter, to determine whether they were still interested in obtaining legal assistance, the files could be counted as cases. The executive director told us that she would review her entire database for this type of error and make corrections. This program was included on LSC's certified list, but we do not know whether the program would stay on that list if all the errors were identified.
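On the sample-size concern raised above, a rough binomial margin-of-error calculation illustrates why a sample of roughly 30 cases cannot reliably distinguish an error rate near the 5-percent certification threshold. This is a sketch under stated assumptions: it assumes simple random sampling and uses the normal approximation, which is itself shaky at such small n (a fact that only reinforces the point).

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95-percent margin of error for a sample proportion p
    estimated from a simple random sample of n cases."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 300):
    print(f"n={n}: 5% +/- {margin_of_error(0.05, n):.1%}")
# At n=30 the margin (about 7.8 percentage points) exceeds the 5-percent
# threshold itself; at n=300 it narrows to about 2.5 points.
```

In other words, a grantee sampling 30 cases could observe an error rate under 5 percent while its true rate was well above it, or vice versa.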
Another executive director told us that he was concerned that LSC did not want grantees to count assistance over the telephone as a case. This is not an entirely correct interpretation of LSC guidance since, under certain conditions, LSC permits legal assistance over the telephone to be counted as a case. Although this program was included on LSC's certified list, any valid telephone cases that were not counted would have erroneously increased its error rate. A third executive director told us that the timely closing rule required him to close new cases in December. If the person sought assistance with a similar problem in January, it would be treated as a new case because a new reporting year had begun. In a March 1999 written communication intended to supplement the 1999 handbook, LSC advised grantees to exercise discretion about when to close cases that were opened near the end of the year. LSC did not require grantees to close all cases at year-end. These examples illustrate the possibility that incorrect interpretations of LSC guidance may have resulted in some programs certifying their 1998 data when they should not have, and other programs not certifying their 1998 data when they should have. An LSC official told us that, although they have conducted CSR training sessions for grantee executive directors, there are thousands of case handlers in grantee offices who have not received similar training. The official acknowledged that written guidance and telephone contacts with grantees may not be sufficient to ensure correct and consistent understanding of reporting requirements, and that LSC may consider alternative ways of providing training to staff, such as through videos. Incorrect interpretations of LSC guidance could also have affected the accuracy of LSC's estimate of closed cases. The Inspector General told us that his office has completed audits of six grantees' 1998 case data. One additional audit of a grantee's CSR data had not yet been completed.
The six completed audits included both certifying and noncertifying grantees. According to the Inspector General, the results of the six completed audits would be provided to Congress by September 29, 1999. LSC officials told us that the self-inspection was valuable and that LSC plans to have grantees complete self-inspections again early next year as part of the 1999 CSR reporting process. We agree that a self-inspection process can be valuable, provided that grantees have clear, consistent, and appropriate guidance on the procedures for reviewing their case data, determining whether to certify the data, correcting errors, and providing their results to LSC in a standardized way that facilitates validation of the results. LSC's 1999 CSR Handbook and other written communications have improved the clarity of reporting requirements for its grantees. However, many grantees remained unclear about or misunderstood certain aspects of the reporting requirements. LSC's practice of disseminating guidance primarily by written or telephone communications may not be sufficient to ensure that grantees correctly and consistently interpret the requirements. LSC sought to determine the accuracy of grantees' 1998 case statistics by requiring grantees to conduct self-inspections. However, we do not know the extent to which the results of the self-inspection process are accurate. The validity of the results is difficult to determine because LSC did not standardize the way that grantees were to report their results, some of the grantees used samples that were too small to assess the proportion of error in their data, some grantees did not correctly follow LSC's reporting guidance, and LSC had not verified the grantees' self-inspection procedures. We do not believe that LSC's efforts, to date, have been sufficient to fully resolve the case reporting problems that occurred in 1997.
We recommend that the President of LSC clarify and disseminate the specific information on client assets that grantees must obtain, record, and maintain; clarify and disseminate the types of citizenship/alien eligibility information grantees must obtain, record, and maintain for clients who receive legal assistance only over the telephone; clarify and disseminate LSC's criteria for single recording of cases; clarify and disseminate LSC's policy concerning who can provide legal assistance to clients for the service to be counted as a case; explore options for facilitating correct and consistent understanding of reporting requirements, such as developing and disseminating a training video for grantee staff; develop a standard protocol for future self-inspections to ensure that grantees systematically and consistently report their results for open and closed cases; direct grantees to select samples for future self-inspections that are sufficient to draw reliable conclusions about the magnitude of case data errors; and finally, ensure that procedures are in place to validate the results of LSC's 1998 self-inspection, as well as of any future self-inspections. The President of LSC provided written comments on a draft of this report, which are printed in full in appendix I. LSC generally agreed with our findings and stated that it intends to implement our recommendations. Concerning the issue of the clarity of case reporting guidance, LSC's letter stated that LSC is troubled by our finding that some grantees are continuing to have difficulty in this area.
LSC’s letter reiterated a point that we made in the report; namely, that the overlap in time between our data collection effort and LSC’s distribution to grantees of a July 1999 program letter intended to clarify client eligibility documentation requirements may have caused some of the executive directors not to factor in the new guidance when they responded to our telephone interview questions. Although, as we state on pages 9 and 10 of the report, some respondents may not have been familiar with LSC’s July 1999 guidance, two respondents told us that they were familiar with the new guidance and that they were still unclear about the requirements dealing with assets. We also state on page 10 of the report that our own analysis indicated that LSC was not consistent in its guidance to grantees on what asset information they needed to obtain from clients. Therefore, we do not believe that grantees’ lack of clarity concerning reporting requirements on client eligibility was due solely to the overlap in the time period of our data collection and LSC’s distribution of the July 1999 program letter. With respect to grantees’ lack of clarity about how to determine duplicate case counting, LSC indicated that it will consider providing additional guidance to grantees through further revisions to the CSR handbook. LSC also indicated that it will revise the handbook to clarify the issue of when legal assistance by nonlawyers can be reported as a case. LSC also reiterated that it is developing additional methods to supplement written guidance, including (1) conducting training and technical assistance, (2) developing case management standards that would detect and prevent cases from being accepted and reported if the required eligibility documentation was not obtained, and (3) developing database queries for grantees to apply to their case management systems to identify instances where cases lacked required compliance documentation. 
Concerning the self-inspection results, LSC noted that its principal purpose in requiring grantees to self-inspect their 1998 case data was to ensure that case-reporting guidance was being properly applied. LSC stated that assessing the accuracy of the 1998 case statistics that grantees submitted to LSC in March 1999 was a secondary purpose of the self-inspection. LSC believes that the self-inspection increased grantees' awareness of reporting requirements and prompted them to make changes in their case reporting practices. Indeed, in LSC's view, the self-inspection requirement resulted in more program improvements than did the 1999 CSR handbook and other written guidance issued by LSC because grantees that identified significant problems were required to take corrective actions. LSC notes correctly that, in relation to the self-inspection, our report focuses on the accuracy of LSC's determination of the number of grantees that certified their data, and the factors that affected the accuracy of LSC's estimate of the total number of closed cases in 1998. Our review focused on the stated goals of the self-inspection. Although we did not assess whether, relative to LSC's other efforts, the self-inspection had a greater effect on grantees' awareness of and compliance with CSR reporting, we do report that 26 grantees made corrections to their 1998 data as a result of their self-inspections. As arranged with your offices, unless you publicly announce the contents of this letter earlier, we plan no further distribution until 7 days after the date of this letter. At that time, we will send a copy of the report to the Chairmen and Ranking Minority Members of LSC's appropriations and legislative committees and to Mr. John McKay, the President of LSC. The major contributors to this report are acknowledged in appendix II. If you or your staff have any questions concerning this report, please contact me or Evi L. Rezmovic at (202) 512-8777.
In addition to those named above, Mark Tremba, Kristeen McLain, Jan Montgomery, David Alexander, Barry Seltser, Lemuel Jackson, Brian Lipman, and Walter Vance made key contributions to this report. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.

Pursuant to a congressional request, GAO determined: (1) what efforts the Legal Services Corporation (LSC) and its grantees have made to correct problems with case service reporting; and (2) whether these efforts are likely to resolve the case reporting problems that occurred in 1997.
GAO noted that: (1) LSC revised its written guidance and issued a new handbook to its grantees to clarify case reporting requirements; (2) based on telephone interviews with a sample of 79 LSC grantee executive directors, GAO estimates that 90 percent of grantees viewed the new guidance as having clarified reporting requirements, overall; (3) virtually all grantees said they responded to the new requirements by making or planning to make one or more changes to their program operations; (4) however, many grantees indicated that they were unclear about certain aspects of LSC's reporting requirements, particularly regarding: (a) the specific information required on client assets; (b) the information required for documenting citizenship/alien eligibility for services provided over the telephone; (c) the criteria for avoiding duplicate counts of cases; and (d) who can provide legal assistance to clients in order for the service to be counted as a case; (5) LSC initiated a self-inspection procedure in which grantees were required to review their 1998 case data and submit certification letters to LSC if they found that the extent of error in their data was 5 percent or less; (6) grantees who could not certify their 1998 data were required to develop corrective actions that would address the problems identified; (7) about 75 percent of the grantees submitted letters to LSC certifying that the error rate in their 1998 data was 5 percent or less, while about 25 percent of the grantees submitted letters to LSC indicating that they could not certify their 1998 data; (8) according to LSC, about 30 of the 50 grantees with the largest caseloads were unable to certify their 1998 case data; (9) GAO could not assess whether the number of certified and uncertified programs that LSC obtained for 1998 was correct, lower, or higher than it should be; (10) this is because LSC did not provide grantees with a standardized way of reporting their self-inspection results, and LSC's 
instructions on how to conduct the self-inspections may have led some of the smaller grantees to select too few cases to reliably assess the amount of error in their case data; (11) some grantees did not correctly interpret LSC's case reporting requirements; and (12) for these reasons, GAO does not believe that LSC's efforts to date have been sufficient to fully resolve the case reporting problems that occurred in 1997.
In an earlier era, when there was less concern over the costs of health care, the process by which drugs reached patients was relatively simple. The patient went to a doctor, who, if convinced that the malady could be helped with medication, would prescribe a drug that the patient could obtain at the local pharmacy. If the patient’s health insurance had a prescription drug benefit, the patient would be reimbursed for the purchase; if not, the patient would cover the costs out-of-pocket. The decisions regarding which drug would be prescribed were often left to physicians, while those regarding drug cost typically involved manufacturers and retail pharmacies. Further, the health insurer was usually not centrally involved in either decision. Today, the ways in which drugs are prescribed and paid for are considerably more complex. To a great extent, this complexity has been introduced in direct response to concerns with the rapid growth in health care expenditures. Just as with hospital and physician services in an earlier day, insurers have recently begun to take concrete steps to control the costs of pharmacy benefits. Some steps require patients to bear a larger share of the costs of drugs through increased copayments, while others reduce the utilization of drugs and rely more on less-costly types of drugs. The most important steps, however, are directed at minimizing both how much insurers pay manufacturers for drugs and how much they pay pharmacies for their services. Insurers take steps to reduce the acquisition costs of drugs by negotiating for discounts or rebates from drug manufacturers. A powerful tool in these negotiations is the formulary that the insurer or the PBM maintains. A formulary is a list of prescription drugs that are preferred by the insurer or the PBM. Drugs are included on formularies not only for reasons of medical effectiveness but also because of price. 
Because formularies can affect the utilization rates for drugs, it is in the interest of a drug manufacturer to have its products included. This is especially true when the insurer or PBM is successful in obtaining high rates of physician compliance with the formulary and when the insurer has a large number of enrollees. In these cases, the potential effect that placement on a formulary has on the sales and market share of a drug is so great that insurers can use such placement as a means of securing discounts or rebates from drug manufacturers. Insurers and PBMs also negotiate for discounts directly with pharmacies to try to control how much they reimburse for services. In these negotiations, the position of insurers is strengthened not by formularies but by their ability to influence which pharmacies their enrollees use. As with the negotiations with manufacturers, the position of the insurer or the PBM is related to the number of enrollees represented by the plan. The extent to which negotiated rebates and discounts with drug manufacturers and pharmacies have controlled costs can be substantial. For example, in our most recent examination of these strategies, a large insurer estimated that the combined savings that resulted from manufacturer rebates and pharmacy discounts exceeded $300 million. Many retail pharmacists believe that the means used to achieve these savings have placed them at a comparative disadvantage in the rapidly changing health care environment. The current environment is viewed with anxiety by many retail pharmacists. The success of insurers and other institutional buyers in using their consolidated buying power to reduce the price they pay for drugs has not been shared by retail pharmacists. As a consequence, retail pharmacies are sometimes charged more for similar products than are health insurers such as health maintenance organizations, self-insured health plans, and other institutional buyers.
The best evidence we were able to obtain that differential pricing existed comes from a recent study of drug pricing in Wisconsin. Table 1 summarizes the results from that study. As can be seen from the table, differences in prices of greater than 10 percent were found for more than one third of all products (27 out of 76 drugs), and in more than one half of those cases (21 percent of all cases), the differences could not be justified by volume of purchase. In placing these findings in a larger perspective, it is important to note that Wisconsin has what is often referred to as a “unitary pricing” law that “requires sellers to offer drugs . . . to every purchaser under the same terms and conditions afforded to the most favored purchaser.” The data from Wisconsin support the conclusion of many that differential pricing exists. The differences in prices may well reflect the relative abilities of insurers and retail pharmacies to influence market share. That is, some purchasers of drugs, primarily those who can influence the specific drugs that are prescribed for large numbers of patients, may pay less for drugs because of that ability. The increasing concern among insurers with controlling costs and the consequent reliance on their consolidated purchasing power also have affected how much pharmacies are reimbursed for the drugs they sell to customers. As health insurers and the PBMs that represent them cover more people, they use the size of their member populations as leverage to help reduce the amounts that they reimburse pharmacies for prescriptions dispensed to those populations. Although a pharmacy can refuse to participate in an insurer’s network of pharmacies willing to provide prescription discounts, it is difficult for the pharmacy to face the possibility of losing the business. For example, each of the two largest PBMs represents more than 40 million people nationwide. 
As we were told by one independent retail pharmacist, “either I agreed to the new reimbursement schedule, or I lose 40 percent of my patients.” In addition to the pressures of how much retail pharmacists pay for drugs and how much they can charge for their services, they have been facing pressure from new sources of competition. The expansion of supermarkets into the pharmaceutical area has been under way for some time, but the more immediate threat to the viability of retail pharmacies may be posed by the reliance of insurers on mail order pharmacies. Mail order firms have made significant inroads into the market in recent years, especially in providing drugs for the chronically ill. In an effort to promote the use of mail order pharmacies, some insurers provide enrollees with considerable financial incentives. For example, the largest plan under the federal employee health benefits program provides enrollees drugs free of charge if they obtain them through the mail order program yet requires a 20-percent copayment from most enrollees for drugs purchased at retail pharmacies. All these pressures on retail pharmacies have had a considerable effect. For example, in the case described above, a change in pharmacy benefits that affected many of the plan’s enrollees reduced payments to retail pharmacies. During the first 5 months of 1996, the total amount that retail pharmacies were paid for the prescriptions they dispensed to enrollees affected by the benefit change decreased by about 36 percent, or about $95 million, from the amount paid during the same period in 1995. Retail pharmacists have resorted to three different types of action in response to the changes in pharmaceutical pricing: litigation, adoption of competitive strategies, and calls for legislation. A large lawsuit regarding drug pricing was recently settled, at least in part. 
The suit was a class action by tens of thousands of independent and chain pharmacies against virtually all the leading manufacturers and wholesalers of brand-name prescription drugs. The pharmacies argued that the manufacturers and wholesalers, by granting discounts to managed care organizations that were not available to the pharmacies, were engaged in a price-fixing conspiracy in violation of federal antitrust law. The court rejected an initial settlement but approved a modified settlement with most of the manufacturer-defendants on June 21, 1996. (The wholesalers are not parties to this settlement because the court earlier granted summary judgment in their favor.) The litigation is not entirely over because not all parties have agreed to the settlement, and a number of issues remain on appeal in the Court of Appeals for the 7th Circuit. The modified settlement satisfied the concerns about future pricing conduct that led the court to reject the initial proposal. Specifically, the current settlement provides that (1) the manufacturers will not refuse discounts solely on the basis that the buyer is a retailer and (2) retail pharmacies and buying groups that are able to demonstrate an ability to affect market share will be entitled to discounts based on that ability, to the same extent that managed care organizations would get such discounts. In addition to pursuing legal remedies, retail pharmacies are beginning to adopt strategies designed specifically to make them more competitive in the new environment. Some pharmacies are offering products and services not traditionally found in pharmacies (such as food products and optical care), while others are trying to follow the lead of institutional drug purchasers. For example, some retailers are creating buying groups, and others are considering ways to influence the choice of drugs by contacting patients directly and informing them of the relative merits of the different drugs that might be available. 
If contacting patients directly is successful, it will provide retail pharmacies with the commodity that makes institutional buyers so powerful—namely, the ability to influence market share. Although we cannot predict how successful any of these strategies will be, the large chain pharmacies are more likely to succeed as they try to compete with managed care organizations and mail order pharmacies than are the smaller, independent retail pharmacies. Finally, retail pharmacists and their representatives have been strong proponents of legislative solutions. Depending on one’s perspective, these are referred to either as “unitary pricing” or “equal access to discount” laws, and they have been considered in one form or another by the majority of state legislatures. Although it is difficult to predict all the consequences of legislation in such a complex area as drug pricing, we can look to the last instance in which the federal government attempted a legislative solution to a problem involving drug costs: the Medicaid rebate on prescription drugs. In OBRA 1990, the Congress tried to reduce Medicaid’s prescription drug costs by requiring that drug manufacturers give state Medicaid programs rebates for outpatient drugs. The rebates were based on the lowest, or “best,” prices that drug manufacturers charged other purchasers, such as health maintenance organizations and hospitals. In our study of this legislation, we found that the average best price for outpatient drugs paid by large purchasers increased. In its evaluation, the Congressional Budget Office concluded that the program had reduced Medicaid spending on prescription drug benefits by almost $2 billion. 
At the same time, however, the budget office’s conclusion was consistent with ours in that “spending on prescription drugs by non-Medicaid patients may have increased as a result of the Medicaid rebate program.” Although the issues involved with the differential pricing between institutional and retail pharmacies are likely to be distinct from those the Congress confronted in the Medicaid prescription drug benefit, the lessons of OBRA 1990 cannot be ignored at a time when controlling health care costs is of such critical importance. Mr. Chairman, this concludes my statement. I would be happy to answer any questions that the Subcommittee might have. For more information about this testimony, please call George Silberman, Assistant Director, at 202-512-5885. Other major contributors include David G. Bernet, Joel A. Hamilton, and John C. Hansen. Blue Cross FEHB Pharmacy Benefits (GAO/HEHS-96-182R, July 19, 1996). Pharmacy Benefit Managers: Early Results on Ventures with Drug Manufacturers (GAO/HEHS-96-45, Nov. 9, 1995). Prescription Drugs and the Elderly: Many Still Receive Potentially Harmful Drugs Despite Recent Improvements (GAO/HEHS-95-152, July 24, 1995). Prescription Drug Prices: Official Index Overstates Producer Price Inflation (GAO/HEHS-95-90, Apr. 28, 1995). Medicaid: Changes in Best Price for Outpatient Drugs Purchased by HMOs and Hospitals (GAO/HEHS-94-194FS, Aug. 5, 1994). Prescription Drugs: Spending Controls in Four European Countries (GAO/HEHS-94-30, May 17, 1994). Prescription Drugs: Companies Typically Charge More in the United States Than in the United Kingdom (GAO/HEHS-94-29, Jan. 12, 1994). Medicaid: Outpatient Drug Costs and Reimbursements for Selected Pharmacies in Illinois and Maryland (GAO/HRD-93-55FS, Mar. 18, 1993). Prescription Drug Prices: Analysis of Canada’s Patented Medicine Prices Review Board (GAO/HRD-93-51, Feb. 17, 1993). 
Medicaid: Changes in Drug Prices Paid by HMOs and Hospitals Since Enactment of Rebate Provisions (GAO/HRD-93-43, Jan. 15, 1993). Prescription Drugs: Companies Typically Charge More in the United States Than in Canada (GAO/HRD-92-110, Sept. 30, 1992). Prescription Drugs: Changes in Prices for Selected Drugs (GAO/HRD-92-128, Aug. 24, 1992). Medicaid: Changes in Drug Prices Paid by VA and DOD Since Enactment of Rebate Provisions (GAO/HRD-91-139, Sept. 18, 1991). The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

GAO discussed the implications of prescription drug pricing for retail pharmacies, focusing on the: (1) changes in the process of getting prescription drugs from manufacturers to patients; and (2) consequences for and response of retail pharmacies to these changes. 
GAO noted that: (1) health insurers have used their consolidated buying power to obtain drug discounts not available to retail pharmacies; (2) health insurers and pharmacy benefit managers (PBM) use the size of their member populations as leverage to help reduce the amounts that they reimburse pharmacies for prescriptions dispensed to those populations; (3) retail pharmacies have been facing increased competition from mail order pharmacies; and (4) retail pharmacies have responded to the changes in pharmaceutical pricing by waging lawsuits against leading drug manufacturers and wholesalers, developing more competitive strategies for gaining business, and campaigning for legislative action.
The United States has more than 19,000 airports, ranging from busy commercial service airports such as Hartsfield-Jackson Atlanta International Airport, which enplanes millions of passengers annually, to small grass airstrips that serve only a few aircraft each year. Of these, roughly 3,300 airports are designated by FAA as part of the national airport system and are therefore eligible for federal assistance for airport capital projects. The national airport system consists of two primary types of airports—commercial service airports, which have scheduled service and board 2,500 or more passengers per year, and general aviation airports, which have no scheduled service and board fewer than 2,500 passengers. Federal law divides commercial service airports into categories based on the number of passenger boardings, ranging from large hub airports to commercial service nonprimary airports (see fig. 1). The majority of passenger traffic occurs at large hub airports: almost 73 percent of all passengers in the United States boarded at the 30 large hub airports in 2015. The federal government provides grants to help fund airport capital development through its Airport Improvement Program (AIP). Congress appropriates funds for AIP and other FAA programs from the Airport and Airway Trust Fund (AATF), which is itself funded by a variety of aviation-related taxes, such as taxes on tickets, cargo, general aviation gasoline, and jet fuel. FAA’s tool for identifying airports’ future capital projects that are eligible for AIP grants is the National Plan of Integrated Airport Systems (NPIAS). FAA relies on airports, through their planning process, to identify individual projects for funding consideration. Federal law and FAA’s rules establish which types of airport development projects are eligible for AIP funding. Generally, most types of airfield improvements—such as runways, lighting, navigational aids, and land acquisition—are eligible. 
AIP-eligible projects for airport areas serving travelers and the general public—called “landside development”—include entrance roadways, pedestrian walkways and movers, and common space within terminal buildings, such as waiting areas. Hangars and interest expense on airport debt are not eligible for AIP grants. Some landside development projects—including revenue-producing terminal areas, such as ticket counters and concessions—are also ineligible. PFCs are another federally authorized source of funding that commercial airport sponsors can levy on passengers to help pay for capital development at national system airports. Commercial airports must designate which projects PFCs will fund and must seek and obtain FAA’s approval to charge a PFC. Funding for both AIP and PFCs is linked to passenger activity. In this way, Congress aimed to direct funds to where they are needed most. Airports also fund their development with state and local contributions as well as airport-generated funds, such as income from airports’ tenants and commercial activities. Airport-generated revenue is typically used to finance the issuance of local debt such as tax-exempt bonds, which for larger commercial airports constitute more than half of their financing. Because of the size and duration of airport development—for example, planning, funding, and building a new runway can take more than a decade and cost several hundred million dollars—long-term debt is used to help finance these types of projects. FAA’s estimate of the costs for infrastructure development at airports over the next 5 years is about $32.5 billion, compared to the airport industry’s estimate of almost $100 billion for the same period. In 2016, FAA estimated that airports have roughly $32.5 billion in planned development projects for the period 2017-2021, which represents a 3 percent, or $1 billion, decrease from its estimate for the 2015-2019 period. 
FAA attributes the decline in capital development costs to a range of factors, including a reduction in current and future traffic relative to earlier predictions, the use and age of airport facilities, and costs related to changing aircraft technology. FAA reported a decrease in estimated costs for planned projects at most large and medium hubs, with increases at other hub types. For instance, FAA notes an increase in terminal projects at small airports, even as many large and medium-sized airports also have terminal projects planned. Further, according to FAA’s analysis, airports will experience decreased demands for building new airside capacity, such as runways, to reduce delays. The airport industry’s estimate of 5-year planned development costs, as developed by Airports Council International-North America (ACI-NA), is three times FAA’s. ACI-NA’s most recent estimate of almost $100 billion in planned investment is a 32 percent increase over its 2015 5-year estimate of $75.5 billion. According to ACI-NA officials, of the nearly $100 billion in total planned development costs, $61 billion are for AIP-ineligible projects and $38.9 billion are for AIP-eligible projects (as compared to FAA’s $32.5 billion estimate), with most of the ineligible projects for terminal or landside improvements such as ground access. The percentage increase in planned development estimates is greatest for large hub airports, where estimated costs have increased more than 50 percent, from about $40 billion to about $60 billion in ACI-NA’s most recent estimate. For example, according to the latest ACI-NA report, the Los Angeles International Airport reported that its planned new development will cost about $10 billion between 2017 and 2021 for infrastructure projects. In contrast, most small airports reported single-digit increases in infrastructure costs, according to ACI-NA, although there are some exceptions. 
ACI-NA officials told us that a key driver for its increasing cost estimate is that airports have deferred some airport projects in the past due to a lack of funding. The principal reason why FAA’s and ACI-NA’s planned development costs differ so significantly is that the ACI-NA cost estimate encompasses substantially more projects than does FAA’s, according to ACI-NA. As we have previously reported, ACI-NA uses both AIP-eligible and AIP-ineligible projects to develop its estimates, while FAA uses only AIP-eligible projects. Additionally, ACI-NA cost estimates are made up of projects that have already identified funding sources as well as those that have not. According to ACI-NA officials, 77 percent of the cost of planned development for large hub airports in their most recent cost estimate has funding already arranged. In contrast, FAA’s estimates include only projects without financing arranged. Additional reasons for differences in FAA’s and ACI-NA’s estimates are technical and methodological. First, the sources and methods for surveying information from the airports differ. FAA estimates are developed by reviewing information from airport plans that were available through 2015. The ACI-NA cost estimates are based on a survey of airports completed in 2017. Second, FAA does not adjust its estimates for inflation, but ACI-NA uses a 1.5 percent annual inflation adjustment. Without the inflation adjustment, ACI-NA’s estimate would drop $4.2 billion to $95.7 billion in constant 2016 dollars. Third, the ACI-NA estimate includes contingency costs for potential design changes, whereas FAA’s estimate does not. While FAA and ACI-NA cost estimates have long differed for the reasons outlined above, the most recent estimates diverge considerably, as shown in figure 2. The 5-year FAA estimate for 2017 through 2021 fell from the prior estimate to $32.5 billion, whereas ACI-NA’s estimate increased by $24.4 billion to $99.9 billion, or three times FAA’s estimate. 
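The 1.5 percent inflation adjustment described above can be sketched with simple arithmetic. This is a rough illustration only: the even split of the 5-year total across 2017-2021 is an assumption, which is why the result lands near, but not exactly on, the $95.7 billion cited.

```python
# Rough sketch of deflating ACI-NA's nominal 5-year estimate to constant
# 2016 dollars at the 1.5 percent annual adjustment described above.
# Assumption: the $99.9 billion total is spread evenly across 2017-2021.
nominal_total = 99.9                      # $ billions, 2017-2021
annual = nominal_total / 5                # assumed even annual spending
real_total = sum(annual / 1.015 ** k for k in range(1, 6))
print(round(real_total, 1))               # ~95.6, near the cited $95.7 billion
```

The small gap from the published $95.7 billion figure reflects the assumed timing of spending, not the adjustment rate itself.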
In 2015, we estimated that in recent years national system airports had generated an average of $10 billion annually for capital development. These funds come from a variety of sources, as noted in figure 3. AIP grants: Since 2012, AIP authorizations have been unchanged, although the health of the AATF, which funds AIP, has improved. The AATF’s balance has recovered in recent years, ending fiscal year 2016 with an uncommitted balance of $5.7 billion and a cash balance of $14.3 billion. AIP grants must be used for eligible and justified projects, which are planned and prioritized by airports, included in their capital improvement plans, and reviewed and approved by FAA staff and the Secretary of Transportation. The distribution system for AIP grants is complex. It is based on a combination of formula grants—which are often referred to as “entitlement grants” within this program—that go to all national-system airports, and discretionary grants that FAA awards for selected eligible projects. In 2015, we reported that, for fiscal years 2009 through 2013, national-system airports received an average of $3.3 billion annually in AIP grant funding. Grant awards in fiscal year 2016 totaled almost $3.3 billion. PFC collections: Congress last raised the PFC cap in 2000, to $4.50 per flight segment, with a limit of $18 on the total PFCs that a passenger can be charged per round trip. Large and medium hub airports that collect PFCs of $3 or less per flight segment have their AIP entitlement funding reduced by 50 percent; any of these airports that collect PFCs of more than $3 have their AIP entitlement funding reduced by 75 percent. Most of these AIP reductions to large and medium airports are distributed to smaller airports through the AIP. We found in 2015 that for fiscal years 2009 through 2013, commercial airports had an annual average of $1.8 billion of their PFC collections available for capital projects after deducting interest payments on debt. 
Ninety percent of that amount was collected by larger airports. Of the $90 billion in FAA-approved PFC collections, 34 percent has been committed for landside projects, such as terminals; 34 percent for interest payments on debt used to pay for capital projects; and 18 percent for airside projects, such as runways and taxiways. As of January 2017, 96 of the top 100 airports have been approved to collect PFCs. State grants: Airports can also obtain funding for capital development projects from state grants. This money is often used to provide the airport’s share of matching funds required for AIP-funded projects. According to the results of a survey we conducted in collaboration with the National Association of State Aviation Officials (NASAO), for fiscal years 2009 through 2013, states provided an annual average of $477 million to national system airports, with $345 million (72 percent) going to smaller airports and $131 million (28 percent) going to large and medium hub airports. Capital contributions: Capital contributions are funds contributed for infrastructure projects by the airport sponsor or entities that use the airport, such as airlines or tenants. According to FAA data on commercial airports’ annual financial reports, for fiscal years 2009 through 2013, commercial airports received an annual average of $644 million in capital contributions. Of this amount, $419 million went to larger airports and $225 million went to smaller airports. Airport-generated net income: Airports generate both aeronautical revenues, such as revenues earned from leases with airlines and landing fees, and non-aeronautical revenues, such as earnings from terminal concessions and parking fees. We found that for fiscal years 2009 through 2013, airport-generated net income available for capital development projects averaged $3.8 billion annually—55 percent from aeronautical revenues and 45 percent from non-aeronautical revenues (see fig. 4). 
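The PFC-related entitlement reduction described above (50 percent for large and medium hubs collecting a PFC of $3 or less, 75 percent above $3) can be expressed as a small rule. This is an illustrative sketch, not FAA's actual apportionment formula; the function name and the dollar figures in the example are hypothetical.

```python
# Illustrative sketch of the AIP entitlement reduction described above for
# large and medium hub airports that collect PFCs. Not FAA's actual formula.
def aip_entitlement_kept(entitlement, pfc_per_segment):
    """Entitlement remaining after the PFC-related reduction."""
    if pfc_per_segment <= 0:
        reduction = 0.0       # no PFC collected, no reduction
    elif pfc_per_segment <= 3.00:
        reduction = 0.50      # PFC of $3 or less: 50 percent reduction
    else:
        reduction = 0.75      # PFC above $3: 75 percent reduction
    return entitlement * (1 - reduction)

# A hypothetical hub with a $10 million entitlement charging the $4.50 cap
print(aip_entitlement_kept(10_000_000, 4.50))  # keeps $2.5 million
```

As noted above, most of the reduced amounts are then redistributed to smaller airports through the AIP.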
To leverage these funding sources, some airports also issue bonds to finance infrastructure projects, often for larger and longer-term developments. Bonds allow an airport to fund a project up front and pay its cost, plus interest, over a time frame much longer than the project’s construction period. Because many U.S. airports are owned by states, counties, cities, or public authorities, bonds issued by these entities to support airport projects may qualify as tax-exempt bonds for federal tax purposes. The tax-exempt status enables airports to issue bonds at lower interest rates than taxable bonds, thus reducing a project’s financing costs. Tax-exempt bonds can be issued at lower rates because the federal income-tax exclusion on the interest paid to purchasers can make these investments more attractive to investors than taxable bonds. Based on our analysis of data from Thomson Reuters on airport bond issuances, from 2009 to 2013, airports obtained an average of $6.3 billion per year for new projects by issuing bonds. Bond financing has traditionally been an option exercised by larger airports because they are more likely to have a greater and more certain revenue stream to support repayment of debt. Smaller airports tend to be less reliant on bonds and, to the extent that they do issue bonds, make greater use of general obligation bonds that are backed by the tax revenues of the airport sponsor, which is often a state or municipal government. Data from FAA’s airport financial-reporting system indicate that from fiscal year 2009 to fiscal year 2013, 94 percent of bond proceeds—including both new bonds and refinancing—went to larger airports and 6 percent went to smaller airports. The total amounts of funding by source differ between larger and smaller airports. As shown in figure 5, larger airports depend more than smaller airports on airport-generated net income and less on AIP grants. 
In 2015, we estimated airports’ planned capital-development costs for fiscal years 2015 through 2019 at $13 billion annually, which exceeded airports’ average funding in recent years of $10 billion by roughly $3 billion ($2.7 billion in constant 2013 dollars). We have examined airport funding and planned development four times since 1998 and, as figure 6 shows, the difference between planned development and historical funding has never exceeded $3 billion. Note that the gap also tends to be proportionally greater for smaller airports. As we reported in 2015, airports have a number of options for addressing any shortfall in funding their capital development, including prioritizing capital development projects, financing projects, attempting to increase airport revenues, or entering into public-private partnerships. States and local communities can also choose to increase state grant funding. For individual airports, a common method for aligning funding with planned development is to prioritize projects. This generally entails decisions about which projects to move forward with and which to defer, but could also include scheduling a project in phases or reducing the scope of or cancelling a planned project. Another method that airports can use to align funding with capital development is to borrow money to fund a project. Most commonly, this consists of issuing a bond. However, as previously discussed, borrowing has traditionally been an option exercised by larger airports. To finance projects, an airport’s financial situation must be viewed positively enough for it to borrow money at affordable rates in the bond market. Two of the airport financial-consulting firms with whom we spoke in 2015 noted that some airports are already leveraged to a large extent, and one bond-rating agency stated that taking on additional debt is always a risk. A third method for airports to fund capital development is to try to increase airport-generated net income. 
We have found in recent prior work that in addition to traditional commercial activities to generate non-aeronautical revenue, such as parking fees or terminal concessions, some airports have developed commercial activities with stakeholders from local jurisdictions and the private sector to help develop airport properties into retail, business, and leisure destinations. One approach to increasing funding for airports that has been advanced by airports and others is to increase or eliminate the current $4.50 cap on PFCs. However, any increase in PFCs is controversial and strongly opposed by airlines, which contend that airports currently have adequate access to funding for their development. We have previously found that increasing the PFC cap would significantly increase PFC collections available to airports. Specifically, in 2014, we developed an economic demand model to estimate the potential funding airports might generate using three different PFC amounts. The general approach of this analysis was to model airport collections and passenger traffic under various PFC cap levels. We modeled three different increases in the PFC cap amount, each starting in 2016: PFC cap of $6.47 (the 2016 equivalent of $4.50 indexed to the Consumer Price Index (CPI) starting in 2000 when the cap was first instituted); PFC cap of $8 based on the President’s 2015 budget proposal; and PFC cap of $8.50 that would be indexed to inflation. Our analysis indicated that all three scenarios would significantly increase the potential amount of PFC collections in comparison to what would be available without a PFC increase, as shown in table 1. For example, we estimated that raising the PFC cap to $8.00 would result in an additional $2.6 billion in PFCs, an increase of 77 percent in PFC revenue in 2020. Because passenger traffic is highly concentrated at larger airports, PFC collections are similarly concentrated. Thus, larger airports would benefit most from a PFC increase. 
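The cap amounts and revenue percentages modeled above follow from straightforward arithmetic, sketched below. The cumulative CPI ratio (about 1.438 from 2000 to 2016) and the roughly $3.4 billion 2020 revenue baseline are assumptions chosen to reproduce the cited figures, not values taken from the underlying model.

```python
# Illustrative arithmetic behind the modeled PFC caps and revenue increase.
# The CPI ratio and revenue baseline below are assumptions, not model inputs.
def indexed_cap(base_cap, cpi_ratio):
    """A base PFC cap adjusted by a cumulative CPI ratio."""
    return round(base_cap * cpi_ratio, 2)

def pct_increase(additional, baseline):
    """Additional revenue as a whole-number percentage of a baseline."""
    return round(100 * additional / baseline)

print(indexed_cap(4.50, 1.438))  # 6.47: the $4.50 cap indexed from 2000 to 2016
print(pct_increase(2.6, 3.38))   # 77: the cited percentage increase for 2020
```

The same indexing logic underlies the $8.50 scenario, which would continue adjusting the cap for inflation in future years.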
A hub-level analysis of a PFC cap increase shows that large hub airports could receive nearly three-quarters of all PFCs, while large and medium hubs together could account for nearly 90 percent of total PFCs, similar to the current distribution. For example, under an $8 PFC, large hub airports could receive additional PFC revenues of $1.74 billion to $2.08 billion annually and medium hubs could receive additional PFC revenues of $372 million to $435 million annually from 2016 to 2024. Small and non-hub airports could receive up to $212 million and $82 million in additional annual PFC revenues, respectively, from 2016 to 2024. While an increase in PFCs would mainly flow to the larger airports, smaller airports could also benefit from increased PFC collections. As previously noted, under current law, large and medium hubs’ apportionment of AIP formula funds may be reduced, which, in fiscal year 2014, resulted in a redistribution of approximately $553 million. The majority of this funding (87.5 percent) goes to the Small Airport Fund for redistribution among small airports. The remaining 12.5 percent became available as AIP discretionary funds, which FAA uses to award grants to eligible projects regardless of airport size. According to our model, while increasing the PFC cap could raise PFC revenue, it could decrease passenger demand. Such a decrease would also marginally slow the growth in revenues to the AATF. Assuming that the PFC increase is fully passed on to consumers rather than absorbed through lower base (before-tax) fares, the higher cost of air travel could, according to economic principles, reduce passenger demand. Economic principles and past experience suggest that any increase in the price of a ticket—even if very small—will affect some consumers’ decisions on whether to take a trip. 
For example, a price increase of a few dollars may not affect the decision of a business flyer traveling to an important meeting but could affect the decision of a family of four going on vacation. Under all three scenarios, AATF revenues, which totaled $14.3 billion in 2016 and are used to fund FAA activities, would likely continue to grow overall based on current projections of passenger growth; however, the modeled cap increases could reduce the AATF’s total revenues by roughly 1 percent because of reduced passenger demand. For example, under a $6.47 PFC, we estimated that the AATF’s revenues in 2024 would total $105 million less than they would if the cap were not raised. For more than a decade, airlines and airports have hotly debated a PFC increase because it would give airports greater control over airport investment. All else being equal, lower PFCs can provide airlines with more influence over airport infrastructure decisions, and higher PFCs can provide airports more control over local capital-funding decisions, including the ability to decide how to apply PFC revenues to support capital projects and thus how those revenues might influence airline rates and charges. Generally, PFCs offer airports relative independence over investment decisions at their airports. While airports must notify and consult with the airlines on how they spend PFCs, as long as FAA approves, airlines cannot block these decisions. Airlines can choose to serve other airports, however, so airports still have an incentive to listen to airline concerns. Chairman Blunt, Ranking Member Cantwell, this concludes my statement for the record. For further information about this testimony, please contact Gerald L. Dillingham at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Key contributors to this testimony include Paul Aussendorf (Assistant Director), Amy Abramowitz, Dave Hooper, Malika Rice, Amy Suntoke, Melissa Swearingen, and Michelle Weathers. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Roughly 3,300 airports in the United States are eligible for federal AIP grants from the FAA that can be used for certain types of projects, such as building runways and noise mitigation. To fund development, in addition to AIP grants, airports rely on locally generated revenues and federally authorized PFCs, which are added to the price of an airline ticket and have been capped at $4.50 per flight segment. The administration's call to boost spending on public infrastructure has renewed attention on the importance of maintaining and improving airport infrastructure. This testimony discusses: (1) the differences between estimates of airports' planned development costs, (2) the federal funding and other airport funding and revenues that may be available to defray development costs, and (3) the implications of increasing the cap on PFCs, among other objectives. This testimony is based on previous GAO reports issued from March 1998 through April 2015, with selected updates conducted through March 2017. To conduct these updates, GAO reviewed recent information on FAA's program activities and analyses outlined in FAA reports, and related airport industry estimates of infrastructure development costs. GAO also interviewed officials from FAA, and airport and airline trade associations. 
The Federal Aviation Administration's (FAA) estimate of the costs for planned capital development at airports over the next five years is about $32.5 billion, compared to the Airports Council International-North America's (ACI-NA) estimate of almost $100 billion, both for the period 2017-2021. The difference between these two estimates can be attributed to a number of factors, but most significantly to the types of projects included in the estimates. FAA's estimate is limited to projects that are eligible for Airport Improvement Program (AIP) grants that do not already have funding arranged, whereas ACI-NA's estimates include all projects regardless of AIP eligibility or whether funding is arranged. The figure below illustrates the disparity between the two estimates since 2005. Note that since 2015, FAA's estimate has decreased by $1 billion whereas ACI-NA's has increased by $24.4 billion. In addition to the AIP and state grants they receive, airports generate funds through airport-generated income and Passenger Facility Charges (PFC), among other sources. In 2015, GAO estimated that funding from these sources totaled an average of $10.3 billion annually (2013 dollars), $2.7 billion less than airports' planned development costs. Airports have a number of options for addressing any shortfall in funding their planned development costs, including prioritizing development projects, financing projects with long term debt, attempting to increase airport revenues, or entering into public-private partnerships. Increasing or eliminating the PFC cap would significantly increase PFC collections available to airports under three scenarios GAO modeled in prior work. However, according to GAO's model, an increase in the PFC could also marginally slow passenger growth and therefore the growth in tax revenues to the Airport and Airway Trust Fund (AATF), which is used to fund FAA programs. 
Such projected effects depend on key assumptions regarding consumers' sensitivity to a PFC cap increase, whether airlines decide to pass on the full increase to consumers, and the rate at which airports would adopt the increased PFC cap. Any increase in PFCs is strongly opposed by airlines, which contend that an increase could reduce passenger demand.
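The tradeoff described above, higher collections against slightly dampened demand, can be illustrated with a simplified calculation. This sketch is not GAO's model; every parameter value below (enplanements, average fare, elasticity, pass-through share, adoption rate, and the hypothetical $8.00 cap) is an illustrative assumption, not GAO or FAA data.

```python
# Illustrative sketch of the PFC cap tradeoff -- NOT GAO's model.
# All parameter values are hypothetical assumptions for demonstration only.

def pfc_scenario(enplanements, current_pfc=4.50, new_pfc=8.00,
                 elasticity=-0.8, pass_through=1.0, adoption=0.9,
                 avg_fare=350.0):
    """Estimate annual PFC collections and the passenger effect of a cap increase.

    elasticity:   percent change in demand per percent change in trip price
    pass_through: share of the PFC increase airlines pass on to consumers
    adoption:     share of enplanements at airports charging the new cap
    """
    # Price effect felt by consumers at adopting airports
    fare_increase = (new_pfc - current_pfc) * pass_through
    pct_price_change = fare_increase / (avg_fare + current_pfc)
    # Demand response, weighted by how many airports adopt the higher cap
    demand_effect = elasticity * pct_price_change * adoption
    adjusted_pax = enplanements * (1 + demand_effect)
    # Collections blend the new and old charge across adopting airports
    scenario_total = adjusted_pax * (adoption * new_pfc +
                                     (1 - adoption) * current_pfc)
    baseline_total = enplanements * current_pfc
    return scenario_total, baseline_total, demand_effect

scenario_total, baseline_total, demand_effect = pfc_scenario(enplanements=800e6)
print(f"Baseline collections: ${baseline_total / 1e9:.1f}B")
print(f"Scenario collections: ${scenario_total / 1e9:.1f}B")
print(f"Passenger change: {demand_effect:+.2%}")
```

Under these assumptions, collections rise substantially while passengers decline by well under one percent, mirroring the direction of the tradeoff GAO's model projects; lowering the pass-through or adoption assumptions shrinks both effects.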
Despite overall economic growth in the United States during the 1980s, the economic and social health of many cities declined. While crime, poverty, and the physical and social deterioration of urban neighborhoods increased, intergovernmental aid to cities declined between 1980 and 1993 by about 19.4 percent in constant dollars. Meanwhile, the out-migration of many middle-income residents and businesses has caused city tax bases to shrink, hampering the ability of local governments to assist economically and socially distressed areas suffering from a mix of interrelated problems. Over the past several decades, the public and private sectors have tried different strategies to assist people living in distressed communities. Some of these efforts have focused on improving the chances for individuals in these areas to obtain the education, social services, and other support they need to leave their neighborhoods. Others have focused on improving the neighborhood’s physical environment through affordable housing or economic development. Still others have combined aspects of both approaches by addressing the needs of residents and their environment. These latter efforts are referred to as comprehensive by community development experts because they consider the housing, social service, and economic development needs of the community. They are considered community-based because they focus on a specific geographic area and involve the residents in planning and implementing the effort. Comprehensive community-based efforts have often begun within a community in response to neighborhood conditions—rather than in response to a federal program—and are operated by local nonprofit organizations. While the structures of these organizations and the programs they provide vary, figure 1.1 illustrates a likely design for a comprehensive community-based development effort. 
During the 1960s, as a part of its overall strategy to better serve the needs of the poor, the federal government supported broad comprehensive initiatives, such as the Community Action Program (CAP) and Model Cities. CAP established community action agencies (CAA) at the local level to combine and redirect a wide range of federal, state, local, and private resources to make a comprehensive attack on poverty. Participation by beneficiaries and decentralization of decision-making were also major elements of the program. As we reported in 1992, the program lacked sufficient authority and political support at the federal and local levels to influence agencies’ practices and improve service delivery. Model Cities sought to rebuild deteriorated neighborhoods in selected cities by coordinating the array of resources from assistance programs at all levels of government, particularly in housing, education, health, and transportation. Like CAP, Model Cities attempted to unify these efforts into an interrelated system. The program was administered by city demonstration agencies that were an integral part of city administrations. In retrospect, according to our 1992 report, the results of the Model Cities program were mixed because the program lacked incentives to promote cooperation and consensus on priorities. The Model Cities program was terminated as of January 1975 by the Housing and Community Development Act of 1974. The act consolidated seven community development categorical grant programs into the Community Development Block Grant (CDBG) program. Federal support and sponsorship for comprehensive efforts slowed after this, and funding for many community development programs declined in the 1980s. Meanwhile, the private sector, which had started its own comprehensive effort to revitalize distressed communities, continued to shape the comprehensive approach. The Ford Foundation, early in the 1960s, developed the Gray Areas Project in New Haven, Connecticut. 
Its purpose was to address the multiple needs of a distressed inner city neighborhood by rehabilitating existing housing, providing new affordable housing, and addressing residents’ social and economic needs. Experiences from the private and the federal efforts of the 1960s led to the concept of the Community Development Corporation (CDC). CDCs are private nonprofit organizations that focus their efforts on specific distressed geographic areas. As originally envisioned, these groups emphasized economic and physical development as well as social service delivery. Their boards of directors were composed of residents from the area and representatives of concerned businesses and institutions. CDCs typically entered into partnerships with local governments and corporate entities and relied on both public and private funding. Since the early 1970s, the number of Community-Based Development Organizations—also known as CDCs—has more than tripled, according to a Fannie Mae Foundation study. Studies by the National Congress for Community Economic Development indicate that there are currently at least 2,500 CDCs around the country. However, many of these CDCs do not offer comprehensive services but focus primarily on housing production or economic development. As federal involvement in community development declined and private participation grew, entities known as intermediaries evolved to provide CDCs with financial and technical assistance. In 1979, the Ford Foundation created the Local Initiatives Support Corporation (LISC), a national intermediary set up to provide grants, loans, and technical assistance to nonprofit community development organizations. Another prominent national intermediary—the Enterprise Foundation—has focused on strengthening nonprofit housing development groups, forging local housing partnerships, and helping local groups link needed services into housing, as well as on demonstrating creative approaches to community development. 
The federal government also supported the use of national intermediaries. In 1978, the Congress chartered the Neighborhood Reinvestment Corporation (NRC) (42 U.S.C. 8101 et seq.), a public nonprofit corporation. NRC’s mission included the revitalization of declining lower-income neighborhoods and the provision of affordable housing. NRC works with local organizations that are known collectively as NeighborWorks. There are several different types of NeighborWorks organizations, including Neighborhood Housing Services (NHS). NHSs are partnerships of local business leaders, local government officials, and neighborhood residents that function as NRC’s main vehicle for revitalizing distressed neighborhoods. A major new federal initiative to assist urban and rural communities in their revitalization efforts—the Empowerment Zones and Enterprise Communities (EZ/EC) program—was adopted in 1993 under the Omnibus Budget Reconciliation Act. This program promotes the comprehensive revitalization of distressed communities by funding broad, community-based strategic plans. The bulk of the benefits under the program go to nine areas—six urban and three rural—designated as empowerment zones. Considerably fewer benefits are available to the 95 areas—65 urban and 30 rural—designated as enterprise communities. Although the Department of Housing and Urban Development (HUD) and the Department of Agriculture were responsible for designating the areas, the President also established the Community Enterprise Board—a federal, Cabinet-level entity—to assist in implementing the EZ/EC program. The Board is composed of the Vice President, who serves as its Chair; the President’s assistants for domestic policy and economic policy, who each serve as vice chairs; the secretaries of 10 Cabinet departments; and the heads of several other agencies.
In addition, the Board is tasked with advising the President on how federal programs can be better coordinated across agencies to respond to the needs of distressed communities. Community development initiatives typically rely on a patchwork of different funding and technical support sources from both the public and the private sectors. Federal funds generally flow through state and local governments in the form of block grants or go directly to community organizations in the form of categorical, or program-specific, funding. Additional funding—often to support specific programs or projects—is available directly from state and local governments. Private funding and technical assistance come from a myriad of sources, including intermediaries and foundations. Several federal block grant funding sources are available to community development organizations through state and local governments. Under HUD’s CDBG program, a wide range of neighborhood revitalization activities can be funded. For example, these grants may be used to rehabilitate housing, support economic revitalization projects, and provide public facilities. HUD also offers funding for housing development through the Home Investment Partnership (HOME) program to state and local governments, which may pass a portion of the funds on to eligible housing development organizations. The Department of Health and Human Services (HHS) makes funds available through the Community Services Block Grant (CSBG) and the Social Services Block Grant (SSBG). The CSBG funds can be used for a range of activities to provide social services, such as emergency assistance, employment assistance, and elderly care. The SSBG funds can also be used for a wide variety of social services, including preventing and treating drug and alcohol abuse and training and employing disadvantaged adults and youth in housing construction and rehabilitation. 
The federal government also provides funding to community organizations through many separate programs operated across federal departments. This funding tends to be categorical—designated for specific activities—and must be applied for in accordance with specific program guidelines. For example, HUD offers funding for homeownership through the Housing Opportunities for People Everywhere (HOPE) program and for assistance to the homeless through the McKinney Act programs. HHS provides grants to local entities to develop and implement projects that create jobs for low-income people in distressed neighborhoods through its Community Initiatives Program. It also provides grants for substance abuse prevention and treatment demonstration projects, among other things. Other agencies—including the Departments of Commerce, Education, Justice, Labor, and Transportation; the Environmental Protection Agency; and the Small Business Administration—operate additional programs that are available to community organizations. In addition, various federal tax credit and loan guarantee programs are available to community organizations. Some states and localities administer additional programs and provide grants or loans to community organizations for affordable housing, economic development, and social services. For example, a city government may have its own homeownership program that the community organizations can use. Sometimes, state or local governments provide other types of assistance by donating land or offering to work with lenders to negotiate lower interest rates. In addition, some states and localities provide financing—sometimes tax-exempt—for specific projects. National intermediaries provide grants and loans, technical assistance, and coordination with other organizations. These organizations possess advantages of scale that allow them to give local groups access to tax credits and corporate equity investments, secondary mortgage markets, and lenders’ commitments. 
For example, the National Equity Fund—a subsidiary of LISC—and the Enterprise Foundation use the federal low-income housing tax credit to raise capital for community organizations. In addition to raising funds, NRC’s Neighborhood Housing Service helps form local partnerships of residents, governments, and businesses. Local intermediaries also support community organizations by creating support systems, helping to arrange financing, and providing training and other technical assistance. Foundations provide funding and assistance in a variety of ways. Several national and local foundations have formed direct partnerships with community development organizations. These foundations provide the organizations with funding and technical assistance for planning and executing projects. Other foundations provide grants for specific projects or as “seed” or “glue” money to be used in leveraging additional financing from other sources or to give a project already under way the resources necessary to continue. Commercial banks, businesses, and insurance companies also provide assistance in varying forms to community-based development organizations. Some banks offer loan programs to promote housing, small business, and property development or make below-market-rate mortgage loans for low- and moderate-income housing. Some banks also invest in development projects and local businesses. Businesses and insurance companies have generally contributed to community-based organizations through donations to foundations and intermediaries. However, some businesses work directly with neighborhoods by providing technical support and by donating supplies or products for fund-raising or special events. Other businesses invest by locating their stores or plants in shopping centers or industrial parks within distressed communities. Other organizations, such as universities, hospitals, and religious institutions, also support community-based organizations. 
In some cities, universities and medical centers have teamed up with community-based groups to sponsor neighborhood-based development activities, such as housing rehabilitation or child care. Many community-based development organizations began in church basements. Aside from providing financial support, some religious institutions provide technical assistance. The Subcommittee on Human Resources and Intergovernmental Relations, House Committee on Government Reform and Oversight, asked GAO to assess (1) the reasons why experts advocate a comprehensive approach to community revitalization, (2) the challenges to implementing these efforts, and (3) the ways the federal government might support comprehensive approaches. To respond to this request, we conducted case studies of four comprehensive community revitalization efforts: (1) the Core City Neighborhoods in Detroit, Michigan, (2) the Dudley Street Neighborhood Initiative in Boston, Massachusetts, (3) the Marshall Heights Community Development Organization in Washington, D.C., and (4) the Neighborhood Housing Services in Pasadena, California. We neither evaluated these efforts to determine whether they were successful nor compared the comprehensive approach to single-focused approaches. Instead, we examined the history of each organization to find out why it chose a comprehensive approach and studied the major factors that helped and hindered its efforts. We judgmentally selected our case study sites through consultations with community development experts according to the following criteria: have at least 3 years’ experience; plan housing, social, and economic development; include residents in planning and decision-making; focus on a specific geographic area; and be located in an urban area. These sites varied in their geographic location, style of management, origin (how the effort began and who started it), and evolution (how the effort incorporated housing, social, and economic development).
They also varied in their demographic and economic profiles, differing, for example, in their rates of unemployment and poverty. Table 1.1 summarizes this information. The Core City neighborhood in southwest Detroit was once home to many of the city’s auto workers and was one of the city’s more elaborate business and shopping districts. The neighborhood declined rapidly after the 1967 riots as people and businesses moved out of the area and crime and drug trafficking increased. Now, it is largely vacant, in terms of both people and businesses. Burned-out, abandoned, and boarded-up buildings and vacant lots are scattered throughout the neighborhood. In 1984, a local Catholic parish began community outreach efforts that resulted in the establishment of a nonprofit organization—Core City Neighborhoods (CCN)—that collaborates with other local organizations to provide comprehensive services to the neighborhood (see app. II). The Dudley Street neighborhood—located about 2 miles south of Boston’s major financial and cultural districts—was once a thriving business and residential district. Over a period of nearly 30 years, the neighborhood was effectively isolated from the rest of the city and experienced financial disinvestment, arson, influxes of poor residents, and illegal garbage dumping. Twenty-seven percent of the households in the neighborhood receive public assistance, compared with 12 percent in Boston as a whole. The Dudley Street Neighborhood Initiative (DSNI) began in 1984 as a nonprofit community organizing and planning entity. It collaborates with neighborhood residents, nonprofit organizations, foundations, and city agencies to meet its planned housing, economic, and social service objectives (see app. III). Located in the northeast/southeast area of Washington, D.C., the Marshall Heights neighborhood was once a thriving African-American middle-class residential and business area. 
However, since the 1970s, the community has suffered as its middle-class residents and businesses have moved out of the area. The community is cut off from the rest of the city by the Anacostia River and Interstate 295. It is home to one-third of the city’s public housing units, yet 38 percent of its residents are homeowners. The Marshall Heights Community Development Organization (MHCDO) is a nonprofit CDC begun in 1978 to concentrate on economic development projects that would lead to self-sufficiency for the area’s residents. The organization has since expanded into housing development and social services (see app. IV). Northwest Pasadena is a residential community that consists of older single-family and multifamily units in need of rehabilitation. The majority of the community’s small businesses are unstable, marginally profitable, and undercapitalized. The area has the city’s highest living density and lowest household income. Until recently, the Pasadena city government played a limited role in the community, which, for the last 50 years, has been socially isolated from the rest of the city. Adding to the sense of isolation, a highway was constructed in the early 1970s, displacing many residents and creating a shortage of affordable housing. The Pasadena Neighborhood Housing Services (PNHS) was formed in 1979 as a nonprofit organization after the city asked for help from the federally chartered NRC. PNHS’ initial efforts centered on organizing the community and rehabilitating its housing. However, the organization has since expanded its efforts into economic development and social services (see app. V). To determine why community development experts advocate a comprehensive approach to community revitalization, we convened three expert panels (see app.
VII) to obtain the views of researchers; national intermediaries; government officials; and public interest groups representing community development organizations, social services organizations, and state and local governments. We also reviewed pertinent literature and interviewed leading researchers, foundation representatives, and federal agency officials. In conducting our case studies, we gathered data and interviewed community officials about their choice of a comprehensive approach. We developed information on the structure of the revitalization efforts, the nature of the collaborations the organizations had developed with public and private groups, and the range of funding sources used by the organizations. In addition, we collected demographic and economic data from the Bureau of the Census for our case study cities and for the census tracts that make up the case study neighborhoods. To determine the challenges involved in a comprehensive approach to improving the conditions in the four neighborhoods, we relied primarily on our case studies. We interviewed individuals involved in or having knowledge of the revitalization effort about the primary factors that had promoted or impeded these organizations’ success. Persons interviewed included the executive director and primary staff of the organizations, members of each board of directors, neighborhood residents, state and local government officials, and representatives of major funding organizations and local nonprofit organizations. To the extent possible, we corroborated this evidence by reviewing studies and publications. To identify ways for the federal government to support comprehensive approaches, we discussed the federal role with neighborhood organizations, members of our expert panels, and federal and local government officials. We also reviewed previous GAO reports on community development issues and on social service integration. 
In addition, we reviewed relevant studies, including the National Performance Review’s reports on reinventing government and the National Academy of Public Administration’s report on HUD. We conducted our work between October 1993 and January 1995 in accordance with generally accepted government auditing standards. We discussed the findings in this report with HUD officials, including the Director of the Office of Affordable Housing within the Community Planning and Development Division, who generally agreed with the information presented in the report. We also discussed our findings with the Director of the Office of Community Services within HHS’ Administration for Children and Families, who stated that local communities should be the focus of program decision-making to improve housing, economic, and social conditions in distressed urban neighborhoods. He noted that the experience and participation of the people most directly involved in the neighborhood improvement process—members of the community—are of paramount importance in the effort. The problems in distressed urban neighborhoods are severe and growing worse. Nonetheless, community-based organizations that use a comprehensive approach hold promise for significant, long-term neighborhood improvement, according to experts from government agencies, foundations, and community development programs. Researchers said that such an approach is feasible because community organizations and an infrastructure to support them have evolved over the last several decades. Although comprehensive efforts—including those we reviewed—are diverse, they often share certain characteristics. Typically, they are community-based—focusing on a specific geographic area and actively involving residents—address physical and social needs, and are initiated and sustained through collaborations with both the public and the private sectors. 
The organizations we studied evolved their comprehensive approach as they matured to respond to neighborhood needs. However, the variety of programs offered by these groups and the inability to quantify some of their results make it difficult to measure their impact. In addition, community development experts emphasize that many of these neighborhoods have suffered decades of disinvestment that cannot be quickly reversed. They cautioned that significant improvements in conditions in these neighborhoods may take a generation or longer to achieve. Across the country, distressed communities face an array of escalating physical, social, and economic problems. The number of people in poverty has climbed from 29 million in 1980 to 39 million in 1993. Many of these poor are concentrated in distressed urban communities where poverty and neighborhood distress—as indicated by the rates of poverty and joblessness and the numbers of female-headed households, welfare recipients, and teenage school dropouts—worsened between 1980 and 1990, according to a 1993 study. Studies suggest that these problems are complex and interrelated. For example, a 1989 study reported that 81 percent of the families in poverty face two or more obstacles to achieving self-sufficiency. Such obstacles include joblessness, poor education, reliance on welfare, and poor health. Furthermore, over half of the families face three or more obstacles. According to an Annie E. Casey Foundation study, the vast assortment of interconnected problems, unmet needs, and disinvestment combine to produce dysfunctional and socially isolated neighborhoods. Another study by the Local Initiatives Support Corporation states that problems in low-income communities, such as escalating crime, drug trafficking, joblessness, teen pregnancy, and school dropout rates, are both the causes and the effects of social disorganization.
We found that despite the progress made by the organizations we studied, these same problems exist in our case study neighborhoods. Each of these neighborhoods has significantly higher rates of poverty and unemployment and higher proportions of welfare recipients and school dropouts than the city as a whole (see app. I). In addition, the physical condition of these neighborhoods has deteriorated, and crime rates are high. For example, the Core City neighborhood has a high percentage of vacant land on which burned-out or dilapidated homes stand. A study by the city of Pasadena found a high concentration of violent crimes, neighborhood disturbances, and trafficking in narcotics. In the Marshall Heights neighborhood, most units are vacant in two public housing complexes that are awaiting demolition or renovation. Figure 2.1 shows the conditions that exist in these neighborhoods. Given the conditions in these neighborhoods, community development experts cautioned us that significant improvements may take a generation or longer to achieve. Nonetheless, experts from government agencies, foundations, public interest groups, and community development programs believe that community-based organizations that use a comprehensive approach enhance the chances of significant, long-term neighborhood improvements because they address multiple neighborhood needs. They told us that the conditions in the neighborhoods are interrelated and need to be addressed in tandem if long-lasting results are to be achieved. An expert on comprehensive approaches believes that the comprehensive initiatives were begun not in response to research or theory but rather because of the logical appeal of the approach. She said that there has been an increasing recognition of the limits of narrowly defined, categorical strategies. For example, new housing has been built in many distressed communities without much attention having been given to the social problems facing its occupants.
Social services have been carried out as if in a vacuum, separate from the conditions in the neighborhood. The expert said that each intervention was governed by a separate bureaucracy without any sense of coordination. In contrast, she said, a comprehensive approach recognizes that the problems in distressed communities are interrelated, and it tries to begin change in a number of areas. For example, she said, rather than addressing just one of a family’s needs, such as housing, a comprehensive organization would also attempt to meet the family’s needs for employment, education, child care, training in parenting skills, or treatment for substance abuse. The need to address the interrelated problems in distressed areas through a multifaceted approach is also recognized by researchers, HUD, and HHS. The appeal of the comprehensive approach is that it ensures attention to the interrelationships among the needs of the community by linking human services, physical revitalization, and economic development in a concerted effort, according to a University of Chicago study. A study by the New School for Social Research reported that the problems in distressed communities are “complex and multidimensional and require long-term integrative approaches to their solution.” In addition, HUD endorsed the comprehensive approach in its March 1994 publication entitled Strategies for Community Change in which the Secretary wrote, “We believe the best strategy to community empowerment is a community-driven comprehensive approach which coordinates economic, physical, environmental, community, and human needs.” July 1994 initiatives by HHS’ Administration for Children and Families are also intended to make it easier for community organizations to use HHS programs to meet community needs. Dissatisfied with the results of previous single-focused approaches to community revitalization, national organizations and foundations are also emphasizing a comprehensive approach. 
While they recognize that the comprehensive approach is not new, they said that such approaches are more feasible now than in the past because community organizations have gained experience and an infrastructure for providing funding and technical assistance has evolved. According to the Director of Field Services for the Neighborhood Reinvestment Corporation (NRC), many programs supported by NRC in the past were developed with a housing rehabilitation focus. Over the years, however, the organization has learned that community needs extend beyond housing. As a result, NRC is encouraging its community organizations to make their programs more comprehensive. The Ford Foundation’s Neighborhood and Family Initiative—a multiyear program—uses the comprehensive approach because the foundation believes single-focused approaches to neighborhood problems are not effective in providing for the range of interrelated needs in poor neighborhoods. Additionally, the Annie E. Casey Foundation found that efforts to assist low-income children at risk were insufficient and needed to be augmented with social and economic initiatives that target the whole community. To encourage comprehensive revitalization, the foundation has provided $160,000 in planning grants and is willing to commit up to $3 million to each of five comprehensive organizations that attempt to improve conditions in their neighborhoods, including two of our case study organizations. Finally, the Enterprise Foundation—an intermediary that formerly focused primarily on housing—has begun a Transforming Neighborhoods demonstration in the Sandtown-Winchester neighborhood of Baltimore that brings community residents together with public and private agencies to plan and undertake comprehensive strategies. Although comprehensive efforts are diverse, researchers have found that many—including the four we reviewed—share certain characteristics. 
Typically, they are community-based, focusing on a specific geographic area, and actively involving residents. Although they may evolve differently, they consider the needs of the community holistically so that their efforts confront the range of problems facing the community. Finally, they are frequently initiated and sustained through collaborations with many other organizations. Community-based efforts focus on a specific neighborhood and involve those affected by the problems in shaping strategies to improve conditions in the neighborhood. Several studies have concluded that what distinguishes these efforts from their predecessors—Community Action Programs, Model Cities, and many single-focused efforts—is the extent of residential support for the community organization and its agenda. For instance, a study by the University of Chicago suggests that many of the earlier community efforts did not achieve their goals because they were initiated by outside organizations and did not involve the residents. A study conducted by Rainbow Research stated that significant community development takes place only when residents are committed to investing themselves and their resources in the effort. When residents identify their own needs and take advantage of skills already available in the community to foster their goals, a sense of ownership and community pride develops that allows a change in community conditions, according to community development experts. The experts also said that without residents’ involvement, results were often short-lived. The four organizations we reviewed cited several benefits of residents’ participation in their community-based efforts. First, residents’ participation ensures that an organization’s activities support the real needs of the community. In addition, they said residents’ support and participation gives the organization social and political legitimacy as a voice of the community. 
Residents’ participation also gives the organization a source of support in the form of volunteers to sit on the board of directors, to fill staff positions in the organization, or to assist with specific events or activities. Community leaders said they have also noticed that participation instills a greater sense of pride and hope in the residents. Another common characteristic of these efforts is that they attempt to consider the multiple needs of the residents. According to a study prepared for the Ford Foundation, most comprehensive approaches fit one of three patterns. First, some focus on better coordinating the delivery of existing services toward a more comprehensive approach. Second, some efforts begin with a single focus—such as housing development—but evolve over time to encompass a variety of services and projects. Finally, according to the study, a few efforts begin with a comprehensive agenda. These efforts typically take on the most pressing issue first. They add to their activities in accordance with their overall plan as their organizational capacity grows. Although the four organizations we studied were unique in terms of structure and services, two began with a single focus and evolved toward a more comprehensive approach as needs were identified. For example, in Pasadena, housing services officials began with a housing rehabilitation program and later expanded into community development activities, child care, and economic development. The Marshall Heights community group initially focused on economic development. Although its first project was the renovation of a shopping center, the group soon recognized that this effort alone would not make residents self-sufficient. Over the next decade, the group expanded into housing rehabilitation, drug abuse prevention and treatment, emergency services, and job training.
Core City Neighborhoods in Detroit began, in contrast, with a community organizing effort to identify residents’ needs. The organization established a comprehensive approach to address the identified needs, which included improved housing conditions, crime prevention, business development and improved job opportunities, and enhancement of the neighborhood’s physical appearance. Also, the fourth organization—Dudley Street—identified the development of a comprehensive plan as one of its first objectives. Concurrently, the organization began a campaign to stop the illegal dumping of trash as a mechanism for showing results and gaining community support. As the organization acquired more political power, funding, and staff support, it began addressing the other issues—housing, social services, and economic development—identified in its comprehensive plan. Finally, comprehensive community organizations often collaborate with other local public and private organizations to help use resources more efficiently and to meet residents’ needs. These collaborations may include foundations, schools, social service agencies, and other nonprofit organizations. According to a Ford Foundation study, collaborations can range from a few to several participants and can have either formal agreements of cooperation or informal agreements that include the occasional sharing of information, personnel, supplies, or materials. In addition, these arrangements can be structured through a local institution or government, a consortium of existing institutions, or a specially created independent organization. The four organizations we studied collaborated with other groups to expand their resources and address areas that they would not otherwise have been able to take on. For example, Core City Neighborhoods collaborated with other groups extensively. 
They networked with six other groups to provide social services, such as a parenting skills program and an after-school and summer program for youth. They also collaborated with a local foundation that provides publicity and funding for the organization and with a bank that funds other efforts and provides volunteers. In all four locations we visited, key stakeholders agreed that the comprehensive approach has benefited the community and holds promise for long-term results because the approach has enabled them to provide multiple services and to make these services accessible to community residents. In addition, residents and community leaders from all four locations cited an improvement in the physical appearance of the neighborhood and the attitudes of some of the residents. For example, the Marshall Heights community organization believes that it has improved the quality of life for many residents by bringing services to the community. Residents no longer have to take several buses to obtain emergency services or housing assistance outside the community. From one organization, they can obtain emergency food, temporary housing, homeownership assistance, employment referrals, drug abuse prevention and treatment services, advice on starting a small business, and assistance in cleaning up and organizing the neighborhood. Some of these services were not previously available in the community but were developed by the organization over the last decade as it evolved and recognized the many needs of the community. For example, a lack of services for treating drug-addicted residents prompted the organization to create its own treatment center. The center takes a holistic approach and provides a framework for a wide range of prevention, intervention, treatment, and follow-up services and programs (see app. IV). The Dudley Street organization emphasized the value of being able to help people improve themselves from whatever level they begin. 
For example, one person may need access to elderly care only, while another may need assistance in finding affordable housing and child care. Dudley Street’s goal is to help residents organize to gain access to services the community needs, according to the organization’s executive director (see app. III). In Pasadena, the director of a program for potential small business owners described a number of outcomes from the program that go beyond the acquisition of business skills. Some participants reassess and replace their initial business ideas, others succeed in getting jobs, while some return to school. The director believes that for many of the participants, the motivational benefits gained from learning to organize efforts in pursuit of a goal are often more important than the economic benefits (see app. V). However, each organization stressed that its efforts would require a sustained commitment over a long period of time because of the magnitude of the problems being addressed. The Core City organization in Detroit has developed a 50-year strategic plan, anticipating that the neighborhood’s revitalization will take a considerable amount of time (see app. II). The executive director of the Pasadena organization pointed out that because the housing stock is older and the population transient, the need for housing rehabilitation and social services will be ongoing. Figure 2.2 depicts conditions before and after cleanup and/or renovation in our four case study neighborhoods. The photographs on the first page of the figure, taken during the mid-1980s, illustrate the effects of illegal dumping on vacant lots in the Dudley Street neighborhood. The photographs on the facing page show the results of the Dudley Street organization’s efforts—housing, offices, a restored park, and a mural. 
On the third page, contrasting pairs of photographs depict a shopping center in the Marshall Heights neighborhood before and after rehabilitation, as well as a vacant building that the neighborhood organization converted into a community resource center. The photographs on the final page illustrate improvements in housing and commercial areas achieved through the efforts of three neighborhood organizations. Few empirical studies have captured the long-term impact of groups carrying out a comprehensive approach. According to community development researchers, there are several reasons for the lack of empirical research. First, because these organizations have evolved to respond to the specific needs of their community, each organization is different from its counterparts. Such diversity makes generalization difficult. Second, the results of much of the work these groups do—community outreach, counseling, and referral—are difficult to measure or quantify. According to a University of Chicago study, traditional evaluations are rarely designed to measure the depth and complexity of factors occurring at the neighborhood level or to relate the cause and effect of changes over time. As a result, existing evaluations of these efforts generally focus on tangible benefits, such as the number of goods and services produced, rather than intangible benefits, such as building self-esteem, pride, and hope within the community. The few formal evaluations that have been completed for the four organizations we reviewed were requested and funded by outside organizations. For example, as a prerequisite for participating in an operating support initiative, LISC required and funded an evaluation of the Marshall Heights organization by a consulting firm in 1992. The evaluation pointed out success factors (holistic vision, strong leadership) and weaknesses (inability to integrate programs) and made several recommendations to the organization.
The Pasadena organization is evaluated quarterly by its parent organization, the Neighborhood Reinvestment Corporation. These evaluations focus on financial and program performance. The Annie E. Casey Foundation is developing an evaluation framework for its Rebuilding Communities Initiative. This framework will be applied to the Dudley Street and Marshall Heights organizations to meet a requirement for participation in the foundation’s community revitalization effort. Officials from all four organizations we studied said that they do not formally evaluate their own programs. These officials told us that self-evaluations have not been done because of resource constraints. However, all four community organizations have assessed their activities informally. Some have reviewed their accomplishments each year to ensure that they are meeting the objectives laid out in a strategic planning document. Others have compared their current program offerings with the results of ongoing community needs assessments. The organizations also maintained records of results, such as the number of housing units produced, clients served, or participants involved. They told us that this information is often required by funders. In response to the interrelated problems in distressed communities and out of dissatisfaction with the results of community development efforts over the past several decades, community development experts, foundations, government agencies, and community development organizations are turning to the comprehensive approach. While they recognize that this approach is not new, they believe that it is more feasible now than it was in the past because community organizations are more experienced and an infrastructure to support them has developed. They emphasize that the conditions in these neighborhoods cannot be quickly reversed. In addition, the diversity of these efforts and the difficulty in quantifying some of their results make it difficult to measure outcomes. 
Nonetheless, experts and organizations believe that community-based efforts that involve the residents and consider their needs holistically are promising because these efforts recognize the intertwined nature of the problems confronting these communities and the people who live there. Many challenges confronted the four organizations we studied as they attempted to improve conditions in their neighborhoods. Because many residents were skeptical, a substantial challenge to each organization was gaining the trust of residents and ensuring their involvement in the revitalization effort. In addition, the organizations had to piece together a complex web of funding from several private and public sources—often with restrictions on use—to cover both their program and their administrative costs. They also faced the daunting task of concurrently managing a diverse set of programs to address housing, economic development, and social service needs. These challenges required persistent efforts over many years to build sufficient technical and management skills to operate effectively. Leaders of these organizations said that, to sustain their organization, they have concentrated on building support among diverse groups of residents, gaining access to multiple funding sources, collaborating with other organizations, and developing a cadre of experienced staff. According to officials representing the organizations we studied, involving residents was a challenge because some of them were skeptical, fearful, or apathetic. For example, in one community, the executive director remembered shouting to residents through their front doors and trying to communicate with them through peep holes. He said that residents who opened their doors talked about how nothing they could do would make a difference in the neighborhood. 
A resident in one of the communities we studied said that people were afraid to speak up in community meetings about problems such as drug dealing in their neighborhoods because they were afraid of retaliation. According to these officials, neighborhood conditions and the failures of past community development efforts to address the needs of residents were largely to blame for residents’ feelings. At each case study location, conditions had declined as many middle-class residents and the businesses that served them moved out. Subsequently, poverty increased and related problems grew in these areas (see app. I). Physical isolation from the rest of the city and reductions in both private and public services also affected several locations. Disinvestment, from cuts in police protection to insurance and home mortgage redlining, had been taking place for years. One of the locations contained 2,995 public housing units—one-third of the city’s total units—784 of which were vacant as of January 1994. In addition, many residents remembered previous promises that were broken when budgets were cut or displacements occurred instead of neighborhood improvements. The organizations used a variety of methods to gain the trust of community residents and involve them in the organization. Each organization cited visible accomplishments—rehabilitated housing and economic development projects—as a factor in gaining the trust of residents and reducing their skepticism about the revitalization effort. For example, in one case, residents did not begin to trust the organization until they noticed the development of apartment complexes and the establishment of youth activities. In another case, residents said that the redevelopment of the local shopping center was a visible sign that the organization was serious about improving neighborhood conditions. 
In addition, the organizations we studied conducted extensive neighborhood outreach and organizing campaigns, involved the residents in developing plans to address neighborhood concerns, formed boards of directors with seats designated for residents, hired residents for staff and management positions in the organization, and revisited their plans periodically to obtain residents’ input and to make sure that the plans still met the community’s needs. One of the organizations said that it has yet to involve sufficient numbers of the neighborhood’s public housing residents in the effort. The executive director said that under an Annie E. Casey grant, the organization had begun to plan ways to involve more public housing residents. However, he said that without reducing the concentration of public housing units by creating mixed-income developments, it would be hard to end the feelings of isolation experienced by public housing residents. Community development experts we interviewed agreed. They said that public policy contributes to the isolation of public housing residents by concentrating low-income families in one place and by creating a bureaucratic structure—the public housing authority—that is typically not involved in community development activities. Each of the four efforts we studied was faced with the challenge of funding and managing multiple social service, housing, and economic development programs to address community needs. The four organizations relied on multiple public and private sources, such as federal block grants and program-specific grants, foundation grants, and corporate donations. Identifying and soliciting additional funding sources and establishing collaborations to provide services posed a major challenge for each group. 
Once the funds were obtained and the collaborations were established, the groups were faced with the challenge of concurrently managing multiple programs, each with separate funding sources, application requirements, and reporting expectations. The four organizations found that obtaining funding to meet the diverse needs of the community was difficult and time-consuming. In general, they said that their primary problem with public funding sources could be traced to the proliferation of categorical programs and the programs’ many different application and reporting requirements. For example, one organization said that applying for a $725,000 HUD McKinney Act grant and tracking the program’s reporting requirements demanded one staff member’s full-time attention. Representatives from this organization also said that the reporting requirements for the program tend to focus more on processes and expenditures than on results. Another organization was reluctant to apply for a HUD neighborhood development program because the cost of hiring someone to write a proposal was too high compared with the likelihood of being funded. Representatives from three of the organizations said that they have turned down funding from certain federal programs or have chosen not to apply for some federal grants because the programs were not flexible enough to be used to address community needs. For example, one organization decided not to apply for a community development initiative loan from HUD because it did not believe that the repayment term was realistic for the planned project. Another organization does not use federal funding for some of its programs because beneficiaries would be required to meet stricter eligibility standards than the organization deems reasonable. 
A third organization intended to use funds from HUD’s Nehemiah Grants program to support its development of new homes in the community. However, since mortgages supported by a program grant could not be assumed by future homebuyers, the organization could not ensure that the housing would remain affordable. Because of this restriction, the organization decided not to accept the funding. In response to these problems, each of the four organizations we reviewed developed diverse funding sources to support its programs. All four organizations used funding from federal, state, and local programs and received support from foundations and corporations. Overall, the organizations relied on public funding for about 30 to 60 percent of their budgets. Much of this funding was obtained through CDBG or CSBG—two relatively flexible federal grant programs. The organizations credited these programs with providing a long-term stream of funding for a wide range of services. Total organizational budgets for 1993 ranged from about $500,000 to about $2,600,000. Table 3.1 lists the major funding sources used by the four organizations. The four organizations said that they were able to develop multiple funding sources more easily after they had accumulated a record of accomplishments and small amounts of funding—seed money—that they could use to leverage more resources. For example, a city official in one case study location informed us that the city continues to provide funds because of the effort’s established history and effective use of funding for viable projects. Similarly, two foundations involved with another case study organization described the effort as a good investment because of the organization’s proven track record and strong leadership. The Marshall Heights organization cited its use of $25,000 in CDBG funding to leverage $3.2 million in private funds to rehabilitate its shopping center.
Each of the organizations we reviewed also increased its capacity to address community needs by collaborating with other organizations, such as housing developers, churches, local governments, private corporations, and other nonprofit organizations. Representatives from the organizations said that collaborating—while difficult and time-consuming—allowed them to use the skills and expertise of other organizations without necessarily developing the same capacity themselves. Two of the organizations relied on collaborations with other organizations to expand their network of services. The other two organizations provided most of the services themselves but relied on collaborations to supplement their programs. In both instances, the collaborations increased the resources available to the organization. For example, one organization established a collaboration with an existing nonprofit housing developer who agreed to complete the housing development portion of the organization’s comprehensive plan. The other organization worked with a local fund-raising organization that helped raise more than $133,000 over a 4-year period and provided an attorney to untangle building titles, architects to handle redesigns, and many volunteer hours and consultations with other professionals. Each of the groups we studied also faced the challenge of managing an organization that operates—or facilitates the delivery of services through—multiple, concurrent, and diverse programs. All of the organizations said that the number of programs they operated had increased over the last 10 years in response to community needs. In each case, increases in the number of programs created a strain on the organization’s managerial and administrative capacity. For example, during a 4-year period, the staff of one of the organizations we studied doubled in size and the operating budget nearly tripled with the addition of major programs to produce affordable housing and provide social services.
According to an organizational assessment prepared for the group, the expansion in programs put a strain on the existing management systems, staff, and finances. The different funding sources needed to support the organization’s many programs created a strain on the financial system because each program had a different set of expenditure definitions and reporting requirements and, therefore, had to be tracked separately. In addition, the collaborations developed by these organizations sometimes caused management strains because they were time-consuming and occasionally created competition. One organization said that a great deal of time had to be spent on building consensus before collaboration could occur because the groups were used to competing for funding. Another organization said that collaboration can be costly and difficult because it requires bringing together many different groups that have to cooperate and share power. In another neighborhood, an organization official cautioned that the executive director can be perceived as a political threat to city officials who believe that, as a recognized leader in the community, the executive director may run for office one day. Each organization said that these management challenges required persistent efforts over many years to build sufficient capacity to operate effectively. They said that one way they built such capacity was to develop a cadre of experienced staff members—both from within the community and from outside it. For example, one organization has received assistance in maintaining its staff levels by obtaining administrative funding from foundations. Another responded by hiring long-time board members—who were also neighborhood residents—as staff. Two organizations also developed leadership below the executive director position by creating deputy director positions.
In addition, the charisma and enthusiasm of staff and leaders were cited by each organization as key ingredients that helped them through difficult times. Organizations using a comprehensive approach face multiple challenges. Community skepticism caused by declining neighborhood conditions and the failures of some previous programs makes involving residents difficult. The need to fund multiple programs and to manage them once funding is secured also poses challenges. The number and diversity of funding sources these organizations use create demands on staff time because the organizations must concurrently manage multiple programs, each of which has separate application requirements and reporting expectations. Despite such challenges, the organizations we studied have managed to sustain their comprehensive approach by employing several strategies, including ensuring residents’ participation in the revitalization effort, developing consistent and diverse funding sources and collaborations with other organizations, and making organizational changes where necessary to respond to an increasing number of programs. Historically, coordination has been limited across and within the federal departments and agencies that have responsibility for programs intended to assist distressed communities. Agencies have tended not to collaborate with each other for a variety of reasons, including concerns about losing control over program resources. Recently, the federal government has taken steps to improve interagency coordination and reduce fragmentation by consolidating and streamlining some of the federal programs intended to assist distressed communities. If fully implemented, these efforts could help the federal government become more supportive of comprehensive revitalization efforts. The federal government assists distressed urban communities and their residents through a complex system involving at least 12 federal departments and agencies. 
Together, these agencies administer hundreds of programs in the areas of housing, economic development, and social services. For example, in previous work we reported that there are at least 154 employment and training assistance programs, 59 programs that could be used for substance abuse prevention, and over 90 early childhood development programs. A guidebook to federal programs available for the Empowerment Zones and Enterprise Communities program identified over 50 programs as a “sample” of the universe of federal programs that agencies could consider in developing their revitalization plans. Considered individually, many of these categorical programs make sense. But together, they often work against the purposes for which they were established, according to a National Performance Review (NPR) report. According to Office of Management and Budget (OMB) officials we interviewed, one reason for limited coordination among the many federal programs with similar goals and objectives is that federal agencies have become more protective of their programs as resources have grown scarcer. These officials and a community development expert also believe that agencies are concerned that collaboration and coordination could lead to a loss of control over program resources. Moreover, the OMB officials believe that federal efforts to maintain program structures and funding levels have constrained opportunities to identify and resolve instances of programmatic overlap, regulatory burden, and limited access to funds. In addition, previous efforts at coordination have generally been unsuccessful. In earlier work, for example, we found that the federal government had set up a patchwork of parallel administrative structures to deliver an estimated $25 billion annually in employment and training services. 
Many of these programs target the same population, yet despite decades of attempts to improve coordination, conflicting program requirements continue to hamper administrators’ efforts to coordinate activities and share resources. In the area of social service delivery, evaluations of previous coordination efforts have found that such initiatives were unable to coordinate different categorical programs at the federal level and have had only limited success at the local level. Even within federal agencies, programs are sometimes fragmented and uncoordinated. For example, in fiscal year 1993 HUD’s Office of Community Planning and Development administered several programs that provided about $5.4 billion to states, local governments, and public and private nonprofit groups for (1) affordable housing, (2) community and economic development, (3) assistance to the homeless, (4) infrastructure, and (5) social services. Until HUD recently began efforts to consolidate four of these programs, applicants had to complete four different applications and prepare two plans. In addition, each program operated on its own schedule and required lengthy progress reports that included little information on the program’s accomplishments. HUD reported that these requirements were pushing communities away from comprehensive planning and toward compartmentalized thinking. The proliferation of federal programs imposes a burden on local organizations that attempt to piece together programs to serve their communities. As we mentioned in chapter 3, the neighborhood organizations we studied found it burdensome to manage multiple programs with individual funding streams, application requirements, and reporting expectations. In addition, one organization reported that it had strained its managerial and financial systems to meet federal record-keeping and accounting standards for several funding sources. 
While the organization implemented the necessary procedures to comply with the standards, officials said that the administrative burdens nearly forced the organization to reduce the scope of its services. Recently, in response to recommendations by NPR to reduce the administrative burden of federal programs and make federal programs more responsive, a number of initiatives have been undertaken. Some of these initiatives may eventually aid communities currently taking or planning to take a comprehensive revitalization approach. These initiatives include (1) governmentwide programmatic and managerial changes intended to “reinvent” federal departments and agencies, (2) program consolidation and streamlining measures designed to reduce fragmentation among some federal programs and reduce administrative burdens on recipients of federal funding, and (3) the establishment of the Community Enterprise Board. Created in 1993, NPR undertook a broad review of the federal government’s management and operations in an attempt to “reinvent” the way departments and agencies do their work. Among its emphases were recommendations on how major government programs could improve their operations by enhancing their responsiveness to customers’ needs. To implement these recommendations, Executive Order 12862 was issued, requiring executive branch departments and agencies to establish and implement customer service standards. As an initial step in this process, for example, HHS identified its partners, direct and indirect customers, and stakeholders. HHS plans to set standards for its partners—most often state and local governments—and then establish standards for its “ultimate customers,” such as substance abuse clients, Head Start families, and children in foster care. 
To achieve its customer service goals, HHS intends to consult with state and local governments and service providers when it formulates new policies and regulations that affect its partners and the individuals and families who receive services. To reduce the level of fragmentation among federal programs used to assist distressed communities and their residents, the federal government has also taken steps to streamline application processes and consolidate some programs. For example, HUD recently issued a proposed rule to consolidate into a single submission the planning and application requirements for several formula grant programs administered by its Office of Community Planning and Development. These include CDBG, Emergency Shelter Grants, HOME Investment Partnerships, and Housing Opportunities for People With AIDS. The proposed rule would also consolidate the reporting requirements for these programs, requiring one performance report instead of several program-specific reports. Other agencies that have taken steps to consolidate programs include HHS, Education, and Labor. However, according to OMB officials and public policy researchers, a significant reduction in the level of program fragmentation has historically been difficult to achieve because of the congressional subcommittee structure, the protectiveness of agencies toward their programs, and the strong support of constituent groups for particular programs. Nonetheless, HUD has announced plans, pending congressional approval, to consolidate 60 of its major programs into 3 flexible performance-based funds. The funds would be designed to give state and local governments the flexibility to develop local plans for community and housing needs that, by their nature, would vary from jurisdiction to jurisdiction and change from year to year. 
The Community Enterprise Board was established by executive order in September 1993 to assist with the implementation of the Empowerment Zones and Enterprise Communities program and to advise the President on how the federal programs available to assist distressed communities can be better coordinated across agencies. To improve such coordination, the Board has been tasked with (1) developing an inventory of all programs providing physical, social, and economic assistance to distressed communities and their residents, (2) identifying programs or policies that overlap and/or conflict, and (3) developing innovative strategies to collaborate on ways to accomplish common program objectives. While the experts we interviewed agreed that an entity such as the Board is needed to coordinate the federal programs available to assist distressed communities, they also said that in the past such efforts have not been very successful. If the Board is to fulfill this mission, it will require high-level departmental commitment and open dialogue, according to the experts. According to a recent study on HUD by the National Academy of Public Administration (NAPA), flexibility should be a primary criterion in any decision on consolidation reached by the Congress and the administration or in any of the programmatic changes undertaken in the interim. Among the ways to ensure this flexibility are (1) to build in appropriate waiver provisions (statutory or regulatory) for new or demonstration programs so that communities can quickly get them under way or make community-specific changes, (2) to provide sufficient flexibility in funding major program areas so that the Secretary of HUD has a range of options for addressing the varied and changing needs of communities, and (3) to limit the number of competitive awards by providing more funds through block grants. 
The federal government’s approach to assisting economically and socially distressed communities has led to the creation of numerous individual programs intended to address specific needs faced by these communities. Considered individually, many of these categorical programs make sense. But together, as the NPR report noted, they often work against the purposes for which they were established. Because previous federal efforts to consolidate or streamline programs have had only limited success, local organizations must still piece together programs to serve their communities. Nonetheless, we believe that consolidation measures such as those HUD has proposed, if fully implemented, could make it easier for communities to plan and undertake a comprehensive approach to neighborhood improvement.

Pursuant to a congressional request, GAO reviewed the multifaceted approaches that community-based nonprofit organizations in Boston, Detroit, Pasadena, and the District of Columbia have taken to improve conditions in their distressed urban neighborhoods, focusing on the: (1) reasons development experts and practitioners advocate a comprehensive approach; (2) challenges community organizations will face implementing a comprehensive approach; and (3) difficulties the federal government may have in supporting comprehensive approaches.
GAO found that: (1) community development experts advocate a comprehensive approach to address the complex and interrelated problems of distressed neighborhoods; (2) practitioners in the four locations reviewed believe that a comprehensive approach is feasible because community organizations and supporting networks are already present; (3) conditions in distressed neighborhoods cannot be quickly reversed and evaluating the results of community outreach efforts will be difficult because these efforts are not easily quantifiable; (4) community-based nonprofit organizations must overcome community skepticism, inadequate resident participation, a complex funding system, and the difficulties in managing a diverse set of concurrent housing, economic development, and social service programs to improve conditions in their neighborhoods; (5) organization leaders believe that to sustain their efforts they need to concentrate on building residents' support, gain access to multiple funding sources, and develop an experienced staff; (6) federal departments and agencies have not coordinated their efforts to assist distressed communities because they have separate missions and concerns about losing control over their resources; and (7) recent federal initiatives to consolidate programs could help the federal government become more supportive of comprehensive community development efforts.
“Oversight of a contract”—which can refer to contract administration functions, quality assurance surveillance, corrective action, property administration, and past performance evaluation—ultimately rests with the contracting officer, who has the responsibility for ensuring that contractors meet the requirements set forth in the contract. However, contracting officers are frequently not located in the area or at the installation where the services are being provided. For that reason, contracting officers designate CORs via an appointment letter to assist with the technical monitoring or administration of a contract on their behalf. CORs serve as the eyes and ears for the contracting officer and act as the liaisons between the contractor, the contracting officer, and the unit receiving support or services. CORs are responsible for tasks identified in the contracting officer’s appointment letter that may include (1) providing daily contract oversight, (2) performing quality assurance reviews, (3) monitoring contract performance, and (4) assessing technical performance. CORs cannot direct the contractor by making commitments or changes that affect price, quality, quantity, delivery, or other terms and conditions of the contract. CORs have also been tasked with other contract-related duties such as preparing statements of work, which provide the requirements or specifications of the contract, developing requirements approval paperwork, and preparing funding documents. Although CORs are non-acquisition personnel, they can have acquisition-related responsibilities—particularly those related to contract oversight. CORs are not usually contracting specialists and often perform contract management and oversight duties on a part-time basis in addition to performing their primary military duties, such as those performed by an infantryman or a quartermaster specialist.
DOD defines the term “contingency contract” as a legally binding agreement for supplies, services, and construction let by government contracting officers in the operational area, as well as other contracts that have a prescribed area of performance within a designated operational area. These contracts include theater support, external support, and systems support contracts. Theater support contracts are awarded by CENTCOM Joint Theater Support Contracting Command contracting officers, assisted by CORs. When DCMA is not designated responsibility for administrative oversight of a contract, the contracting officer who awarded the contract is responsible for the administration, management, and oversight of the contract. These contracting officers, such as those from the CENTCOM Joint Theater Support Contracting Command, often appoint CORs to monitor contractor performance. CORs appointed by the CENTCOM Joint Theater Support Contracting Command are typically drawn from units receiving contractor-provided services. In Afghanistan, CORs that have been appointed to contracts administered by DCMA report oversight results to DCMA personnel. For contracts not administered by DCMA, CORs provide oversight information to the contracting officer. In Afghanistan, the CENTCOM Joint Theater Support Contracting Command directs requiring activities (units receiving contractor-provided goods and services) to nominate CORs for all service contracts valued at more than $2,500 with significant technical requirements that require ongoing advice and surveillance from technical/requirements personnel. The contracting officer may exempt service contracts from this requirement when the following three conditions are all met: (1) the contract will be awarded using simplified acquisition procedures; (2) the requirement is not complex; and (3) the contracting officer documents the file, in writing, as to why the appointment of a COR is unnecessary.
Although DOD requires CORs to receive training and has taken some actions to enhance training programs, CORs we met with in Afghanistan do not always receive adequate training to prepare them for their contract management and oversight duties. DOD requires that CORs be qualified by training and experience commensurate with the responsibilities to be delegated to them. According to DOD officials, the current training might qualify CORs to monitor contractor performance generally, but it does not necessarily make them sufficiently capable for their particular assignments. DOD officials have acknowledged gaps in training. For example, required DOD training taken by CORs did not fully address the unique contracting environment that exists in Afghanistan, which includes large numbers of Afghan contractors with limited experience and qualifications. Further, the instability and security aspects of remote locations throughout Afghanistan, coupled with an undeveloped infrastructure, impede the CORs’ ability to communicate with and rely upon acquisition personnel, such as contracting officers, for support and guidance. Additionally, not all of the required training for CORs was conducted, and some other oversight personnel were not being trained. In Afghanistan, much of the daily surveillance of contractors supporting military operations is performed by CORs. The Federal Acquisition Regulation (FAR) requires that quality assurance, such as surveillance, be performed at such times and places as may be necessary to determine that the supplies or services conform to contract requirements. DOD guidance requires CORs to be trained and assigned prior to award of a contract. DOD training is intended to familiarize the CORs with the duties and responsibilities of contract management and oversight.
Contracting organizations such as the CENTCOM Joint Theater Support Contracting Command require that personnel nominated to be CORs complete specific online training courses, as well as locally developed CORs overview training (referred to as Phase I), and contract-specific training provided by contracting officers in theater (referred to as Phase II), before they can serve as CORs in Afghanistan. The guidance notes that, at a minimum, Phase II training will consist of contract-specific responsibilities, including file documentation; terms and conditions of the contract; specifics of the performance work statement; acceptance of services procedures; invoice procedures; technical requirements; monthly reporting procedures; and contractor evaluation—all specific to their assigned contract. DOD has taken some actions to enhance training programs to prepare CORs to manage and oversee contracts in contingency operations, such as in Afghanistan. For example, DOD developed a new training course for CORs with a focus on contingency operations and developed a more general certification program for CORs, including the contingency operations course as a training requirement when it is applicable. DOD also took steps to institutionalize operational contract support by including some CORs-related training in professional military education programs and by emphasizing the need for qualified CORs through discussions of their responsibilities in joint doctrine and other guidance, including Joint Publication 4-10—Operational Contract Support, the Defense Contingency Contracting Officer’s Representative Handbook, and memoranda issued by the Deputy Secretary of Defense.
Our analysis of DOD’s CORs training and interviews with over 150 CORs and contracting personnel from over 30 defense organizations, such as the regional contracting centers and the DCMA in Bagram, Kabul, Kandahar, and Camp Leatherneck, Afghanistan, indicated that some gaps and limitations existed in DOD’s training programs, leaving CORs not fully prepared to perform their contract management and oversight duties. For example, the training for CORs is generally focused on low-risk contract operations and does not fully address the unique contracting environment that exists in Afghanistan, such as the extent of inexperience of Afghan contractors, the remote and insecure locations of project sites, the underdeveloped infrastructure, and constraints on the movement and deployment of oversight personnel, especially acquisition personnel. More specifically, the required CORs training does not include information about important issue areas like the Afghan First Program, which encourages an increased use of local personnel and vendors for supplies and services as part of the U.S. counterinsurgency strategy, and working with private security contractors. Some CORs in Afghanistan told us they were unaware of the challenges in working with Afghan contractors and thought contracting with them would be similar to contracting with U.S. vendors. However, according to some of the CORs and other contracting personnel we interviewed, providing oversight of Afghan contractors was more challenging than was the case with other vendors because the Afghan contractors often did not meet the timelines specified in the contract, did not provide the quality products and services the units had anticipated, and did not necessarily have a working knowledge of English. Further, these officials told us that Afghan contractors were not always familiar with the business standards and processes of the U.S. government.
For example, one COR told us during our visit in February 2011 that a unit was still waiting for barriers that it had contracted for in May 2010. According to that COR, while some of the barriers had been delivered, the unit had not received all of the barriers it required even though the contract delivery date had passed. Other CORs, contracting officials, and commanders described similar situations in which services were either not provided as anticipated or were not provided at all. Because of gaps in training, CORs did not always understand the full scope of their responsibilities and did not always ensure that the contractor was meeting all contract requirements. As a result, according to contracting officials, items such as portable toilets, gates, water, and other items or services were not available when needed, raising concerns about security, readiness, and morale. Contracting officials from over 30 defense organizations and units in Bagram, Kabul, Kandahar, and Camp Leatherneck whom we spoke with noted similar problems with construction contracts awarded to Afghan contractors. For example, according to another COR, an Afghan contractor was awarded a $70,000 contract to build a latrine, shower, and sink unit. The COR told us that the contractor was unable to satisfactorily complete the project and so another contract was awarded for approximately $130,000 to bring the latrine, shower, and sink unit to a usable condition. Because of inadequacies in training, CORs did not always understand that they had the responsibility to ensure that the terms of the contract were met and therefore did not bring contractors’ performance issues to the contracting officer’s attention for resolution. Similarly, DOD contracting officials provided us with documentation of other construction problems, including a shower/toilet facility built without holes in the walls or floors for plumbing and drain (fig. 
1), and facilities that were constructed with poor-quality materials such as crumbling cement blocks (fig. 2). The Special Inspector General for Afghanistan Reconstruction has also reported significant construction deficiencies related to contracting in Afghanistan, including poorly formed and crumbling cement structures attributable to the lack of CORs training and oversight. Because of the nature and sensitivity of security contracts, CORs for private security contractors’ contracts have unique responsibilities. For example, during the period of our review, under guidance in place prior to June 2011, CORs were responsible for compiling a monthly weapons discharge report and for ensuring contractor adherence to contractual obligations on topics such as civilian arming requirements, personnel reporting systems, property accountability, and identification badges. According to a senior military officer with U.S. Forces Afghanistan’s private security contractor task force, because of gaps in training, CORs did not always understand the full scope of their responsibilities and so did not always ensure that a contractor was meeting all contract requirements. He noted that CORs did not always understand that they had the responsibility to ensure that the terms of the contract were met and therefore did not bring contractors’ performance issues to the contracting officer’s attention for resolution. As a result, DOD may pay contractors for poor performance and installations might not receive the level of security contracted for. Further, we found that the training programs lacked specifics on the preparation of statements of work or documents required for acquisition review boards—two contract management responsibilities that CORs in Afghanistan were routinely tasked to do. 
Although the development of a statement of work involves a variety of participants from the contracting process, a COR may be uniquely suited to have an early impact on the development of a complete and accurate statement of work. The Defense Contingency Contracting Officer’s Representative Handbook describes statements of work as specifying the basic top-level objectives of the acquisition as well as the detailed requirements of the government. The statement of work can provide the contractor with “how to” instructions to accomplish the required work. It could provide a detailed description of what is expected of the contractor and forms part of the basis for successful performance by the contractor and effective oversight of contracts by the government. Well-written statements of work are needed to ensure that units get the services and goods needed in the required time frame. As we reported in 2000 and 2004, poorly written statements of work can also increase costs and the number of substandard supplies and services provided by the contractor. Based on discussions with contracting personnel from four major bases in Afghanistan responsible for reviewing these documents, statements of work prepared by CORs were vague and lacked the specifics needed to provide units with what they wanted. We were told by multiple DOD officials that some CORs routinely cut and paste information from previous statements of work into their current document without adapting it as needed, resulting in errors that have to be corrected and further extending the time involved in procuring a good or service. Contracting personnel told us of instances in which statements of work had to be rewritten because the original statements of work did not include all the required contractor actions, or because they included incorrect requirements. 
Although there are other DOD contracting personnel involved in the requirement and procurement process, CORs can help to ensure that well-articulated needs are more fully documented at an early stage. DOD contracting personnel responsible for reviewing and approving requests for contract support told us that poorly written statements of work were a principal reason units do not receive the operational contract support they need for sustaining military operations. Because of gaps in training, CORs were unable to prepare well-articulated statements of work that clearly define the warfighters’ needs. For example, DOD contracting personnel told us about a dining facility in Afghanistan that was built without a kitchen because it was not included in the original statement of work, resulting in DOD having to generate a separate statement of work for the kitchen. According to contracting officials and commanders, poorly written statements of work increase the procurement process time, the workload burden on the DOD contracting personnel, and delays and disruptions in critical supplies and services needed for the mission. Moreover, according to DOD, one of the acquisition review boards in Afghanistan, known as the Joint Acquisition Review Board, reviews and recommends approval or disapproval of proposed acquisitions to ensure efficiency and cost effectiveness. DOD contracting personnel responsible for reviewing acquisition proposals told us that delays and disruptions in supplies and services needed by the unit have been attributed to incomplete or incorrect documents, such as statements of work. Since CORs in Afghanistan are heavily relied upon by their units and the acquisition personnel in the development of these documents, it is important that they understand what paperwork is required and how to properly complete it in order to obtain needed goods and services in a timely manner. 
Contracting officials acknowledge the challenges with preparing complete/correct statements of work and DOD is making some effort to address the gaps in training. For example, the Defense Acquisition University provides a training course on preparing requirements documents such as statements of work; however, it is not a DOD requirement for CORs and contracting personnel to complete this training before assuming their contract-related roles and responsibilities. DOD contracting personnel and CORs in Afghanistan told us that the CENTCOM Joint Theater Support Contracting Command contracting officers were frequently unable to provide the required contract-specific training (Phase II) for CORs because they were busy awarding contracts. For instance, a COR whom we interviewed in Afghanistan was directing a contractor to perform construction work or correct deficiencies in performance without authorization from or communication with the contracting officer. Because the COR had never received the required training from the contracting officer, he was not aware that this practice was potentially unauthorized. Without the follow-on Phase II training from the contracting officer, CORs may lack a clear and full understanding of the scope of their contract duties and responsibilities. In contrast, DCMA’s contracting personnel provide specific contract training and mentoring to its CORs because DCMA has full-time quality assurance personnel who have been tasked with providing COR training and assistance. According to DCMA officials, certified quality assurance representatives continue to mentor CORs after their formal training has been completed. Moreover, in addition to CORs, other personnel expected to perform contract oversight and management duties in Afghanistan are not always being trained. 
Joint Publication 4-10 states that military departments are responsible for ensuring that military personnel outside the acquisition workforce who are expected to have acquisition responsibility, including oversight duties associated with contracts or contractors, are properly trained. The Joint Publication also highlights the key role of commanders and senior leaders in operational contract support oversight. However, contracting personnel that we interviewed in Afghanistan told us that military personnel such as commanders and senior leaders did not always receive training on their contract management and oversight duties in Afghanistan and that commanders, particularly those in combat units, do not perceive operational contract support as a warfighter task. Although some contracting-related training is available for commanders and senior leaders, it is not required before deployment. Moreover, DOD has not expanded the professional military education curriculum with additional training offerings on operational contract support—particularly those emphasizing contingency operations—and thus has not fully institutionalized operational contract support in professional military education. Based on our previous findings, it is essential that commanders and senior leaders complete operational contract support training before deployment to avoid confusion regarding their contract roles and responsibilities in managing and overseeing contractors and in nominating qualified CORs. In 2006, we recommended that operational contract support training be included in professional military education to ensure that all military personnel expected to perform contract management duties, including commanders and senior leaders, receive training prior to deployment. DOD has taken some actions to implement this recommendation by developing some Programs of Instruction on contingency acquisition for the non-acquisition workforce to be taught at some of the military and senior staff colleges.
However, commanders and senior leaders are not required to take these courses before assuming their contract management and oversight roles and responsibilities. CORs did not always have the subject area-related technical expertise or access to subject matter experts with those skills to manage and oversee contracts in Afghanistan, especially those contracts of a highly technical and complex nature. The Defense Contingency Contracting Officer’s Representative Handbook indicates that CORs are responsible for determining whether products delivered or services rendered by the contractor conform to the requirements for the service or commodity covered under the contract. Further, the Contracting Officer’s Representative Handbook notes that CORs should have technical expertise related to the requirements covered by the contract. However, according to CORs and contracting personnel we interviewed in Afghanistan, CORs did not have the subject area-related technical expertise necessary to monitor contract performance for the contracts they were assigned to oversee. For example, many of these CORs were appointed to oversee construction contracts without the necessary engineering or construction experience, in part because their units lacked personnel with those technical skills. While DCMA had subject matter experts in key areas such as fire safety available for CORs needing technical assistance, CORs for contracts written by the CENTCOM Joint Theater Support Contracting Command did not have subject matter experts to turn to for assistance, particularly in the construction trades during the time of our visit. As a result, according to officials, there were newly constructed buildings that had to be repaired or rebuilt before being used by U.S. and Afghan troops because the CORs providing the oversight were not able to adequately ensure proper construction. 
According to personnel we interviewed, these practices resulted in wasted resources, low morale, and risks to the safety of base and installation personnel where the deficient guard towers, fire stations, and gates were constructed. Officials stated that it is not uncommon for a COR to accept a portion of the contractor’s work only to find later, upon further examination, that the work was not in accordance with the contract and was substandard. Similarly, officials stated that LOGCAP personnel did not accept responsibility for maintenance of a facility that had been constructed by Afghan contractors until LOGCAP contractors first repaired or replaced wiring and plumbing to meet building codes. Although the CORs were not solely responsible for contract oversight, or for the implications identified above, they could have provided an early verification of contractor performance. More importantly, in the Afghanistan contracting environment the DOD contracting personnel ultimately responsible for oversight—such as contracting officers—were often removed or absent from the remote locations where the work was performed and had no ability to communicate electronically. This resulted in greater reliance on CORs and reduced the opportunity for CORs to identify problems early in the process. The following cases further illustrate the impact of CORs not having the technical skills or support needed to perform contract management and oversight. Although the CORs did not necessarily bear the sole responsibility for the consequences identified below, a well-trained COR might have been able to prevent or mitigate the effects of the problems.
According to officials, a COR prepared a statement of work for a contract to build floors and install tents but failed to include any power requirements necessary to run air conditioners, heaters, and lights because the COR and unit personnel did not have the electrical technical expertise to properly and safely specify the correct power converter package with the original request. Thus, the tents were unusable until the unit used a field ordering officer to order, at an additional cost, the correct power converters. Contracting officials told us that guard towers at a forward operating base were poorly constructed and unsafe to occupy. As shown in figure 3, the staircase was unstable and not strong enough for climbing; it had to be torn down and reconstructed. The COR’s inadequate subject area-related technical expertise or access to subject matter experts prevented the early identification of defective welding on the staircase that rendered it unsafe for climbing the guard tower. A senior engineer inspector official told us the cement block walls that had been accepted by a COR were poorly constructed. The COR did not have the subject area-related technical expertise or access to subject matter experts necessary to properly inspect and reject substandard cement block walls. For example, the contracting official noted large holes in a cement block wall that remained after the wood scaffolding was removed, which rendered the wall unstable (fig. 4). A dining facility expected to service 1,000 military personnel was unused for a year due to emergent construction deficiencies such as electrical and plumbing issues. Contracting officials attributed the construction issues to the shortage of oversight personnel with subject area-related technical expertise or access to subject matter experts in construction.
As a result, according to contracting personnel, repair work to correct the deficiencies was acquired under LOGCAP for $190,000 in addition to the original cost of the contract. The issue of CORs lacking adequate subject area-related technical expertise has been a longstanding problem in DOD. For example, we previously reported in 2006, 2008, and again in 2010 that CORs do not always have the subject area-related technical expertise necessary to oversee contracts. More recently, in November 2011 and August 2011, respectively, the Congressional Research Service and the Commission on Wartime Contracting in Iraq and Afghanistan reported that DOD still needs non-acquisition personnel with the necessary technical and subject matter expertise to perform contractor oversight. The Special Inspector General for Afghanistan Reconstruction has also reported significant construction deficiencies with contracting in Afghanistan as a result of inadequate subject area-related technical expertise on the part of CORs and other contract oversight personnel. Problem areas identified by the Inspector General included low-quality concrete (similar to conditions depicted in fig. 2 and fig. 4) and inadequate roofing installations, which were similar to other deficiencies we identified. Further, based on DOD documentation, the nature of contract work in Afghanistan has become more technical and complex, increasing the number of CORs needed, the amount of time needed to award contracts, and the number of errors during the early stages of the contracting process (e.g., the requirements determination process). Due to the complexity of construction projects in Afghanistan, DOD established an initiative in April 2011 to assign construction inspectors to assist CORs in managing and overseeing construction projects.
According to a DOD memorandum, contracting officers should appoint construction inspectors, in addition to CORs, when the nature of the project requires technical assistance to ensure proper performance of work and when such assistance is available. Because this program was not in effect at the time of our visit in February 2011, we were unable to assess the effectiveness of the use of construction inspectors. However, based on our observations in Afghanistan, there is a shortage of subject area-related technical experts who can serve as construction inspectors in Afghanistan. CORs and other personnel we interviewed in Afghanistan acknowledged the benefit of having subject matter experts in construction as well as in other specialty areas such as food-, fuel-, and electricity-related services. DOD does not have a sufficient number of CORs to oversee the numerous contracts in Afghanistan and, according to some government officials, there are not enough CORs in theater to conduct adequate oversight. The CENTCOM Joint Theater Support Contracting Command requires the nomination of CORs for all service contracts worth over $2,500 with significant technical requirements that require ongoing advice and surveillance from technical or requirements personnel, unless exempted by the contracting officer. Although there is no specific guidance on the number of contracts a single COR should manage, the CENTCOM Joint Theater Support Contracting Command requires that COR nominations signed by the unit commander contain a statement verifying that the COR will have sufficient time to perform assigned tasks. Similarly, the Defense Contingency Contracting Officer's Representative Handbook states that the requiring unit must allow adequate resources (time, products, equipment, and opportunity) for CORs to perform their functions.
In 2004, 2006, and again in 2010, we reported that DOD did not have a sufficient number of trained oversight personnel, and during the course of our review we noted that this situation persisted. Further, we found that CORs do not always have the time needed to complete their oversight responsibilities. While available data do not enable us to determine the precise number of contracts that require CORs, in fiscal year 2011, DOD completed over 35,000 contracting actions on over 24,600 contracts and orders that were executed primarily in Afghanistan. According to contracting officials and CORs we interviewed in Afghanistan, some CORs are responsible for providing oversight to multiple contracts in addition to performing their primary military duty. For example, one COR we interviewed was assigned to more than a dozen construction projects. According to the COR, it was impossible to be at each construction site during key phases of the project, such as the concrete pouring of building footings, wiring installation, or plumbing. Consequently, according to contracting officials, construction on these multiple projects was completed without sufficient government oversight, and problems were not always identified until the building was completed. This often resulted in significant rework, at a cost to the U.S. taxpayer. In another instance, an entire compound of five buildings was built in the wrong location. According to DOD, based on the statement of work, the compound should have been constructed on base behind the security walls but instead was constructed outside the perimeter of the base in a non-secure location. Contracting officials we spoke with in Afghanistan attributed the problem to the numerous contracts managed by the COR and the lack of time to perform contract oversight duties. As a result, according to officials, the buildings (shown in fig. 5) could not be used. The cost of the compound, including the five buildings, was $2.4 million.
In addition, in some cases units did not assign enough CORs to provide oversight. For example, one unit told us that it did not have a sufficient number of CORs to provide proper oversight of dining facility services, including ordering and inspecting food and supplies. Although the unit was able to provide one COR for each dining facility, the dining facilities operate 24 hours a day. Contracting officials expressed concern that there were not enough CORs to provide sufficient oversight of the dining facilities during all shifts of operation. DOD and the services have taken some steps to improve oversight of contracts in contingency operations such as Afghanistan, including developing a new COR training course with a focus on contingency operations; other more general efforts, such as the COR certification program for services acquisitions, may also lead to improvement. However, in our work in Afghanistan we found that CORs are still not fully prepared to oversee the multitude of contracts to which they are assigned, potentially resulting in a significant waste of taxpayer dollars and an increased risk to the success of operations. The current mechanism for training CORs who also perform duties related to the requirements determination process and to the development of requirements documentation continues to have weaknesses because DOD has not yet developed training standards to ensure that these personnel fully understand Joint Operational Area-specific issues such as the Afghan First program, the Counterinsurgency Contracting Guidance, and the details of preparing statements of work and documents required by the contract review boards. As noted in an Army Contracting Command publication, what contracting organizations do and how they do it cannot be foreign to the warfighter.
Military personnel such as commanders, senior leaders, CORs, and other personnel expected to have a role in operational contract support are often not familiar with their contract roles and responsibilities until they reach theater because DOD has not sufficiently expanded the professional military education curriculum and provided more training on contract support with a particular emphasis on contingency operations. Further, having an insufficient number of CORs with the appropriate subject area-related technical expertise or access to dedicated subject matter experts in specialty areas hinders DOD's ability to ensure that operational units obtain vital supplies and services when needed. Moreover, contract management and oversight have become more challenging due to a shortage of oversight personnel, an increase in the number of contracts, a high personnel turnover rate, training burdens, and an increase in the complexity of the work contracted. All of these factors have resulted in delays and errors in the procurement process. Further, as a result of these workload constraints, military personnel serving as CORs are limited in the number of contracts that they can reasonably manage and oversee considering the technical nature and complexity of each contract. Given DOD's heavy reliance on contractors during operations in Afghanistan and the unpredictability of potential future contingencies, it is critical that DOD address these challenges as soon as possible to mitigate the risk to the success of operations, to obtain reasonable assurance that contractors are meeting their contract requirements and that troops are getting what they need to support contingency operations, and to help ensure that tax dollars are not being wasted. To provide for improved oversight of operational contract support, we are recommending that DOD enhance its current strategy for providing contract management and oversight in Afghanistan and other areas of operations.
Specifically, we recommend that the Secretary of Defense take the following four actions:

Direct the CENTCOM Commander, in consultation with the Secretaries of the military departments, to develop standards for training to ensure that CORs are fully trained in contract support in Afghanistan, to include information on the Afghan First program, Counterinsurgency Contracting Guidance, and details on the preparation of statements of work and documents required by the contract review boards.

Direct the Chairman of the Joint Chiefs of Staff and the Secretaries of the military departments to fully institutionalize operational contract support in professional military education to ensure that CORs, commanders, senior leaders, and other personnel expected to perform operational contract support duties are prepared to do so, by integrating and expanding the curriculum and by increasing the number of training offerings on operational contract support with a particular emphasis on contingency operations.

Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with the appropriate CENTCOM officials, to establish and maintain a sufficient number of subject matter experts in specialty areas dedicated to the CENTCOM Joint Theater Support Contracting Command to assist CORs with providing contract oversight.

Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to develop standards regarding the number of contracts that a COR can manage and oversee based on the technical nature and complexity of the contract.

We provided a draft of this report to DOD for comment. In written comments, DOD concurred with our recommendations. DOD's comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate.
DOD concurred with our recommendation that the Secretary of Defense direct the CENTCOM Commander, in consultation with the Secretaries of the military departments, to develop standards for training to ensure that CORs are fully trained in contract support in Afghanistan, to include information on the Afghan First program, Counterinsurgency Contracting Guidance, and details on the preparation of statements of work and documents required by the contract review boards. DOD stated that CENTCOM has identified COR training as a pre-deployment requirement for units and personnel being deployed to Afghanistan, referring to fragmentary order 09-1700, which lists theater training requirements for forces deploying to the CENTCOM area of responsibility. Although the fragmentary order identifies COR training as a training requirement for certain personnel, its wording lacks the specificity to adequately prepare CORs for contract support in Afghanistan. For example, the fragmentary order does not require that CORs be trained on how to use the Afghan First Program and the Counterinsurgency Contracting Guidance or on how to prepare the statements of work and other documents required by the contract review boards. DOD further stated that CENTCOM reviewed and updated pre-deployment training requirements during a conference in early January 2012 but did not provide any specific information on what those updates entailed. DOD also stated that the COR training requirement will remain required pre-deployment training and that an updated version of the pre-deployment requirement will be finalized and released no later than April 2012. Because DOD did not provide any specific details on what changes, if any, to training requirements will be included in its April 2012 update, we are unable to evaluate the extent to which DOD's proposed actions would address our recommendation.
DOD concurred with our recommendation that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff and the Secretaries of the military departments to fully institutionalize OCS in professional military education by increasing the number of training offerings with a particular emphasis on contingency operations to ensure that CORs, commanders, senior leaders, and other personnel expected to perform OCS duties are prepared to do so. DOD stated that the Deputy Assistant Secretary of Defense for Program Support in the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Director of Logistics on the Joint Staff are currently engaged in a study to develop a strategy for OCS professional military education and that DOD recognizes the need for a holistic view of the entire OCS education requirement. DOD said it will assess existing professional military education to recommend OCS learning objectives for appropriate places in existing curricula. Additionally, DOD stated that the Army has recently taken major steps to improve training for commanders, senior leaders, and personnel expected to perform OCS duties. However, DOD did not describe the specific steps it has taken to fully institutionalize OCS in professional military education. Further, while it is commendable that DOD is developing a strategy for OCS professional military education, DOD did not indicate when the strategy would be completed. Until DOD expands the curriculum and increases the number of training offerings on OCS, contract management and oversight in Afghanistan will continue to be hindered.
DOD concurred with our recommendation that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in consultation with the appropriate CENTCOM officials, to establish and maintain a sufficient number of subject matter experts in specialty areas dedicated to the CENTCOM Joint Theater Support Contracting Command to assist CORs with providing contract oversight. DOD stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics will work through the Joint Staff to have CENTCOM identify the requirements for dedicated subject matter experts and to have the military departments source these positions within budget constraints, and that the subject matter experts will be sourced through the normal requirements process. We agree that this proposed strategy has the potential to address our recommendation to establish and maintain a sufficient number of subject matter experts in specialty areas. DOD concurred with our recommendation that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to develop standards regarding the number of contracts that a COR can manage and oversee based on the technical nature and complexity of the contract. DOD agreed that there is a limit to the number of contracts that a COR can support. Further, DOD stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics will develop and publish appropriate standards based on the technical nature and complexity of the contract. We agree that these actions, if fully implemented, would address the intent of our recommendation.
We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Chairman of the Joint Chiefs of Staff; the Under Secretary of Defense for Personnel and Readiness; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Secretaries of the Army, Navy, and Air Force; the Commandant of the Marine Corps; and the Commander of CENTCOM. This report will be available at no charge on GAO's website, http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1808 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this letter. GAO staff who made key contributions are listed in appendix III. To determine the extent to which the required Department of Defense (DOD) training prepares contracting officer's representatives (COR) to perform their management and oversight duties in Afghanistan, we examined guidance, evaluated the content of the required training, and interviewed CORs and senior contracting personnel from over 30 defense organizations and units in Bagram, Kabul, Kandahar, and Camp Leatherneck, Afghanistan. We examined guidance such as Joint Publication 4-10, the Defense Contingency Contracting Officer's Representative Handbook, and the U.S. Central Command (CENTCOM) Joint Theater Support Contracting Command Standard Operating Procedures addressing the COR program to identify training requirements for CORs in contingency areas such as Afghanistan. To evaluate the content of the training, we attended training for CORs at Fort Carson, Colorado, and completed the Defense Acquisition University's online COR contingency courses. We reviewed documents such as the programs of instruction or course syllabi and other related training documents on the curriculum.
We interviewed commanders, senior leaders, and contracting personnel from the Office of the Secretary of Defense, the Joint Staff, the combatant commands, service headquarters, the Defense Contract Management Agency, and defense universities to obtain a comprehensive understanding of what training was available for CORs in Afghanistan. To help determine what knowledge CORs needed to perform their management and oversight responsibilities, we reviewed contract-related documents such as contracts, purchase requisitions, and statements of work. To determine the extent to which CORs have the appropriate subject area-related technical expertise to oversee contracts in Afghanistan, we reviewed the CENTCOM Joint Theater Support Contracting Command Standard Operating Procedure addressing the COR program and the Defense Contingency Contracting Officer's Representative Handbook. We spoke with commanders, senior leaders, senior contracting personnel, and CORs in Afghanistan to understand the degree of subject area-related technical expertise possessed by CORs for the contracts they were assigned to manage and the extent to which subject matter experts were available to provide technical support to CORs. We examined contract-related documents such as contracts and training transcripts to assess the technical requirements of the contracts as well as the technical background of the CORs. To determine the extent to which the number of CORs is sufficient to manage the contracts in Afghanistan, we examined the CENTCOM Joint Theater Support Contracting Command guidance and the Defense Contingency Contracting Officer's Representative Handbook to identify requirements related to the workload of CORs. We interviewed senior DOD contracting personnel and CORs to determine whether there was a sufficient number of CORs to manage the contracts in Afghanistan. In addition, we met with CORs to identify their contract workload and the nature of the contracts they were assigned to manage.
We selected units to interview that would be in Afghanistan and available during the time of our visit based on input from service officials as well as status reports from the U.S. Army, the U.S. Air Force, and the U.S. Army National Guard. To facilitate our meetings with CORs and contracting personnel in Afghanistan, we developed a set of structured questions that were pre-tested and coordinated with service contracting experts to help ensure that we solicited the appropriate responses. We selected and examined photographs of supplies and services provided to us by DOD personnel to best illustrate the nature of the contract support issues we encountered in Afghanistan. During our review, we visited or contacted key officials, CORs, and senior contracting and other contracting personnel from the following DOD components and entities in the United States and in Afghanistan:

U.S. Forces Afghanistan, South
Joint Sustainment Command Afghanistan
Defense Contract Management Agency Logistics Civil Augmentation Program
Regional Contracting Center
101st Combat Aviation Brigade
3rd Naval Construction Regiment
451st Air Expeditionary Wing
1st Brigade Combat Team/4th Infantry Division
Defense Contract Management Agency Logistics Civil Augmentation Program Regional Support Command
Logistics Civil Augmentation Program Division/Marine Headquarters Group, I Marine Expeditionary Force
Marine Aircraft Wing
Marine Logistics Group, I Marine Expeditionary Force
Operational Contract Support team, I Marine Expeditionary Force
C-8 Comptroller, I Marine Expeditionary Force
Camp Leatherneck Commandant, I Marine Expeditionary Force
U.S. Central Contracting Command
Defense Contract Management Agency Logistics Civil Augmentation Program
Senior Contracting Officer – Afghanistan
Task Force Spotlight
Task Force 2010
717th Expeditionary Air Support Operations Squadron
Regional Contracting Center
Combined Joint Task Force Four
Defense Contract Management Agency
Defense Contract Audit Agency
2nd Brigade, 34th Infantry Division
17th Combat Support Sustainment Brigade
Combined Joint Task Force-101 CJ 4 & 8
46th Military Police

We performed our audit work from April 2010 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Cary B. Russell, (404) 679-1808 or [email protected].

In addition to the contact named above, William Solis, Director; David Schmitt, Assistant Director; Carole Coffey, Assistant Director; Tracy Burney, Alfonso Garcia, Christopher Miller, Michael Shaughnessy, and Natasha Wilder made key contributions to this report. Peter Anderson, Kenneth Cooper, Branch Delaney, Mae Jones, and Amie Steele provided assistance in report preparation.
Following up on previous GAO work on this topic, GAO determined the extent to which (1) DOD's required training prepares CORs to perform their contract management and oversight duties, (2) CORs have the subject area-related technical expertise needed to oversee contracts, and (3) the number of CORs is sufficient to oversee the contracts in Afghanistan. GAO conducted field work in Afghanistan and the United States and focused on the preparedness of CORs to manage and oversee contracts in the CENTCOM area of responsibility. The Department of Defense (DOD) has taken steps to enhance its existing training program for contracting officer's representatives (CORs), but the required training does not fully prepare them to perform their contract oversight duties in contingency areas such as Afghanistan. DOD requires that CORs be qualified by training and experience commensurate with the responsibilities to be delegated to them. DOD took several actions to enhance its training program, such as developing a COR training course with a focus on contingency operations. However, GAO found that CORs are not prepared to oversee contracts because the required training does not include specifics on how to complete written statements of work and how to operate in Afghanistan's unique contracting environment. For example, DOD contracting personnel told GAO about opening delays and additional expenses related to the construction of a dining facility, which was originally constructed without a kitchen because it was not included in the original statement of work. In some cases, contract-specific training was not provided at all. In addition, not all oversight personnel, such as commanders and senior leaders, receive training to perform contract oversight and management duties in Afghanistan because such training is not required of them.
Because DOD's required training does not prepare CORs and other oversight personnel to oversee contracts, units cannot be assured that they receive what they paid for. CORs do not always have the necessary subject area-related technical expertise to oversee the U.S. Central Command (CENTCOM) contracts to which they are assigned. Contracting officials noted, for example, that the staircases on guard towers at a forward operating base were poorly constructed and unsafe to climb. The COR assigned to that contract had inadequate subject area-related technical expertise, preventing the early identification of the defective welding on the staircases. According to contracting officials, situations like this often occurred due to the shortage of CORs with expertise in construction. Also, at the time of GAO's field work, CORs for contracts written by CENTCOM contracting officers did not have access to subject matter experts, particularly those with construction experience. According to contracting personnel, because CORs do not have the subject area-related technical expertise needed to oversee contracts or access to subject matter experts, facilities were sometimes deficient and had to be reconstructed at great additional expense to the taxpayer. DOD does not have a sufficient number of CORs to oversee the numerous contracts in Afghanistan. CENTCOM requires CORs to be nominated for all service contracts over $2,500 that, unless exempted, require significant ongoing technical advice and surveillance from requirements personnel. However, there is no guidance on the number of contracts a single COR should oversee. According to contracting officials and CORs GAO interviewed in Afghanistan, some CORs were responsible for providing oversight to multiple contracts in addition to carrying out their primary military duty. For example, one COR GAO interviewed was assigned to more than a dozen construction projects.
According to that COR, it was impossible to be at each construction site during key phases of the project because the projects were occurring almost simultaneously at different locations. Consequently, according to officials, in situations like these, construction was completed without sufficient government oversight and problems were sometimes identified only after facilities had been completed. GAO recommends that DOD enhance its current strategy for managing and overseeing contracts in contingency areas such as Afghanistan by, for example, developing training standards for providing operational contract support (OCS), fully institutionalizing OCS in professional military education, and developing standards regarding the number of contracts that CORs can oversee based on the technical nature and complexity of the contract. DOD concurred with all of GAO's recommendations.
Job Corps was established in 1964 to address employment barriers faced by severely disadvantaged youth throughout the United States. Thirty years later, it remains a nationally operated program at a time when responsibility for other federal training programs, most notably the Job Training Partnership Act (JTPA), has been delegated to state and local agencies. In program year 1993, the most recent 1-year period for which complete spending and outcomes data were available, about three-fourths of the program's total expenditures of about $933 million went to center operating costs, such as staff salaries, equipment, maintenance, and utilities (see fig. 1). The remaining funds were used for student allowances and payments; contracts for outreach, screening, and placement services; contracts with national training providers; and facilities construction, rehabilitation, and acquisition. Currently, 111 Job Corps centers are located throughout the United States, including Alaska, Hawaii, and Puerto Rico (see fig. 2). Although most states have at least one center, four states have no centers—Delaware, New Hampshire, Rhode Island, and Wyoming—while several states have four or more centers (California, Kentucky, Oklahoma, Oregon, New York, Pennsylvania, Texas, and Washington). Private corporations and nonprofit organizations, selected through a competitive procurement process, operate 81 centers; the Departments of Agriculture and the Interior, as required by law, directly operate the other 30 centers, called civilian conservation centers, under interagency agreements. While the program's capacity has fluctuated over the years since its establishment, the current capacity closely approximates its original size. In 1966, about 41,900 slots were available at 106 centers. Today, approximately 41,000 slots are available at 111 centers, ranging in size from 120 slots at a center in California to 2,234 slots at a center in Kentucky.
Appendix III lists the centers, their student capacity, and their operating costs for program year 1993. Job Corps enrolls youth aged 16 to 24 who are economically disadvantaged, in need of additional education or training, and living in a disruptive environment. Enrollments are voluntary, and training programs are open-entry and self-paced, allowing students to enroll throughout the year and to progress at their own pace. Individuals enroll in Job Corps by submitting applications through outreach and screening contractors, which include state employment service agencies, nonprofit organizations, and private for-profit firms. On average, students spend about 8 months in the program but can stay up to 2 years. Each Job Corps center provides services including basic education, vocational skills training, social skills instruction, counseling (for personal problems as well as for alcohol and drug abuse), health care, room and board, and recreational activities. Each center offers training in several vocational areas, such as business occupations, automotive repair, construction trades, and health occupations. These programs are taught by center staff, private contractors, or instructors provided under contracts with national labor and business organizations. Participation in Job Corps can lead to placement in a job or enrollment in further training or education. It can also lead to educational achievements such as attaining a high school diploma and reading or math skill gains. One feature that makes Job Corps different from other federal training programs is its residential component. For example, employment training services under JTPA, the federal government’s principal job training program for the economically disadvantaged, are provided in a nonresidential setting. Under Job Corps, 90 percent of the students live at the centers, allowing services to be provided 24 hours a day, 7 days a week. 
The premise for boarding students is that most come from a disruptive environment and therefore can benefit from receiving education and training in a new setting where a variety of support services are available around the clock. The residential component is a major reason the program is so expensive. While in the program, students receive allowance and incentive payments. For example, initially a student receives a base allowance of about $50 per month, increasing to about $80 per month after 6 months. In addition, students are eligible to receive incentive bonuses of between $25 and $80 each if they earn an exceptional rating on their performance evaluations, held every 60 days. Students can also earn bonuses of $250 each for graduating from high school or receiving a general equivalency diploma, completing vocational training, and getting a job. Students receive an additional $100 if the job is related to the vocational training they received while in Job Corps. Students obtain jobs through a variety of mechanisms, including finding the job on their own, being referred by their vocational instructor, and being placed by the Job Corps center or a contracted placement agency. The last comprehensive study of the effectiveness of the Job Corps program was done nearly 15 years ago. While that study concluded that the program was cost effective—returning $1.46 to society for every dollar spent on the program—more recent audits by Labor’s Inspector General, media reports, and congressional oversight hearings have raised concerns about the program’s operations. Among these are concerns about the quality of training and outcomes in relation to program costs, incidents of violence occurring at some centers, and the overall management of the program. The Job Corps program is the most expensive employment and training program that Labor administers, spending, on average, four times as much per student as JTPA.
According to Labor’s program year 1993 figures, the cost per Job Corps terminee averaged about $15,300. In contrast, the cost per youth terminee (aged 16-22) in JTPA averaged about $3,700. The clientele targeted by Job Corps, as well as the comprehensive services provided to the students, contributes to the high cost of the program. Job Corps seeks to enroll the most severely disadvantaged youth who have multiple barriers to employment. We compared characteristics—at the time of program enrollment—of the 63,000 program year 1993 Job Corps terminees with the 172,000 comparable youth terminees from JTPA. Using JTPA’s definition of hard-to-serve clients, we compared those characteristics that could be commonly applied to both programs—being a school dropout, being deficient in basic skills (reading and/or math skills below the eighth grade), receiving public assistance, and having limited English proficiency. We found that the percentage of Job Corps students with a combination of two or more of these employment barriers was much greater than it was for JTPA participants—about 68 percent of all Job Corps terminees nationwide compared with 39 percent of JTPA terminees (see fig. 3). To address the needs of students with multiple employment barriers, Job Corps provides a comprehensive range of services. Among these services are those associated with the residential component and instruction in social skills. Residential living services include meals, lodging, health and dental care, and transportation. Social skills instruction is a structured program that teaches 50 skills, including working in a team, asking questions, dealing with anger, learning self-control, handling embarrassment, and arriving on time for appointments. Taken together, expenditures for residential living and social skills instruction accounted for about 44 percent of the program year 1993 Job Corps operating costs nationally.
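The two-or-more-barriers comparison above is, at bottom, a cross-tabulation over individual-level records. The sketch below illustrates that tally with hypothetical records and field names; the actual analysis used the SPAMIS and SPIR data files, whose record layouts are not reproduced here.

```python
from collections import Counter

# The four employment barriers commonly defined in both programs.
# Field names are illustrative, not the actual SPAMIS/SPIR variable names.
BARRIERS = ("dropout", "basic_skills_deficient",
            "public_assistance", "limited_english")

def barrier_count(record):
    """Number of the four commonly defined employment barriers present."""
    return sum(1 for b in BARRIERS if record.get(b))

def share_with_multiple_barriers(records):
    """Fraction of terminees with two or more barriers (cf. fig. 3)."""
    counts = Counter(barrier_count(r) for r in records)
    return sum(n for k, n in counts.items() if k >= 2) / len(records)

toy = [
    {"dropout": True, "basic_skills_deficient": True},   # 2 barriers
    {"dropout": True},                                   # 1 barrier
    {"dropout": True, "public_assistance": True,
     "limited_english": True},                           # 3 barriers
    {},                                                  # no barriers
]
share = share_with_multiple_barriers(toy)   # 2 of 4 records have 2+ barriers
```

Run over the roughly 63,000 Job Corps and 172,000 JTPA records, the same tally produces the 68 percent versus 39 percent comparison shown in figure 3.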
At the six centers we visited, we obtained detailed information on program year 1993 expenditures for various Job Corps activities and found that about 45 percent of the funds was spent on residential living and social skills instruction, whereas about 22 percent went for basic education and vocational training and 21 percent for administration (see fig. 4). While Job Corps reported nationally that in program year 1993 about 59 percent of the 63,000 students who left the program obtained jobs, only 36 percent of Job Corps students complete their vocational training (see fig. 5). At the six centers we visited, we found that almost half the jobs obtained by students were low-skill jobs not related to the training provided. However, the students who completed vocational training at these centers were 5 times more likely to obtain a training-related job at wages 25 percent higher than students who did not complete their training. Yet, about 40 percent of program funds at the six centers was spent on students who did not complete vocational training. Using program year 1993 results, five of the six centers we visited would not have met Labor’s current standard for measuring vocational completion—56 percent of vocational enrollees in the program for at least 60 days should complete their vocational training. At the six centers we visited, we analyzed the outcomes for the 2,449 students who had been enrolled in Job Corps for at least 60 days and who also had entered a vocational training program and found that about 44 percent of the students completed their vocational training. As shown in figure 6, the proportion of these students who completed vocational training programs ranged from about 18 percent at one center to about 61 percent at another—overall, about 30 percent completed vocational training.
Overall, students who completed vocational training were 50 percent more likely to obtain a job than those students who did not complete it (76 percent versus 49 percent, respectively). Furthermore, we found that those students who completed their vocational training were more likely to get a training-related job than those who did not complete it. Comparing the types of jobs obtained by students who did and did not complete their vocational training, we found that students who had completed their training were five times more likely to obtain a job that was training related. At the six centers we visited, about 37 percent of the students who had completed vocational training obtained training-related jobs (see fig. 7). In contrast, only 7 percent of those students who did not complete their training obtained training-related jobs. For example, training-related jobs for students who received health care training included nurses’ assistant, physical therapy aide, and home health aide; for those who received training in the skilled construction trades, training-related jobs included painter, carpenter, and electrician. Overall, about 14 percent of all program year 1993 terminees at the six centers received training-related jobs (this consisted of 11.4 percent vocational completers and 2.8 percent noncompleters). Furthermore, we found that the average wage paid to the students who obtained these training-related jobs was 25 percent higher than the average wage paid to students who did not obtain training-related jobs—$6.60 versus $5.28 per hour. About two-thirds of the jobs obtained by students who did not complete their training were in low-skill positions such as fast food worker, cashier, laborer, assembler, and janitor. In order to get a better picture of how much the program spends in relation to the outcomes attained, we analyzed program costs with respect to the amount of time that students spent in the program at the six centers.
We determined that the average cost per student day was $65—ranging from $51 per day at one center to $119 at another center. We used this computation to calculate the cost of various program outcomes at the six centers. At these centers, vocational completers, on average, remained in the program longer than those who did not complete training (400 days versus 119 days, respectively). As a result, these centers spent considerably more on vocational completers. For example, the cost per student who completed vocational training, on average, was $26,219 compared with $7,803 for students who did not complete vocational training. Yet, because less than a third of the students completed vocational training, a large proportion of the centers’ program funds—approximately 40 percent, or about $19 million—was spent on students who did not complete the training. As shown in figure 8, most centers spent at least 50 percent of their funds on students who completed their vocational training. However, one center spent only about 25 percent of its funds on students who completed their vocational training. Nationally, about 66 percent was spent on students who completed vocational training. On the basis of our survey of employers of a random sample of Job Corps students from the six centers, we found that employers were generally satisfied with the basic work habits and technical preparation of the Job Corps students they employed. Although students did not remain with these employers for very long (about one-half worked 2 months or less), the majority of employers said they would hire them again. Because neither Labor nor the Job Corps centers had information on student job retention, we contacted the employers of a random sample of 413 students who obtained jobs.
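The per-day cost allocation described above can be sketched in a few lines. The daily rate and average stays are the six-center averages reported in the text; the student counts are illustrative, and because actual daily rates vary by center, this flat-rate sketch lands slightly below the report's $26,219 per-completer figure.

```python
# Sketch of the cost-allocation method described in the text: center costs
# are spread over paid student days, then attributed to completers and
# noncompleters in proportion to their average length of stay.
AVG_COST_PER_STUDENT_DAY = 65   # dollars; six-center average from the report
AVG_DAYS_COMPLETER = 400        # average stay, vocational completers
AVG_DAYS_NONCOMPLETER = 119     # average stay, noncompleters

def cost_per_student(days, daily_cost=AVG_COST_PER_STUDENT_DAY):
    """Allocate center costs to a student in proportion to paid days."""
    return days * daily_cost

def share_spent_on_noncompleters(n_completers, n_noncompleters):
    """Fraction of allocated costs attributable to noncompleters."""
    completer_total = n_completers * cost_per_student(AVG_DAYS_COMPLETER)
    noncompleter_total = n_noncompleters * cost_per_student(AVG_DAYS_NONCOMPLETER)
    return noncompleter_total / (completer_total + noncompleter_total)

# With roughly 30 percent of 2,449 students completing (counts here are
# illustrative), noncompleters still absorb about 40 percent of funds.
share = share_spent_on_noncompleters(735, 1714)
```

Because completers stay more than three times as long, each completer costs far more, yet the sheer number of noncompleters leaves roughly 40 percent of funds allocated to students who never finish.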
Our survey of employers was intended to validate reported placement data, determine job retention periods, and gauge employer satisfaction with students’ basic work habits and specific technical skills provided by the Job Corps program (see app. II for a detailed description of our methodology). Of the employers who responded, 79 percent rated the Job Corps students’ basic work habits average to excellent. In addition, for those employers reporting that the job matched the training, 85 percent believed the students were at least moderately prepared to handle the technical requirements of the job. Students who obtained jobs upon leaving Job Corps tended not to remain with those employers for very long. Of those students for whom we obtained employment information, about 88 percent were no longer working with their initial employer. As shown in figure 9, approximately 30 percent of the students who were no longer employed in their initial job worked less than a month, while about 20 percent worked 6 months or longer. According to the employers, the predominant reasons students were no longer employed were that they quit (45 percent), were fired (22 percent), or were laid off (13 percent). Our employer survey gave us information that raises concerns about the validity of Job Corps-reported job placement statistics. We tried to contact employers for 413 students whom Labor reported as having been hired. In 34 instances, employers reported they had no record of having hired the student. Two other employers stated they had hired a student, but the student never reported for work. Furthermore, another seven students were not employed but were placed with an employment agency or enrolled in JTPA training. Thus, about 10 percent of the reported job placements appeared to be invalid.
We were also unable to find the employer of record for almost 10 percent of our sample of students (an additional 39 students) using both the telephone number listed in Labor’s records and directory assistance. According to Labor, placement contractors verify 100 percent of the job placements, and Labor regional offices re-verify a sample of at least 50 percent of reported job placements. We provided Labor, at its request, detailed information on the 34 students whom employers reported having no record of hiring and the 39 whose employers we were unable to locate. Labor responded that, in the short time it had available, it was able to verify employment for 44 of these 73 students. However, our review of Labor’s documentation showed that it provided additional evidence to support only 18 placements (12 of the 34 and 6 of the 39). For many of the remaining placements, Labor merely provided the original documents that were on file when we initially attempted to verify employment. In other instances, the data differed from the original documents with respect to the employer and employment dates of record, or verification was made by the student or a relative and not an employer. Thus, we continue to question 15 percent of the placements included in our sample. A substantial part of Job Corps’ vocational training is provided by national contractors on a sole source basis. Our work directed at this long-standing practice raises questions about whether the program and its students are benefiting from this arrangement. On the basis of our review of Labor data, it is uncertain whether the results achieved by the national contractors are much better than those achieved by other Job Corps training providers. Labor has been awarding sole source contracts to nine national unions and one building industry association for over a decade—15 years for one contractor and over 25 years for several others.
Its justification for making sole source awards, rather than using full and open competition, is based on three broad factors: (1) the contractor’s past relationship with Job Corps, that is, experience with Labor’s Employment and Training Administration in general and Job Corps specifically, and its thorough knowledge of Job Corps procedures and operation; (2) the contractor’s organizational structure, that is, a large nationwide membership related to a trade, and its strong relationship with national and local apprenticeship programs; and (3) the contractor’s instructional capability, that is, qualified and experienced instructors; ability to provide training specifically developed for the learning level of Job Corps students; and the ability to provide recognition of training as credit toward meeting the requirements of a journeyman. National contractor expenditures during program year 1993 totaled $41 million, about one-third of Job Corps’ overall expenditures for vocational training. (See app. IV for a listing of the national contractors, contract awards, and the year of their initial award from Labor.) While Labor officials stated that a primary justification for awarding sole source national contracts is that the contractors maintain an extensive nationwide placement network, it is unclear whether the national contractors are any more successful in placing Job Corps students in jobs than are other training providers. According to Labor officials, because these organizations are national in scope, they can identify job openings, regardless of geographic location, and place Job Corps students in the positions. Thus, they are not constrained by the local job market in seeking jobs for their students. However, Labor’s data show that, programwide, very few of the job placements for those trained by national training contractors in program year 1993 were attributed to the national contractors.
According to Labor data, the largest number of job placements (48 percent) were made by “self, family, or friend,” whereas only 3 percent were made by national contractors. The percentage of job placements by national contractors at the six centers we visited was even smaller. Labor data show that less than 1 percent of the placements were made by these contractors. Labor officials acknowledged that the data in their system do not accurately reflect the extent to which national contractors place students because their system was not designed to capture this information. On the other hand, they could not tell us how many placements, in fact, were made by the contractors. Thus, it is unclear whether Job Corps benefits, as contended by Labor officials, from the national contractors’ nationwide placement network. Another reason Labor used in justifying national sole source contracts is that the union contractors are considered to be an effective means for getting Job Corps students into apprenticeship programs. Labor data show that 12 percent of the students in program year 1993 who went through national contractor-provided vocational training courses for at least 90 days were placed in apprenticeship programs. However, we have no basis to determine whether this is acceptable, because Labor does not specify a target level for entry into apprenticeships. Using Labor’s national data, we found only moderate differences in the performance of the national contractors as compared with other Job Corps training providers. In program year 1993, the national contractors had a programwide job placement rate of 59 percent compared with 54 percent for other Job Corps training providers, and a training-related job match of 44 percent compared with 36 percent for others.
Comparisons at the six centers we visited were similar, with a job placement rate of 64 percent for national contractors compared with 59 percent for other Job Corps training providers, although the training-related job match was higher—42 percent compared with 30 percent. The national contractors account for about one-third of Job Corps’ vocational training expenditures, and the training they provide is primarily in a declining occupational category—the construction trades—which represents about 4 percent of the job market. About 84 percent of national contractor training is in construction-related occupations. Similarly, Job Corps in general emphasizes training in the construction trades. Nationally, about one-third of the program year 1993 terminees were enrolled in construction-related training. Likewise, at five of the six centers we visited, about one-third of the terminees, collectively, were trained in one of the construction trades. These trades encompass a number of occupations, including carpenter, cement mason, and bricklayer. Our analysis of Bureau of Labor Statistics data shows that over the past 8 years (1986-1993) the proportion of construction-related jobs in the labor market has declined by almost 10 percent. While Job Corps provides extensive services to a severely disadvantaged population—a program design that inherently leads to high costs—our evaluation has surfaced several issues that we believe merit further investigation. We noted that completing vocational training appears to be very important to achieving a successful program outcome, yet only a little over one-third of the students complete their vocational courses. As a result, a substantial portion of Job Corps’ funds (40 percent at the six sites we visited) is being spent on noncompleters. Turnover is high among students in their initial job following Job Corps training.
The overall implication of this is unknown; are students moving to other, and perhaps better, jobs, or are they becoming unemployed? We also have serious concerns about the validity of reported job placements. These statistics may be overstated by 9 percentage points at the six centers where we conducted our site work. We will continue to pursue these issues. Our work raises questions about Labor’s use of national training contractors to provide a substantial portion of its vocational training. A primary justification for using national contractors is that they are better able to place students in jobs through their nationwide placement network. However, according to Labor data, nearly half of all job placements were found by the student, family, or friends. The use of national contractors may have been prudent in the past, but times have changed. The shifting composition of the labor market, particularly the decline in the construction trades; the high proportion of vocational training funds allocated to national contractor training; and Labor’s lack of information to support its justification for these national contracts raise questions about whether this is the most cost-effective approach to vocational training. To ensure that Job Corps vocational training programs are provided in the most efficient and effective manner, we recommend that Labor revisit whether the continued use of national training contractors is cost effective. In comments on a draft of this report, Labor expressed concerns about certain aspects of our report. In response to our recommendation on the use of national contractors, Labor agreed to review the practice of contracting with national training providers on a sole source basis. The following summarizes its concerns and provides our response. (Labor’s comments are printed in app. V.) Labor pointed out a number of items in our report that it believes should be modified or clarified, and we have done so where appropriate.
Specifically, we have modified our characterization of program growth over the years, included information on a new study of Job Corps’ net impact, revised the percentage of vocational completers nationwide, and revised our presentation of Job Corps student job retention. In addition, we have made a number of other technical changes to our report to respond to Labor’s comments. Labor expressed concern that we did not recognize other program outcomes, such as general equivalency diploma (GED) attainment, and based our conclusions only on vocational completion and job placement. GED attainment and gains in reading and math skills are quantifiable program outcomes experienced by many Job Corps students. In our view, these outcomes are a means to an end—that is, providing students with the basic educational skills needed in the world of work—and not an end in and of themselves. These other measures are an adjunct to the principal measures of vocational completion and job placement. In fact, Labor’s own literature—Job Corps in Brief, Program Year 1993—states that “Employment and enrollment in full-time education or training are the only positive outcomes recognized by Job Corps in its performance measurement systems.” Labor agreed that, as our report states, Job Corps is more costly than other JTPA programs because of its residential nature and the severely disadvantaged population targeted by the program. However, Job Corps suggested a number of alternative cost-effectiveness comparisons, such as comparing Job Corps with community colleges. Our purpose in making the cost comparison with the JTPA title II-C program was to provide context for Job Corps’ high cost, not to show cost effectiveness. Therefore, we believe, and Labor agrees, that using JTPA title II-C for cost comparison purposes is relevant. 
As for comparing Job Corps’ completion rates and cost effectiveness with other institutions like community colleges, this was not the purpose of our report, and we would need to do additional work to try to make a relevant comparison. We do not believe that Labor has justified the relevance of the comparisons made in its comments because the populations served and the institutions’ purposes are vastly different from those of Job Corps. Labor also stated that our cost data, which showed that 40 percent of expenditures at the six centers we visited was spent on noncompleters, were not representative of Job Corps as a whole. In developing our data, we computed an average cost per student day using the centers’ program year 1993 total costs and total number of paid days for all students. We applied this in turn to the total student days spent in the program by completers and noncompleters. We believe that our methodology results in a fair allocation of costs to these student categories. While acknowledging that our computations may be true for the six centers, Labor claims that the national average for expenditures on noncompleters was 34 percent in program year 1993. Nonetheless, we believe that a substantial amount of program resources is being spent on students who fail to complete their vocational training programs. Using Labor’s estimate, Job Corps spent about $328 million on noncompleters in program year 1993. Labor also took issue with our finding that Job Corps’ reported job placement information is often inaccurate. Using information on questionable job placements from our telephone survey, Labor undertook an effort to verify these placements. Our examination of the documentation Labor used to support its verifications shows that many of these placements remain questionable. Of the 73 questionable placements on which we provided information to Labor, it was able to provide additional evidence supporting 18 placements.
We continue to question the remaining placements because Labor provided no additional information beyond that which was on file at the time of our initial verification attempts. In all, we continue to question 15 percent of the placements included in our sample. Labor also raised concerns that we used inappropriate data in concluding that the use of national training contractors to provide vocational training raises questions about whether this is a cost-effective approach. Labor states that the 3-percent placement rate we cite is based on data not designed for this purpose. Our report acknowledges Labor’s assertion that the data do not accurately reflect the extent to which national contractors place students. However, of greater importance is Labor’s acknowledgment that it does not know how many placements were made by the contractors, a primary justification for the continuation of 25 years of sole source contracts. As a result, Labor is paying a substantial portion of its vocational training funds to national contractors but is unable to assess how effective they are in placing students in jobs. Therefore, we believe that our conclusion and related recommendation remain valid. In addition, Labor has agreed to review its practice of contracting with the national training providers on a sole source basis. Labor also took exception to our discussion of the Job Corps program’s emphasis on training in the construction trades. While acknowledging that the construction trades have declined as a proportion of the total job market, Labor stated that they have increased in the total number of jobs, about 80,000 jobs over the 8-year period 1986-93. Labor also pointed out advantages associated with employment in the construction trades and that it may be the most appropriate training for many students. We do not disagree with Labor’s assertion that training in the construction trades may be beneficial for some students.
Nonetheless, we believe that a valid question remains about whether it is appropriate for Job Corps to spend over one-third of its vocational training funds on an occupational category that makes up about 4 percent of the labor market. We are sending copies of this report to the Secretary of Labor; the Director, Office of Management and Budget; relevant congressional committees; and other interested parties. If you or your staff have any questions concerning this report, please call Sigurd R. Nilsen at (202) 512-7003 or Wayne J. Sylvia at (617) 565-7492. Other major contributors include Thomas N. Medvetz, Dianne Murphy, Jeremiah F. Donoghue, Betty S. Clark, and Marquita Harris. We designed our study to collect information on the characteristics of Job Corps students, the services they were provided, and the outcomes they achieved, including employers’ satisfaction with the students hired. We also obtained information on program year 1993 expenditures and the use of national contractors to provide vocational training. In doing our work, we interviewed Job Corps officials at the national and regional levels and conducted site visits at six judgmentally selected Job Corps facilities. We augmented the information collected during the site visits with data from Labor’s Student Pay, Allotment and Management Information System (SPAMIS), a database containing nationwide Job Corps data on all program year 1993 terminees. We also obtained selected data on participants aged 16 to 24 included in Labor’s Standardized Program Information Report (SPIR), a database containing information on program year 1993 JTPA terminees from titles II-A and II-C (programs for economically disadvantaged adults and youth, respectively). This additional data allowed us to compare, nationwide, the characteristics of terminees from Job Corps and JTPA. 
We also administered a telephone survey to employers of a random sample of Job Corps students who obtained jobs within 6 months after leaving the program. The methodology employed in this survey is discussed in greater detail in appendix II. We conducted site visits at six Job Corps centers during the period December 1994 through April 1995. We selected the sites judgmentally to provide a mixture of Job Corps centers that were (1) located in different Job Corps regions (to provide geographic dispersion); (2) rated among high and low performers according to the Job Corps ranking of performance indicators; (3) operated as civilian conservation centers (CCC) and contractor-operated centers; and (4) operated by different center contractors. Table I.1 lists the centers visited and the characteristics of each, including each center’s performance rank (out of 109) and operator, such as Wackenhut Educational Services, Inc.; Career Systems Development Corp.; and EC Corp. During these site visits, we interviewed center directors on various aspects of center operations, toured the facilities, and reviewed center records. Using the Dictionary of Occupational Titles and other guidance, we analyzed the jobs students obtained relative to the training received to determine whether these jobs were training related. We also compiled detailed cost information using individual center financial records to determine the true nature of expenditures—how much was being spent for administration, basic education and vocational training, social skills instruction, residential living, and other support services. We interviewed Labor officials at both the national and regional offices to obtain an overview of Job Corps operations and budgeting procedures, including how funds are tracked at the national level; reporting requirements for each level of oversight; and methods used for cost allocations.
We also collected information on the contracting process, including information on the national training contracts; contracts for center operators; and, to some extent, those awarded for outreach, screening, and placement services. We analyzed Labor data to determine whether Job Corps was serving severely disadvantaged youth—its intended population. We used individual-level data and performed univariate and cross-tabulation descriptive procedures to compare selected characteristics of about 63,000 Job Corps terminees with those of about 172,000 JTPA out-of-school terminees aged 16 to 24 from titles II-A and II-C for program year 1993. Using SPAMIS and SPIR databases, we compared those characteristics considered to be barriers to employment that were commonly defined and uniformly collected by both Job Corps and JTPA. These characteristics included (1) being a school dropout, (2) having basic skills deficiencies (that is, reading or math skills below eighth grade), (3) receiving public assistance, and (4) having limited English proficiency. To provide information on employers’ perceptions about the training provided by the six Job Corps Centers we visited, we surveyed by telephone the employers of a random sample of students from each of these six centers. Sampled students are representative of the population of students at these six centers who had terminated from the program during program year 1993 with at least 60 paid days at the center, and who obtained employment within 6 months after leaving the program. The final sample contained 413 cases representing a population of 1,524 students. To identify this population, we used data files provided to us by the six centers. We verified and, where appropriate, augmented the data with SPAMIS data files from the Department of Labor. Using the telephone numbers provided in the data files, we telephoned the employers of the sampled students during the month of May 1995. 
We asked employers about students’ job tenure and about their satisfaction with students’ work habits and specific technical skills. We directed the survey to those officials most knowledgeable about employment histories and placement information. Our analyses are based on responses from employers of 92 percent of the sampled students. Findings from the survey were statistically adjusted (weighted) to produce estimates that are representative for each of the six sites and for the six sites combined. All data are self-reported, and we did not independently verify their accuracy. We used the data provided by the six centers and augmented it, as necessary, with the SPAMIS database to develop a data file. The file contained all required information for each member of our target population—Job Corps program terminees from program year 1993 who had been in Job Corps for at least 60 paid days and who had received jobs within 6 months of leaving the program. Using the Statistical Package for the Social Sciences sampling routine, we selected a simple random sample for each site. The population for the 6 sites ranged from 96 to 425 students, for a total of 1,524. The sample for the 6 sites ranged from 49 to 81 students, for a total of 413. Table II.1 contains population and sample sizes by site. During our survey, we asked employers to verify placement information, including job titles and hiring dates; provide corrected information, when appropriate; and provide job tenure information. We also asked employers to assess students’ work habits, technical skills, and whether the observed length of stay was average for that job. Interviewers used an electronic form of the survey, prepared using Questionnaire Programming Language, and entered the data directly into a computer file. 
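The site-by-site design just described—an independent simple random sample at each of the six centers, with each respondent later carrying a base weight so combined figures represent all six sites—can be sketched in a few lines. This is an illustrative sketch only: the per-site counts below are hypothetical stand-ins (only the 1,524 and 413 totals and the stated 96–425 and 49–81 ranges come from the text, not the actual Table II.1 figures), and the nonresponse adjustment is omitted.

```python
# Illustrative sketch of the sampling design described above: a simple
# random sample is drawn independently at each site, and each sampled
# student carries a base weight of N_h / n_h (site population over site
# sample) so that combined estimates represent all six sites.
# Per-site counts are hypothetical; only the totals (1,524 students and
# 413 sampled) and the stated ranges match the report.
sites = {
    "A": {"population": 425, "sample": 81},
    "B": {"population": 330, "sample": 78},
    "C": {"population": 290, "sample": 75},
    "D": {"population": 215, "sample": 70},
    "E": {"population": 168, "sample": 60},
    "F": {"population": 96,  "sample": 49},
}

assert sum(s["population"] for s in sites.values()) == 1524
assert sum(s["sample"] for s in sites.values()) == 413

# Base weight attached to every sampled student at a given site.
base_weights = {name: s["population"] / s["sample"] for name, s in sites.items()}

def combined_estimate(site_proportions):
    """Population-weighted estimate across the six sites, given a
    per-site sample proportion (e.g., share of employers satisfied)."""
    total_pop = sum(s["population"] for s in sites.values())
    return sum(site_proportions[name] * sites[name]["population"]
               for name in sites) / total_pop
```

Weighting by site population keeps the smaller centers from being overrepresented, since the sampling fractions differ across sites.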
Interviewer files were collated and processed on a site-by-site basis, base weights and nonresponse weights were calculated and attached to the file, the data from the six sites were merged, and all identifying data were removed. The responses contained in this report represent combined weighted responses for all six sites. We telephoned the employers of the 413 originally sampled students during the month of May 1995. Of the 413 students in the original sample, 55 were found to be ineligible for our survey. We considered a student ineligible if his or her employer’s phone number was incorrect or disconnected and we could not obtain a new one, or if the employer did not have records available to verify the student’s employment. Subtracting these ineligible students from our original sample yielded an adjusted sample of 358 students. At least three attempts were made to contact the employer of each of the 358 students. After repeated calls, we were unable to reach and/or interview the employers of 28 of these students. These 28 cases were classified as nonrespondents. We were able to reach and complete interviews with the employers of the other 330 sampled, eligible students. Dividing the number of students with whom we completed interviews by the adjusted sample yields a response rate of 92 percent. The survey questions about employer satisfaction with students proved to be very sensitive. In about 46 percent of the 330 interviews, employers declined to answer these particular questions about the students because of company policies or concerns about protecting the privacy of the student or the employer. All sample surveys are subject to sampling error, that is, the extent to which the results differ from what would be obtained if the whole population had been administered the questionnaire. Since the whole population does not receive the questionnaire in a sample survey, the true size of the sampling error cannot be known. 
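The arithmetic behind the 92-percent response rate, together with a generic way of estimating sampling error for the resulting proportions, can be sketched as follows. The margin-of-error function is the textbook formula for a proportion under simple random sampling with a finite population correction; it is a rough stand-in, not the exact variance estimator used for the report, which would also reflect the site-level weighting.

```python
import math

# Response-rate arithmetic as described above.
original_sample = 413
ineligible = 55                                   # bad phone numbers or no employment records
adjusted_sample = original_sample - ineligible    # 358
completed = 330                                   # interviews completed
response_rate = completed / adjusted_sample
print(f"response rate: {response_rate:.0%}")      # prints "response rate: 92%"

def margin_of_error(p, n, N, z=1.96):
    """Approximate 95-percent margin of error (in proportion units) for
    an estimated proportion p from a sample of n drawn from a population
    of N, with finite population correction."""
    fpc = (N - n) / (N - 1)
    return z * math.sqrt(p * (1 - p) * fpc / n)

# With 330 completed interviews from a population of 1,524, a mid-range
# proportion carries a margin of roughly 5 percentage points; questions
# answered by fewer employers carry correspondingly wider margins.
print(round(100 * margin_of_error(0.5, 330, 1524), 1))  # → 4.8
```

The actual sampling errors reported for the combined sites vary by question because item response rates and proportions vary; this sketch only shows the mechanics.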
However, it can be estimated from the responses to the survey. The estimate of sampling error depends largely on the number of respondents and the amount of variability in the data. For this report, site-level estimates are not provided, and therefore sampling errors at the site level were not calculated. For the estimates for the six centers combined, the sampling error ranges between +/- 3 and +/- 9 percentage points at the 95-percent confidence level. In addition to sampling errors, surveys are also subject to other types of systematic error or bias that can affect results. This is especially true when respondents are asked to answer questions of a sensitive nature or to provide factual information that is inherently subject to error. Lack of understanding of the questions can also result in systematic error. Bias can affect both response rates and the way that respondents answer particular questions. It is not possible to assess the magnitude of the effect of biases, if any, on the results of a survey. Rather, possibilities of bias can only be identified and accounted for when interpreting results. This survey had two major possible sources of bias: (1) sensitivity of certain issues and questions and (2) bias associated with all telephone surveys due to inability to reach the sampling target. The employer ratings of employees’ workplace behaviors requested by our survey are sensitive to several factors. For example, the particular rating provided by an employer may have been influenced by his/her ability to recall the specific habits and abilities of a particular individual in response to our questions. It also may have been affected by his/her overall like or dislike of the individual irrespective of the particular behaviors in question. Furthermore, some employers declined to provide any information about satisfaction with employees’ performance and technical skills. 
This reluctance may have had any number of unknown causes, including an unwillingness to report poor performance or an internal policy prohibiting the disclosure of any performance information. A second kind of bias may result from our inability to reach every sampled employer by telephone. Certain types of businesses could not be reached because of various problems, including the presence of answering machines or the inaccuracy of information contained in the data files. To the extent that businesses using answering machines are different from those that do not, there could be bias in the types of employers we were able to reach. Additionally, while we made every attempt to ascertain correct information, in some cases we were unable to do so. To the extent that errors in the data file provided by Job Corps are not random, bias of an unknown direction or magnitude could be present in the nature of the responses we received. [Appendix table not reproduced here; its columns include each center's capacity (number of students) and contract award (in millions of dollars).] Pursuant to a congressional request, GAO provided information on Job Corps program operations, focusing on: (1) who is being served and the services provided; (2) the outcomes that the program is achieving in relation to program cost and employers' satisfaction with the Job Corps students they hire; and (3) whether the long-standing practice of awarding sole-source contracts for vocational training services is cost-effective. GAO found that: (1) Job Corps serves severely disadvantaged youth and provides them with comprehensive services in a residential setting; (2) 68 percent of the students who left Job Corps in 1994 encountered several barriers to employment, such as not having a high school diploma, lacking basic skills, receiving public assistance, and having limited English proficiency; (3) 20 percent of Job Corps' funds were spent on basic education and vocational skills training in 1994; (4) Job Corps students who complete vocational training are five times more likely to get higher paying, training-related jobs; (5) most employers are generally satisfied with Job Corps students' basic work habits and the technical training provided by the Job Corps program; (6) only moderate differences exist between the job placement rates of national contractors and Job Corps training providers; and (7) the continued use of national contractors as training providers is not cost-effective because they account for nearly one-third of Job Corps' vocational training expenditures and the training they provide is primarily in a declining occupational category.
OIOS was created in 1994 to assist the Secretary-General in fulfilling internal oversight responsibilities over UN resources and staff. The stated mission of OIOS is "to provide internal oversight for the United Nations that adds value to the organization through independent, professional, and timely internal audit, monitoring, inspection, evaluation, management consulting, and investigation activities and to be an agent of change that promotes responsible administration of resources, a culture of accountability and transparency, and improved program performance." OIOS is headed by an Under Secretary-General who is appointed by the Secretary-General—with the concurrence of the General Assembly—for a 5-year fixed term with no possibility of renewal. The Under Secretary-General may be removed by the Secretary-General only for cause and with the General Assembly's approval. OIOS's authority spans all UN activities under the Secretary-General. To carry out its responsibilities, OIOS is organized into four operating divisions: (1) Internal Audit Division I (New York); (2) Internal Audit Division II (Geneva); (3) Monitoring, Evaluation, and Consulting Division; and (4) Investigations Division. OIOS derives its funding from (1) regular budget resources, which are funds from assessed contributions from member states that cover normal, recurrent activities such as the core functions of the UN Secretariat; and (2) extrabudgetary resources, which come from the budgets for UN peacekeeping missions financed through assessments from member states, voluntary contributions from member states for a variety of specific projects and activities, and budgets for the voluntarily financed UN funds and programs. Management of the UN's rapidly growing spending on procurement involves several UN entities. The Department of Management controls the UN's procurement authority, and its 70-person UN Procurement Service develops UN procurement policies and procures items for UN headquarters.
While the Procurement Service procures certain items for peacekeeping, about one-third of all UN procurement spending is managed by about 270 staff at the Department of Peacekeeping Operations' 19 widely dispersed field missions. These missions may not award contracts worth more than $200,000 without the approval of the Department of Management (based on advice from the Headquarters Committee on Contracts). UN procurement spending has more than tripled since 1997, peaking at $1.6 billion in 2005. Major items procured include air transportation services, freight forwarding and delivery services, motor vehicles and transportation equipment, and chemical and petroleum products. The sharp increase in UN procurement was due in part to a fivefold increase in the number of military personnel in peacekeeping missions. Peacekeeping expenditures have more than quadrupled since 1999, from $840 million to about $3.8 billion in 2005. Peacekeeping procurement accounted for 85 percent of all UN procurement in 2004. In September 2005, the UN World Summit issued an "outcome document," which addressed several management reform initiatives, including reforms for: ensuring ethical conduct; strengthening internal oversight and accountability; reviewing budgetary, financial, and human resources policies; and reviewing mandates. While the outcome document was endorsed by all UN member countries, there is considerable disagreement within the General Assembly over the process and implementation of the reforms. In December 2005, UN member states agreed to a $950 million spending cap on the UN's biennium budget for 2006-2007, pending progress on management reforms. These funds are likely to be spent by the middle of 2006, at which time the General Assembly will review progress on implementing reforms and decide whether to lift the cap and allow for further spending. The UN is vulnerable to fraud, waste, abuse, and mismanagement due to a range of weaknesses in existing oversight practices.
The General Assembly mandate creating OIOS calls for it to be operationally independent. In addition, international auditing standards note that an internal oversight activity should have sufficient resources to effectively achieve its mandate. In practice, however, OIOS's independence is impaired by constraints that UN funding arrangements impose. In passing the resolution that established OIOS in August 1994, the General Assembly stated that the office shall exercise operational independence and that the Secretary-General, when preparing the budget proposal for OIOS, should take into account the independence of the office. The UN mandate for OIOS was followed by a Secretary-General's bulletin in September 1994 stating that OIOS should discharge its responsibilities without any hindrance or need for prior clearance. In addition, the Institute of Internal Auditors' (IIA) standards for the professional practice of auditing, which OIOS and its counterparts in other UN organizations formally adopted in 2002, state that audit resources should be appropriate, sufficient, and effectively deployed. These standards also state that an internal audit activity should be free from interference and that internal auditors should avoid conflicts of interest. International auditing standards also state that the financial regulations and rules of an international institution should not restrict an audit organization from fulfilling its mandate. In addition to funding from the UN regular budget, OIOS receives extrabudgetary funding from 12 different revenue streams. Although the UN's regular budget and extrabudgetary funding percentages over the years have remained relatively stable, an increasing share of OIOS's budget consists of extrabudgetary resources (see fig. 1). OIOS's extrabudgetary funding has steadily increased over the past decade, from 30 percent in fiscal biennium 1996-1997 to 63 percent in fiscal biennium 2006-2007 (in nominal terms).
The majority of OIOS's staff (about 69 percent) is funded with extrabudgetary resources. The growth in the office's budget is primarily due to extrabudgetary resources for audits and investigations of peacekeeping operations, including issues related to sexual exploitation and abuse. UN funding arrangements severely limit OIOS's flexibility to respond to changing circumstances and reallocate its resources among its multiple funding sources, OIOS locations worldwide, or among its operating divisions—Internal Audit Divisions I and II; the Investigations Division; and the Monitoring, Evaluation, and Consulting Division—to address changing priorities. In addition, the movement of staff positions or funds between regular and extrabudgetary resources is not allowed. For example, one section in the Internal Audit Division may have exhausted its regular budget travel funds, while another section in the same division may have travel funds available that are financed by extrabudgetary peacekeeping resources. However, OIOS would breach UN financial regulations and rules if it moved resources between the two budgets. According to OIOS officials, for the last 5 years, OIOS has consistently found it necessary to address very critical cases on an urgent basis. A recent example is the investigations of sexual exploitation and abuse in the Republic of Congo and other peacekeeping operations that identified serious cases of misconduct and the need for increased prevention and detection of such cases. However, the ability to redeploy resources quickly when such situations arise has been impeded by restrictions on the use of staff positions. OIOS is dependent on UN funds and programs and other UN entities for resources, access, and reimbursement for the services it provides.
These relationships present a conflict of interest because OIOS has oversight authority over these entities, yet it must obtain their permission to examine their operations and receive payment for its services. OIOS negotiates the terms of work and payment for services with the manager of the program it intends to examine, and heads of these entities have the right to deny funding for oversight work proposed by OIOS. By denying OIOS funding, UN entities could avoid OIOS audits or investigations, and high-risk areas could potentially be excluded from timely examination. For example, the practice of allowing the heads of programs the right to deny funding to internal audit activities prevented OIOS from examining high-risk areas in the UN Oil for Food program, where billions of dollars were subsequently found to have been misused. In some cases, the managers of UN funds and programs have disputed the fees OIOS has charged after investigative services were rendered. For example, 40 percent of the $2 million billed by OIOS after it completed its work is currently in dispute, and since 2001, less than half of the entities have paid OIOS in full for the investigative services it has provided. According to OIOS officials, the office has no authority to enforce payment for services rendered, and there is no appeal process, no supporting administrative structure, and no adverse impact on an agency that does not pay or pays only a portion of the bill. OIOS formally adopted the IIA international standards for the professional practice of internal auditing in 2002. Since that time, OIOS has begun to develop and implement the key components of effective oversight. However, the office has yet to fully implement them. Moreover, shortcomings in meeting key components of international auditing standards can serve to undermine the office’s effectiveness in carrying out its functions as the UN’s main internal oversight body. 
Effective oversight demands reasonable adherence to professional auditing standards. OIOS has adopted a risk management framework to link the office's annual work plans to risk-based priorities, but it has not fully implemented this framework. OIOS began implementing a risk management framework in 2001 to enable the office to prioritize the allocation of resources to oversee those areas that have the greatest exposure to fraud, waste, and abuse. OIOS's risk management framework includes plans for organization-wide risk assessments to categorize and prioritize risks facing the organization; it also includes client-level risk assessments to identify and prioritize risk areas facing each entity for which OIOS has oversight authority. Although OIOS's framework includes plans to perform client-level risk assessments, as of April 2006, out of 25 entities that comprise major elements of its "oversight universe," only three risk assessments have been completed. As a result, OIOS officials cannot currently provide reasonable assurance that the entities they choose to examine are those that pose the highest risk, nor that their audit coverage of a client focuses on the areas of risk facing that client. OIOS officials told us they plan to assign risk areas more consistently to audits proposed in their annual work plan during the planning phase so that, by 2008, at least 50 percent of their work is based on a systematic risk assessment. Although OIOS's annual reports contain references to risks facing OIOS and the UN organization, the reports do not provide an overall assessment of the status of these risks or the consequence to the organization if the risks are not addressed. For instance, in February 2005, the Independent Inquiry Committee reported that many of the Oil for Food program's deficiencies, identified through OIOS audits, were not described in the OIOS annual reports submitted to the General Assembly.
A senior OIOS official told us that the office does not have an annual report to assess risks and controls and that such an assessment does not belong in OIOS’s annual report in its current form, which focuses largely on the activities of OIOS. The official agreed that OIOS should communicate to senior management on areas where the office has not been able to examine significant risk and control issues, but that the General Assembly would have to determine the appropriate vehicle for such a new reporting requirement. While OIOS officials have stated that the office does not have adequate resources, they do not have a mechanism in place to determine appropriate staffing levels to help justify budget requests, except for peacekeeping oversight services. For peacekeeping audit services, OIOS does have a metric—endorsed by the General Assembly—that provides one professional auditor for every $100 million in the annual peacekeeping budget. Although OIOS has succeeded in justifying increases for peacekeeping oversight services consistent with the large increase in the peacekeeping budget since 1994, it has been difficult to support staff increases in oversight areas that lack a comparable metric, according to OIOS officials. OIOS staff have opportunities for training and other professional development, but OIOS does not formally require or systematically track staff training to provide reasonable assurance that all staff are maintaining and acquiring professional skills. UN personnel records show that OIOS staff took a total of more than 400 training courses offered by the Office of Human Resources Management in 2005. Further, an OIOS official said that, since 2004, OIOS has subscribed to IIA’s online training service that offers more than 100 courses applicable to auditors. 
Despite these professional development opportunities, OIOS does not formally require staff training, nor does it systematically track training to provide reasonable assurance that all staff are maintaining and acquiring professional skills. OIOS policy manuals list no minimum training requirement. OIOS officials said that, although they gather some information on their use of training funds for their annual training report to the UN Office of Human Resources Management, they do not maintain an officewide database to systematically track all training their staff has taken. UN funds are unnecessarily vulnerable to fraud, waste, abuse, and mismanagement because of weaknesses in the UN’s control environment for procurement. Specifically, the UN lacks an effective organizational structure for managing procurement, has not demonstrated a commitment to improving its professional procurement workforce, and has failed to adopt specific ethics guidance for procurement officials. The UN has not established a single organizational entity or mechanism capable of comprehensively managing procurement. As a result, it is unclear which department is accountable for addressing problems in the UN’s field procurement process. While the Department of Management is ultimately responsible for all UN procurement, neither it nor the UN Procurement Service has the organizational authority to supervise peacekeeping field procurement staff to provide reasonable assurance that they comply with UN regulations. Procurement field staff, including the chief procurement officers, instead report to the Peacekeeping Department at headquarters through each mission’s chief administrative officer. 
Although the Department of Management has delegated authority for field procurement of goods and services to the Peacekeeping Department, we found that the Peacekeeping Department lacks the expertise, procedures, and capabilities needed to provide reasonable assurance that its field procurement staff are complying with UN regulations. The UN has not demonstrated a commitment to improving its professional procurement staff in the form of training, a career development path, and other key human capital practices critical to attracting, developing, and retaining a qualified professional workforce. Due to significant control weaknesses in the UN’s procurement process, the UN has relied disproportionately on the actions of its staff to safeguard its resources. Given this reliance on staff and their substantial fiduciary responsibilities, management’s commitment to maintaining a competent, ethical procurement workforce is a particularly critical element of the UN’s internal control environment. Recent studies indicate that Procurement Service staff and peacekeeping procurement staff lack knowledge of UN procurement policies. Moreover, most procurement staff lack professional certifications attesting to their procurement education, training, and experience. The UN has not established requirements for headquarters and peacekeeping staff to obtain continuous training, resulting in inconsistent levels of training across the procurement workforce. More than half of the procurement chiefs told us that they had received no procurement training over the last year and that their training opportunities and resources are inadequate. All of them said that their staff would benefit from additional training. Furthermore, UN officials acknowledged that the UN has not committed sufficient resources to a comprehensive training and certification program for its procurement staff. 
In addition, the UN has not established a career path for professional advancement for procurement staff, which could encourage staff to undertake progressive training and work experiences. The UN has been considering the development of specific ethics guidance for procurement officers for almost a decade, in response to General Assembly directives dating back to 1998. While the Procurement Service has drafted such guidance, the UN has made only limited progress towards adopting it. Such guidance would include a declaration of ethics responsibilities for procurement staff and a code of conduct for vendors. We found weaknesses in key UN procurement processes or control activities. These activities consist of processes that are intended to provide reasonable assurance that management’s directives are followed and include reviews of high-dollar-value contracts, bid protest procedures, and vendor rosters. The Chairman and members of the Headquarters Committee on Contracts stated that the committee does not have the resources to keep up with its expanding workload. The number of contracts reviewed by the committee has increased by almost 60 percent since 2003. The committee members stated that the committee’s increasing workload was the result of the growth of UN peacekeeping operations, the complexity of many new contracts, and increased scrutiny of proposals in response to recent UN procurement scandals. Concerns regarding the committee’s structure and workload have led OIOS to conclude that the committee cannot properly review contract proposals. Without an effective contract review process, the UN cannot provide reasonable assurance that high-value contracts are undertaken in accordance with UN rules and regulations. The committee has requested that its support staff be increased from four to seven, and its chairman has stated that raising the threshold for committee review would reduce its workload. 
The UN has not established an independent process to consider vendor protests, despite the 1994 recommendation of a high-level panel of international procurement experts that it do so as soon as possible. An independent bid protest process is a widely endorsed control mechanism that permits vendors to file complaints with an office or official who is independent of the procurement process. Establishment of such a process could provide reasonable assurance that vendors are treated fairly when bidding and would also help alert senior UN management to situations involving questions about UN compliance. In 1994, the UN General Assembly recognized the advantages of an independent bid protest process. Several nations, including the United States, provide vendors with an independent process to handle complaints. The UN has not updated its procurement manual since January 2004 to reflect current UN procurement policy. As a result, UN procurement staff may not be aware of changes to UN procurement procedures that have been adopted over the past 2 years. Also missing from the procurement manual is a section regarding procurement for construction. In June 2005, a UN consultant recommended that the UN develop separate guidelines in the manual for the planning and execution of construction projects. These guidelines could be useful in planning the UN’s future renovation of its headquarters building. A Procurement Service official who helped revise the manual in 2004 stated that the Procurement Service has been unable to allocate resources needed to update the manual since that time. The UN does not consistently implement its process for helping to ensure that it is conducting business with qualified vendors. As a result, the UN may be vulnerable to favoring certain vendors or dealing with unqualified vendors. The UN has long had difficulties in maintaining effective rosters of qualified vendors. 
In 1994, a high-level group of international procurement experts concluded that the UN’s vendor roster was outdated, inaccurate, and inconsistent across all locations. In 2003, an OIOS report found that the Procurement Service’s roster contained questionable vendors. In 2005, OIOS concluded that the roster was not fully reliable for identifying qualified vendors that could bid on contracts. While the Procurement Service became a partner in an interagency procurement vendor roster in 2004 to address these concerns, OIOS has found that many vendors that have applied through the interagency procurement vendor roster have not submitted additional documents requested by the Procurement Service to become accredited vendors. In addition, most Peacekeeping Department field procurement officials with whom we spoke stated that they prefer to use their own locally developed rosters instead of the interagency vendor roster. Some field mission procurement staff also stated that they were unable to comply with Procurement Service regulations for their vendor rosters due to the lack of reliable vendor information in underdeveloped countries. OIOS reported in 2006 that peacekeeping operations were vulnerable to substantial abuse in procurement because of inadequate or irregular registration of vendors, insufficient control over vendor qualifications, and dependence on a limited number of vendors. To conduct our study of UN oversight, we reviewed relevant UN and OIOS reports, manuals, and numerous program documents, as well as international auditing standards such as those of the IIA and the International Organization of Supreme Auditing Institutions (INTOSAI). The IIA standards apply to internal audit activities—not to investigations, monitoring, evaluation, and inspection activities. However, we applied these standards OIOS-wide, as appropriate, in the absence of international standards for non-audit oversight activities. 
We met with senior Department of State (State) officials in Washington, D.C., and senior officials with the U.S. Missions to the UN in New York, Vienna, and Geneva. At these locations, we also met with the UN Office of Internal Oversight Services management officials and staff; representatives of Secretariat departments and offices, as well as the UN funds, programs, and specialized agencies; and the UN external auditors—the Board of Auditors (in New York) and the Joint Inspection Unit (in Geneva). We reviewed relevant OIOS program documents, manuals, and reports. To assess the reliability of OIOS’s funding and staffing data, we reviewed the office’s budget documents and discussed the data with relevant officials. We determined the data were sufficiently reliable for the purposes of this testimony. To assess internal controls in the UN procurement process, we used an internal control framework that is widely accepted in the international audit community and has been adopted by leading accountability organizations. We assessed the UN’s control environment for procurement, as well as its control activities, risk assessment process, procurement information processes, and monitoring systems. In doing so, we reviewed documents and information prepared by OIOS, the UN Board of Auditors, the UN Joint Inspection Unit, two consulting firms, the UN Department of Management’s Procurement Service, the UN Department of Peacekeeping Operations, and State. We interviewed UN and State officials and conducted structured interviews with the principal procurement officers at each of 19 UN field missions. Although OIOS has a mandate establishing it as an independent oversight entity—and OIOS does possess many characteristics consistent with independence—the office does not have the budgetary independence it requires to carry out its responsibilities effectively. 
In addition, OIOS’s shortcomings in meeting key components of international auditing standards can undermine the office’s effectiveness in carrying out its functions as the UN’s main internal oversight body. Effective oversight demands reasonable budgetary independence, sufficient resources, and adherence to professional auditing standards. OIOS is now at a critical point, particularly given the initiatives to strengthen UN oversight launched as a result of the UN World Summit in the fall of 2005. In moving forward, the degree to which the UN and OIOS embrace international auditing standards and practices will demonstrate their commitment to addressing the monumental management and oversight tasks that lie ahead. Failure to address these long-standing concerns would diminish the efficacy and impact of other management reforms to strengthen oversight at the UN. Long-standing weaknesses in the UN’s internal controls over procurement have left UN procurement funds highly vulnerable to fraud, waste, abuse, and mismanagement. Many of these weaknesses have been known and documented by outside experts and the UN’s own auditors for more than a decade. Sustained leadership at the UN will be needed to correct these weaknesses and establish a procurement system capable of fully supporting the UN’s expanding needs. We recommend that the Secretary of State and the Permanent Representative of the United States to the UN work with member states to: support budgetary independence for OIOS, and support OIOS’s efforts to more closely adhere to international auditing standards; and encourage the UN to establish clear lines of authority, enhance training, adopt ethics guidance, address problems facing its principal contract review committee, establish an independent bid protest mechanism, and implement other steps to improve UN procurement practices.
In commenting on the official draft of our report on UN internal oversight, OIOS and State agreed with our overall conclusions and recommendations. OIOS stated that the observations made in our report were consistent with OIOS’s internal assessments and external peer reviews. State fully agreed with GAO’s finding that UN member states need to ensure that OIOS has budgetary independence. However, State does not believe that multiple funding sources have impeded OIOS’s budgetary flexibility. We found that current UN financial regulations and rules are very restrictive, severely limiting OIOS’s ability to respond to changing circumstances and to reallocate funds to emerging or high-priority areas when they arise. In commenting on the official draft of our report on UN procurement, the Department of State stated that it welcomed our report and endorsed its recommendations. The UN did not provide us with written comments. This concludes my testimony. I would be pleased to take your questions. Should you have any questions about this testimony, please contact Thomas Melito, Director, at (202) 512-9601 or [email protected]. Other major contributors to this testimony were Phyllis Anderson, Assistant Director; Joy Labez, Pierre Toureille, Jeffrey Baldwin-Bott, Joseph Carney, Kristy Kennedy, Clarette Kim, and Barbara Shields.

The United States has strongly advocated that the United Nations (UN) reform its management practices to mitigate various program and financial risks. The findings of the Independent Inquiry Committee into the Oil for Food Program have renewed concerns about UN oversight, and the 2005 UN World Summit proposed actions to improve the UN's Office of Internal Oversight Services (OIOS). Furthermore, over the past decade, as UN procurement more than tripled to $1.6 billion in response to expanding UN peacekeeping operations, experts have called on the UN to correct procurement process deficiencies. 
We examined (1) whether UN funding arrangements for OIOS ensure independent oversight; (2) the consistency of OIOS's practices with key auditing standards; and (3) the control environment and processes for procurement. The UN is vulnerable to fraud, waste, abuse, and mismanagement due to a range of weaknesses in existing management and oversight practices. In particular, current funding arrangements adversely affect OIOS's budgetary independence and compromise its ability to investigate high-risk areas. Also, weaknesses in the control environment and UN procurement processes leave UN funds vulnerable to fraud, waste, and abuse. UN funding arrangements constrain OIOS's ability to operate independently as mandated by the General Assembly and required by international auditing standards OIOS has adopted. First, while OIOS is funded by a regular budget and 12 other revenue streams, UN financial rules severely limit OIOS's ability to respond to changing circumstances and reallocate resources among revenue streams, locations, and operating divisions. Thus, OIOS cannot always direct resources to high-risk areas that may emerge after its budget is approved. Second, OIOS depends on the resources of the funds, programs, and other entities it audits. The managers of these programs can deny OIOS permission to perform work or not pay OIOS for services. UN entities could thus avoid OIOS audits or investigations, and high-risk areas can be and have been excluded from timely examination. OIOS has begun to implement key measures for effective oversight, but some of its practices fall short of the applicable international auditing standards it has adopted. OIOS develops an annual work plan, but the risk management framework on which the work plans are based is not fully implemented. Moreover, OIOS annual reports do not assess risk and control issues facing the UN organization, or the consequences if these are not addressed. 
OIOS officials report the office does not have adequate resources, but they also lack a mechanism to determine appropriate staffing levels. Furthermore, OIOS has no mandatory training curriculum for staff. UN funds are vulnerable to fraud, waste, abuse, and mismanagement because of weaknesses in the UN's control environment for procurement, as well as in key procurement processes. The UN lacks an effective organizational structure for managing procurement, has not demonstrated a commitment to improving its procurement workforce, and has not adopted specific ethics guidance. While the UN Department of Management is responsible for UN procurement, field procurement staff are supervised by the UN Department of Peacekeeping Operations, which lacks the expertise and capacity to manage field procurement. Also, the UN has not established procurement training requirements or a career path, and has yet to adopt new ethics guidance for procurement staff, despite long-standing General Assembly mandates. In addition, the UN has not established an independent process to consider vendor protests despite a 1994 recommendation by a high-level panel to do so as soon as possible. Further, the UN does not consistently implement its process for helping to ensure it conducts business with qualified vendors. |
The Social Security Administration (SSA) manages two major federal disability programs that provide cash benefits to people with long-term disabilities—the Disability Insurance (DI) and Supplemental Security Income (SSI) programs. The DI program was enacted in 1956 and provides monthly cash benefits to severely disabled workers. SSI was enacted in 1972 as an income assistance program for aged, blind, or disabled people. Disability is defined in the Social Security Act as an inability to engage in substantial gainful activity (SGA) because of a severe physical or mental impairment. Both programs use the same criteria and procedures for determining whether the severity of an applicant’s impairment qualifies him or her for disability benefits. In 1995, 5.7 million disabled workers and their dependents received about $40.2 billion in DI benefits; 4.7 million disabled or blind SSI claimants received about $21.1 billion in SSI benefits. Overall program enrollment has increased by more than 50 percent from the 6.8 million recipients the programs served in 1988. In fiscal year 1995, SSA spent $3 billion administering these two programs, more than half of the agency’s total administrative expenses for the year. Nevertheless, the agency has acknowledged that it has had difficulty providing a satisfactory level of service to its disability claimants. The process is slow, labor-intensive, and paper-reliant. Despite efforts to manage this workload with shrinking resources, SSA has not been able to keep pace with program growth. Initial claim levels remain high, appealed case backlogs are growing, and decisions are not being made in a timely manner. In fiscal year 1995, about 2.5 million initial disability claims were forwarded to state offices for disability determinations, an increase of 43 percent over fiscal year 1990. 
During the same period, the number of applicants requesting that an administrative law judge (ALJ) reconsider a decision denied at the initial claim level escalated from about 311,000 to about 589,000, an increase of 89 percent. Furthermore, SSA is concerned with the amount of time required to process claims—in many cases a claimant waits more than a year for a final disability decision. As of June 1996, processing an initial disability claim averaged 78 days for DI claims and 94 days for SSI claims; the processing time for an ALJ decision averaged 373 days. Under the current process, DI and SSI disability claims can pass through from one to five decision points at which eligibility is determined. The initial claim, initial state Disability Determination Service (DDS) decision, reconsideration, ALJ hearing, Appeals Council review, and federal court review all involve procedures for evidence collection, review, and decision-making. The decision points within the current disability claims process are shown in figure 1.1. To be considered eligible for either program, claimants must meet SSA’s definition of disability. Claimants must also meet work requirements for DI claims and financial eligibility requirements for SSI claims. Under both programs, applications for disability benefits can be initiated at one of SSA’s more than 1,300 field offices or through SSA’s toll-free telephone system. SSA field office personnel assist with completing the application; obtaining medical, financial, and work history information; and determining whether applicants meet the nonmedical criteria for eligibility. Field offices forward claimant information, along with supporting medical evidence, to one of the 54 state DDSs. At the DDS, medical evidence is further developed and a final decision is made as to the existence of a medically determinable impairment that meets SSA’s definition of disability. 
SSA funds the state DDS agencies, provides them with guidance for making disability decisions, and reviews the accuracy and consistency of their decisions. Claimants who are dissatisfied with an initial determination may request reconsideration by the DDS. A reconsideration is conducted by staff different from the original staff, but the criteria and process for determining disability are the same. Claimants who disagree with a reconsideration denial have the right to a hearing before 1 of SSA’s 1,035 ALJs in the Office of Hearings and Appeals. At these hearings, claimants and medical or vocational experts may submit additional evidence; attorneys usually represent the claimants. If the ALJ denies the claim, the claimant may then request a review by SSA’s Appeals Council. The Appeals Council may affirm, modify, or reverse the decision of the ALJ; the Council may also remand the case to the ALJ for further consideration or development. Finally, the claimant may appeal the Council’s decision to federal court. SSA faces increasing responsibilities in the future and must manage its growing workload with fewer resources. SSA has estimated that if it conducts business as usual, it would need the equivalent of about 76,400 workers to handle its workload by the end of the century. Instead, SSA expects to handle this work with about 62,000 workers—2,000 fewer than it has today. To successfully manage its growing workload, SSA knows that it must (1) increasingly rely on technology and (2) build a workforce with the flexibility and skills to operate in a changing environment. Concerned about managing its workload while reducing administrative costs, saving time, and improving the quality of service, SSA’s leadership decided it needed to redesign its disability claims process. To improve the process, SSA’s leadership turned to business process reengineering. 
SSA concluded that redesigning its process for deciding disability claims was critical to its goal of providing world-class customer service with fewer resources. In April 1994, we testified that the redesign proposal for the disability process was SSA’s first valid attempt to address the major fundamental changes needed to realistically cope with the disability determination workload. We cautioned SSA, however, that many difficult implementation issues would need to be addressed. These included new staffing and training demands, development and installation of technology enhancements, and confrontation with entrenched cultural barriers to change. Reengineering is risky by definition, but if done well it can net positive benefits for the organization. As envisioned, SSA expects the redesigned process to produce tangible savings. However, the bulk of these savings will come from more efficient use of federal and state employees to process disability claims. Greater efficiency will (1) allow the agency to use its current workforce to accomplish other pressing activities and (2) avoid hiring to replace all those who retire or otherwise leave the agency. In addition, SSA expects the redesign will result in intangibles, such as improved customer service, an empowered and better-trained workforce, and increased public confidence in SSA. When SSA proposed its redesign, it estimated that implementation would cost $148 million, with the largest portion of these costs allocated to training activities. However, SSA estimated net savings of $704 million through fiscal year 2001, the year by which full implementation is anticipated. SSA also estimated recurring annual savings of $305 million once the redesign is fully implemented. While success cannot be guaranteed, leading private organizations have used business process reengineering to identify and quickly put in place dramatic improvements in their operations. 
The objective of reengineering is to fundamentally rethink and redesign a business process from start to finish, so that it becomes more efficient and, as a result, significantly improves service to customers. There is, however, no “right” way to reengineer and no step-by-step sequence of prescribed activities. Reengineering is highly situational and should be tailored to meet the needs of each organization, according to reengineering experts. Nevertheless, today’s leaders in business process reengineering advocate certain critical success features, or best practices, to help organizations increase the likelihood of success. Case studies show that reengineering has failed to achieve the desired change, in part, because managers have not followed best practices. These practices include concentrating on a small number of initiatives at any given time for broad-scoped comprehensive projects; developing and implementing the initiatives quickly; identifying, securing, and maintaining stakeholder support; and having the organizational commitment to initiate and sustain the redesign. Concentrating on a small number of initiatives at any given time is essential. According to the experts, reengineering should remain focused to achieve rapid results. Without such focus, an organization risks becoming overwhelmed. Further, once started, the scope of the redesign should not be expanded. Trying to work on too much forces managers to choose among projects, which further dilutes the time and attention required to quickly move the redesign forward. Developing and implementing initiatives quickly is also essential. According to some reengineering experts, the time from concept formulation to realizing the first release of a reengineered process should take no more than 12 months. Other reengineering experts note that while the full value of a redesigned process may take 2 to 5 years, individual initiatives should be accomplished in a year or less. 
Identifying, securing, and maintaining stakeholder support is also an essential element of redesign. Stakeholders consist of individuals who are both internal and external to an organization, as well as groups that can influence the organization in some way. For SSA, internal stakeholders include the staff within the organization who will need to adapt to changes in business processes; external stakeholders include the Congress, state employees, labor unions, oversight bodies, key interest groups, customers, and others who oversee, fund, or are affected by SSA’s activities. Managers of redesign should strive to secure and maintain the support of all stakeholders. Without such support throughout redesign, the chances of success can be jeopardized. Finally, having the organizational commitment to initiate and sustain redesign is another essential element and is paramount to the redesign’s success. As a top-down process, reengineering requires strong, continuous, and committed leadership from senior executives from the very beginning of the redesign. The Chairman of the House Subcommittee on Social Security, House Ways and Means Committee, asked us to provide information on the implementation challenges facing SSA as it redesigns its disability claims process. More specifically, in this report, we address SSA’s vision and progress for redesigning the disability claims process, issues related to the scope and complexity of the redesign, and the agency’s efforts to maintain stakeholder support. To develop our information, we reviewed extensive literature on the principles of reengineering. We interviewed officials at SSA headquarters and its Atlanta Regional Office. We also reviewed SSA’s extensive design, development, testing, and implementation data for the redesign. We met with the president of the National Council of Disability Determination Directors (NCDDD), which represents the 54 state DDSs, and obtained state director views on SSA’s testing and implementation activities. 
We also met with representatives from the Office of Management and Budget, the American Federation of Government Employees, and the National Association of Disability Examiners. We received formal briefings from SSA and state organizations on specific projects and activities related to the redesign effort. These briefings included periodic updates by the director, Disability Process Redesign Team (DPRT), on the overall redesign direction and progress; demonstrations on the development of technology enhancements; and presentations by state employee associations on the issues, progress, and problems associated with redesign. We did not assess the validity of SSA’s redesign as a means to improve services to claimants and to reduce administrative costs. Nevertheless, in the course of our work, we noted that SSA’s redesign includes features that appear sensible for a project of this nature. Two such features are (1) a single approach for all decisionmakers to use when making decisions and (2) enhanced technology to support the redesign. Our audit work was conducted from July 1995 through September 1996 in accordance with generally accepted government auditing standards. As with many federal agencies faced with fiscal constraints and increasing demands for services, SSA recognized the need to dramatically improve its disability claims process. Consequently, SSA created an implementation plan for improving its process through 80 initiatives. By September 30, 1996, 38 of those initiatives were to be addressed. Although SSA has begun nearly all of the initiatives it planned to have under way during the first 2 years of its implementation plan, as of July 1996, SSA had (1) not completed any initiative and (2) not begun testing for 14 of the 19 initiatives that contain testing requirements. 
In October 1993, SSA created a Disability Reengineering Project Team to fundamentally rethink and redesign the disability determination process, so as to make it more efficient and improve service to claimants. The team was asked to redesign the process so as to better use technology to help SSA reduce the costs and time of claims processing and enable the agency to meet its workload demands with fewer resources. The team did the following: analyzed the current process; sponsored a series of general public and claimant focus groups to understand the public’s preferences relating to service; compared key aspects of the process with best practices of other public and private sector organizations; conducted independent research; and solicited ideas for improving the process from thousands of stakeholders who were involved in the disability process, including employees, health care providers, consumer advocates, and legal representatives. After extensive consultation with individuals and organizations representing the disabled, the Commissioner, in September 1994, approved SSA’s vision for redesigning the disability claims process. The redesigned, user-friendly process emphasizes making correct decisions quickly and efficiently at the earliest possible point. This process is expected to reduce average processing time: for a decision on an initial DI claim, the time would be reduced from 78 days to almost 60 and for a decision on an initial SSI claim, from 94 days to about 60. Similarly, the processing time for appealed cases is expected to be reduced from 373 to 225 days. The steps in SSA’s new process are shown in figure 2.1. The goal of the redesigned process is to guide all decisionmakers at all levels to (1) use standards from the same sources for decision-making and (2) make “correct” decisions in an easier, faster, and more cost-effective manner at the earliest possible point in the process. 
SSA states that a correct disability decision is one that appropriately considers whether an individual meets the factors of entitlement for disability, as defined by SSA’s statute, regulations, rulings, and policies. According to SSA, correct decisions in the new process depend on these factors: a simplified decision methodology that provides a common frame of reference for determining disability by all decisionmakers in processing claims; consistent direction and training for all decisionmakers; enhanced and targeted collection and development of medical evidence; an automated and integrated claims-processing system that will assist decisionmakers in gathering evidence; a single, comprehensive quality review process; and the creation of the disability claim manager (DCM) position to give claimants direct access to the decisionmaker throughout the process and the opportunity to discuss any claim before it is disallowed. Under the redesigned process, a DCM will be the focal point for claimant contacts throughout the process and will be responsible for processing and deciding the initial claim. In the current process, these responsibilities are shared by federal claims representatives and state disability examiners. In the redesigned process, the DCM will take the initial claim, gather and retain claim information, develop medical and nonmedical evidence, share information with medical consultants, analyze information, and decide whether to allow or deny the claim. If the evidence for the initial claim does not support an allowance, then before denying the claim, the DCM will issue a predecision notice, advising the claimant of what evidence has been considered and providing the claimant with the opportunity to submit additional evidence. If no evidence is provided or if the evidence provided does not support an allowance, the DCM will deny the claim. Claimants who disagree with a DCM decision can appeal the decision to the Office of Hearings and Appeals. 
When a claimant appeals a decision, an adjudication officer (AO) will interview the claimant and become the primary contact during the appeal. This position is not available under the current process and is being introduced by SSA to make allowance decisions in less time. The AO will review the file, identify the issues in dispute, and determine whether there is a need to obtain additional evidence. The AO will also have the authority to issue a favorable decision, if warranted, or forward the completed claim to an ALJ for consideration. If, after careful review, the ALJ denies the claim, the claimant may appeal the decision to a federal district court. Throughout its effort, SSA intends to assess all redesign activities against the Commissioner’s five primary objectives for the redesign. These are making (1) the process user-friendly for claimants and their representatives, (2) the right decision the first time, (3) the decision as quickly as possible, (4) the process efficient, and (5) the work satisfying for staff. In November 1994, SSA released an extensive and complex redesign implementation plan to facilitate turning its vision into reality. The plan, to be accomplished over a 6-year period—beginning in fiscal year 1995 and concluding in fiscal year 2000—includes six lead areas, encompassing 23 process improvement features and three enablers. The lead areas are process entry and intake, disability decision methodology, medical evidence development, administrative appeals, quality assurance, and communication. The enablers, critical support structures that SSA contends are necessary for successful implementation, include developing a single presentation of all policies for determining disability and using third parties to help claimants with application packages, including completing forms and obtaining the medical evidence necessary for deciding claims. 
See appendix I for a description of (1) the 23 features and more details on the three enablers and (2) planned completion dates. To help direct its redesign effort, SSA established a management structure to provide leadership, oversight, and continuity throughout the testing and implementation phase. The relationship between SSA’s redesign implementation team and the Commissioner, principal deputy commissioner, and executive steering committee is shown in figure 2.2. An executive steering committee was formed to meet on a regular basis to advise the Commissioner on development of the redesigned process and to ensure the support of SSA’s senior management team. The committee includes the principal deputy commissioner and the director of the DPRT, as well as senior managers representing SSA, state, and union components. Some of these include the Office of Disability; Office of Hearings and Appeals; Office of Budget; Association of Administrative Law Judges, Inc.; and the Office of Systems Components. SSA assembled the DPRT to help direct the implementation of the redesigned disability claims process. Team leaders work full-time on the redesign and are responsible for its major components. Within the major components, designated heads of lead areas will coordinate planning and oversee implementation. These designees, as well as the DPRT staff who assist them, are drawn from SSA’s federal and state workforce. Overall day-to-day leadership, control, and coordination of all redesign implementation activities are vested in the director of the DPRT. The director, reporting to the Commissioner and principal deputy commissioner, is expected to establish implementation priorities, develop specific timelines, and provide oversight to ensure that implementation decisions are consistent with the vision for the redesigned process. In addition, task teams were established to address specific implementation issues within each of the areas. 
These teams were directed to address a broad range of planning issues involving strategic, tactical, and operational matters. In early 1995, 12 task teams met to formulate and recommend specific actions that should be undertaken. For each task team, the overall purpose and related activities are summarized in table 2.1. In deciding to redesign the disability claims process, SSA tackled the entire process rather than using a building-block approach of improving aspects of the process a little at a time. SSA’s ambitious approach led it, in November 1994, to identify 83 initiatives (later reduced to 80) associated with 23 process features. SSA chose to prioritize these initiatives by dividing them into three time frames: near-term (fiscal years 1995 to 1996), mid-term (fiscal years 1997 to 1998), and long-term (fiscal years 1999 to 2000). Near-term implementation initiatives are those (1) scheduled to be fully implemented nationwide by the end of fiscal year 1996 or (2) for which research and development or site testing can be initiated by the end of fiscal year 1996. Mid-term initiatives are those scheduled to be developed and tested in fiscal years 1997 and 1998 and implemented nationwide by fiscal year 1998. Finally, long-term initiatives are those requiring extensive research and development that cannot be tested fully before fiscal year 1999 or cannot be fully implemented nationwide before fiscal year 2001. SSA’s near-term initiatives, to be completed or under way by September 30, 1996, originally numbered 40 (later reduced to 38), almost one-half of the 80. The 38 initiatives were designed to set the pace for fully implementing the redesign. Completing the initiatives will require a significant investment of time and resources. Thousands of federal, state, and contractor employees will be needed throughout the country for (1) activities such as designing, developing, testing, and evaluating processes and (2) developing and delivering training programs. 
Each initiative contains its own set of unique and complex circumstances. The six process features and corresponding near-term initiatives are summarized in table 2.2. See appendix I for DPRT’s complete timetable for redesign. SSA’s November 1994 implementation plan, “Disability Process Redesign: Next Steps in Implementation,” sets forth an outside time frame, September 30, 1996, for (1) completing the near-term initiatives or (2) initiating research and development or site testing. Nevertheless, the redesign implementation team was to focus on completing the tasks as early in the time frame as possible. However, SSA has not met its near-term goal. While SSA has completed six tasks (a task is a subcomponent within an initiative) as of July 1996, it has not fully completed or implemented any near-term initiative and is running behind in meeting its testing milestones. As to tasks completed between November 1994 and July 1996, SSA has (1) disseminated a 1-page disability information fact sheet, (2) completed program operation instructions for the Early Decision List and sequential interviewing, (3) revised the disability form 3368 to collect medical source information, (4) finalized the DCM Workgroup report, (5) published regulations to test the DCM, the predecision interview, and the elimination of the reconsideration step in the current process and begun training all decisionmakers on existing policy for treating physician opinion, pain and other symptoms, and residual functional capacity, and (6) developed a research plan for developing a new disability determination methodology. Furthermore, of the 19 initiatives requiring testing, which were to be completed or initiated by September 30, 1996, only 5 had testing ongoing as of July 1996; 3 of them—the adjudication officer (AO) position, use of mail-in applications, and the single decisionmaker—were being fully tested; the other 2 had limited testing under way. Testing on the remaining 14 has not started. 
The status of SSA efforts to complete the 38 near-term initiatives is shown in table 2.3. SSA began its redesign by identifying problems with the current claims process and focusing on initiatives it felt needed to be undertaken immediately. In its 2-year plan for near-term improvements, SSA has moved forward with 38 initiatives rather than keeping its efforts focused on a few initiatives at one time and striving for rapid process change—a best practice associated with successful reengineering. Many of the initiatives SSA has undertaken are complex, requiring more time to complete than it planned. Thus, the risk of leadership turnover, before the overall project is complete, is increased. According to reengineering experts, continuity of senior executive leadership is much more likely for initiatives of shorter duration. Further complicating SSA’s redesign activities is the difficulty it has experienced in trying to maintain the support of all its stakeholders. SSA identified more than 140 stakeholders, many with conflicting concerns. While SSA has been working to secure their support for the redesigned process, a number of stakeholders do not support SSA’s approach. Moreover, because none of the initiatives have been successfully implemented, there are no concrete and measurable results that enable SSA to demonstrate the merits of its approach to encourage stakeholder support. In deciding to tackle 38 initiatives in the first 2 years of the redesign, SSA did not follow a best practice—organizations that successfully manage redesign usually focus on a small number of initiatives at one time. Nevertheless, SSA decided to take on a large number of initiatives concurrently. Some of the more important initiatives—such as technology enhancements, the DCM position, and process unification—are large and complex. They will require many years to complete and the commitment and support of numerous stakeholders. 
A major part of SSA’s redesign is implementing technological enhancements to improve the disability claims process. The redesigned process would replace a slow, labor-intensive, and paper-reliant process with an automated system from first contact to final decision. Throughout all stages of the process, all staff will use essentially the same software to assign claims, schedule appointments, gather and store information, develop medical and nonmedical evidence, facilitate decision-making, provide case control, keep fiscal and accounting information, and manage the information. SSA will also need to acquire over 50,000 intelligent workstations (personal computers). This extensive complement of software and hardware will be installed on a local area network (LAN) connecting more than 1,350 SSA and state offices throughout the United States. SSA estimates that it will be 1998 before the hardware is installed in all field locations. SSA’s software development activities demonstrate the long-term and complex nature of this initiative. Developing software that allows SSA to move from its current manual process to an automated process is critical to success. However, the scheduled implementation of this new software has been delayed by about 28 months because of problems identified during testing. Software development is further constrained by the lack of firm requirements for the new disability determination process. For example, SSA cannot effectively develop software to obtain medical evidence of record until the DPRT decides how it wants to standardize the information requested from medical sources to substantiate disability claims. SSA chose to create the DCM position to consolidate different elements of the claims determination process. However, recognizing the scope of the changes involved, SSA determined it needed to introduce the position gradually; the DCM position would not become fully operational until fiscal year 2000. 
The DCM is a key dimension of SSA’s redesign. SSA plans to (1) establish over 11,000 DCM positions in about 1,350 federal and state locations and (2) recruit DCMs from its current workforce of about 16,000 federal claims representatives and about 6,000 state disability examiners. As mentioned earlier, the DCM would be responsible for making all decisions about a disability claim. This is a major deviation from current practice: an SSA claims representative processes the initial claim; then a state disability examiner and a medical consultant make the medical determination. The DCM would conduct personal interviews, develop evidence for the record, and determine medical and nonmedical eligibility. Specifically, the DCM would gather and store claim information, develop both medical and nonmedical evidence, share necessary facts in a claim with medical consultants and specialists in nonmedical or technical issues, analyze evidence, and make the decision whether to allow or deny the claim. If the initial evidence does not support an allowance, the DCM, before denying the claim, will issue a predecision notice advising the claimant of what evidence has been considered and provide the claimant with the opportunity to submit additional evidence. Although DCMs could still call on medical and technical support personnel for assistance, a DCM alone would make the final decision on both medical and nonmedical aspects of a disability claim. To accomplish all these tasks, the DCM would depend on a number of crucial initiatives, such as technology enhancements, process unification, and a simplified decision methodology. However, SSA acknowledges that these initiatives will not be implemented soon. 
In addition, SSA faces many other challenges before the DCM can become operational: for example, securing support from state governments, state and federal labor unions, and congressional committees; developing training plans; conducting tests at pilot sites; bargaining with state unions; posting vacancy announcements for positions; and selecting and training employees. In October 1996, SSA stated that the decision to implement the DCM will not be made until valid and reliable testing demonstrates that this position is viable. The scope of process unification has increased significantly since the implementation plan for the redesign was released in November 1994. At that time, the DPRT was primarily interested in developing a single policy manual—known as the “one book”—of all substantive policies for determining disability. Since then, SSA has expanded the scope of its initiative to put together the one book. Under process unification, SSA hopes to achieve similar results on similar cases at all stages of the disability claims process, with consistent application of laws, regulations, and rulings. SSA’s expanded initiative includes (1) conducting the same training for 14,000 decisionmakers, including doctors and reviewers, (2) developing a consistent quality review process that balances review of allowances and denials and applies the same standards at all stages of the process, and (3) using more consistent medical input throughout the disability determination process. Consequently, process unification will not be completed by September 30, 1996, as initially envisioned, but will be phased in through a series of incremental changes that could take through January 1998 or longer to complete. When undertaking reengineering initiatives, organizations are often working toward accomplishing a vision for the future; they may invest several years or more to fully complete all of the initiatives. This is also true for SSA’s redesign initiatives. 
As mentioned earlier, experts suggest that organizations that have successfully reengineered their work processes meet their long-term vision by implementing discrete projects of relatively short duration. Experts therefore advocate planning initiatives that can be implemented within 12 months. Experts also state that achieving quick progress is the key to maintaining stakeholder support for long-term changes. Furthermore, redesign in government agencies can be affected by constantly changing political environments that often restrict the time available for career officials to achieve program goals. Consequently, redesign initiatives with relatively short time frames allow organizations to avoid major disruption because of leadership changes. Some of SSA’s initiatives, however, are beginning to expand in scope and become lengthy endeavors. Reengineering experts also caution that lengthy initiatives can affect the continuity and availability of the agency’s senior executives, whose sustained involvement is a prerequisite for successful reengineering. These executives are the cornerstone of any redesign effort and actively demonstrate the agency’s commitment to initiate and sustain the change. Although SSA recognizes the importance of management stability and continuity to the redesign process, it has experienced turnover in three senior executive positions since implementation began. We did not develop evidence that such turnover has had a negative impact on SSA’s redesign, but continued turnover could result in possible loss of momentum or change of scope or direction. Redesign initiatives that take many years to complete face increased risk—the longer the project runs, the greater the chance that turnover of leadership will occur. Maintaining stakeholder support is critical to reengineering. 
Because stakeholders can jeopardize the chances for successful reengineering if they are not committed to it, managers of redesign must seek out and secure support from all stakeholders. Stakeholders have considerable knowledge of the business and organizational environment and can help rally support from other stakeholders. SSA identified and tried to involve stakeholders in the redesign but has encountered problems obtaining and maintaining their support. In September 1993, SSA established an executive workgroup to identify the stakeholders that should be involved in the development and implementation of redesign. More than 140 stakeholders were identified from congressional, federal, state, public, and private groups. In its November 1994 redesign implementation plan, SSA called on its federal and state workforce to make the vision a reality. Since then, some actions taken by SSA have raised major concerns for some stakeholders—especially over salary issues. According to the president of the American Federation of Government Employees, Local 1923, the union would have opposed the DCM position if SSA had attempted to implement it as a grade 11. Under a memorandum of understanding between the union and SSA, those assigned to DCM positions will receive temporary promotions to grade 12, one grade higher than the journeyman level for the claims representative position. However, this action raised concerns for the state disability determination services (DDS) directors and their workforce, many of whom believe that the agreement with the union will (1) exacerbate the existing salary gap between state and federal employees and (2) give federal employees a workload that is currently the states’ responsibility. Another stakeholder disagreement arose following deliberations of a workgroup SSA created to determine how to accelerate testing of the DCM position. This workgroup was composed of SSA and DDS management, claims representatives and disability examiners, and federal and state union representatives. 
The workgroup’s final report endorsed SSA’s proposal to test 1,500 DCMs over a 3-year period. Even though DDS representatives were workgroup participants, they did not support SSA’s proposal to test such a large number of positions. At the conclusion of the DCM workgroup’s activities, the National Council of Disability Determination Directors (NCDDD) presented a position paper to the DPRT director. The paper stated that the directors would agree only to a pilot test involving 60 state and 60 federal DCMs. On September 11, 1996, the director, DPRT, stated that SSA plans to begin training DCMs in January 1997. Federal employees will receive about 30 weeks of training and state employees about 6. After formal training is complete, a period of coaching and mentoring will take place. The total time envisioned for the formal training and the coaching period is about 18 months. However, as further evidence that stakeholder support is eroding, the director also said that he was not sure there will be a DCM test. He explained that (1) of the 16 states that previously agreed to take part in the test, 3 have decided not to participate and (2) several of the remaining 13 states are now reconsidering their decision to participate. Further, SSA has not obtained strong support from a major stakeholder—the NCDDD. The directors manage over 14,000 state employees nationwide, of whom about 6,000 are disability examiners. Two recent NCDDD surveys of the DDS directors indicated that many states were not strongly supportive of a number of redesign initiatives. In the first survey, conducted in September 1995, only 3 of the 42 respondents, or about 7 percent, strongly supported redesign. In addition, 17 states, or about 40 percent, either moderately or strongly did not support SSA’s efforts to redesign the disability process. According to the second survey, conducted in January 1996, the DDS directors’ opinions about redesign had worsened, in part due to DCM testing. 
In response to the question about how the states viewed the overall redesign, 28 of 51 respondents, or about 55 percent, either moderately or strongly did not support redesign. Further, according to the survey, only 1 of 50 DDS directors thought the DCM position could be implemented successfully without all the enablers in place. In addition, 24 of these directors thought the DCM position could never be successfully implemented. Given the high cost and long processing time of SSA’s current process, the agency’s redesign, which undertakes a large number of initiatives at one time, is proving to be overly ambitious. Some initiatives are also getting more complex as SSA expands the work required to complete them. This approach is likely to limit the chances for success and has already led to delays in implementation: testing milestones have slipped and stakeholder support for the redesign has diminished. As of July 1996, activity is under way for most of SSA’s near-term initiatives; however, none is complete and many are behind schedule. Only about one-fourth of the near-term initiatives that contain testing requirements have begun testing. Consequently, SSA has not made the progress it intended in order to know whether specific initiatives will achieve the desired results. Further, many of the initiatives are complex and have expanded in scope, thus increasing the time frames to complete them. A disadvantage to extending the time frames and delaying implementation is that doing so increases the likelihood that SSA will experience senior executive changes during the course of the redesign. Moreover, this delay also means that no concrete and measurable results are available to maintain stakeholder support. While any one of the problems discussed in this report could possibly be managed and handled successfully, SSA currently faces a multitude of problems that raise questions about the likelihood that redesign will succeed. 
To increase the likelihood that its reengineering project will succeed, given the major delays that SSA has experienced and the risk of further decline in stakeholder support, we recommend that the Commissioner of the Social Security Administration concentrate on accomplishing rapid results through initiatives of smaller, more manageable scope. This effort should include selecting those initiatives most crucial to producing significant, measurable reductions in claims-processing time and administrative costs—including those intended to achieve process unification, establish new decision-making positions, and enhance information systems support. Before proceeding with full-scale implementation, SSA should combine those initiatives into an integrated process, test that process at a few sites, and evaluate the results. The valuable experience gained in these initial efforts can then be used both to improve the redesign and to build support among stakeholders and potential program beneficiaries. In addition, other initiatives could be undertaken at a later date, when progress is ensured for the initiatives described above and resources become available. In its comments, SSA generally agreed with the thrust of our report and its recommendation. SSA stated it is directing a larger portion of its redesign resources to crucial initiatives. Further, SSA plans to evaluate several key redesign features in early 1997—the single decisionmaker and predecision interview process, elimination of the reconsideration stage, and the proposed adjudication officer (AO) position—in an integrated test. This approach does not, however, include integrated testing of all the initiatives we and SSA now consider crucial. Among the initiatives excluded from this testing approach are process unification, quality assurance, and enhancement of information systems support. 
We continue to believe that SSA, before proceeding with full-scale implementation, should combine all crucial initiatives into an integrated process, test that process at a few sites, and evaluate test results. The approach we recommend is quite similar to one that was under consideration at SSA in 1995. Under that 1995 approach, sites were to serve as comprehensive test locations, with the principal function of integrating and combining all crucial initiatives, including automation and technology enablers. In its comments, SSA also expressed some reservations about how quickly it could complete redesign. SSA stated that while other organizations could achieve results quickly, such an expectation regarding SSA’s redesign would be unrealistic, given the scope of the initiatives. But during the course of our work, we identified several instances of large, complex government and private organization redesigns in which significant test results were achieved in a relatively short time. Although testing a fully integrated process may require considerable effort, quick completion would both (1) provide valuable information that would assist SSA in selecting a redesign solution and (2) serve as a concrete demonstration of progress. These two factors should be helpful in building support among stakeholders and potential program beneficiaries. See appendix II for the full text of SSA’s comments.

Pursuant to a congressional request, GAO evaluated the Social Security Administration's (SSA) efforts and progress in redesigning the disability determination claims process to reduce administrative costs and the time a claimant waits for a decision, focusing on: (1) SSA's vision and progress for redesigning the disability claims process; (2) issues related to the scope and complexity of the redesign; and (3) SSA's efforts to maintain stakeholder support. 
GAO found that: (1) SSA is about one-third the way through the 6 years it estimated for redesigning the process, but has made relatively little progress in meeting its goals; (2) as of July 1996, SSA had not completed any initiative and testing had not begun for 14 of the 19 initiatives that contain testing requirements; (3) there have not been concrete and measurable accomplishments to keep the support of stakeholders; (4) a number of these initiatives have expanded in scope, thus increasing the time frames required to complete them; (5) increasing the time frames has several disadvantages, such as delaying implementation and heightening the risk of disruption from turnover in senior executives; (6) in addition to delays, SSA has also experienced turnover of senior executives since the beginning of the redesign; (7) although it is difficult to determine if this turnover has had a negative impact on the redesign thus far, continued turnover could result in possible loss of momentum or change of direction; (8) further complicating SSA's redesign efforts are difficulties in maintaining much needed stakeholder support; (9) some federal and state employees, as well as the unions that represent them, are concerned that redesign could mean the loss of jobs; (10) state employees are concerned about SSA's decision to pay federal employees at a higher rate than state employees for the same job; and (11) support from state management officials involved in the disability claims process has been declining steadily. |
Intellectual property is an important component of the U.S. economy, and the United States is an acknowledged global leader in its creation. However, the legal protection of intellectual property varies greatly around the world, and several countries are havens for the production of counterfeit and pirated goods. The State Department has cited estimates that counterfeit goods represent about 7 percent of annual global trade, but we would note that it is difficult to reliably measure what is fundamentally a criminal activity. Industry groups suggest, however, that counterfeiting and piracy are on the rise and that a broader range of products, from auto parts to razor blades, and from vital medicines to infant formula, are subject to counterfeit production. Counterfeit products raise serious public health and safety concerns, and the annual losses that companies face from IP violations are substantial. Eight federal entities, the Federal Bureau of Investigation (FBI), and the U.S. Patent and Trademark Office (USPTO) undertake the primary U.S. government activities to protect and enforce U.S. IP rights overseas. These eight entities are: the Departments of Commerce, State, Justice, and Homeland Security; the Office of the U.S. Trade Representative (USTR); the Copyright Office; the U.S. Agency for International Development; and the U.S. International Trade Commission. They undertake a wide range of activities that fall under three categories: policy initiatives, training and technical assistance, and law enforcement. U.S. policy initiatives to increase IP protection around the world are primarily led by USTR, in coordination with the Departments of State and Commerce, USPTO, and the Copyright Office, among other agencies. These policy initiatives are wide ranging and include reviewing IP protection abroad, using trade preference programs for developing countries, and negotiating agreements that address intellectual property. 
Key activities to develop and promote enhanced IP protection in foreign countries through training or technical assistance are undertaken by the Departments of Commerce, Homeland Security, Justice, and State; the FBI; USPTO; the Copyright Office; and the U.S. Agency for International Development. A smaller number of agencies are involved in enforcing U.S. IP laws. Working in an environment where counterterrorism is the central priority, the FBI and the Departments of Justice and Homeland Security take actions that include engaging in multi-country investigations involving intellectual property violations and seizing goods that violate IP rights at U.S. ports of entry. Finally, the U.S. International Trade Commission has an adjudicative role in enforcement activities involving patents and trademarks. STOP is the most recent of several interagency IP coordination mechanisms that address IP policy initiatives, training and technical assistance, and law enforcement. Some of these have been effective, particularly the Special 301 process that identifies inadequate IP protection in other countries and the Intellectual Property Rights (IPR) Training Coordination Group. However, U.S. law enforcement coordination efforts through NIPLECC have had difficulties. STOP was, in part, a response to the need for further attention to IP enforcement. Our September 2004 report found that coordination efforts through the Special 301 process and the IPR Training Coordination Group have generally been considered to be effective by U.S. government and industry officials. “Special 301,” which refers to certain provisions of the Trade Act of 1974, as amended, requires USTR to annually identify foreign countries that deny adequate and effective protection of IP rights or fair and equitable market access for U.S. persons who rely on IP protection. USTR identifies these countries with substantial assistance from industry and U.S. 
agencies and then publishes the results of its reviews in an annual report. Once a list of such countries has been determined, the USTR, in coordination with other agencies, decides which, if any, of these countries should be designated as Priority Foreign Countries, which may result in an investigation and subsequent actions. As our report notes, according to government and industry officials, the Special 301 process has operated effectively in reviewing IP rights issues overseas. These agency officials told us that the process is one of the best tools for interagency coordination in the government and that coordination during the review is frequent and effective. The IPR Training Coordination Group is a voluntary, working-level group composed of representatives of U.S. agencies and industry associations involved in training and technical assistance efforts overseas for foreign officials. Meetings are held approximately every 4 to 6 weeks and are well attended by government and private sector representatives. The State Department leads the group, and meetings have included discussions on training “best practices,” responding to country requests for assistance, and improving IPR awareness among embassy staff. According to several agency and private sector participants, the group is a useful mechanism that keeps participants informed of the IP activities of other agencies or associations and provides a forum for coordination. NIPLECC was created by the Congress in 1999 to coordinate domestic and international intellectual property law enforcement among U.S. federal and foreign entities. 
NIPLECC’s six members, drawn from five agencies, are (1) Commerce’s Undersecretary for Intellectual Property and Director of the United States Patent and Trademark Office; (2) Commerce’s Undersecretary of International Trade; (3) the Department of Justice’s Assistant Attorney General, Criminal Division; (4) the Department of State’s Undersecretary for Economic and Agricultural Affairs; (5) the Deputy United States Trade Representative; and (6) the Department of Homeland Security’s Commissioner of U.S. Customs and Border Protection (CBP). Representatives from the Department of Justice and USPTO are co-chairs of NIPLECC. Coordination efforts involving IP law enforcement through NIPLECC have not been as successful as other efforts. In our September 2004 report, we stated that NIPLECC had struggled to define its purpose and had little discernible impact, according to interviews with industry officials and officials from its member agencies, and as evidenced by NIPLECC’s own annual reports. Indeed, officials from more than half of the member agencies offered criticisms of NIPLECC, remarking that it was unfocused, ineffective, and “unwieldy.” We also noted that if the Congress wishes to maintain NIPLECC and take action to increase its effectiveness, it should consider reviewing the council’s authority, operating structure, membership, and mission. In the fiscal year 2005 Consolidated Appropriations Act, the Congress provided $2 million for NIPLECC expenses, to remain available through fiscal year 2006. The act also created the position of the Coordinator for International Intellectual Property Enforcement, appointed by the President, to head NIPLECC. The NIPLECC co-chairs are to report to the Coordinator. In July 2005, Commerce Secretary Gutierrez announced the presidential appointment filling the IP Coordinator position. Since then, NIPLECC has added an assistant, a policy analyst, part-time legislative and press assistants, and detailees from USPTO and CBP. 
Since the Consolidated Appropriations Act, NIPLECC has held two formal meetings but has not issued an annual report since 2004. In October 2004, the President launched STOP, an initiative to target cross-border trade in tangible goods and strengthen U.S. government and industry IP enforcement actions. The initiative is led by the White House under the auspices of the National Security Council and involves collaboration among six federal agencies: the Departments of Commerce, Homeland Security, Justice, and State; USTR; and the Food and Drug Administration. STOP has five general objectives: (1) empower American innovators to better protect their rights at home and abroad, (2) increase efforts to seize counterfeit goods at our borders, (3) pursue criminal enterprises involved in piracy and counterfeiting, (4) work closely and creatively with U.S. industry, and (5) aggressively engage our trading partners to join U.S. efforts. The IP Coordinator is also serving as the coordinator for STOP. Both agency officials and industry representatives with whom we spoke consistently praised the IP Coordinator, saying that he was effectively addressing their concerns by speaking at seminars, communicating with their members, and heading U.S. delegations overseas. STOP has energized U.S. efforts to protect and enforce IP and has initiated some new efforts; however, its long-term role is uncertain. One area where STOP has increased efforts is outreach to foreign governments. In addition, STOP has focused attention on helping small- and medium-sized enterprises better protect their IP rights. Industry representatives generally had positive views on STOP, although some thought that STOP was a compilation of new and ongoing U.S. agency activities that would have occurred anyway. STOP’s lack of permanent status as a presidential initiative and lack of accountability mechanisms could limit its long-term impact. 
Agency officials participating in STOP cited several advantages to the initiative. They said that STOP energized their efforts to protect and enforce IP by giving them the opportunity to share ideas and support common goals. Officials said that STOP had brought increased attention to IP issues within their agencies and the private sector as well as abroad, and they attributed this to the fact that STOP came out of the White House, thereby lending it more authority and influence. Another agency official pointed out that IP was now on the President’s agenda at major summits such as the G-8 and the recent EU-U.S. summits. STOP has initiated some new efforts, including coordinated U.S. government outreach to foreign governments that share IP concerns and have enforcement capacities similar to those of the United States. For example, the United States and the European Union (EU) have formed the U.S.-EU Working Group on Intellectual Property Rights, and in June 2006, the United States and the European Union announced an EU-U.S. Action Strategy for Enforcement of IP Rights meant to strengthen cooperation in border enforcement and encourage third countries to enforce IP rights and combat counterfeiting and piracy. One particular emphasis of STOP has been to help small- and medium-sized enterprises (SMEs) protect their IP in the United States and abroad through various education and outreach efforts. In 2002, we reported that SMEs faced a broad range of impediments when seeking to patent their inventions abroad, including cost considerations and limited knowledge about foreign patent laws, standards, and procedures. We recommended that the Small Business Administration (SBA) and the USPTO work together to make a range of foreign patent information available to SMEs. 
Within the last year, an SBA official told us that SBA began working with STOP agencies to distribute information through its networks and recently linked SBA’s website to the STOP website, making information about U.S., foreign, and international laws and procedures accessible to its clients. Many industry representatives with whom we spoke viewed STOP positively, maintaining that it had increased the visibility of IP issues. For example, one industry representative noted a coordinated outreach to foreign governments that provided a more collaborative alternative to the Section 301 process, whose punitive aspects countries sometimes resented. Another indicated that his association now coordinates training with CBP that is specific to his industry as a result of contacts made through STOP. In addition, most private sector members with whom we spoke agreed that STOP was an effective communication mechanism between businesses and U.S. federal agencies on IP issues, particularly through the Coalition Against Counterfeiting and Piracy (CACP), a cross-industry group created by a joint initiative between the Chamber of Commerce and the National Association of Manufacturers. Private sector officials stated that CACP meetings are their primary mechanism for interfacing with agency officials representing STOP. Some industry representatives, however, questioned whether STOP had added value beyond highlighting U.S. IP enforcement activities. Some considered STOP to be mainly a compilation of ongoing U.S. IP activities that pre-dated STOP. For example, Operation Fast Link and a case involving counterfeit Viagra tablets manufactured in China, both listed as STOP accomplishments, began before STOP was created. In addition, some industry representatives believed that new activities initiated under STOP would likely have occurred without STOP.
As a presidential initiative, STOP was not created by statute; has no formal structure, funding, or staff; and appears to have no permanence beyond the current administration. NIPLECC, on the other hand, is a statutory initiative, receives funds, and is subject to congressional oversight. Recently, the lines between NIPLECC and STOP have blurred, possibly lending STOP some structure and more accountability. For example, as mentioned before, NIPLECC’s IP Coordinator is also the focal point for STOP. In addition, NIPLECC recently adopted STOP as the strategy it is required to promulgate under the Consolidated Appropriations Act of 2005. This legislation calls for NIPLECC to establish policies, objectives, and priorities concerning international intellectual property protection and intellectual property law enforcement; promulgate a strategy for protecting American intellectual property overseas; and coordinate and oversee implementation of these requirements. However, the nature of the relationship between STOP and NIPLECC is not clear. Although the IP Coordinator has recently reported in congressional hearings that NIPLECC adopted STOP as its strategy, there have been no formal announcements to the press, industry associations, or agency officials responsible for carrying out STOP activities. In addition, STOP documents do not refer to NIPLECC. Our meetings with agency and industry officials indicated that they are unclear about the relationship between STOP and NIPLECC. The absence of a clearly established relationship makes it difficult to hold NIPLECC accountable for monitoring and assessing the progress of IP enforcement under STOP. We believe that accountability mechanisms are important to oversight of federal agency efforts and can contribute to better performance on issues such as IP protection. 
One of STOP’s five goals is to increase federal efforts to seize counterfeit goods at the border, but work we are conducting for this Subcommittee illustrates the kinds of challenges that STOP faces in achieving its goals. CBP and ICE are responsible for border enforcement efforts, but their top priority is national security. CBP has taken several steps since fiscal year 2003, when it made IP matters a priority trade issue, to update and improve its border enforcement efforts. While CBP seizures of IP-infringing goods have grown steadily since fiscal year 2002, the total estimated value of seizures during that time generally did not exhibit similar growth. Additionally, some steps that CBP is taking to improve IP enforcement are works in progress whose impact on this STOP objective is uncertain. CBP’s ability to effectively enforce IP rights at the border is also challenged by limited resources for such enforcement and by long-standing weaknesses in its ability to track the physical movement of goods entering the United States using the in-bond system. STOP documents cite increases in IP-related seizures as a positive indicator of its efforts to stop counterfeit goods at the border. The overall task of assessing whether particular imports are authentic has become more difficult as trade volume and counterfeit quality increase. The number of IP-related seizures has grown steadily, with CBP and ICE together making about 5,800 seizures in fiscal year 2002 and just over 8,000 seizures in fiscal year 2005. However, there is no corresponding trend in the estimated value of such seizures. The estimated value of goods seized in fiscal years 2002 and 2003 was $99 million and $94 million, respectively. This figure jumped to a peak of about $139 million in fiscal year 2004, but dropped back to the former level, about $93 million, in fiscal year 2005.
According to CBP officials, the agency’s goal is to focus its resources in part on high-value seizures, but a large percentage of annual seizure activity does not result in a significant seizure value. For example, nearly 75 percent of fiscal year 2005 seizures were small-scale shipments made at mail and express consignment facilities (facilities operated by companies that offer express commercial services to move mail and cargo, such as the United Parcel Service) or from individuals traveling by air, vehicle, or on foot. These seizures represented about 14 percent of total estimated seizure value in that year. Conversely, about 14 percent of fiscal year 2005 seizures involved large-scale shipments (i.e., containers) and accounted for about 55 percent of that year’s estimated seizure value. Goods emanating from China have risen from about 49 percent of the estimated domestic value of all IP seizures in fiscal year 2002 to about 69 percent in fiscal year 2005. While CBP seizes goods across a range of product sectors, in recent years seizures have tended to be concentrated in particular goods, such as apparel, handbags, cigarettes, and consumer electronics. CBP also seeks to increase seizures of goods involving public health and safety risks, and its data show that the estimated domestic value of seized goods involving certain health and safety risks, specifically pharmaceuticals, electrical articles, and batteries, increased during fiscal years 2002-2005. However, seizures in these and certain other health and safety categories represented less than 10 percent of the total estimated domestic value of seizures in fiscal year 2005, and seizures of other potentially dangerous goods, such as counterfeit auto parts, remain relatively limited. For example, CBP estimated in a letter to an automotive industry trade association that it made 14 seizures of certain automotive parts in fiscal years 2003-2005.
A representative from another automotive industry trade association noted that CBP’s ability to make seizures in this area depends on its receiving quality information about counterfeiters from companies. In various STOP documents, CBP cites steps it has taken to improve IP enforcement, but many of these are works in progress whose impact and effectiveness are undetermined. CBP identified IP matters as a priority trade issue in fiscal year 2003 and developed an agency-wide strategy for IP enforcement. The strategy addresses several components of IP enforcement, such as targeting (identifying high risk shipments), international coordination, communication to employees, and industry outreach. A CBP official who oversees the IP strategy told us that CBP seeks to perform IP enforcement more efficiently, and the strategy notes the importance of conducting IP enforcement while minimizing the burden on front line resources whose priority is national security. Several elements of the strategy were specifically designated as activities to support STOP. CBP’s key STOP-related activity is the creation of a statistical computer model that is designed to identify container shipments that are at higher risk of involving IP rights violations. To develop the model, CBP examined elements of past seizures and container examinations and identified certain factors that were significant characteristics of IP-infringing imports and that could be used to identify future IP rights violations. CBP piloted this model on a nation-wide basis for about one month in February 2005, but the pilot revealed several issues that need to be addressed before the model can be implemented. CBP plans to pilot the model again for up to 3 months this summer at two land border ports and one seaport. CBP will use the results of the second pilot to further evaluate the viability of the model. Another STOP-related activity for CBP is the use of post-entry audits to assist with IP enforcement. 
CBP officials said using such audits for this purpose is a new approach designed to assess whether companies have adequate internal controls to prevent them from importing goods that infringe IP rights. Initiated in fiscal year 2005, the audits are likely to work best with established importers but may be less effective for dealing with importers that are engaged in criminal activity and deliberately take steps to evade federal scrutiny. CBP selected 40 known and potential IP-infringing companies to audit in fiscal years 2005-2006, and by July 2006 had completed 17 of these audits. In three audits, CBP found that the companies possessed or had already sold infringing goods that were not seized at the border. In two of these cases, CBP imposed penalties on the companies totaling about $4.6 million. In the third case, the audit closed in September 2005, but the decision on whether to impose penalties is still pending within CBP. A CBP official said that some less significant IP-infringing activity was found in several other audits, but CBP chose not to impose penalties in these cases. CBP also found that internal controls to prevent IP rights violations were lacking or inadequate for most of the 17 companies, and has worked with them to improve these controls. A third STOP activity for CBP is the development of a system that allows companies to electronically record their IP rights through CBP’s website. While trademark and copyright protection is obtained from USPTO and the Copyright Office, respectively, these rights must be separately recorded with CBP, for a fee. Recording with CBP provides CBP officials with information about the scope, ownership, and representation of the protected IP rights being recorded.
Although CBP officials have said recordation is important because it helps CBP effect legally defensible border enforcement, some companies fail to record their rights with CBP, either because they are unaware of the recordation requirement or because they choose not to. The electronic recordation system, implemented in December 2005, is designed to streamline the process; reduce processing times; and, ideally, increase the number of recordations. A link to the recordation system has been established on USPTO’s website, and a link from the Copyright Office is planned. CBP expects that most paper-based applications will eventually be eliminated. While these are important steps, we have not yet evaluated the impact of the new recordation system. Several industry representatives have cited other concerns about recordation generally, such as long recordation processing times and the effective lack of border protection caused by the inability to record copyrights with CBP before such rights are issued by the Copyright Office. For example, one private sector representative said that during the 6 to 9 months it takes to process a copyright, pirated master CDs may be allowed to enter the United States because the rights holder has not yet been able to record the title with CBP. CBP and ICE priorities and resource allocations changed dramatically after September 2001, and our initial work indicates that some headquarters and field resources for IP enforcement have declined since then. As you indicated in your statement at the June 2005 IP hearing, the ultimate success of STOP, and of IP enforcement generally, depends on whether agencies are able to recruit, train, and retain the necessary workforce to meet their objectives. You also noted that prior hearings before this Subcommittee revealed that human capital issues were hindering federal enforcement of trade laws. 
At several border locations we visited, we found that resources for trade and IP enforcement are thinly spread, certain IP enforcement positions had been reduced or eliminated, and one location faced challenges in filling vacant CBP Officer positions. At CBP port operations, employees in two job categories are responsible for IP enforcement — CBP Officers and Import Specialists. CBP Officers are responsible for targeting incoming shipments for security and trade purposes and conducting physical examinations of suspect goods. Import Specialists are responsible for assessing the actual value and composition of goods for duty and quota purposes and for making initial determinations of whether goods are believed to be in violation of U.S. IP rights laws. While CBP Officers are typically assigned to a single port of entry, Import Specialists assigned to a large port may be responsible for covering other smaller ports that report to the larger port. ICE field office agents investigate IP infringement cases. We have not yet gathered comprehensive data on the number of CBP Officers, Import Specialists, and ICE agents devoted to IP enforcement, but we found reduced resources, thinly spread, at several border locations that we have visited. At the Port of Los Angeles/Long Beach, the largest U.S. seaport by volume, two trade enforcement teams have been disbanded and their CBP Officers shifted to national security details. Port officials said that since the late 1990s, the number of CBP Officers performing trade-related examinations has dropped by about 43 percent, and the number of Import Specialists on an IP-devoted enforcement team has dropped by half. The Port of San Francisco services multiple port facilities, including two major seaports, two major airports, and seven smaller port locations. CBP Officers at the San Francisco air cargo facility said that 4 out of 13 CBP Officers are assigned to inspect cargo for trade violations. 
These 4 officers share coverage of a 7-day work week, such that about 2 CBP Officers perform trade inspections on any given day. In 2001, there were about 12 CBP Officers assigned to trade inspections. San Francisco’s Director of Field Operations told us that filling 33 vacancies within his approximately 450 CBP Officer positions is a high priority. Currently, there are 3 Import Specialists, down from 6 in 2003, that focus primarily on IP enforcement and service the seaports, airports, and smaller ports within the Port of San Francisco’s area. ICE also performs IP enforcement and houses the National IPR Coordination Center (called the IPR Center), a joint effort between ICE and the FBI intended to serve as a focal point for the collection of intelligence involving, among other things, copyright and trademark infringement. Currently, 9 of the 16 authorized ICE positions are filled and a 10th is slated to be filled. Neither of the 2 authorized CBP positions is filled. Additionally, in January 2006, 7 of 8 FBI positions were empty and the 8th position was filled by rotating FBI staff. In July 2006, an FBI official told us that no FBI staff were working at the IPR Center because of limited physical space and pressing FBI casework, but that some staff would return in September 2006. The ICE field office in Los Angeles, one of the largest field offices in the country, had two commercial fraud enforcement teams before the formation of the Department of Homeland Security, but now has one. The number of agents working on commercial fraud enforcement cases, which include IP enforcement, has dropped from about 14 to 9 since 2003. However, an official from this office said resource changes have not affected how the team addresses IP enforcement nor caused it to turn away any IP enforcement cases. CBP and ICE officials have identified the in-bond system as a mechanism that has been used to circumvent import and IP laws and regulations, presenting an enforcement challenge.
A significant portion of goods received at U.S. ports do not immediately enter U.S. commerce but are instead shipped “in-bond” for official entry at other U.S. ports or are transported through the United States for export. When goods are shipped in-bond, they are subject to national security inspections at the port of arrival, but are exempt from U.S. duties or quotas and formal trade inspections until they reach the final port where they will officially enter U.S. commerce. For many years, GAO and others have noted weaknesses in the in-bond system used to monitor shipments between ports. CBP and ICE officials recognize that the in-bond system has been used by certain importers to bring counterfeit and pirated goods into the United States by avoiding official entry at the port of arrival and then diverting the goods afterwards. Some CBP officials said the in-bond system may contribute to imports of counterfeits by allowing some importers to “port shop” for ports that are less likely to identify IP violations. Indeed, CBP has made sizable IP-related seizures from the in-bond system, including 220 seizures valued at about $41 million in fiscal year 2004, representing nearly 30 percent of the total estimated domestic value of IP seizures in that year. In fiscal year 2005, there were 126 seizures valued at about $14 million, representing about 15 percent of the estimated domestic value of IP seizures that year. We have found weaknesses in the past with the in-bond system and are currently conducting follow-up work to determine whether these weaknesses have been corrected. Our audit is still underway, but work to date indicates that some previously identified weaknesses in tracking and monitoring in-bonds remain. For example, in January 2004 GAO reported that CBP collects significantly less information on in-bond shipments than on regular entries and that this lack of information makes tracking in-bond shipments more difficult.
In our recent work, CBP staff continue to observe that the limited information required from importers on in-bond shipments makes it difficult for CBP to ensure that the shipments have reached their proper destinations. Intellectual property protection is an issue that requires the involvement of many U.S. agencies, and the U.S. government has employed a number of mechanisms to combat different aspects of IP crimes, with varying levels of success. The STOP initiative, the most recent coordinating mechanism, has brought attention and energy to IP efforts within the U.S. government, and participants and industry observers have generally supported the new effort. At the same time, the challenges of IP piracy are enormous, and meeting them will require the sustained and coordinated efforts of U.S. agencies, their foreign counterparts, and industry representatives. Our initial observations on the structure of STOP suggest that it is not well suited to address the problem over the long term, as the presidential initiative does not have the permanence or the accountability mechanisms that would facilitate oversight by the Congress. Our ongoing work on IP protection efforts at the U.S. border, one of the five areas identified by STOP, also illustrates the types of challenges that need sustained attention to make progress on the issue. We believe that our more detailed reports, to be released in the near future, will contribute to continuing Congressional oversight of these issues. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

U.S. goods are subject to substantial counterfeiting and piracy, creating health and safety hazards for consumers, damaging victimized companies, and threatening the U.S. economy. In 2004, the Bush administration launched the Strategy for Targeting Organized Piracy (STOP), a multi-agency effort to better protect intellectual property (IP) by combating piracy and counterfeiting. This testimony, based on a prior GAO report as well as on observations from ongoing work, describes (1) the range and effectiveness of multi-agency efforts on IP protection preceding STOP, (2) initial observations on the organization and efforts of STOP, and (3) initial observations on the efforts of U.S. agencies to prevent counterfeit and pirated goods from entering the United States, which relate to one of STOP’s goals. STOP is the most recent in a number of efforts to coordinate interagency activity targeted at IP protection. Some of these efforts have been effective and others less so. For example, the Special 301 process (the U.S. Trade Representative’s process for identifying foreign countries that lack adequate IP protection) has been seen as effective because it compiles input from multiple agencies and serves to identify IP issues of concern in particular countries. Other interagency efforts, such as the National Intellectual Property Law Enforcement Coordination Council (NIPLECC), are viewed as less effective because little has been produced beyond summarizing agencies’ actions in the IP arena. While STOP has energized IP protection and enforcement efforts domestically and abroad, our initial work indicates that its long-term role is uncertain. STOP has been successful in fostering coordination, such as reaching out to foreign governments and private sector groups.
Private sector views on STOP were generally positive; however, some stated that it emphasizes IP protection and enforcement efforts that would have occurred regardless of STOP’s existence. STOP’s lack of permanent status and accountability mechanisms poses challenges for its long-term impact and Congressional oversight. STOP faces challenges in meeting some of its objectives, such as increasing efforts to seize counterfeit goods at the border, an effort for which the Department of Homeland Security’s Customs and Border Protection (CBP) and Immigration and Customs Enforcement are responsible. CBP has certain steps underway, but our initial work indicates that resources for IP enforcement at certain ports have declined as attention has shifted to national security concerns. In addition, prior GAO work found internal control weaknesses in an import mechanism through which a significant portion of imports flows, and which has been used to smuggle counterfeit goods.
Over the past several years, the federal government has reported an outstanding balance of delinquent debt in the range of $50 billion to $60 billion. The Debt Collection Improvement Act of 1996 (DCIA) reflected the Congress’ recognition that timely and effective agency debt collection efforts were needed to maximize collections of delinquent debts owed to the federal government. A central theme of the legislation is that before discharging any delinquent debt owed to any executive, judicial, or legislative agency, agencies should take all appropriate steps to collect the debt. Among the collection tools the act authorized were administrative offset (withholding some or all of a federal payment scheduled to be issued to the debtor), federal salary offset, referral to private collection agency (PCA) contractors, referral to agencies operating a debt collection center, reporting delinquencies to credit reporting bureaus, and administratively garnishing the wages of delinquent debtors. Some DCIA-authorized debt collection tools are mandatory, while others are discretionary. Administrative wage garnishment (AWG), one of the discretionary tools, is a process whereby an employer withholds amounts from an employee’s wages and pays those amounts to the employee’s creditor in satisfaction of a withholding order. Prior to the enactment of DCIA, agencies generally were required to obtain a court order before garnishing the wages of nonfederal employees. DCIA authorizes agencies to administratively garnish up to 15 percent of a debtor’s disposable pay to satisfy delinquent nontax debt owed to the United States. Under a separate statute, Education has had authority since 1993 to garnish up to 10 percent of the disposable pay of debtors who have defaulted on student loans. Treasury’s Financial Management Service (FMS) is responsible for promulgating regulations to implement AWG and other debt collection tools authorized by DCIA. In May 1998, 2 years after the act was passed, Treasury issued final regulations that provide agencies with an overall framework for implementing AWG.
The regulations authorize agencies to begin the AWG process as debt becomes delinquent, but they do not stipulate when in the collection cycle this tool may be used. According to the regulations, any federal agency that administers a program that might result in a delinquent nontax debt owed to the federal government and any agency that pursues recovery of such debt may administer AWG. Therefore, agencies holding delinquent debt may administer AWG in-house; they may authorize a debt collection agency, such as FMS, to administer AWG on their behalf as part of cross-servicing operations; or they may do both. To assist agencies in implementing AWG, Treasury issued AWG Form 329, known as the AWG package, in November 1998. The package includes a Letter to Employer & Important Notice to Employer, Wage Garnishment Order, Wage Garnishment Worksheet, and Employer Certification. In February 1999, FMS issued “Instructions to Federal Agencies for Preparing AWG Forms.” As is the case with other debt collection tools, Treasury’s regulations dealing with AWG provide for due process for debtors. The regulations require that at least 30 days before initiating wage garnishment proceedings, agencies notify delinquent debtors in writing of the nature and amount of the debt and of the agency’s intention to collect through deductions from pay. The notification must also include an explanation of the debtor’s rights regarding the proposed action. These rights include the opportunity to inspect and copy agency records related to the debt, to enter into a written repayment agreement with the agency, and to request a hearing concerning the existence or amount of the debt or the terms of the proposed repayment schedule under the garnishment order. An agency must provide a debtor with a hearing before it issues a garnishment order if the agency receives the debtor’s written request for a hearing on or before the 15th business day following the mailing of the notice.
If a debtor does not make a timely request for a hearing, the agency is to send a withholding order to the debtor's employer within 30 days after the debtor fails to make a timely request for a hearing. If the agency receives a debtor’s written request for a hearing after the 15th business day following the mailing of the notice, the agency must still provide a hearing to the debtor. In such a case, however, the agency is not to delay issuing a withholding order unless it determines that the delay in filing the hearing request resulted from factors over which the debtor had no control or receives information that it believes justifies delaying or canceling the withholding order. Following receipt of a withholding order, employers are required to certify to the agency certain information about the debtor, such as the debtor’s employment status and disposable pay available for withholding. The employer must deduct from all disposable pay paid to the debtor during each pay period the amount of the garnishment, which is the lesser of (1) the amount indicated on the garnishment order, up to 15 percent of the debtor’s disposable pay, or (2) the amount by which the debtor’s disposable pay exceeds an amount equivalent to 30 times the minimum wage. If multiple garnishments from various sources are applied to one debtor’s wages, the total garnishments may not exceed 25 percent of the individual’s disposable pay. Once the agency has fully recovered the amounts owed by the debtor, the agency is to send the debtor’s employer notification to discontinue wage withholding. Agencies are to review their debtors’ accounts at least annually to ensure that garnishment has been terminated for accounts that have been paid in full. The objective of our review was to determine the extent to which certain CFO Act agencies use or plan to use AWG as authorized by DCIA to collect delinquent nontax federal debts. 
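The per-pay-period withholding rule described above (the lesser of the order amount capped at 15 percent of disposable pay, or the amount of disposable pay above 30 times the minimum wage, with a 25 percent ceiling across multiple garnishments) can be sketched in a few lines. This is an illustrative sketch only, not agency code: the function name, the weekly pay-period assumption behind the 30-times-minimum-wage floor, and the minimum-wage figure passed in are ours; Treasury's regulations govern the actual computation.

```python
def awg_withholding(disposable_pay, order_amount, hourly_minimum_wage,
                    other_garnishments=0.0):
    """Per-pay-period AWG withholding under the rule described above.

    Illustrative sketch (names and weekly-pay assumption are ours);
    Treasury's AWG regulations govern the actual computation.
    """
    # (1) The amount on the garnishment order, capped at 15 percent
    #     of the debtor's disposable pay.
    capped_order = min(order_amount, 0.15 * disposable_pay)
    # (2) The amount by which disposable pay exceeds an amount
    #     equivalent to 30 times the minimum wage (weekly pay assumed).
    above_floor = max(0.0, disposable_pay - 30 * hourly_minimum_wage)
    amount = min(capped_order, above_floor)
    # With multiple garnishments from various sources, total withholding
    # may not exceed 25 percent of disposable pay.
    remaining_cap = max(0.0, 0.25 * disposable_pay - other_garnishments)
    return round(min(amount, remaining_cap), 2)

# Example: $1,000 weekly disposable pay, $5.15 hourly minimum wage (the
# federal rate in effect in 2001), a large outstanding order, no other
# garnishments: 15 percent of pay ($150) is less than the pay above the
# floor ($845.50), so $150.00 is withheld.
```

If another garnishment were already taking $200 from the same pay period, the 25 percent ceiling ($250) would leave only $50.00 available for AWG, illustrating why the order of competing garnishments matters to the creditor agency.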
As previously noted, we surveyed nine federal agencies: USDA, Education, DOE, HHS, HUD, VA, EPA, SBA, and SSA. Together, these agencies held about $40 billion of delinquent nontax federal debt as of September 30, 2000, which represented more than 90 percent of all CFO Act agencies’ reported delinquent nontax debt as of that date. We developed a survey instrument to obtain agency responses to a uniform set of questions (see appendix I) and received completed surveys from all nine surveyed agencies. We reviewed agency responses and followed up with cognizant agency officials, where necessary, to obtain any needed clarifications or additional information. Although we discussed certain of the survey responses with agency officials by telephone or electronic mail, we did not independently verify the reliability of all the information that agencies provided. We also conducted interviews with FMS officials responsible for regulations, guidance, and cross-servicing operations related to AWG and reviewed pertinent documents, including FMS’s “AWG Operations & Procedures Manual” and its “Performance Summary Report.” We performed our work from March 2001 through September 2001 in accordance with U.S. generally accepted government auditing standards. We requested written comments on a draft of this report from the 10 agencies covered by the report. All 10 agencies responded to our request and provided either written or oral comments, which are discussed in the “Agency Comments and Our Evaluation” section of this report and are incorporated in the report as applicable. Letters with comments from FMS, Education, HHS, HUD, SBA, SSA, and VA are reprinted and discussed in further detail, when applicable, in the appendixes. The nine large CFO Act agencies we surveyed had not used AWG as authorized under DCIA, thus undoubtedly losing some collection opportunities. Together, the surveyed agencies reported holding about $23 billion in consumer delinquent debt as of September 30, 2000. 
This is not to imply that AWG could be used to collect all such debt, because circumstances such as bankruptcy or appeals could limit the application of this debt collection tool. Eight of the nine surveyed agencies said that they planned to adopt AWG as authorized by DCIA. Four agencies expected to implement AWG in-house and, to varying degrees, through FMS’s cross-servicing program. One of these agencies, Education, has been using AWG in-house under separate statutory authority since 1993. The four remaining agencies indicated that they would not perform AWG in-house but would authorize FMS to apply AWG to debts they referred to FMS for cross-servicing. However, we found in previous work that agencies have not been promptly referring all eligible debts to FMS when they become 180 days delinquent, as required by DCIA. Prompt referral of eligible debts is especially important for agencies that contemplate relying primarily on FMS to conduct AWG through cross-servicing because FMS intends to apply AWG as a tool of last resort, to be used only after all other collection efforts have been exhausted. Use of AWG, whether in-house or at FMS, would likely yield a marked increase in agency collections of consumer delinquent debt. The increase would result largely from AWG’s effectiveness as leverage to obtain payment in full, to secure a repayment plan, or to obtain full payment on a compromised amount. According to testimony by debt collection experts, the mere threat of AWG is often enough to motivate repayment. These experts based their testimony on experience at Education, which indicated that employees did not want their employers to find out that they had defaulted on their student loans. As a result, according to the debt collection experts, about 50 percent of the debtors notified of Education’s intent to use AWG made payment arrangements instead of allowing their wages to be garnished.
According to Education officials and agency documents, collection of defaulted student loans has increased dramatically since Education implemented AWG in 1993 under the Higher Education Act, as amended. Education indicated that it had collected more than $306 million in principal and interest on defaulted student loans from fiscal year 1997 through March 2001 using 10 percent garnishment authority. The primary difference between AWG under DCIA and wage garnishment under the Higher Education Act is that the Higher Education Act allows up to 10 percent of disposable pay to be garnished, while DCIA allows up to 15 percent of disposable pay to be garnished. Eight of the nine surveyed agencies said they plan to implement AWG under DCIA authority. EPA, the ninth agency, determined that use of AWG would not be cost-effective because of its limited applicability to the agency’s debts. According to agencies’ survey responses and other agency correspondence, all eight agencies expect to implement AWG by the end of fiscal year 2003, as shown in table 1. Agencies gave various reasons for the delay in implementing AWG, including their need to focus priorities on the mandatory provisions of DCIA, to develop the required AWG regulations, and to complete the systems changes necessary to implement AWG. As shown in table 1, four agencies we surveyed (USDA, DOE, HUD, and VA) indicated that they plan to rely primarily on FMS to perform AWG as part of cross-servicing. DOE said that given due process requirements and efficiencies of processing debts, the agency prefers that FMS perform AWG. HUD indicated that it uses FMS’s cross-servicing program as its main “active” collection tool. VA indicated that it views AWG as a collection tool of last resort and stated that it would concentrate on its own established methods of collection, such as internal offset and referral of debts to the Treasury Offset Program. 
Although USDA stated that it plans to rely primarily on FMS to perform AWG, the agency did not comment on why it preferred this course of action. In addition, four of the surveyed agencies (Education, HHS, SBA, and SSA) plan to implement AWG in-house and, to varying degrees, through FMS’s cross-servicing program. The use of AWG in conjunction with other debt collection tools, whether performed in-house or at FMS, can provide leverage to obtain payments from delinquent debtors. Depending on the nature of an agency’s delinquent debt, relying on FMS to apply AWG as part of cross-servicing may be the best approach. FMS’s incorporation of AWG into the cross-servicing program would undoubtedly improve its collection success and make its cross-servicing collection efforts more comprehensive. However, relying primarily on FMS to perform AWG has definite limitations. First, not all delinquent debt reported by agencies as eligible for cross-servicing has been promptly referred to FMS in the past, and debt that has been referred has often been well beyond DCIA’s 180-day delinquency threshold. Second, under FMS’s cross-servicing program, AWG is considered to be a collection means of last resort and will therefore be used late in the debt collection process. To maximize the debt collection potential of AWG for debts referred to FMS for cross-servicing, agencies should send eligible debts to FMS promptly—even, when practicable, prior to the 180-day delinquency threshold. As we stated in our October 2001 testimony, debt collection experts have testified that AWG can be an extremely powerful debt collection tool, as the mere threat of AWG is often enough to motivate debtor repayment.
Although debt referred for cross-servicing was not reported separately as consumer or commercial debt on the Treasury Report on Receivables, the four surveyed agencies that plan to rely primarily on FMS for AWG implementation (USDA, DOE, HUD, and VA) together reported having referred only $288 million of about $690 million of all types of debt that they reported as eligible for cross-servicing as of September 30, 2000. For example, as discussed in our October 2001 testimony, the USDA agencies we reviewed (the Rural Housing Service and the Farm Service Agency) had not identified and promptly sent debts to FMS for cross-servicing. Consequently, if AWG had been attempted only on delinquent debts reported as referred for cross-servicing, substantial amounts of delinquent debt would not have been subject to this debt collection tool. Because FMS views AWG as a collection tool of last resort, it is critical that agencies relying on FMS to implement AWG refer debts promptly to FMS for cross-servicing, even, when practicable, before they reach the 180-day delinquency threshold. According to the “AWG Operations & Procedures Manual” developed by FMS for its PCA contractors, AWG is a tool to be used after all other collection efforts have been exhausted. FMS has taken the position that AWG should generally be the collection tool of last resort because AWG allows the government to receive only up to 15 percent of a person’s disposable wages, and only as long as the person is employed. The collection procedures FMS has provided to PCA contractors require that they attempt, first, to collect the entire debt in full with a single payment; second, to establish an acceptable payment plan that pays the debt in full; third, to establish an acceptable one-time compromise agreement; and fourth, to establish a compromise agreement that is paid off in 6 months.
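FMS's rationale rests on simple arithmetic: because garnishment captures at most 15 percent of disposable pay per pay period, even a modest debt can take many months to liquidate compared with a lump-sum payment or compromise. The sketch below illustrates that arithmetic; the debt and pay figures are hypothetical and the function name is ours, not drawn from the report or from FMS procedures.

```python
# Illustrative only: how long AWG alone would take to liquidate a debt,
# given DCIA's cap of 15 percent of disposable pay.
# The dollar figures below are hypothetical.

def months_to_liquidate(debt, monthly_disposable_pay, cap=0.15):
    """Months of garnishment needed to collect `debt` at the capped rate
    (ignores interest, fees, and changes in employment)."""
    monthly_garnishment = monthly_disposable_pay * cap
    return round(debt / monthly_garnishment, 1)

# A $6,000 debt against $2,500/month disposable pay garnishes $375/month,
# so liquidation takes 16 months.
print(months_to_liquidate(6_000, 2_500))   # 16.0
```

The slow drip of capped wage withholding is why FMS treats full payment, a repayment plan, and compromise as preferable outcomes, though, as discussed below, that does not preclude using the threat of garnishment earlier as leverage.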
The “AWG Operations & Procedures Manual” does not incorporate the use of AWG in conjunction with other debt collection tools as leverage to obtain payment in full, a repayment plan, or a more favorable compromise amount and payment schedule. The potential leverage of AWG and related collections may be delayed if agencies do not refer debts to FMS as soon as possible. Based on FMS’s established procedures for cross-servicing, debts agencies refer to FMS would typically age at least another 90 days before issuance of the AWG notice to the debtor and 120 days before issuance of the garnishment order to the employer. FMS first attempts to collect referred debts for 30 days at its governmentwide debt collection center before referring the debt to a PCA. Assuming that FMS’s cross-servicing activities operate in a manner consistent with the schedule in its manual, after referral, debts would generally remain with the PCA to be pursued using other collection tools for another 60 days before the agency could request FMS’s approval to mail the AWG notice. Debts would age another 30 days before the AWG package could be sent to the debtor’s employer. The four surveyed agencies that said they would rely primarily on FMS to implement AWG (USDA, DOE, HUD, VA) do not forward debts to FMS for cross-servicing until they are at least 61 days delinquent. Some debts are more than 180 days delinquent when they are sent to FMS. In response to our survey, USDA and DOE indicated that the delinquency timeframe for referring debts to FMS varies by field office. For DOE, debts range from 61 to 180 days delinquent at the time of referral to FMS. USDA did not provide the range of delinquency for debts referred to FMS. HUD indicated that it currently refers debts to FMS for cross-servicing when they are from 121 to 180 days delinquent. VA indicated that it refers debts to FMS when they are more than 180 days delinquent. 
Since PCAs will typically use AWG as a debt collection tool of last resort under FMS’s cross-servicing program, debts of agencies that rely primarily on FMS to implement AWG will be, at a minimum, more than 150 days delinquent (i.e., 61 days at referral plus 30 days at FMS plus 60 days at the PCA) before the notice is sent to the debtor. These debts will be more than 180 days delinquent (because an additional 30 days will elapse after the notice is mailed to the debtor) before wage garnishment begins. If the debtor requests a hearing within the required time frame, wage garnishment could be delayed by as much as 60 additional days pending a hearing decision. It is important to note that, regardless of what the surveyed agencies told us about when they are referring debts to FMS for cross-servicing, DCIA does not require agencies to refer debts to FMS until they are 180 days delinquent. Moreover, as previously mentioned, agencies have not in the past promptly referred all eligible debts that are 180 days delinquent to FMS. According to FMS data, as of September 30, 2001, more than 50 percent of debt referred for cross-servicing governmentwide was more than 2 years delinquent at the time of referral. As we have previously testified, industry statistics have shown that the likelihood of recovering amounts owed on a debt decreases dramatically as the age of the debt increases. Although FMS officials told us that the age of delinquency has no bearing on what can be collected using AWG, this view ignores AWG’s potential to motivate debtors to pay their debt in full, to enter into a repayment plan for the full amount, or to agree on a compromised amount. The old adage that “time is money” is very relevant to the application of AWG to delinquent debts. Therefore, whenever possible, eligible debts should be referred promptly to FMS for cross-servicing, even prior to the 180-day delinquency threshold established by DCIA.
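The delinquency arithmetic described above can be tallied explicitly. The sketch below accumulates the minimum age of a debt at each AWG milestone under FMS's cross-servicing schedule, starting from the earliest referral age the four agencies reported (61 days); the stage labels and function are ours, not FMS terminology.

```python
# Minimum debt age at each AWG milestone under FMS's cross-servicing
# schedule, as described in the report. Stage names are ours.

AWG_STAGES = [
    ("referred to FMS", 61),              # earliest referral age the agencies reported
    ("FMS debt collection center", 30),   # FMS attempts collection before PCA referral
    ("PCA uses other tools first", 60),   # PCA exhausts other tools before AWG notice
    ("AWG notice mailed to debtor", 0),
    ("garnishment order to employer", 30),
]

def milestone_ages(stages):
    """Return (stage, cumulative days delinquent) pairs."""
    total, out = 0, []
    for stage, days in stages:
        total += days
        out.append((stage, total))
    return out

for stage, age in milestone_ages(AWG_STAGES):
    print(f"{age:4d} days delinquent: {stage}")
# The notice goes out at 151+ days and garnishment begins at 181+ days;
# a requested hearing can add up to 60 more days.
```

The tally assumes the earliest referral the agencies reported; since many debts are already more than 180 days, or even years, delinquent at referral, actual ages at each milestone would typically be far higher.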
If AWG’s potential for boosting collections on delinquent debts is to be realized, agencies must develop clear implementation plans and regulations that are consistent with those issued by Treasury. Agencies must take these steps whether they intend to implement AWG in-house, to rely on FMS to implement AWG, or to do both. At the completion of our fieldwork, however, none of the eight agencies that plan to use AWG had comprehensive written implementation plans. In addition, although all eight agencies were developing regulations to implement AWG, none had finalized their regulations. It is not clear when the eight agencies will be able to take full advantage of the debt collection potential of AWG, either in-house or through FMS. As of the completion of our fieldwork, only two small agencies not included in our review, the Railroad Retirement Board and the James Madison Foundation, had provided FMS the authority to use AWG as part of cross-servicing. Although Treasury regulations do not require that agencies prepare written implementation plans before implementing AWG, we believe that comprehensive written implementation plans are critical for the eight surveyed agencies that intend to use AWG. At a minimum, each agency’s implementation plan should specify whether the agency intends to implement AWG in-house, through FMS, or both; the types of debts to which the agency will apply AWG, since AWG may not be a feasible means of collection for all types of debt the agency holds; the tasks involved in implementing AWG and who will have responsibility for carrying out each task; and the process for conducting hearings, regardless of whether AWG is conducted in-house or through FMS. Three of the agencies had plans, but the plans were deficient. The other five agencies said they did not have written implementation plans. Three surveyed agencies (HUD, SBA, and SSA) indicated on their surveys that they had a written plan for implementing AWG.
The plans they submitted, however, did not clearly describe how and by whom hearings would be conducted or clearly indicate when the agencies could fully implement AWG as a routine debt collection tool. In addition, SBA’s and SSA’s plans did not specify which types of debts would be subject to AWG. They did not address, for example, which age categories of debts would be subject to AWG and what the minimum debt amount subject to AWG would be. HUD’s plan stated that debts related to certain programs that are referred to FMS for cross-servicing would be subject to AWG, but the plan did not make it clear whether AWG would be applicable to all other programs and related debts administered by the agency. The other five surveyed agencies that plan to implement AWG indicated that they did not have written implementation plans. Three of these agencies (USDA, DOE, and VA) plan to rely primarily on FMS to perform AWG. The other two agencies (Education and HHS) plan to implement AWG in-house and, to varying degrees, through FMS’s cross-servicing program. The survey responses and follow-up information do not make it clear whether most of these agencies intend to develop an implementation plan. USDA stated that it has not prepared a formal AWG implementation plan because of a shortage of resources and the need to address other DCIA priorities. The importance of an AWG implementation plan for USDA was discussed in a hearing before your subcommittee on December 5, 2001. In that hearing, the commissioner of FMS stated that USDA intended to authorize FMS to use AWG as part of the cross-servicing program and needed to develop a plan to take full advantage of this debt collection tool. The commissioner emphasized that a significant percentage of USDA’s delinquent debt portfolio, such as Food Stamp Program debts, is exempt from cross-servicing by Treasury and therefore would not be subject to FMS’s AWG program. 
DOE indicated that it has not developed a written implementation plan because it has submitted its debt collection regulations to FMS to determine if they adequately cover AWG and will allow FMS to administer AWG functions on DOE’s behalf. VA indicated that it will rely on FMS to implement AWG and does not believe that a written implementation plan is necessary. HHS indicated that it plans to implement AWG based on departmental regulations, but the regulations have not yet been published. Although Education did not provide a written implementation plan, agency officials stated that they have prepared a system requirements document that includes steps for making the system changes necessary to administer AWG under DCIA. Notwithstanding the reasons agencies gave for not yet having written AWG implementation plans, we believe such plans are needed to help ensure that agencies fully incorporate AWG into their debt collection processes in the near future. We recognize that Education may not have as great a need for an implementation plan because the agency has wage garnishment experience under authority separate from DCIA. As of the completion of our fieldwork, none of the eight surveyed agencies that plan to use AWG as authorized by DCIA had finalized the regulations needed to implement AWG, but each agency was developing such regulations. Treasury regulations require an agency to prescribe regulations for the conduct of AWG hearings that are consistent with the Treasury regulations. FMS considers the performance of AWG hearings to be a creditor agency function and does not plan to conduct hearings on behalf of such agencies. FMS will refer debts back to creditor agencies to conduct required hearings. In the responses to our survey and in FMS’s discussions with agencies, a major concern agencies raised about AWG was their ability to handle the hearings that debtors may request once they receive an AWG notice. 
Two of the surveyed agencies (SBA and USDA) indicated that they anticipate obstacles related to the hearings process, including arranging for hearings and developing hearing procedures. Many agencies have stated to FMS that they do not have the staff to handle AWG hearings. Accordingly, agencies will have to either establish that capacity or obtain hearings services on a contract basis through another agency that is willing to provide such services. FMS has informed agencies that if they do not believe they have the staff to handle the hearing requests, other agencies are available to perform hearing activities for a fee. According to a VA official, for example, VA provides hearing services on federal salary offset to other agencies for about $100 per hearing. VA expects to conduct AWG hearings for its own agency and for other agencies for a similar fee. The best available indication of the frequency of hearing requests is from Education’s experience with wage garnishment under student loan legislation. During fiscal year 2000, Education issued 90,658 Notices of Intent for wage garnishment, and approximately 10 percent of the debtors who received a notice requested a hearing. AWG has the potential to be a powerful tool for collecting delinquent federal debts, especially those owed by debtors who are not currently making payments under an agreement with the agency. More than 5 years after the enactment of DCIA, which authorized but did not mandate use of AWG, and more than 3 years after Treasury issued implementing regulations for AWG, however, none of the large CFO Act agencies we surveyed had begun using AWG as authorized by DCIA, either in-house or through FMS. And although eight of the nine agencies we surveyed said they intended to use this debt collection tool, none had adequately completed crucial preliminary steps—preparing detailed implementation plans and developing the necessary implementing regulations. 
By failing to implement AWG, agencies have clearly missed opportunities to maximize collection of delinquent debts. Even when agencies do begin implementing AWG, those that rely primarily on FMS may find that the tool’s effectiveness—particularly its usefulness in leveraging full or compromise payments from debtors who wish to avoid wage garnishment—is limited because agencies have in the past failed to promptly refer a significant portion of eligible debts to FMS for cross-servicing and because FMS intends to use AWG as a collection tool of last resort, thus allowing debts (that may already be more than 180 days delinquent) to age significantly before sending an AWG notice to the debtor. To help ensure that agencies effectively incorporate AWG into their debt collection processes, we recommend that the secretaries of the Departments of Agriculture, Education, Energy, Health and Human Services, Housing and Urban Development, and Veterans Affairs; and the commissioner of the Social Security Administration direct their chief financial officers and that the administrator of the Small Business Administration direct the associate deputy administrator for capital access to take the following steps: Prepare comprehensive written implementation plans that clearly define, at a minimum, the types of debt that will be subject to AWG, the policies and procedures for administering AWG, and the process for conducting hearings. Some of the details that should be considered for inclusion in the plan are (1) whether the agency will conduct AWG in-house, at a debt collection center, or both; (2) the types of debts, if any, that will be sent to FMS prior to becoming 180 days delinquent; and (3) whether hearings will be conducted by the agency or contracted out. Complete and finalize regulations for conducting AWG. Use AWG in conjunction with other debt collection tools, when practicable, as leverage to obtain payments from delinquent debtors.
Expedite referrals of eligible debts to FMS for cross-servicing when relying on FMS to perform AWG. Agencies should refer such debts prior to the 180-day delinquency threshold when practicable. We also recommend that the commissioner of FMS modify FMS’s “AWG Operations & Procedures Manual” to incorporate the use of AWG in conjunction with other debt collection tools, when practicable, as leverage to obtain payments from delinquent debtors. Each of the 10 agencies covered by our report responded to our request for comments. We received a combination of written and oral comments from the nine agencies we surveyed and written comments from FMS. Letters with comments from FMS, Education, HHS, HUD, SBA, SSA, and VA are reprinted and discussed in further detail, when applicable, in the appendixes. Eight of the nine agencies we surveyed either stated that they agreed with our report or indirectly indicated some level of agreement by describing their efforts to implement one or more of our recommendations. Several of these agencies provided updated responses to our survey, which was in large part the basis for this report. We modified our report to reflect stated changes in how agencies expected to implement AWG and any related schedule changes. We also incorporated a number of technical suggestions as appropriate. The ninth agency, EPA, stated that appropriate staff reviewed the report and the agency did not have any comments. The only explicit disagreement on our recommendations was expressed by FMS. While saying it strongly agreed that agencies relying on FMS to implement AWG should refer debts promptly to FMS for cross-servicing, it disagreed with our recommendation concerning its timing and philosophy in applying AWG. We recommended that FMS modify its “AWG Operations & Procedures Manual” to incorporate the use of AWG in conjunction with other debt collection tools, when practicable, as leverage to obtain payments from delinquent debtors.
FMS stated that PCA officials cannot threaten an action unless they actually intend to take it and that FMS policy requires that AWG only be used when all other attempts at collection have been exhausted. FMS acknowledged that its policy does allow expedited use of AWG in defined circumstances but said it continues to believe that AWG should be used only when all other collection attempts have been unsuccessful and the debtor has been given every opportunity to otherwise resolve the debt. We did not recommend that FMS threaten debtors with the use of AWG with no intention of using it. Rather, if, upon contacting the debtor, FMS could not obtain payment in full or a satisfactory repayment agreement, our view was that FMS or its PCA contractor could immediately initiate AWG. The intent would be to use AWG alone or in conjunction with other debt collection tools to liquidate the debt. Although FMS policy currently prescribes that its PCAs use AWG only after all other collection efforts have been exhausted, neither current law nor regulation contemplates that only one collection tool may be used at a time. In fact, the AWG regulations specifically state that agencies may pursue other debt collection remedies separately or in conjunction with AWG. As stated in our report, viewing AWG only as a collection tool of last resort ignores AWG’s potential to motivate debtors to pay their debt in full, to enter into a repayment plan for the full amount, or to agree on a compromise amount. In our view, deferring the notice of intended use negates a major benefit mentioned by experts on the utility of AWG. Their main point was that invoking the possibility of garnishment could motivate debtors who otherwise might not respond to a request for payment, or might not pay, to do so because they do not want their employers to become aware that they have delinquent federal debt.
FMS stated that it allows its PCAs under certain circumstances to initiate the AWG process before the expiration of the 60-day period following receipt of referred debt from other federal agencies. Our recommendation contemplates that such action should be the norm rather than the exception. As mentioned in our report, according to FMS data, more than 50 percent of debt referred for cross-servicing governmentwide as of September 30, 2001, was more than 2 years delinquent at the time of referral. Moreover, agencies are not required to send debts to FMS until they are more than 180 days delinquent. Consequently, FMS in most cases is not the first federal agency to attempt collection from or debt resolution with the debtors because creditor agencies should be attempting to collect their delinquent debts prior to sending them to FMS for cross-servicing. For FMS to wait until it or its PCA contractors have exhausted all collection efforts prior to initiating AWG by notifying the debtor that such action will be taken greatly diminishes AWG’s potential to help FMS leverage payment from debtors who are significantly delinquent on their obligations to the federal government and who likely had not cooperated with the referring federal agencies in resolving their debts. Finally, while there may be differences of opinion on when FMS should initiate AWG in its collection process, HUD in its comments on our report spoke in favor of our recommendation that FMS incorporate the use of AWG in conjunction with other debt collection tools rather than consider AWG as a collection tool of “last resort.” In developing its own implementation plan for AWG, HUD said that it expected that effective use of AWG would be part of FMS’s plan for servicing debt referred for cross-servicing. In that light, HUD said that it would expect that those assigned to cross-service debts would be encouraged to use AWG as a tool to obtain voluntary payment.
This view from HUD aptly summarizes our position and is a key aspect of the reasoning behind our recommendation that FMS use AWG in conjunction with other collection tools and not principally as a last resort. For the reasons offered above, we continue to believe that FMS should modify its “AWG Operations & Procedures Manual” and policy with regard to the use of AWG. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the chairmen and ranking minority members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform and to the ranking minority member of your subcommittee. We will also provide copies to the heads of the agencies we surveyed, the secretary of the treasury, and the commissioner of FMS. We will then make copies available to others upon request. Please contact me at (202) 512-3406 if you or your staff have any questions on this report. I can also be reached by e-mail at [email protected]. Key contributors to this assignment were Kenneth R. Rupar, Michael S. LaForge, Linda K. Sanders, and Michael D. Hansen. 1. See our discussion in the “Agency Comments and Our Evaluation” section. 1. Education stated that the draft report is misleading and should be revised because it leaves the reader with the impression that no agency is conducting AWG and does not give a clear synopsis of Education’s use of AWG. We disagree. The primary focus of our work was implementation of AWG as authorized by DCIA. We accurately stated that none of the nine CFO Act agencies we surveyed were using AWG as authorized by DCIA and that all but one agency indicated that they intend to do so. Despite this, we clearly state in this report, as Education noted in its response, that Education has effectively used wage garnishment under authority similar to DCIA’s since 1993 to collect delinquent student loans.
According to Education, such efforts have dramatically increased collections on delinquent student loans. We made this point at the beginning of our report, as well as in the body of our report and in a separate subsection that is titled “Education’s Use of Wage Garnishment under Separate Authority Has Increased Debt Collections.” 2. As stated in this report, Education has been using AWG under separate authority to garnish up to 10 percent of debtors’ disposable pay and plans to implement AWG under DCIA authority in fiscal year 2002. While we acknowledge that Education has had to consider the additional requirements to smoothly transition to the DCIA process, other agencies have also had to develop the necessary procedural and programmatic changes for implementing AWG. As such, we stated in this report that the eight agencies we surveyed that are planning to implement AWG under DCIA authority, including Education, gave various reasons for the delay in its implementation, including the need to complete the necessary systems changes. 1. Although SSA provided us copies of its written implementation plan and project scope agreement, we do not consider either document to be a comprehensive written AWG implementation plan. The one-page written implementation plan that was provided to us did not address (1) which debts would be subject to AWG, (2) which age categories of debts would be subject to AWG, and (3) what would be the minimum debt amount subject to AWG. Also, the plan did not clearly describe how and by whom hearings would be conducted or clearly indicate when SSA could fully implement AWG. Although SSA provided us its project scope agreement, that agreement documented only the scope of software changes needed to implement AWG. 1. We have revised our report to reflect that responsibilities for addressing our recommendations at SBA reside with the associate deputy administrator for capital access. 1.
We understood that the survey and implementation plan submitted by HUD covered the agency as a whole. The intent of our comment was to address our concern that HUD’s departmentwide implementation plan specifies the use of AWG only for certain debts referred to FMS for cross-servicing and does not make it clear whether AWG would be applicable to all other programs and related delinquent debts administered by the agency that are not referred to FMS for cross-servicing. 2. We have revised our report to reflect HUD’s expected implementation date of fiscal year 2002. 1. HHS suggested that the report highlight that it will utilize AWG in-house and through FMS. Although table 1 reflects that HHS will be using AWG both in-house and through FMS’s cross-servicing program, we have revised the body of our report so that it more clearly reflects that HHS and certain other agencies will implement AWG in-house and to varying degrees through FMS’s cross-servicing program. 2. HHS suggested that the date for its expected implementation be changed to fiscal year 2002. As of the completion of our fieldwork, HHS estimated its expected implementation date to be the end of calendar year 2001. We have revised our report to incorporate HHS’s updated expected implementation date. 3. HHS stated that it transmits data elements to Treasury that identify a claim as consumer or commercial debt and suggested that we modify the report to identify the agencies that do not report the consumer or commercial debts separately. We understand that agencies transmit debts to Treasury for cross-servicing as consumer or commercial debts; however, debts are not separately reported in this manner on the Treasury Report on Receivables. We have revised our report to clarify that debts referred are not reported as consumer or commercial debts on the Treasury Report on Receivables.
To improve federal debt collection, the Debt Collection Improvement Act of 1996 established a framework of debt collection tools, including administrative wage garnishment (AWG). This report discusses the extent to which nine agencies use or plan to use AWG to collect delinquent nontax federal debt and provides GAO's perspective on ways to make AWG more widespread and effective. GAO found that none of the nine agencies had yet implemented AWG.
Although AWG is not mandatory, by failing to use this tool--more than five years after the act's enactment and more than three years after the Department of the Treasury issued implementing regulations--agencies have missed an opportunity to maximize collection of delinquent debt. Agencies identified various reasons for not yet implementing AWG or for deciding not to do so, including the need to focus their resources on implementing the act's mandatory provisions. Although some agencies or programs may have valid reasons for not implementing wage garnishment, all of the larger programs that deal with individuals and that have a demonstrated risk of financial loss resulting from unpaid debt should have AWG as a viable debt collection option. Reliance on the Financial Management Service (FMS) to perform AWG as part of cross-servicing might be prudent for some agencies, provided the collection tool is used as early as practicable to maximize its collection potential. However, the act does not require that agencies refer debts for cross-servicing until they are more than 180 days delinquent, and FMS, which views wage garnishment as a tool of last resort, does not contemplate initiating AWG in most cases until the debt has been with FMS for at least 90 days. As a result, FMS's use of AWG could be significantly limited and delayed.
Prior to 1996, agencies generally did not have the authority to adjust civil monetary penalty maximums that were established in statute. Congress would occasionally adjust individual penalties or specific groups of penalties through various statutes but not all civil penalties. As a result, many penalties had not been changed for decades. When the Federal Civil Penalties Inflation Adjustment Act of 1990 (the 1990 Act) was enacted, Congress noted in the “Findings” section of the legislation that inflation had weakened the deterrent effect of many civil monetary penalties. The stated purpose of the 1990 Act was “to establish a mechanism that shall (1) allow for regular adjustments for inflation of civil monetary penalties; (2) maintain the deterrent effect of civil monetary penalties and promote compliance with the law; and (3) improve the collection by the federal government of civil monetary penalties.” However, the act did not give agencies the authority to adjust their civil monetary penalties for inflation. In 1996, Congress enacted section 31001(s)(1) of DCIA, amending the 1990 Act to require agencies to issue regulations adjusting their covered penalties for inflation. The 1990 Act as amended by DCIA required agencies with covered penalties to adjust them by regulation published in the Federal Register by October 23, 1996, and at least once every 4 years thereafter. The 1996 Inflation Adjustment Act amendment limited the first such adjustment to 10 percent of the penalty amount. It required specific calculation and rounding procedures to be followed and excluded penalties under certain statutes (e.g., the Occupational Safety and Health Act of 1970, the Social Security Act, the Internal Revenue Code of 1986, and the Tariff Act of 1930). However, as we reported in March 2003, the 10 percent cap on initial adjustments prevented some agencies from fully adjusting penalties for inflation when the penalties’ values had eroded by hundreds of percent since Congress last set or adjusted them.
The 1990 Act was further amended in 2015 to improve the effectiveness of civil monetary penalties and to maintain their deterrent effect. Specifically, the Inflation Adjustment Act requires: 1. agencies to adjust each civil monetary penalty with an initial catch-up adjustment through an interim final rulemaking (IFR) published in the Federal Register no later than July 1, 2016, with the adjustments taking effect no later than August 1, 2016; 2. agencies to include in the annual AFRs, submitted under OMB Circular A-136, Financial Reporting Requirements, information about the civil monetary penalties within the agencies’ jurisdiction, including the inflation adjustment of the civil monetary penalty amounts; and 3. OMB to issue guidance to agencies for implementing the inflation adjustments. In response to the Inflation Adjustment Act, in February 2016, OMB issued OMB Memorandum M-16-06, Implementation of the Federal Civil Penalties Inflation Adjustment Act Improvements Act of 2015, to guide agencies in implementing the civil monetary penalty inflation adjustment requirements of the Inflation Adjustment Act and, in October 2016, revised OMB Circular A-136 to include guidance to federal agencies for including inflation adjustments in annual financial reporting. Consistent with OMB guidance for implementing inflation adjustments, federal agencies are responsible for identifying the civil monetary penalties that fall under the statutes and regulations they enforce. Agencies with questions on the applicability of the inflation adjustment requirement to an individual penalty should first consult with their offices of general counsel and then seek clarifying guidance from OMB if necessary.
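The catch-up mechanism can be illustrated with a minimal sketch. The multiplier (a ratio of CPI-U values), the cap limiting the catch-up increase to 150 percent of the current penalty amount, and rounding to the nearest dollar are stated here as assumptions about the 2015 amendments' methodology rather than details drawn from this report; the CPI-U figures and the function name are illustrative placeholders.

```python
# Hedged sketch of an initial catch-up inflation adjustment. Assumed
# methodology: multiply the penalty by a cost-of-living multiplier
# (October 2015 CPI-U divided by the CPI-U for October of the year the
# penalty was last set), cap the increase at 150 percent of the current
# penalty, and round to the nearest dollar. These are assumptions for
# illustration, not the official OMB procedure.

def catch_up_adjustment(current_penalty: float,
                        cpi_u_oct_2015: float,
                        cpi_u_oct_base_year: float) -> int:
    multiplier = cpi_u_oct_2015 / cpi_u_oct_base_year
    adjusted = current_penalty * multiplier
    # Cap: the increase may not exceed 150% of the current penalty,
    # so the adjusted amount is at most 250% of the current penalty.
    capped = min(adjusted, current_penalty * 2.5)
    return round(capped)

# Placeholder CPI-U values: a $1,000 penalty whose base-year index was
# 100 adjusts to $2,379; with a base-year index of 80, the uncapped
# result (about $2,974) exceeds the cap, so it becomes $2,500.
print(catch_up_adjustment(1000, 237.9, 100))  # 2379
print(catch_up_adjustment(1000, 237.9, 80))   # 2500
```

Under this sketch, the cap binds only when inflation since the penalty was last set is large, which is the situation the catch-up provision targets.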
In addition, agencies may request OMB concurrence that they be allowed to adjust the amount of a civil monetary penalty by less than the amount required under the Inflation Adjustment Act (a reduced catch-up adjustment determination), if they demonstrate that the otherwise required increase of the penalty or penalty range would have a negative economic effect or that the social costs would outweigh the benefits. Consistent with the Inflation Adjustment Act, agencies should consult with OMB before proposing a reduced catch-up adjustment determination. We confirmed with OMB that it did not receive any requests from agencies to be allowed to adjust the amount of a civil monetary penalty by less than the required amount in 2016. Of the 52 federal agencies reviewed, we determined that 49 federal agencies were required to publish IFRs with the initial catch-up inflation adjustment amounts in the Federal Register. We excluded three agencies—the International Trade Commission and Postal Regulatory Commission based on their determination that they are not subject to the Inflation Adjustment Act provisions, and the Tennessee Valley Authority based on its determination that it currently has no civil monetary penalties to assess or enforce. We found that 34 of the 49 federal agencies subject to the Inflation Adjustment Act published IFRs with the initial catch-up inflation adjustment amounts in the Federal Register by the July 1, 2016, deadline. In addition, 9 of the 15 remaining agencies made the required publication after the July 1, 2016, deadline set by the Inflation Adjustment Act and by December 31, 2016. The remaining 6 agencies had not made the required publication as of December 31, 2016.
Because of the complex nature of the initial catch-up inflation adjustments, OMB staff from the Office of Federal Financial Management and the Labor Branch emphasized to us that their preference was for federal agencies to take the necessary time to publish accurate and complete initial catch-up inflation adjustments through IFRs, even if agencies were not able to meet the Inflation Adjustment Act publication deadline. In light of the challenges agencies faced in publishing on time and their efforts to publish accurate and complete initial catch-up adjustments, we are reporting on agencies that published these adjustments as of December 31, 2016; however, we do not consider these agencies to be in compliance with the July 1, 2016, deadline set by the Inflation Adjustment Act. The remaining 6 agencies subject to the Inflation Adjustment Act that did not publish IFRs with the initial catch-up inflation adjustment amounts in the Federal Register by December 31, 2016, were the 1. Merit Systems Protection Board (MSPB), 2. National Aeronautics and Space Administration (NASA), 3. National Endowment for the Arts (NEA), 4. General Services Administration (GSA), 5. National Transportation Safety Board (NTSB), and 6. U.S. Department of Agriculture (USDA). As a result of our inquiries, 3 of these federal agencies, MSPB, NASA, and NEA, subsequently published their initial catch-up inflation adjustment amounts in the Federal Register in June 2017. GSA officials told us that GSA had difficulties coordinating internally to timely submit its IFR and that, as of July 31, 2017, GSA projected that it would publish the initial catch-up inflation adjustment amounts in the Federal Register within the next 90 days. In addition, NTSB officials stated that although NTSB has the statutory authority to assess civil penalties for violations, it has never sought to impose civil penalties. Thus, NTSB originally determined that it did not have to publish an initial catch-up inflation adjustment.
However, as a result of our inquiries, NTSB officials told us that NTSB now plans to publish its initial catch-up inflation adjustment amounts in October 2017. USDA officials stated that USDA is in the process of preparing and reviewing a draft rulemaking and plans to begin its clearance process to submit an initial catch-up inflation adjustment rulemaking for publication in the Federal Register in 2017. Although GSA, NTSB, and USDA state that they plan to publish catch-up inflation adjustments in the Federal Register, it has now been over a year since the July 1, 2016, publication deadline set by the Inflation Adjustment Act. Without timely adjustments of their civil monetary penalties, there is an increased risk that agencies’ civil monetary penalties are not keeping pace with inflation. Civil monetary penalties are a key method of regulatory enforcement, providing federal agencies authority to punish violators and serving as a deterrent to future violations. If not adjusted timely for inflation, civil monetary penalties can lose their ability to punish willful and egregious violators appropriately and to deter future violations. Figure 1 summarizes the status as of December 31, 2016, of the publication of the initial catch-up inflation adjustments for civil monetary penalties for the 52 federal agencies that we reviewed. Further details of each federal agency’s status are provided in appendix II. Under the Inflation Adjustment Act and OMB Circular A-136, section II.5.11, Civil Monetary Penalty Adjustment for Inflation, federal agencies are directed to report in the 2016 AFRs information about the civil monetary penalties within agencies’ jurisdiction, including the catch-up inflation adjustment of the civil monetary penalty amounts. Agencies must report this information if they, or their subbureaus or divisions, enforce any civil monetary penalties.
Of the 52 federal agencies that we reviewed, we found that 9 agencies are not subject to the requirements to report civil monetary penalties information in the AFR. Of the remaining 43 agencies, we found that 32 agencies reported information in the 2016 AFRs about their civil monetary penalties, including the catch-up inflation adjustment of the civil monetary penalty amounts, as directed by OMB guidance. The other 11 federal agencies did not report civil monetary penalty catch-up inflation adjustment information in the 2016 AFRs, as required by the Inflation Adjustment Act and consistent with OMB guidance. Officials from 8 of the 11 federal agencies told us that although their agencies had the authority to assess or enforce penalties within their jurisdictions, they had not actually assessed or enforced any civil monetary penalties during the reporting period. Some of these officials indicated that they interpreted the terms “assess” and “enforce” in the implementing guidance, and “enforce” in the reporting guidance, to mean that they imposed a civil monetary penalty. Therefore, they took the position that their agencies did not need to report civil monetary penalties information in the 2016 AFRs because they did not impose any civil monetary penalties during the reporting period. However, as a result of our inquiries, some of these agencies informed us that they plan to report civil monetary penalty information in the 2017 AFRs despite not having imposed civil monetary penalties during the reporting period. OMB staff stated that it is the agencies’ responsibility to determine whether they “assessed” or “enforced” civil monetary penalties. The standards for internal control in the federal government state that the agency’s management should externally communicate the necessary quality information to achieve its objectives.
In addition, the Inflation Adjustment Act requires that the Director of OMB issue guidance to federal agencies on implementing the inflation adjustments required under the act. With clarified OMB guidance, the risk of agencies’ inconsistent AFR reporting of civil monetary penalty adjustment information would be reduced. The remaining 3 federal agencies—the Federal Election Commission (FEC), Federal Maritime Commission (FMC), and National Indian Gaming Commission (NIGC)—did not report in the 2016 AFRs information about the civil monetary penalties, including the catch-up inflation adjustment of the civil monetary penalty amounts. Officials from FEC, FMC, and NIGC indicated that they inadvertently omitted the information on civil monetary penalty adjustments in the 2016 AFRs and that they should have reported civil monetary penalty information. All three agencies informed us that they plan to report the required civil monetary penalty information in the annual AFRs, starting with fiscal year 2017. Without timely and complete reporting of civil monetary penalty information in the AFRs, OMB and other decision makers may not have the information needed to help ensure the effectiveness of civil monetary penalties in enforcing statutes and preventing violations. Accordingly, it is important that agencies report such information in the AFRs. Figure 2 summarizes the status of reporting civil monetary penalties information in the AFRs of the 52 federal agencies that we reviewed for fiscal or calendar year 2016 (as applicable, as agencies may have different year-end reporting dates). Further details of each federal agency’s reporting status are provided in appendix III. Civil monetary penalties prescribed by statute that are timely adjusted for inflation allow agencies to punish violators appropriately and serve as a deterrent to future violations. 
While most federal agencies subject to the Inflation Adjustment Act have followed the act’s requirements and OMB’s guidance, some agencies did not timely publish their civil monetary penalty catch-up inflation adjustments in the Federal Register or report their civil monetary penalty information in the 2016 AFRs. Specifically, three federal agencies have taken more than a year since the publication deadline set by the Inflation Adjustment Act to publish inflation catch-up adjustments in the Federal Register, and three other federal agencies have not yet reported civil monetary penalty information in the AFRs. In addition, agencies had differing interpretations of OMB’s guidance related to civil monetary penalty inflation adjustment implementation that could result in inconsistent AFR reporting of civil monetary penalty adjustment information. Without timely adjustments of their civil monetary penalty amounts and their publication in the Federal Register, there is an increased risk that agencies’ civil monetary penalties are not keeping pace with inflation. In addition, without timely and complete reporting of their civil monetary penalties in AFRs, decision makers may not have the information needed to help ensure the effectiveness of civil monetary penalties in enforcing statutes and preventing violations. To help ensure that agencies’ civil monetary penalties are adjusted timely and keep pace with inflation, we are making the following three recommendations. 1. The Acting Administrator of the General Services Administration (GSA) should publish the initial catch-up inflation adjustment in the Federal Register. 2. The Acting Chairman of the National Transportation Safety Board (NTSB) should publish the initial catch-up inflation adjustment in the Federal Register. 3. The Secretary of Agriculture (USDA) should publish the initial catch-up inflation adjustment in the Federal Register.
To help ensure timely and complete reporting of agencies’ civil monetary penalty information in agency financial reports (AFR) and to provide the Office of Management and Budget (OMB) and other decision makers with the information needed to help ensure the effectiveness of civil monetary penalties in enforcing statutes and preventing violations, we are making the following four recommendations. 4. The Chairman of the Federal Election Commission (FEC) should publish civil monetary penalties within its jurisdiction, including any penalty adjustments, in FEC’s 2017 AFR. 5. The Acting Chairman of the Federal Maritime Commission (FMC) should publish civil monetary penalties within its jurisdiction, including any penalty adjustments, in FMC’s 2017 AFR. 6. The Chairman of the National Indian Gaming Commission (NIGC) should publish civil monetary penalties within its jurisdiction, including any penalty adjustments, in the Department of the Interior’s 2017 AFR. 7. The Director of OMB should clarify its guidance related to civil monetary penalty inflation adjustment information that agencies are required to report in the AFRs. We provided a draft of this report to the six federal agencies to which we directed recommendations—FEC, FMC, GSA, NIGC, NTSB, and USDA—and to OMB. FMC, GSA, and NIGC provided written comments, which are reprinted in appendixes IV through VI, respectively. FMC neither agreed nor disagreed with our recommendation, but stated that FMC plans to publish updates to its civil monetary penalty information in its 2017 performance and accountability report. GSA agreed with our recommendation and stated that it is developing a comprehensive plan to address it. NIGC generally agreed with our findings and recommendations and provided a technical comment, which we incorporated as appropriate. Officials from FEC, NTSB, and USDA provided e-mail responses to our draft report. 
The Director of Congressional, Legislative and Intergovernmental Affairs at FEC stated in an e-mail that FEC had no comments. In an e-mail, the Governmental Affairs Liaison at NTSB neither agreed nor disagreed with our recommendation, but stated that NTSB plans to publish the initial catch-up inflation adjustment in October 2017, which we incorporated in the report. The Attorney-Advisor in the Office of General Counsel at USDA stated in an e-mail that USDA did not have any comments. OMB staff from the Office of Federal Financial Management, the Labor Branch, and General Counsel met with us to provide oral comments. OMB staff generally agreed with our recommendation; however, they suggested that we revise the recommendation to use broader terms. We agreed with this suggestion and modified the report accordingly to allow OMB more flexibility to meet the intent of our recommendation. OMB staff also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Chairman of the Federal Election Commission, the Acting Chairman of the Federal Maritime Commission, the Acting Administrator of the General Services Administration, the Chairman of the National Indian Gaming Commission, the Acting Chairman of the National Transportation Safety Board, the Secretary of Agriculture, the Director of the Office of Management and Budget, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9399 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
This report addresses to what extent federal agencies subject to the Federal Civil Penalties Inflation Adjustment Act of 1990, as amended (Inflation Adjustment Act), have complied with the requirement to (1) publish their initial catch-up inflation adjustments in the Federal Register and (2) report in the 2016 agency financial reports (AFR) information about the civil monetary penalties within the agencies’ jurisdiction, including the inflation adjustment of the penalty amounts, as directed by the Office of Management and Budget’s (OMB) guidance. To address our first objective, we obtained the population of 52 federal agencies that could be subject to the applicable provisions of the Inflation Adjustment Act from OMB’s summary list. To assess the completeness of the population of the federal agencies identified by OMB, we compared OMB’s summary with GAO’s previously identified list of federal agencies reporting civil monetary penalties. We performed a broader electronic search in the Federal Register to identify any other federal agencies that published civil monetary penalty information from January 1, 2012, through December 31, 2016. Of the 52 federal agencies identified by OMB, we excluded three federal agencies—the International Trade Commission, Postal Regulatory Commission, and the Tennessee Valley Authority (TVA)—based on their determinations about the applicability of the Inflation Adjustment Act to their agencies. For the remaining 49 federal agencies, we electronically searched the Federal Register to determine whether the required interim final rulemakings (IFR) with civil monetary penalties, including catch-up inflation adjustments, were published from February 24, 2016 (issuance date of the OMB implementation guidance for fiscal year 2016), through August 1, 2016 (effective date established in the Inflation Adjustment Act for the new penalty levels). 
We conducted meetings with OMB staff from the Office of Information and Regulatory Affairs, the Office of Federal Financial Management, and the Labor Branch to gather information on federal agencies’ activities and reporting in compliance with the Inflation Adjustment Act and in accordance with OMB guidance. We categorized federal agencies for our first objective by determining (1) whether the agency complied with the Inflation Adjustment Act requirement to publish its initial catch-up inflation adjustments by July 1, 2016, with the adjustments taking effect no later than August 1, 2016; (2) whether the agency published its initial catch-up inflation adjustments in calendar year 2016 (i.e., no later than December 31, 2016); (3) whether the agency is subject to the catch-up adjustment provisions of the Inflation Adjustment Act; and (4) whether the agency had applicable civil monetary penalties to assess or enforce. Some agencies stated that they were not required to publish any catch-up inflation adjustment of the civil monetary penalty amounts through an IFR because (1) they were not subject to the catch-up adjustment provisions of the Inflation Adjustment Act, (2) the authority under which they assess and enforce civil monetary penalties was expressly excluded by the act, or (3) they determined that they had no applicable civil monetary penalties under the act. We relied on the agencies’ determinations regarding applicability and did not independently verify the information they provided. Also, we followed up with agencies that had not published their initial catch-up inflation adjustments through IFRs in the Federal Register. We contacted the appropriate officials within these agencies for explanations as to why their agencies did not publish the required IFRs with civil monetary penalty catch-up inflation adjustments and whether they believed that their agencies should have published the IFRs with this information.
To address our second objective, we reviewed the 2016 AFRs of the 52 federal agencies identified by OMB staff to determine whether the information presented was in compliance with provisions of the Inflation Adjustment Act and consistent with the guidance in OMB Circular A-136, Financial Reporting Requirements. We conducted meetings with OMB staff and discussed the various formats agencies used to present the civil monetary penalty inflation adjustment information. OMB staff explained to us that their emphasis was on federal agencies reporting accurate and complete inflation adjustment information rather than strictly following the format in OMB Circular A-136. Further, OMB staff considered the civil monetary penalty inflation adjustment table in OMB Circular A-136 to be only an illustrative example to facilitate federal agencies’ AFR reporting. Therefore, we considered agencies that reported civil monetary penalties, including catch-up inflation adjustment of the civil monetary penalty amounts, as being in compliance with the act, even if they did not strictly follow the OMB Circular A-136 table format example. Of the 52 federal agencies identified by OMB, we determined that certain agencies were not required to follow OMB Circular A-136 AFR reporting guidance. For example, we identified agencies established as government corporations (e.g., Corporation for National and Community Service, Federal Deposit Insurance Corporation, Pension Benefit Guaranty Corporation, and TVA) that were not required by OMB Circular A-136 to report civil monetary penalty information in the AFRs. Officials at the U.S. Postal Service, Consumer Financial Protection Bureau, Federal Reserve Board, International Trade Commission, and Postal Regulatory Commission stated that pursuant to certain laws or regulations, their agencies have determined that they are not required to report civil monetary penalty information in the AFRs.
In total, we found 9 federal agencies to which our analysis of the AFRs was not applicable. For the remaining 43 federal agencies, we reviewed the agencies’ 2016 AFRs to assess civil monetary penalty inflation adjustment information. The Inflation Adjustment Act requires agencies to include in the AFRs submitted under OMB Circular A-136 information about the civil monetary penalties within the agencies’ jurisdiction, including the inflation adjustment of civil monetary penalty amounts. OMB Circular A-136 states that agencies’ AFRs must include a Civil Monetary Penalty Adjustment for Inflation section “if there is a civil monetary penalty enforced by the agency, subbureau, or division.” Further, according to OMB Memorandum M-16-06, Implementation of the Federal Civil Penalties Inflation Adjustment Act Improvements Act of 2015, “a civil monetary penalty is any monetary assessment levied for a violation of a Federal civil statute or regulation, assessed or enforceable through a civil action in Federal court or an administrative proceeding.” We identified 8 federal agencies that stated that they did not impose a civil monetary penalty during the 2016 AFR reporting period. Based on our inquiries with OMB staff and responses from these 8 federal agencies, some agencies interpreted OMB’s reporting requirements as applying only to those federal agencies that have assessed or enforced civil monetary penalties during the reporting periods. The selected federal agencies’ staff confirmed to us that their agencies did not report the civil monetary penalty adjustment for inflation information in the 2016 AFRs because their agencies did not assess or enforce any civil monetary penalties during the 2016 AFR reporting period, as defined in the OMB Circular A-136 and OMB Memorandum M-16-06 guidance. As a result, we did not make any determination on these 8 federal agencies regarding compliance with the AFR reporting requirement provisions of the Inflation Adjustment Act.
Additionally, we inquired with officials at federal agencies that had not reported the civil monetary penalties information in the AFRs consistent with OMB guidance. We contacted the appropriate officials for explanations as to why their agencies did not report civil monetary penalties information, including the inflation adjustment of the penalty amounts, as directed by OMB guidance, and whether they believed their agencies should have reported such information in the agencies’ AFRs. We focused our review on the extent to which agencies followed the IFR publication and the AFR reporting requirements of the Inflation Adjustment Act. We did not attempt to verify whether a penalty adjusted for inflation by an agency appropriately met the definition of a covered civil monetary penalty in the Inflation Adjustment Act or that the adjustment was the correct amount. We conducted this performance audit from December 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 1 summarizes the status for the interim final rulemaking (IFR) requirement of the Federal Civil Penalties Inflation Adjustment Act of 1990, as amended (Inflation Adjustment Act), for each agency as of December 31, 2016, provided in the Office of Management and Budget’s summary list of 52 federal agencies that could be subject to the applicable provisions of the Inflation Adjustment Act. USDA officials stated that USDA has been working with the Office of Management and Budget (OMB) to publish its initial catch-up adjustment for inflation through an IFR in the Federal Register since August 2016 and expects to finalize the publication in 2017.
DHS and DOL jointly published a catch-up adjustment for inflation through an IFR for the H-2B Temporary Non-agricultural Worker Program.
Publication date: July 6, 2016; effective date: July 6, 2016.
Publication date: July 20, 2016; effective date: August 1, 2016.
GSA officials stated that GSA plans to publish its catch-up adjustment for inflation through an IFR in the Federal Register within 90 days after July 31, 2017.
As a result of our inquiries, MSPB published its catch-up adjustment for inflation through a final rule in the Federal Register on June 5, 2017.
As a result of our inquiries, NASA published its catch-up inflation adjustment through an IFR in the Federal Register on June 26, 2017.
As a result of our inquiries, NEA published its catch-up inflation adjustment through an IFR in the Federal Register on June 15, 2017.
Publication date: July 6, 2016; effective date: August 1, 2016.
NTSB officials stated that NTSB plans to publish its catch-up adjustment for inflation in the Federal Register in October 2017.
HUD stated that it delayed the effective date of its IFR to August 16, 2016, because 42 U.S.C. § 3535(o)(3) requires that “Any regulation implementing any provision of the Department of Housing and Urban Development Reform Act of 1989 that authorizes the imposition of a civil money penalty may not become effective until after the expiration of a public comment period of not less than 60 days.” Because DOI and DOT are each listed as an organizational unit in OMB’s summary list, we categorized the publishing status of these units as a whole rather than listing each component separately. While some components published IFRs by July 1, 2016, with an August 1, 2016, effective date, we categorized DOI and DOT as having published after the required date because at least one component for each of these units published its IFR after July 1, 2016; had an effective date after August 1, 2016; or both.
We provided the publication date and effective date for each component in the remarks column. ITC officials stated that the Tariff Act, the authority under which ITC assesses and enforces civil monetary penalties, is expressly excluded from the catch-up adjustment provisions of the Inflation Adjustment Act, and ITC is therefore not required to publish an IFR. PRC officials stated that PRC is not subject to catch-up adjustment provisions of the Inflation Adjustment Act as it is not considered to be a federal agency under 5 U.S.C. § 105, the definition applicable to the Inflation Adjustment Act. In addition, PRC officials stated that PRC had no applicable civil monetary penalties under the act because its civil monetary penalties do not have a specific monetary amount or a maximum amount. TVA officials stated that TVA determined that it had no applicable civil monetary penalties under the act. Because TVA has not received appropriations since 1999, TVA concluded that, given its self-funding status, it would be impossible for recipients of TVA funds to incur penalties. Table 2 summarizes federal agencies’ reporting of civil monetary penalty information in 2016 agency financial reports as required by the Federal Civil Penalties Inflation Adjustment Act of 1990, as amended (Inflation Adjustment Act), provided in the Office of Management and Budget’s summary list of the 52 federal agencies that could be subject to the applicable provisions of the Inflation Adjustment Act, for fiscal or calendar year 2016 (as applicable, as agencies may have different year-end reporting dates). In its fiscal year 2016 AFR, USDA disclosed that it has not finalized and published a final rule to make inflation adjustments as of November 2016 and thus did not include any current catch-up inflation adjustment information. DHS and DOL separately reported the civil monetary penalties information in their respective AFRs.
FCA officials stated that FCA did not assess or enforce any civil monetary penalties during the reporting period and therefore did not report this information in its AFR. FCSIC included civil monetary penalty information in its 2016 annual report issued on June 9, 2017. FHFA officials stated that FHFA did not assess or enforce any civil monetary penalties during the reporting period and therefore did not report this information in its AFR. MSPB officials stated that MSPB did not assess or enforce any civil monetary penalties during the reporting period and therefore did not report this information in its AFR. NEA officials stated that NEA did not assess or enforce any civil monetary penalties during the reporting period and therefore did not report this information in its AFR. NIGC officials stated that NIGC is an independent federal regulatory agency within DOI. NIGC’s financial information is consolidated and reported in DOI’s AFR. NTSB officials stated that NTSB did not assess or enforce any civil monetary penalties during the reporting period and therefore did not report this information in its AFR. OGE officials stated that OGE did not assess or enforce any civil monetary penalties during the reporting period and therefore did not report this information in its AFR. RRB officials stated that RRB did not assess or enforce any civil monetary penalties during the reporting period and therefore did not report this information in its AFR. STB officials stated that STB did not assess or enforce any civil monetary penalties during the reporting period and therefore did not report this information in its AFR. CFPB officials stated that CFPB is not required to follow OMB Circular A-136 under Section 1017(a)(4)(E) of the Dodd-Frank Wall Street Reform and Consumer Protection Act. PRC officials stated that PRC is not subject to the AFR reporting provision of the Inflation Adjustment Act as it is not considered to be a federal agency under 5 U.S.C. 
§ 105, the definition applicable to the act. In addition, PRC officials stated that PRC had no applicable civil monetary penalties under the act because its civil monetary penalties do not have a specific monetary amount or a maximum amount. In addition to the contact named above, Shirley Abel (Assistant Director), Jeremy Choi (Auditor-in-Charge), Vincent Gomes, Maxine Hattery, Jason Kelly, Vivian Kim, and Diana Lee made key contributions to this report.

The IAA includes a provision for GAO to annually submit to Congress a report assessing the compliance of agencies with the inflation adjustments required by the act. Specifically, GAO’s objectives were to determine to what extent federal agencies subject to the IAA have complied with the requirements to (1) publish in the Federal Register their initial catch-up inflation adjustments and (2) report in the 2016 AFRs information about civil monetary penalties, including the catch-up inflation adjustment of the civil monetary penalty amounts. GAO obtained the population of 52 federal agencies identified by OMB that could be subject to the applicable provisions of the IAA and, for those subject to the requirements, electronically searched the Federal Register and reviewed the 2016 AFRs. The Federal Civil Penalties Inflation Adjustment Act of 1990, as amended (the IAA), calls for federal agencies to (1) adjust civil monetary penalties for inflation with an initial catch-up inflation adjustment published in the Federal Register and (2) report in the 2016 agency financial reports (AFR) civil monetary penalty information, including the catch-up inflation adjustment. The act also requires the Office of Management and Budget (OMB) to issue implementation guidance. Most federal agencies subject to the IAA complied with the provisions of the act to publish their initial catch-up inflation adjustments in the Federal Register no later than July 1, 2016.
However, certain federal agencies with civil monetary penalties covered by the IAA did not comply with the statutory requirement. GAO found that six federal agencies did not publish their civil monetary penalty initial catch-up inflation adjustment amounts by December 31, 2016. As a result of GAO inquiries, three of these six subsequently published their catch-up adjustments for inflation in the Federal Register. In addition, most federal agencies subject to the IAA complied with the provisions of the act to report civil monetary penalty information in the 2016 AFRs, including the catch-up inflation adjustment. However, certain federal agencies with civil monetary penalties covered by the IAA did not comply with the statutory requirements. Specifically, three federal agencies did not report required information about the civil monetary penalty catch-up inflation adjustment in the 2016 AFRs. GAO also found that OMB had not provided clear guidance regarding federal agencies' reporting on civil monetary penalty information in the AFRs. As a result, officials from federal agencies had different interpretations, which could result in inconsistent AFR reporting of such information. GAO recommends that (1) six federal agencies take the necessary actions to meet IAA requirements and (2) OMB clarify its guidance regarding federal agencies' reporting on civil monetary penalties in AFRs. Two of the agencies did not comment on their respective recommendations, while the remaining four all indicated that they were taking actions to address the recommendations made to them. OMB generally agreed with the recommendation addressed to it but suggested a revision to use broader terms. GAO modified the recommendation accordingly to allow OMB flexibility to meet the intent of the recommendation.
The Federal Reserve System (Federal Reserve or System), our nation’s central bank, is unique among governmental entities in many respects, particularly in its finances. Unlike many government agencies whose operations are funded through the congressional appropriations process, the Federal Reserve deducts operations and other expenses from its revenues and transfers the remaining amount to the U.S. Department of the Treasury (Treasury). Although the primary mission of the Federal Reserve is to support a stable economy, not to make a profit or maximize its transfer to Treasury, System revenues contribute to total U.S. revenues; thus, deductions from System revenues represent a cost to U.S. taxpayers. In today’s constrained budget environment, Congress seeks to be well informed on all activities that affect the government’s finances. For this reason, Members of Congress have requested our assistance in providing information about the revenues and costs of the Federal Reserve, about factors that could affect Federal Reserve finances, and about the mechanisms used to control costs and conduct strategic planning. The Federal Reserve was created by the Federal Reserve Act of 1913 “. . . to provide for the establishment of Federal reserve banks, to furnish an elastic currency, to afford means of rediscounting commercial paper, to establish a more effective supervision of banking in the United States, and for other purposes.” The Federal Reserve’s basic structure includes a federal agency in Washington, D.C.—the Board of Governors of the Federal Reserve System (Board), whose seven members are appointed by the president and confirmed by the Senate. Figure 1.1 shows the organizational structure of the Board. The structure also includes 12 federally chartered corporations, located in various regions of the country (Federal Reserve districts), known as Federal Reserve Banks (Reserve Banks). Figure 1.2 shows the boundaries of the Federal Reserve districts.
The Federal Reserve is unusual in many respects compared with other entities established to carry out public purposes. It is a federal system that is part public and part private; although the Board is a government agency, the Reserve Banks are not. Also, the Federal Reserve does not follow the familiar federal structure of a “top-down” hierarchy, with all policymaking powers centralized in Washington, D.C. Instead, the Board and the Reserve Banks have shared responsibilities and policymaking authority in many areas of operation. The Federal Reserve’s part-public, part-private composition evolved from efforts to ensure our central bank’s balanced consideration of public and private interests at national and regional levels. A related feature is the Federal Reserve’s structural independence from political influence and direct taxpayer support. The Federal Reserve’s budget is not subject to the approval of Congress or the administration, and the central bank receives no government appropriations. The Reserve Banks are structured as self-supporting corporations, and the Board is financed by a levy on the Reserve Banks. The Reserve Banks are federally chartered corporations wholly owned by private-sector commercial banks (which, as members of the Federal Reserve, are known as member banks). In terms of assets and personnel, most of the Federal Reserve is in the Reserve Banks: virtually all of the Federal Reserve’s assets, liabilities, revenues, and expenses are carried on the books of the Reserve Banks, and 94 percent of the over 25,000 employees of the Federal Reserve are employed by the Reserve Banks. As of December 31, 1994, the assets of the Reserve Banks were $437.0 billion, and their liabilities and equity were $429.5 billion and $7.4 billion, respectively. Although the Reserve Banks are privately owned corporations, they differ from most such entities in important ways.
The ownership of all stock of the Reserve Banks confers on member banks only some of the typical attributes of control and financial interest. For example, member banks receive dividends on Federal Reserve stock, but these dividends are set by law at 6 percent of paid-in capital. Member banks may not sell the stock or pledge it as collateral for loans. Also, member banks elect six directors of the Reserve Banks’ boards of directors; the Board appoints three directors and designates one of these as chairman and another as deputy chairman of the board. Finally, the Reserve Banks have been considered instrumentalities of the federal government in at least one context. Although the Federal Reserve must report to Congress annually on its operations, decisions of System policymakers generally are not subject to ratification by the president or any presidential appointees in the executive branch or by Congress. The Federal Reserve Act not only gives the board of directors at each of the Reserve Banks powers of supervision and control over the Reserve Bank, but it also grants the Board of Governors the power to exercise general supervision over the Reserve Banks. For example, the board of directors at each Reserve Bank appoints the Reserve Bank’s top official and determines that official’s compensation; however, the Board of Governors is authorized to approve or disapprove these decisions. The Federal Reserve Act does not specifically define the general supervisory responsibilities of the Board of Governors. To help systemwide planning and decisionmaking in a system of shared responsibilities, the Board and the Reserve Banks participate in systemwide conferences. In 1994, the conference structure consisted of three major conferences: the Conference of Presidents (COP), the Conference of First Vice Presidents (COFVP), and the Conference of General Auditors, as described in table 1.1.
Each of these conferences is supported by committees, subcommittees, and task forces, involving many Federal Reserve officials and staff. In 1995, a new management structure for financial services took over many of the duties of the COFVP that were related to priced services. The new committee was established to streamline the Federal Reserve’s decisionmaking process and to make the Reserve Bank first vice presidents more accountable for strategic planning of major business lines throughout the entire System. This new structure and details on the Federal Reserve’s decisionmaking authority are discussed further in chapter 5. Many, but not all, of the responsibilities of the Federal Reserve are shared by the Board and the Reserve Banks. This is perhaps best explained in the context of a discussion of the basic mission of the Federal Reserve. The mission of the Federal Reserve today, which is critical to the nation’s economy, can be generally described in terms of four major functions or responsibilities: conducting monetary policy; maintaining the stability of the financial system and containing systemic risk that may arise in financial markets; providing services to financial institutions and other governmental agencies; and supervising and regulating banks and bank-holding companies. Table 1.2 briefly describes the basic responsibilities of the Federal Reserve and explains how these responsibilities are shared by components of the System. The budgetary controls and oversight structures internal to the Federal Reserve reflect the fact that each of the Reserve Banks has its own management structure in addition to being supervised by the Board. Concerning budget controls—the primary control over spending—the Reserve Banks and the Board have similar, but separate, processes, and the Board approves all final budgets. 
The Federal Reserve’s internal oversight structure includes general auditors at each of the Reserve Banks and the Office of the Inspector General (OIG) at the Board. The OIG is authorized to audit activities for which the Board has primary responsibility. In addition, some Board divisions, such as the Division of Reserve Bank Operations and Payment Systems (DRBOPS), Division of Human Resources Management (DHRM), and Division of Banking Supervision and Regulation (DBS&R), conduct operations reviews using the Board’s delegated authority to supervise the Reserve Banks. DRBOPS also conducts financial examinations, operational audits, and annual performance evaluations of the Reserve Banks. The budgetary controls and oversight structures of the Federal Reserve are discussed further in chapter 5. The Reserve Banks use accrual accounting to track expenses. For this reason, the Reserve Banks’ operating expenses reflect only the depreciation costs of capital acquisitions. Each major unit of the Reserve Banks has an operations budget and a capital asset budget, and each is controlled by the same process. Each year, the Federal Reserve deducts operations and other expenses from current revenue and transfers the remaining amount to Treasury. For example, in 1994, the Federal Reserve deducted about $3.5 billion from the current revenue of about $24 billion and returned about $20.5 billion to Treasury. The amount returned to Treasury has varied during the period of 1988 to 1994, as shown in table 1.3. The return of these remaining revenues to Treasury is in accordance with a policy established by the Board of Governors, and is not required by statute. The amount that the Federal Reserve transfers to Treasury each year is a function of the amount of System revenues and deductions, which are affected by a variety of factors. The Federal Reserve has three major sources of recurring revenue: interest on U.S. 
securities held primarily to collateralize currency (currency-related securities), other interest earned, and fee income. Table 1.4 identifies the sources of revenue and briefly describes the primary factors that determine the amounts received from each source. As detailed in this table, only in the case of priced services and net payments for fiscal agent services can the Federal Reserve set fees in response to fluctuations in the costs it incurs. Most Federal Reserve revenue comes from interest earned on U.S. government securities that are held by Reserve Banks and used to back, or collateralize, Federal Reserve notes. The Federal Reserve Act introduced Federal Reserve notes—the “paper money” that we use today. Before being issued to the public, Federal Reserve notes must be secured by legally authorized collateral—gold certificates, special drawing rights (SDR), and U.S. Treasury and federal agency securities purchased through open-market operations. About $381.5 billion in Federal Reserve notes were in circulation as of December 31, 1994, and as shown in table 1.4, the assets that collateralized those notes accounted for about 87 percent of the Federal Reserve assets at that time. Those assets represented mainly U.S. Treasury and federal agency securities. Throughout the 1988 to 1994 period, interest received on such currency-related securities ranged from 79 to 87 percent of all System revenues. Interest on noncurrency-related U.S. securities refers to interest earned on securities purchased to implement monetary policy. The Federal Reserve influences the economy mainly through a system of managed reserves. The Monetary Control Act of 1980 requires all depository institutions to hold reserve balances in accounts with the Reserve Bank for their Federal Reserve districts or other designated institutions or, as permitted by Board regulations, in the form of cash in their vaults.
The Federal Reserve sets reserve requirements for depository institutions and determines the total of reserves for the banking system. By purchasing securities in the market, the Federal Reserve expands reserves when it wants to lower interest rates and encourage more credit in the economy. Conversely, by selling securities, the Federal Reserve reduces reserves when it wants to raise interest rates and restrict the amount of credit. The Federal Reserve’s control over bank reserves enables it to play a major role in protecting the economy against systemic risk—that is, excessive disruption from financial market disturbances. In the event of a financial crisis, such as a plunge in stock prices, the Federal Reserve may increase the liquidity of markets by temporarily supplying extra reserves to the banking system through open-market operations. Interest on foreign securities was an additional source of revenue. In 1994, the Federal Reserve held $20.5 billion in foreign securities. The Federal Reserve also has a reciprocal swap network with different central banks, which is not included on the balance sheet. The Federal Reserve earns interest on foreign-denominated assets, but also faces risks in that it can gain or lose on trades. The Federal Reserve earns interest on loans provided to depository institutions through its discount window. Through the discount window, commercial banks and other depository institutions may borrow reserves from the Federal Reserve. These institutions are expected to draw on all other reasonably available sources of funds before coming to the discount window. The loans are made at a rate of interest—the discount rate—set by the Reserve Banks and approved by the Board. The Monetary Control Act requires the Federal Reserve to charge depository institutions for its services to financial institutions, setting fees in such a way that, over the long run, the revenues from these services will recover the costs of providing them. 
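The Monetary Control Act's long-run cost-recovery standard for priced services can be illustrated with a minimal sketch. The function name and all dollar and volume figures below are hypothetical, not the Federal Reserve's actual fee methodology, which also incorporates imputed costs that a private firm would face.

```python
# Illustrative sketch of long-run cost recovery for priced services.
# All figures are hypothetical; the Federal Reserve's actual pricing
# incorporates additional imputed costs omitted here.

def cost_recovery_fee(projected_costs: float, projected_volume: float) -> float:
    """Per-item fee at which projected revenue just recovers projected costs."""
    return projected_costs / projected_volume

# Hypothetical example: $600 million in projected check-processing costs
# spread over 15 billion items implies a fee of 4 cents per item.
fee = cost_recovery_fee(600e6, 15e9)
projected_revenue = fee * 15e9  # equals projected costs by construction
```

At such a fee, revenues from the service track the costs of providing it over the long run, which is the statutory standard described above.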
The act also requires all depository institutions to meet the Federal Reserve’s reserve requirements and grants these same institutions access to System services at market prices as well as access to short-term or discount loans. In addition to the services mentioned in table 1.2, the Federal Reserve provides securities safekeeping and transfer and noncash collection services. Services to financial institutions and Treasury constitute a large portion of the Federal Reserve expenses. From current revenues, the Federal Reserve deducts the cost of operating the 12 Reserve Banks and the Board and other expenses before transferring the remaining revenues to Treasury. Generally, these deductions can be categorized as expenses; other deductions; and losses, gains, and other adjustments. Table 1.5 briefly describes these deductions. For purposes of this report, operating expenses of the Federal Reserve include the cost of operating the Reserve Banks and the Board. As shown in table 1.6, the Federal Reserve makes other adjustments to current revenue. For example, gains and losses resulting from sales of U.S. Treasury and agency securities or changes in value of foreign exchange or assets denominated in foreign currencies are accounted for in adjustments to current revenue. Other gains or losses, such as those realized as the result of changes to accounting rules, are also accounted for in this way. In 1993, the Federal Reserve experienced a significant one-time deduction to revenues, primarily the result of the initial accrual of postretirement employee benefits required by a change in accounting rules. In recent years, adjustments have been volatile because of gains or losses on assets denominated in foreign currencies, both from actual transactions and from revaluation to dollars of assets held in portfolio. 
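The relationship among current revenue, deductions, and the amount transferred to Treasury reduces to simple subtraction. A minimal sketch, using the approximate 1994 figures cited in this chapter:

```python
# Sketch of the Treasury transfer arithmetic (amounts in billions of
# dollars), using the approximate 1994 figures cited in the text.

def transfer_to_treasury(current_revenue: float, total_deductions: float) -> float:
    """Revenue remaining for Treasury after expenses and other deductions."""
    return current_revenue - total_deductions

# About $24 billion in revenue less about $3.5 billion in deductions
# left about $20.5 billion for Treasury in 1994.
transfer_1994 = transfer_to_treasury(24.0, 3.5)
```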
Our objectives were to (1) analyze trends in the cost of Federal Reserve operations during 1988 to 1994 and the System’s management processes for controlling spending and overseeing operations, (2) identify opportunities to increase the System’s efficiency without adversely affecting its effectiveness, (3) identify ongoing and future developments that could significantly affect the Federal Reserve’s mission and finances, and (4) assess the System’s strategic management processes and identify actions the Federal Reserve could take to successfully meet future challenges and ensure the efficiency and effectiveness of its operations. To analyze trends in the Federal Reserve’s spending during 1988 to 1994, we developed a 7-year trend analysis on the expenses of the Federal Reserve. In addition, we examined the Reserve Banks’ and the Board’s cost-accounting systems to identify trends in the Federal Reserve’s operating expenses by mission-related activity (such as services to financial institutions and Treasury) and types of expense (such as salaries, benefits, and travel). However, we did not audit these numbers and did not verify their accuracy. We reviewed a variety of financial-accounting, cost-accounting, staffing, and budgetary reports prepared by the Board and the individual Reserve Banks. To analyze the Federal Reserve’s spending trends, we compared the System’s levels of spending to inflation and levels of discretionary spending of the federal government during the same period. Due to the limitations of our audit authority, we did not analyze direct costs relating to the buying, selling, and holding of securities and foreign currency or other valuables in connection with the implementation of monetary policy. To identify opportunities that exist to increase the System’s efficiency without adversely affecting its effectiveness, we concentrated our work on personnel compensation, travel, and procurement and contracting. 
More specifically, we reviewed Board and Reserve Bank personnel compensation data and policies, travel policies, and samples of travel vouchers; compared personnel compensation policies and regulations with those of the federal government; developed and mailed a standardized data collection instrument, or questionnaire, to obtain pay and benefits information from the human resource officers at the eight Reserve Banks where we did not do detailed audit work in these areas; contacted questionnaire respondents by telephone to further clarify their responses; did an in-depth review of the procurement process at the San Francisco Reserve Bank, focusing on items with a cost of more than $25,000; and interviewed officials at three Reserve Banks who were responsible for procurement, reviewed procurement guidance, traced purchases through the payment process, and reviewed selected contracts. To identify ongoing and future developments that could significantly affect the Federal Reserve’s mission and finances, we analyzed studies, data, and other information; interviewed knowledgeable officials of the Federal Reserve, including officials of DBS&R, on issues related to check clearing, currency processing, and bank supervision; and interviewed officials and analyzed supporting documentation from the Federal Reserve Bank of Kansas City to determine how the automation consolidation project, the Federal Reserve Automation Services (FRAS), had affected its check-clearing priced service. We did our work in accordance with generally accepted government auditing standards at the Board of Governors in Washington, D.C., and at the Federal Reserve Banks in Chicago; Dallas; Kansas City, MO; New York; Richmond; and San Francisco from January 1994 through September 1995. We obtained written comments on a draft of this report from the Federal Reserve Board of Governors. The comments are discussed at the ends of chapters 3 and 5 and reprinted in appendix V.
Staff of the Federal Reserve Board provided additional technical comments on the draft report, which were incorporated as appropriate. In the 7-year period from 1988 to 1994, as many commercial banks restructured to reduce operating costs and increase revenues, and Congress and the executive branch acted to constrain discretionary federal spending, the cost of operating the Federal Reserve increased steadily and substantially—from $1.36 billion in 1988 to $2.00 billion in 1994, or 48 percent. This percentage increase exceeded the 25-percent inflation that occurred during the same period, was also greater than the 17-percent increase in overall federal discretionary spending, and was almost the same as the 51-percent increase in federal nondefense spending. The growth in Federal Reserve expenses was driven by significant increases in expenses for bank supervision and regulation, personnel compensation, and extensive automation modernization and consolidation. Since the early 1980s, federal budgeting has been dominated by concern about the budget deficit. In the mid-1980s, the deficit was greater than $200 billion; in the early 1990s, the deficit approached $300 billion. By 1985, the high deficit had prompted the enactment of the Gramm-Rudman-Hollings Act (GRH), which established deficit targets for each year through fiscal year 1991, when the budget was to be balanced. GRH was amended in 1987. In 1990, Congress revised the GRH process with the Budget Enforcement Act of 1990 (BEA). Rather than focusing on fixed deficit targets, BEA was designed to limit legislative actions by limiting appropriations and restricting the creation or expansion of any entitlement program or tax cut. BEA categorizes all federal spending as either discretionary (funded through annual appropriation acts) or direct (entitlements or spending that results from laws other than appropriation acts).
BEA set discretionary spending limits—called caps—to control the aggregate amount that can be appropriated and expended for all discretionary programs in a fiscal year. Thus, all discretionary programs compete with each other within the caps. Direct spending programs are controlled by BEA’s pay-as-you-go (PAYGO) rules. The main PAYGO requirement is that legislation enacted during a session of Congress that increases direct spending or decreases revenues must be offset by revenue increases or a cut in another direct spending program. If the legislative action increases the deficit for a fiscal year, a sequestration from certain direct spending accounts occurs. In addition to these budgetary control mechanisms, the administration and Congress are attempting to make the federal government smaller and more cost-efficient by reforming or “reinventing” its agencies and work processes. For example, the National Performance Review (NPR), under the direction of the Vice President, is an administration initiative that proposes recommendations on how the federal government could work better and cost less. Throughout the 1980s and into the 1990s, many U.S. banks also made strategic decisions to restructure their activities, cut operating costs, and generally develop more efficient operations. The U.S. banking system underwent this transition in response to intense domestic and international competition, technological and financial innovations, and changing market conditions. For several reasons, the Federal Reserve is not subject to the same cost-reduction pressures that are affecting both public agencies and private-sector firms. The Federal Reserve, for example, is not subject to BEA, primarily because it operates without congressional appropriations and funds its operations and pays other expenses from the current revenue of the Reserve Banks. Also, unlike private firms, the Federal Reserve does not have a profit incentive to lower costs and increase efficiency.
From 1988 to 1994, as shown in figure 2.1, Federal Reserve operating expenses increased from $1.36 billion to $2.00 billion. This was an increase of about twice the rate of inflation and about three times the increase in overall federal discretionary spending during that period. During this same period, Federal Reserve operating expenses increased at about the same rate as nondefense federal discretionary spending. The Federal Reserve uses cost-accounting systems that allocate operating budget expenditures to both mission-related categories and expense categories. For budgeting and accounting purposes, expenditures of the Federal Reserve are accounted for in five major mission-related areas of the System: monetary policy, supervision and regulation, services to financial institutions and the public, services to Treasury and other government agencies, and System policy and oversight. Costs of support and overhead, including Board expenditures for System policy direction and oversight, are allocated to each Federal Reserve mission activity. The costs are distributed to the Federal Reserve mission activities in accordance with predetermined ratios derived from estimated usage. The Federal Reserve also categorizes operating expenses by expense categories. These categories include personnel compensation, equipment and software, buildings, travel, shipping, materials and supplies, and communication. As shown in table 2.1, although spending in all five of the Federal Reserve’s mission-related activities increased during 1988 to 1994, the supervision and regulation area experienced the highest spending growth. The rate of spending increases in mission-related activities ranged from 34 percent (services to financial institutions and the public) to 102 percent (supervision and regulation). The growth in the supervision and regulation area resulted from staff increases in the area.
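The distribution of support and overhead costs to mission activities by predetermined usage ratios amounts to a proportional allocation. The activity names below follow the text, but the overhead pool and the ratios themselves are hypothetical, chosen only to illustrate the mechanism:

```python
# Proportional allocation of an overhead pool by predetermined usage
# ratios. The $100 million pool and the ratios are hypothetical.

def allocate_overhead(overhead: float, usage_ratios: dict) -> dict:
    """Distribute an overhead pool across activities in proportion to usage ratios."""
    total = sum(usage_ratios.values())
    return {activity: overhead * share / total
            for activity, share in usage_ratios.items()}

allocation = allocate_overhead(
    100.0,
    {
        "monetary policy": 1,
        "supervision and regulation": 3,
        "services to financial institutions and the public": 4,
        "services to Treasury and other government agencies": 2,
    },
)
# The allocated amounts always sum back to the original overhead pool.
```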
Services to financial institutions and the public, services to government agencies, and bank supervision and regulation accounted for almost 90 percent of the Federal Reserve’s costs. Within the services to financial institutions activity, expenses for priced services increased substantially less than those for nonpriced services. Priced services expenses increased the least of any mission-related activity. According to Federal Reserve officials, growth in supervision and regulation expenditures was driven primarily by staff increases. These staff increases resulted from the implementation of the regulatory requirements mandated by banking reform laws, such as the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) and the Federal Deposit Insurance Corporation Improvement Act (FDICIA) of 1991. The supervision and regulation area increased its staff by over 42 percent during 1988 to 1994, from 2,456 to 3,498. During the same period, the Office of the Comptroller of the Currency (OCC) and the Federal Deposit Insurance Corporation (FDIC) increased their bank supervision and regulation staffs by about 24 percent and 45 percent, respectively. As shown in table 2.1, costs related to priced services grew at the smallest rate during 1988 to 1994. The Federal Reserve has a significant incentive to restrain priced services costs because under the Monetary Control Act, fees for services are to be based on the recovery of expenses, and the System competes with the private sector in providing services to financial institutions. The Monetary Control Act requires the Federal Reserve to charge financial institutions for priced services, such as check processing, and to recover its costs. In addition, the Federal Reserve competes with private check clearinghouses and automated clearinghouse (ACH) networks in processing checks and conducting ACH transactions.
As shown in table 2.2, the three Federal Reserve expense categories with the largest growth rates during 1988 to 1994 were equipment and software (85 percent), travel (66 percent), and personnel compensation (53 percent). The percentage growth in Federal Reserve expenses for personnel compensation, equipment and software, buildings, and travel all exceeded the rate of inflation (25 percent). The most significant expense category during 1988 to 1994 was personnel compensation, which accounted, in 1994, for nearly two-thirds of the Federal Reserve's operating budget and for over 70 percent of the total growth in the System's operating budget. (See fig. 2.2.) Because personnel compensation constituted such a large share of the budget, any increase in these costs had a disproportionate impact on the overall increase in Federal Reserve spending. Personnel compensation expenses increased from $858 million in 1988 to $1.3 billion in 1994, an increase of $456 million, or about 53 percent. As shown in table 2.2, the overall increase in Federal Reserve operating expenses from 1988 to 1994 was about $646 million; personnel compensation accounted for about $456 million, or 71 percent, of this increase. Figure 2.3 shows the contribution (by percentage) of each major expense category to this $646 million increase. Increases in staffing levels and in the overall cost of benefits, as well as changes in the workforce composition of the Federal Reserve, contributed to the rising cost of System personnel compensation in the 1988 to 1994 period. The percentage increases in the Federal Reserve's staffing levels and its overall cost of benefits exceeded the comparable percentages for the federal government (see table 2.3). Salary growth in the Federal Reserve and the federal government was comparable during the period. In addition, System benefit costs reflect the Federal Reserve's decision, in 1987, to begin amortizing the overfunded portion of its pension plan.
The effect of this action has been to reduce the Federal Reserve's expenses. In addition, the Federal Reserve started in 1993 to accrue the cost of health benefits for its retired employees. The federal government's employee pension program is prefunded, but both the pension program and postretirement health benefits carry unfunded liabilities. While the federal government's overall staffing level declined by 2 percent, the overall staffing level of the Federal Reserve increased from 24,829 to 25,744, or by about 4 percent. This percentage increase is about the same as employment growth in the federal government outside the Department of Defense. Viewed in terms of mission-related activities, staffing levels shifted considerably from 1988 to 1994, as shown in table 2.4. The largest increase in staffing occurred in supervision and regulation, whose staff tend to be white-collar employees, primarily bank examiners. The second-largest increase occurred in support, due primarily to the addition of automated data processing professionals, also white-collar staff. The largest decrease in staffing occurred in services to financial institutions, an activity with a larger proportion of blue-collar workers who handle check and currency processing. Federal Reserve salary costs, which totaled about $1 billion in 1994, constituted 79 percent of System personnel compensation costs that year. During 1988 to 1994, Federal Reserve salary costs increased by 44 percent, compared to an increase of 33 percent for the federal government. To adjust for the Federal Reserve's increase in staffing, we compared the salary costs of the System and the federal government on a per employee basis for 1988 to 1994. The results showed that the 39-percent increase in the per employee cost of Federal Reserve salaries was slightly higher than the 36-percent increase in the per capita salary cost in the federal government.
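The per employee adjustment used in this comparison simply divides each year's salary total by that year's staffing level before computing growth. In the sketch below, the staffing figures are from the report, but the salary totals are hypothetical values chosen only to produce total growth of roughly 44 percent:

```python
# Sketch of the per employee growth adjustment described above.
# Staffing levels are from the report; the salary totals are hypothetical.

def pct_growth(old, new):
    """Percentage growth from old to new."""
    return (new - old) / old * 100.0

staff_1988, staff_1994 = 24_829, 25_744        # Federal Reserve staffing levels

salaries_1988, salaries_1994 = 700e6, 1_008e6  # illustrative salary totals (~44% growth)

total_growth = pct_growth(salaries_1988, salaries_1994)
per_emp_growth = pct_growth(salaries_1988 / staff_1988,
                            salaries_1994 / staff_1994)
```

Because staffing grew about 4 percent over the period, per employee growth comes out several points below total growth, consistent with the 44 percent versus 39 percent figures above.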
The growth in the Federal Reserve's overall salary costs can be attributed to the new professional positions created by the Federal Reserve during 1988 to 1994. During this period, the cost of benefits represented an increasingly larger share of the Federal Reserve's personnel compensation costs: 16 percent in 1988 and 21 percent in 1994. During this same period, the cost of Federal Reserve benefits increased by 98 percent, compared to an increase of 59 percent for the federal government. Again adjusting for the Federal Reserve's increase in staffing, we compared the benefit costs of the System and the federal government on a per employee basis for 1988 to 1994. The results showed that the increase in the per employee cost of Federal Reserve benefits (96 percent) was higher than the increase for the federal government (62 percent). The difference in the growth of Federal Reserve and federal government benefits can be attributed to (1) higher costs for benefits offered to existing staff and (2) the additional cost of benefits for new positions created by the Federal Reserve during the period. Although travel expenses and equipment and software expenses constituted a small portion of the System's operating expenses during 1988 to 1994, these categories had the highest growth rates among operating expenses. As previously mentioned, equipment and software expenses and travel expenses increased by 85 percent and 66 percent, respectively. The growth in equipment and software expenses primarily resulted from depreciation and amortization of equipment, including computers and software. The Federal Reserve's travel expenses increased significantly more than the federal government's travel expenses during 1988 to 1994: 66 percent compared to 26 percent. This difference may be due, in part, to differences in the travel policies of the Federal Reserve and the federal government.
During 1988 to 1994, the Reserve Banks spent about $1.7 billion on capital acquisitions. Costs for capital expenditures are allocated to a budget separate from the Reserve Banks' operating budget. During 1994, the Reserve Banks spent approximately $270 million to acquire capital assets, such as computer equipment and software for the Federal Reserve's automation and consolidation project, known as FRAS. The Reserve Banks' 1994 capital expenditures represented a 60-percent increase over 1988 expenditures. However, unlike operating expenses, which increased steadily every year, capital expenditures grew sporadically, increasing in some years and decreasing in others. We did not compare the Reserve Banks' capital acquisitions to the federal government's capital acquisitions because of differences in the way capital spending is tracked. As previously mentioned, the large growth in the Reserve Banks' capital expenditures was partially the result of FRAS. Three automation consolidation centers will consolidate most of the independent mainframe operations of the 12 Reserve Banks, providing consolidated mainframe and contingency support for, among other things, the Federal Reserve's mission-critical payments system. The three FRAS centers are at the Dallas and Richmond Reserve Banks and the East Rutherford Operations Center of the New York Federal Reserve. As of December 31, 1994, the total capital acquisition cost for FRAS, primarily for computers and software, was $242 million. While many commercial banks were downsizing their organizations and the federal government was constraining spending, the Federal Reserve's costs were steadily increasing. During 1988 to 1994, the cost of operating the Reserve Banks and the Board increased by 48 percent, nearly twice the rate of inflation.
The growth in the Federal Reserve’s operating budget was primarily produced by cost increases in the supervision and regulation area and in the expense categories of personnel compensation, travel, and equipment. Although the Federal Reserve’s expenditures increased in all five mission-related areas, supervision and regulation experienced the most growth during 1988 to 1994. The priced services area—where the Monetary Control Act requires the Federal Reserve to recover costs and the Federal Reserve competes with the private sector in providing services to financial institutions—had the lowest cost growth. Thus, where the System had significant incentives to constrain costs, it appeared to have done so. Personnel compensation costs, accounting for nearly two-thirds of the operating budget, grew by 53 percent during 1988 to 1994. Also, personnel compensation costs represented over 70 percent of the growth in the Federal Reserve’s operating budget. The growth in Federal Reserve benefits and the increase of professional employment at the Reserve Banks contributed to the rise in the Federal Reserve personnel compensation costs. The results of our review of many policies and practices of the Board and Reserve Banks indicated that opportunities exist to reduce the Federal Reserve’s spending. Federal Reserve personnel compensation (pay and benefits) varied within the Federal Reserve and included benefits that were relatively generous compared to those of government agencies with similar responsibilities. We also found that improvements and greater uniformity in Reserve Bank policies and practices relating to travel reimbursements, contracting and procurement, and construction planning could reduce operating and capital spending costs and reduce the Reserve Banks’ risk of potential conflict of interest and favoritism. 
For example, we found that the Federal Reserve overlooked opportunities to reduce costs in planning and managing the design of the new Dallas Reserve Bank building. Finally, we found that a reduction in the Federal Reserve's annual transfer to its surplus account, while not representing a direct reduction in Federal Reserve expenditures, would have a positive budgetary impact in the year that any such reduction occurred. The boards of directors of the 12 Reserve Banks supervise and control the Reserve Banks, subject to the general supervision of the Board of Governors. The Board employs the individuals necessary to conduct its business. It also sets the salaries and benefits of its employees; approves compensation paid by Reserve Banks to their employees; and establishes regulations, policies, and practices covering employee benefits. The Reserve Banks and the Board have established differing employee pay levels and benefits. Except for the salaries of the Chairman and members of the Board, Federal Reserve salaries are not limited by ceilings established by the civil service pay system. Salaries at some other federal financial regulators, notably OCC and FDIC, also have not been limited by civil service pay rules. Two important objectives of the Federal Reserve's compensation system are to (1) attract, retain, and motivate qualified employees at all levels of responsibility and (2) be externally competitive with local and/or regional labor markets. To accomplish these objectives, the Board and the 12 Reserve Banks conduct individual salary surveys of private and public institutions with related job positions in local labor markets. The Board and Reserve Banks also periodically survey other organizations as they make benefit decisions. While we sought to understand the nature and scope of the Federal Reserve's surveys, we did not verify or analyze the data and methodology used in these surveys.
To determine whether opportunities exist to reduce the Federal Reserve's costs of operation, we reviewed personnel pay and benefits at the Reserve Banks and the Board. We compared the general procedures for setting Board and Reserve Bank salaries to those of the federal government. We also compared the specific benefits offered to Board and Reserve Bank employees to those of federal financial regulatory agencies with responsibilities analogous to some responsibilities of the Federal Reserve. The federal agencies whose salaries and benefits served as comparisons were FDIC, OCC, and the Securities and Exchange Commission (SEC). OCC and FDIC generally are not subject to civil service limitations in providing salaries and benefits to their employees, while SEC is subject to such limitations. We did not attempt to analyze differences in employee responsibilities when we compared Federal Reserve salary levels and benefits to those of FDIC, OCC, and SEC. The Federal Reserve attempts to offer salaries competitive with those of private sector organizations. The Federal Reserve is not constrained by maximum limits when setting its salaries for most positions. (The notable exceptions are the salaries of the Governors of the Board, including the Chairman.) In addition, both Board and Reserve Bank salaries are based on independent salary surveys of organizations that draw on similar local labor forces. As a result, the Federal Reserve offers salaries that are competitive with those of private sector organizations in a given locality. In contrast, most civil service salaries are subject to maximum levels, the highest of which is level IV of the executive pay schedule, which was $115,700 in 1994. In 1990, Congress passed the Federal Employees Pay Comparability Act (FEPCA), which provided for a comprehensive, long-term pay reform program designed ultimately to make federal salaries more competitive with the private sector.
Under FEPCA, locality pay adjustments were to be phased in over a 9-year period beginning in 1994. The goal was to reduce pay disparities between federal white-collar workers and nonfederal workers to no more than 5 percent by the year 2002. However, budget constraints have already resulted in reductions of scheduled locality pay adjustments for federal workers for 1995 and 1996. Thus, in contrast to the Federal Reserve, civil service agencies, including SEC, have been unable to offer their employees salaries comparable to those in local labor markets. Other federal financial regulators do not face these constraints. FDIC employees received salaries with geographic differentials of up to 31 percent more than civil service basic pay levels. OCC's salary structure, which had a maximum base salary of $166,400, also provided employees with geographic pay differentials of up to 34 percent of OCC's base salary levels. FDIC and OCC salaries are, with some exceptions, not limited by ceilings established by the civil service pay system; the notable exceptions are the salaries of members of FDIC's board of directors and the salary of the Comptroller of the Currency. One interesting result of the statutory limits on the Governors' salaries, coupled with the ability of the Federal Reserve to set competitive salaries for other positions, is that a substantial number of Federal Reserve employees are paid more than the Chairman of the Board. Specifically, 120 top-level Federal Reserve officials, including all Reserve Bank presidents, earned more in 1994 than the Chairman. In 1994, the annual salaries of Reserve Bank presidents ranged from $159,600 to $229,600, while the Board Chairman's salary was $133,600 (the maximum allowed), and each of the other Board members' salaries was $123,100.
Appendix III provides the titles and number of Federal Reserve employees who earned more than the Board Chairman and also shows the 1994 salaries of the presidents of the 12 Reserve Banks. Although employee benefits at the Board and Reserve Banks differed in many respects, some Federal Reserve benefits were systemwide and available at the same levels to all Federal Reserve employees. Systemwide benefits included retirement plans, the thrift savings plan, business travel/accident insurance, life and survivor insurance, and a long-term disability income plan. Other benefits not offered on a systemwide basis included mass transit subsidies and leave granted for marriage, bereavement, family care, and floating holidays. From our review of the Federal Reserve's personnel policies and practices, we found that a few Federal Reserve benefits were more generous than those available at OCC and FDIC, and many were more generous than civil service benefits, such as those available at SEC. Also, the Federal Reserve provided additional benefits to some high-level officials, including home security systems, bodyguards, and home-to-work transportation in Federal Reserve-owned vehicles. All 12 Reserve Banks offered their employees comprehensive health insurance packages. Although the Board paid the same share of health care premiums as other federal agencies (payments that ranged from about 60 to 75 percent), the percentage of premiums paid by the Reserve Banks differed; most Reserve Banks paid 75 to 90 percent of health insurance premiums. Appendix III shows the percentage of health insurance premiums paid by the Reserve Banks, the Board, FDIC, OCC, and SEC. The total health care costs paid by the Board and the Reserve Banks in 1994 were $7.5 million and $64.9 million, respectively. Recognizing that health care costs are escalating, the Reserve Banks are attempting to reduce health plan costs.
Federal Reserve health care benefits were managed on a decentralized basis, with each Reserve Bank negotiating its own health care coverage. One Reserve Bank eliminated the preferred provider option previously available to employees, replaced it with a managed care network, and reduced the number of available health maintenance organizations. Federal Reserve officials estimated that health care costs would have been about $900,000 more for 1993 without these changes. Another Reserve Bank reduced the number of health care plans available to employees to two, effective April 1994; officials estimated resulting savings of about $2.3 million over the following 3 years. Although the Reserve Banks have individually made efforts to reduce health care costs, they have not worked together to determine whether their combined bargaining power would further reduce these expenses. The number of days allowed for annual and sick leave differed significantly among the Reserve Banks and between the Reserve Banks and the Board. The Board, along with FDIC, OCC, and SEC, followed the civil service guidelines, which provide between 13 and 26 days of annual leave and 13 days of sick leave each year. The number of annual leave days available to Reserve Bank employees ranged from 10 per year to a maximum of 23 to 32 per year, depending on length of service. Thus, relatively junior Reserve Bank employees were granted fewer annual leave days than the civil service system permits, but more senior employees could accrue more annual leave days each year than the civil service system permits. However, some Reserve Banks offered additional paid leave for certain purposes, such as bereavement or marriage, in addition to annual leave. Among the Reserve Banks, the number of sick days employees accrued varied considerably. Six Reserve Banks offered fewer sick leave days annually than civil service rules provide (ranging from 8-1/4 to 12 days), while two others offered sick leave in the range of 15 to 18 days.
Other Reserve Banks appeared to offer more generous sick leave policies. Tables III.7 and III.9 in appendix III show the leave benefits available at the Reserve Banks, the Board, FDIC, OCC, and SEC. Federal Reserve Board and Bank employees do not participate in the retirement programs that cover most federal civilian employees. Separate retirement programs apply to Board and Reserve Bank employees. Before 1983, Board employees, along with federal employees in general, were not covered by the Social Security program. Board employees were under the Federal Reserve Board Retirement System (FRBRS), and other federal employees were under the Civil Service Retirement System (CSRS). Even though they were separate, the two systems’ provisions were virtually identical. In contrast, Reserve Bank employees were not considered to be employed by the federal government. They were covered by Social Security and a separate retirement system designed to complement their Social Security benefits. The Social Security Amendments of 1983 required all federal employees, including Board employees, first hired after December 1983 to participate in Social Security. Accordingly, new retirement systems had to be developed to recognize the availability of Social Security benefits for the covered employees. The Federal Employees’ Retirement System (FERS) was developed to cover federal employees in general. However, rather than develop a new retirement system for future Board employees, the Federal Reserve decided that they would be covered by the retirement system already in place for Reserve Bank employees. In addition to the pension benefits available from the FRBRS and Reserve Bank retirement systems, Board and Reserve Bank employees can earn additional retirement income through participation in a thrift plan sponsored by the Federal Reserve. The thrift plan includes two components—a savings account and a deferred compensation account. Employees may contribute to either or both accounts. 
Employee contributions to the savings account are made with after-tax dollars, and contributions to the deferred compensation account are made with pretax dollars. For each dollar an employee contributes to the thrift plan, up to 6 percent of salary, the Federal Reserve matches 80 percent of the employee's contribution. Thus, the maximum employer contribution to any employee's thrift plan is 4.8 percent of salary. Employees may contribute additional amounts to the thrift plan with no matching contributions from the Federal Reserve. Even though the pension benefits available to other federal employees in CSRS and Board employees in FRBRS are the same, Board employees have the distinct advantage of being eligible to participate in the thrift plan and receive matching contributions from the Federal Reserve. Employees in CSRS may contribute to a thrift plan but receive no contributions from their employing agencies. The Reserve Bank pension plan differs from the FERS pension plan that applies to federal employees in general. Some features of the Reserve Bank plan are less generous than the counterpart features of FERS. For example, the Reserve Bank plan's benefits are based on employees' average salaries earned during their 5 highest-paid years, while FERS benefits are based on employees' average salaries earned during their 3 highest-paid years. Also, Reserve Bank employees must be at least age 60 with 30 years of service to retire with unreduced retirement benefits, while FERS provides unreduced benefits as early as age 55 with 30 years of service. However, these FERS advantages are more than offset by a number of significant features of the Reserve Bank plan that are superior to FERS. Some of the features where the Reserve Bank plan is more generous than FERS are as follows: The Reserve Bank plan is free to employees; FERS requires employees to contribute 0.8 percent of their salaries toward plan costs.
The Reserve Bank plan's benefit calculation formula provides considerably greater benefits than the FERS formula. In the Reserve Bank plan, benefits are equal to 1.3 percent of average salary up to the Social Security integration level, plus 1.8 percent of average salary over the integration level, multiplied by total years of service. (The integration level is the average of the maximum amounts of salary covered by Social Security from 1959 through the year of retirement. For employees retiring in 1995, the integration level was $24,312.) In the FERS plan, the formula for each year of service is 1.1 percent of average salary for retirees who are at least age 62 with 20 years of service. For retirees who are younger than age 62, the FERS benefit formula is 1 percent of average salary for each year of service. The Reserve Bank plan allows employees as young as age 50 to voluntarily retire early with reduced benefits. FERS does not have a similar provision; under FERS, employees cannot voluntarily retire before age 55 unless they or their agencies are facing involuntary employee separations. FERS also includes a thrift plan to which covered employees and their agencies can contribute to increase retirement income. The FERS thrift plan is designed somewhat differently from the Federal Reserve thrift plan; overall, it provides slightly greater benefits to participating employees. Unlike the Federal Reserve thrift plan, the FERS thrift plan provides all covered employees with agency contributions equal to 1 percent of their salaries, regardless of whether the employees make any contributions. The agencies then match, dollar for dollar, employee pretax contributions of up to 3 percent of salary, and 50 cents on the dollar for the next 2 percent of salary that employees contribute.
Thus, compared to the maximum 4.8 percent of salary the Federal Reserve will contribute to an employee's thrift plan, employing agencies will contribute as much as 5 percent of employees' salaries to the FERS thrift plan. Also, to receive the maximum employer contribution of 4.8 percent of salary, Board and Reserve Bank employees must contribute 6 percent of their salaries, whereas employees in FERS can receive employer contributions of 5 percent of salary by contributing only 5 percent of their salaries to the FERS thrift plan. As shown in table 3.1, the Federal Reserve offered a few benefits to its employees that are generally not offered to civil service employees. These benefits included separate dental insurance, subsidized employee cafeterias, premium conversion accounts, flexible spending accounts, matching contributions for savings accounts, and mass transit subsidies. In addition, some Reserve Banks offered marriage, bereavement, parental care, and floating holiday leave as leave categories separate and distinct from the usual annual and sick leave. Appendix III provides a full description of these selected Federal Reserve benefits and their availability at FDIC, OCC, and SEC. To determine whether opportunities existed to reduce the Federal Reserve's operational costs, we also reviewed travel reimbursement policies within the Federal Reserve. Each year, the Board and the Reserve Banks spend millions of dollars for employee travel; in 1994, for example, total Federal Reserve travel expenditures were $42 million. According to Board officials, Board personnel are authorized to use government rates for lodging and airfare. Some Reserve Bank officials we interviewed stated that Reserve Bank employees are ineligible for government rates for lodging and airfare because the Reserve Banks are not federal agencies.
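The benefit formulas and matching schedules described above can be compared directly. The percentages and the 1995 integration level in the sketch below are taken from the text; the sample inputs and function names are illustrative only:

```python
# Benefit formulas and thrift matching schedules as stated in the text.
# Sample inputs and function names are illustrative.

INTEGRATION_LEVEL_1995 = 24_312  # Social Security integration level for 1995 retirees

def reserve_bank_pension(avg_salary, years):
    """Reserve Bank plan: 1.3% of average salary up to the integration level,
    plus 1.8% above it, multiplied by years of service."""
    below = min(avg_salary, INTEGRATION_LEVEL_1995)
    above = max(avg_salary - INTEGRATION_LEVEL_1995, 0)
    return (0.013 * below + 0.018 * above) * years

def fers_pension(avg_salary, years, age):
    """FERS: 1.1% per year of service at age 62+ with 20+ years; else 1.0% per year."""
    rate = 0.011 if (age >= 62 and years >= 20) else 0.010
    return rate * avg_salary * years

def fed_thrift_match(salary, employee_pct):
    """Federal Reserve thrift plan: 80 cents per dollar on the first 6% of salary."""
    return 0.8 * min(employee_pct, 0.06) * salary          # caps at 4.8% of salary

def fers_thrift_match(salary, employee_pct):
    """FERS thrift plan: automatic 1%, plus 100% match on the first 3%
    and 50% on the next 2% of salary contributed."""
    match = 0.01                                           # automatic agency contribution
    match += min(employee_pct, 0.03)                       # dollar for dollar on first 3%
    match += 0.5 * min(max(employee_pct - 0.03, 0), 0.02)  # 50 cents on next 2%
    return match * salary                                  # caps at 5% of salary
```

For example, with a $100,000 salary and a 6 percent employee contribution, the Federal Reserve match works out to $4,800 (4.8 percent of salary), while a FERS employee contributing 5 percent would receive the full $5,000 (5 percent) agency contribution.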
However, one Bank official disagreed, stating that Reserve Bank employees can request government rates for lodging but cannot insist on receiving them. Under regulations comparable to those for other federal employees, Board employees are reimbursed for lodging and meal expenses on a per diem basis. However, members of the Board are permitted to receive reimbursement for domestic lodging and meals on either an actual expense or a per diem basis, when deemed appropriate. The Board's general policy directive for Reserve Bank travel expenditures allows for variations in Reserve Bank reimbursement procedures, and these differences can result in additional expenditures. One Reserve Bank we reviewed had maximum lodging reimbursement rates, while another had recommended reimbursement rates; two other Reserve Banks reimbursed lodging at cost, without maximum or recommended rates. As a result of these policy differences, two travelers' overnight lodging allowances for the same city could vary widely, depending on each traveler's Reserve Bank. In addition to the differences noted in lodging costs, Reserve Banks reimbursed employees for meals using varying schedules and rates. Two Reserve Banks reimbursed travelers for meals on the basis of a schedule that divided the day into four quarters, while another Reserve Bank used a more narrowly defined schedule aligned with typical meal times. Additionally, another Reserve Bank reimbursed travelers for meals at different rates depending on whether they were traveling to a Federal Reserve System entity or to other locations. Of the Reserve Banks we reviewed, two also allowed employees to choose actual cost reimbursement rather than a flat per diem rate. As a result of these policy differences, the total meal reimbursement for a 3-day trip to the Board in Washington, D.C., could range from $76 to $105.
We believe that making travel policies uniform within the Federal Reserve could provide an opportunity to reduce Federal Reserve expenses, particularly if caps on reimbursements were set below current levels. In addition, more uniform policies could yield some administrative cost reductions, particularly if common travel policies enabled travel expenses to be managed on a more centralized basis, thus reducing the staff time devoted to travel administration at each Reserve Bank. To determine whether opportunities existed to reduce the Federal Reserve's operational costs, we also reviewed procurement and contracting practices at several Reserve Banks. Unlike personnel costs, which remain relatively stable, expenses associated with capital acquisitions can vary significantly from year to year, offering additional opportunities for controlling and reducing procurement costs. The 12 Reserve Banks spent more than $560 million in 1994 to acquire buildings, equipment, supplies, and services. Nearly half of that total ($267 million) was used to buy capital items (buildings and equipment) and to fund building projects. As discussed in chapter 1, only depreciation costs of capital assets are accounted for in annual operating budgets. Because the Board and the Reserve Banks spend millions each year for goods and services, certain controls should be in place to ensure that those dollars are spent wisely. For example, the Reserve Banks should have an effective procurement and control process in place to ensure that they receive goods and services at the most reasonable cost. Moreover, to prevent fraud and abuse, procurement practices should also preclude potential conflicts of interest between the Reserve Banks and contractors. The Board and the Reserve Banks used different procurement guidelines.
Although not specifically directed to do so by the Federal Reserve Act, the Board, according to a Board spokesman, follows the spirit of federal government contracting rules, which are contained in the Federal Acquisition Regulation (FAR). The Reserve Banks are not required to follow these rules. However, each Reserve Bank is required to follow general procurement guidance, called the Uniform Acquisition Guidelines (UAG), which the Reserve Banks adopted in 1985. The UAGs were developed by the Reserve Banks in conference committees and were designed to provide minimum requirements for Reserve Bank procurement activities. By providing opportunities for all interested bidders to become a selected source, the guidelines attempt to ensure that Reserve Banks treat sources fairly and impartially. By fostering competition in the procurement process, Reserve Banks also have a greater opportunity to realize cost savings through lower competitive pricing. Despite the UAGs, we observed the following: Practices at individual Reserve Banks differed significantly, and some practices favored certain sources over others. For instance, some Reserve Banks did not allow an equal opportunity for new bidders to bid for large procurements and limited bidders lists to sources with which the Reserve Banks had traditionally done business. This practice existed even though other equally qualified sources were both available and interested. Furthermore, some Reserve Banks retained incumbent contractors for certain services for years without recompeting the award, thus precluding other firms from competing for those services. At one of the four Reserve Banks we visited, the records indicated that the cafeteria contract was last competed over 9 years earlier. At another Reserve Bank we visited, personnel could not locate documentation of their last cafeteria contract negotiations, which they believed occurred in the late 1980s.
By limiting the ability of other sources to compete for a contract, Reserve Banks tend to reduce competition, thereby missing opportunities to reduce procurement costs.

Proper conflict-of-interest controls were not in place at certain Reserve Banks. For instance, the UAGs prohibit disclosure of specific information contained in bids or proposals to anyone except Reserve Bank personnel before awarding the contract. However, two of the four Reserve Banks we visited transferred almost all functions leading up to the award of major building contracts to architecture and engineering (A&E) firms. A&E firms receive and evaluate bids and recommend the source that should receive the award. In contrast, at the other two Reserve Banks we visited, only Reserve Bank personnel were allowed to receive and evaluate the bids or proposals and choose the successful source. The building department's vice president at one of the four Reserve Banks told us that the larger the role the A&E firm plays, the greater the potential for favoritism and conflict of interest.

Practices at certain Reserve Banks lacked independent checks and reconciliations. Although each Reserve Bank should have controls providing for independent checks and reconciliations of voucher payments, at two of the four Reserve Banks we visited, only the building department was responsible for authorizing progress payments made to construction contractors. At both Reserve Banks, officials responsible for the payment function, where the reconciliation should take place, did not track payment amounts against the total available contract dollars. Instead, when vouchers were received that showed the approval of the building department, the vouchers were paid.

Noteworthy practices used by certain Reserve Banks were not disseminated among the other Reserve Banks. Several Reserve Banks had procedural strengths or notable practices that were missing in others.
Building department officials at one of the four Reserve Banks requested and analyzed various elements of cost included in construction proposals, which enabled them to evaluate the proposed prices. They had found that challenging the bids and proposals from construction contractors resulted in an improved understanding of what was required, as well as better quality and lower prices. However, we found no evidence that information about these "best practices" was being disseminated within the Federal Reserve. Specifics on these practices are described in appendix IV.

To determine whether opportunities existed to reduce the Federal Reserve's operational costs, we also reviewed decisions related to the construction of the Dallas Reserve Bank facility. Even though the cost of the Dallas building project was $8 million less than the initially approved budget and construction was completed ahead of schedule, opportunities existed for the Federal Reserve to reduce costs further. In two areas, we found that the Reserve Bank could have reduced costs below the levels approved by the Board. First, the Dallas Reserve Bank building was larger than the plan initially specified; second, the Reserve Bank purchased more land than necessary. Since the building contained enough square footage to meet the projected space-study needs through 2017, the purchase of additional acreage for expansion purposes had questionable value.

By July 1988, the Dallas Reserve Bank had outgrown its original building. The building could not house all employees, no longer complied with evolving building codes, and contained many space deficiencies. Faced with these problems, Dallas Reserve Bank officials commissioned a study to identify alternatives that would resolve the space problems.
As a result of that study, in November 1988, the Dallas Reserve Bank recommended that the Board: approve a space plan for a building with 540,334 net usable square feet, which would satisfy the Reserve Bank’s projected needs through 2017; locate the new building on land within the Central Business District (CBD), which would provide the most effective and appropriate solution for satisfying the Reserve Bank’s space needs over the long term; approve a target budget of $171.8 million for the construction of a new building on a new site; and authorize the Reserve Bank to proceed with a site selection and conceptual design for a new building. In January 1989, the Board approved the Dallas Reserve Bank’s proposal to construct a new building at a new location within the Dallas CBD. The Board-approved plan had a target budget of $171.8 million and a target completion date of August 1992. In July 1990, the Board authorized the Dallas Reserve Bank’s proposal to follow an expedited (or “fast track”) construction plan. This approach allowed the Reserve Bank to begin construction with incomplete construction drawings and without finalized subcontract agreements. Additionally, the Board lowered the final budget for the land purchase and new building design and construction to $164.5 million. The expedited construction plan also allowed occupancy 3 months ahead of schedule. The proposal submitted by the Dallas Reserve Bank to the Board called for the construction of a 540,334 square foot building. The building requirements for the Reserve Bank’s new facility were based on the Board’s projected space requirements. The Board requires that new building projects allow for 15 years of personnel growth and 25 years of vault and other space growth. The Dallas Reserve Bank hired a consultant to determine the projected space needs on the basis of the Board criteria. 
The space study found that 540,334 square feet would allow for 15 years of personnel growth through 2007 and that 580,093 square feet would allow for 25 years of equipment growth through 2017. The completed building contains 595,385 square feet, which is 55,051 square feet (about 10 percent) more than the initially approved square footage. In addition, the new building's square footage exceeded even the 580,093 square feet the Bank is projected to need in 2017.

The two areas most overbuilt, in terms of both total square feet and percentage over the authorized amount, were the data services and lobby areas. In the data services area, 70,167 square feet were authorized by the Board. However, in the completed building, the final square footage for data services was 90,860, or 29 percent more than was authorized. The plans for the building's two lobby entrances called for 7,800 total square feet, while the actual square footage on completion was 27,369, an increase of about 250 percent. According to a Reserve Bank official, the architect's plan provided for more space than was approved by the Board. However, the additional space did not cause concern because the design and construction costs for the plan were less than the budgeted amount approved by the Board.

The Dallas Reserve Bank purchased, with the Board's approval, 8.02 acres of land for $27.7 million, or $79.30 per square foot. The Bank needed 6.02 acres for new building construction and purchased the additional 2 acres for future building expansion or sale. Since the building design already exceeded projected space needs through 2017, the purchase of additional land for expansion was unnecessary. According to a senior official at the Dallas Reserve Bank, the Bank could have purchased only the 6.02 acres for approximately $20.7 million and foregone the additional 2 acres, for a total savings to the Federal Reserve of $7 million.
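The square-footage and land figures cited above can be cross-checked with simple arithmetic. The sketch below is a minimal illustration in Python, using only the rounded numbers reported in the text; it is a verification aid, not part of the report's methodology.

```python
# Cross-check of the Dallas building and land figures cited in the text.
# All inputs are the rounded numbers reported above.

approved_sqft = 540_334   # square footage initially approved by the Board
built_sqft = 595_385      # square footage of the completed building

overage = built_sqft - approved_sqft
print(overage)                                # -> 55051 square feet
print(round(100 * overage / approved_sqft))   # -> 10 (percent over approval)

# Data services area: authorized versus built.
print(round(100 * (90_860 - 70_167) / 70_167))  # -> 29 (percent over)

# Lobby areas: planned versus built.
print(round(100 * (27_369 - 7_800) / 7_800))    # -> 251 (the report rounds to 250)

# Land purchase: 8.02 acres at $79.30 per square foot, versus the
# 6.02 acres needed for construction.
SQFT_PER_ACRE = 43_560
price_per_sqft = 79.30
cost_purchased = 8.02 * SQFT_PER_ACRE * price_per_sqft  # about $27.7 million
cost_needed = 6.02 * SQFT_PER_ACRE * price_per_sqft     # about $20.8 million (the
                                                        # report cites roughly $20.7 million)
print(round((cost_purchased - cost_needed) / 1e6))      # -> 7 ($7 million saved)
```

The recomputed percentages and the $7 million land-cost difference agree with the figures in the text, with small differences attributable to rounding.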
Downward adjustments to the surplus account, or its elimination, would have a positive budgetary impact by increasing the amounts returned to Treasury in the years that they occur. The current formula for calculating the amounts to be contributed to surplus accounts is as follows. Each Reserve Bank's capital stock is by law equal to 6 percent of the paid-in capital and surplus of its member banks. Annually, as banks' paid-in capital and surplus grow or shrink, member banks are required to adjust the amount of their Federal Reserve Bank stock to equal 6 percent of their paid-in capital and surplus. The Reserve Banks then contribute, out of Federal Reserve earnings, amounts to their surplus accounts so that the surplus balances are equal to the amount of paid-in capital. From 1988 to 1994, the total of the surplus accounts systemwide increased 79 percent, from $2.1 billion in 1988 to $3.7 billion in 1994.

The Federal Reserve has stated in its publications that the purpose of the surplus accounts is to ensure that adequate capital is available to absorb possible losses. In its monetary policy, lender of last resort, and payment system activities, the Federal Reserve is exposed to risks that could potentially generate large losses. However, because the Federal Reserve's interest income so far exceeds its expenses, we believe it is highly unlikely that the Federal Reserve will ever incur annual losses large enough to require it to use any funds in the surplus account. In 1914 and 1915, the first 2 years of its operations, the Federal Reserve experienced net losses. However, in every year since then, for 79 years, the Federal Reserve has recorded substantial net profits. The profits for 1994 were $20 billion, and expenses, including losses, were about $3 billion. We could find no criteria to use in assessing the amount held in surplus. According to Federal Reserve officials, the methodology for deciding that amount has changed over time and is somewhat arbitrary.
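The stock and surplus formula described above can be expressed as a short calculation. The sketch below is illustrative only: the 6-percent stock requirement and the surplus-equals-paid-in-capital policy come from the text, while the member-bank dollar figures are hypothetical.

```python
# Illustrative sketch of the surplus-account formula described in the text.
# The 6-percent ratio and the surplus policy are from the report; the
# member-bank dollar amounts below are hypothetical examples.

STOCK_RATIO = 0.06

def required_reserve_bank_stock(member_capital_and_surplus):
    # Member banks must hold Reserve Bank stock equal to 6 percent of
    # their own paid-in capital and surplus.
    return STOCK_RATIO * member_capital_and_surplus

def surplus_contribution(paid_in_capital, current_surplus):
    # Each Reserve Bank contributes out of earnings so that its surplus
    # account equals its paid-in capital (a policy choice, not a statute).
    return paid_in_capital - current_surplus

# Hypothetical example: member banks' combined capital and surplus grows
# from $50 billion to $55 billion during the year.
old_stock = required_reserve_bank_stock(50e9)   # $3.0 billion paid in
new_stock = required_reserve_bank_stock(55e9)   # $3.3 billion paid in

# The Reserve Bank's surplus must grow to match the new paid-in capital,
# so roughly $0.3 billion of earnings is diverted from the Treasury transfer.
print(surplus_contribution(new_stock, current_surplus=old_stock))
```

The example makes the budgetary point concrete: as member banks grow, the matching surplus contributions are earnings that would otherwise have been transferred to Treasury that year.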
Currently, and in the past, the levels of the surplus account have been discretionary because the requirement to have the surplus account equal to paid-in capital has been a matter of Federal Reserve policy; it was not required by law. However, in a provision of the Omnibus Budget Reconciliation Act of 1993, Congress required the Federal Reserve, in fiscal years 1997 and 1998 only, to calculate the surplus account using the current formula and then to reduce the account by $106 million in fiscal year 1997 and $107 million in fiscal year 1998. Although the law did not specifically state the purpose of those transfers, its effect was to reduce the federal government’s projected deficit in those years. Considering that this provision only applies to fiscal years 1997 and 1998 and the general lack of criteria for assessing surplus amounts, Congress may wish to determine whether these surplus accounts are necessary and, if so, set permanently in law an appropriate amount for these accounts. Because the Federal Reserve’s spending represents a cost to U.S. taxpayers, the Federal Reserve should operate as efficiently as possible. Our review indicates opportunities exist to reduce Federal Reserve spending. Federal Reserve expenditures for personnel benefits varied among Reserve Banks and some benefits were generous compared to those of federal agencies with similar responsibilities. Also, we believe that opportunities exist for reductions in discretionary spending for health care and travel costs through the systemwide management of these areas. Although several Reserve Banks have undertaken efforts to reduce their health care costs, we believe that centralized management of the Federal Reserve’s health care plans could further reduce health care costs. Furthermore, we believe travel expenses could be reduced by adopting the most cost-effective “best practices” in travel reimbursement policies. 
Although instituting uniform, cost-conscious practices at all Reserve Banks may appear contrary to the tradition of independently managed Reserve Banks, the Reserve Banks have already adopted uniform policies and procedures in many areas of operation. Our review of contracting and procurement practices at some Reserve Banks also indicates opportunities to reduce discretionary spending for goods and services. We believe that the Federal Reserve could better ensure the purchase of goods and services at reasonable cost through increased compliance with the UAGs as well as systemwide adoption of "best practices" in procurement and contracting. Moreover, in its planning and management of the Dallas Reserve Bank construction project, the Reserve Bank overlooked opportunities to reduce spending that the Board had approved.

Downward adjustments to the surplus account, or its elimination, would have a positive budgetary impact by increasing the Federal Reserve's annual transfer to Treasury in the years that any such reductions occur. Federal Reserve losses would have to exceed the billions of dollars transferred to Treasury annually before the Federal Reserve's use of the account would become necessary. Since the chances of such an event are extremely remote, we believe that capping, reducing, or even eliminating the surplus account represents an opportunity to decrease the deductions from the amount transferred to Treasury each year.
We recommend that the Board of Governors review pay and benefits levels at the Board and the Federal Reserve Banks to determine if current levels can continue to be justified in today's environment of increased governmental and private-sector cost containment; assess whether managing the Federal Reserve's health care coverage on a systemwide basis could reduce health care costs; review travel policies at the 12 Reserve Banks and change those policies that are not cost-effective; review contracting and procurement practices at the 12 Reserve Banks to ensure that these practices are in compliance with the system acquisition guidelines and result in cost-effective contracts; ensure that the "best practices" in contracting and procurement at the 12 Reserve Banks are regularly identified, disseminated, and adopted by the Reserve Banks; and review policies regarding the size of the surplus account and determine if opportunities exist to decrease the amount held in the account.

Congress should consider the results of the Board's review and decide if there is a continued need for the Federal Reserve's surplus account and, if so, what the appropriate amount of the account should be.

In written comments on a draft of this report, the Federal Reserve's Board of Governors did not agree with our recommendations that it review pay and benefits levels and consider reducing or eliminating the surplus account. The Board stated that the Federal Reserve strives to provide salaries and benefits competitive with local private-sector markets and that its current pay and benefits levels are necessary to attract and retain skilled employees. The Board agreed that the appropriate level of the surplus account is open to debate, but it did not agree to consider reducing or eliminating the surplus account.
The Board stated that reducing the surplus account would have no real economic impact and cited the possibility that, without the surplus account, temporary short-term losses could lead to a perceived impairment of its capital that could raise investors’ concerns about the System’s ability to conduct sound monetary policy. The Board agreed with our recommendations concerning the Federal Reserve’s policies and practices regarding travel, contracting, and procurement. The Board also agreed with our recommendation concerning the management of health care benefits. Because personnel costs accounted for almost 70 percent of the Federal Reserve’s total operating costs and increased by over 50 percent in the 1988 to 1994 period, we believe these costs should be one of the first areas to be examined for potential savings. We acknowledge that certain benefit levels may be necessary for the Federal Reserve to attract and retain a skilled workforce. However, we do not believe the Board has made a convincing case that these benefits need not be reexamined with a view toward greater cost containment. In addition to the private sector, the Federal Reserve also competes with public sector employers, and its benefits are clearly more generous than those of the federal government overall. In some cases, the Federal Reserve’s benefits are more generous than those of the other financial industry regulators who are the major employer-competitors in areas such as bank supervision. Moreover, we note that less than half of the Federal Reserve’s total workforce is highly skilled professional staff, such as lawyers, economists, and financial analysts. We maintain, and the Board agreed, that reducing or eliminating the surplus account, by transferring these funds to Treasury, would increase overall government receipts and reduce the unified budget deficit in the year that any such transfer occurred. 
We also agree with the Board that reducing or eliminating the surplus account would be offset by a reduction in subsequent years of interest payments to Treasury that the Federal Reserve would have otherwise earned by investing these funds in government securities. However, we believe Congress has a legitimate interest in deciding whether it would be more appropriate to have these funds returned immediately, either to reduce the outstanding public debt or for other purposes, rather than to receive them over a longer period of time. To allow for the possibility that a small, temporary loss could raise investor concerns about the Federal Reserve’s ability to conduct sound monetary policy, we suggested that Congress may wish to set an appropriate level for the surplus account as an alternative to its elimination. The Federal Reserve System faces major challenges in its mission and lines of business, particularly in services to depository institutions and government agencies and in bank supervision. These challenges include (1) increased competition from the private sector and increasing difficulties in recovering costs in priced services, (2) increasingly widespread use of electronic transactions in the financial services industry, and (3) the continuing rapid consolidation of the banking industry, which could affect both the need for, and the distribution of, bank examination staff. Because these areas account for the largest part of the Federal Reserve’s expenses and staffing, addressing these challenges effectively will likely result in major changes in how the Federal Reserve operates. As the Federal Reserve undertakes to meet these challenges, it is also likely to find that its current structure, established in 1913 when the nation’s financial industry was much less complicated, is increasingly inappropriate for the fast-paced, global financial world of today and the next century. 
However, if major changes to the Federal Reserve's structure are to be made to promote increased efficiency and competitiveness, such changes will need to be carefully weighed against any potential effects on the independence of our nation's central bank.

The overwhelming majority of the workload and expenses incurred at the Reserve Banks is related to three lines of business—services to depository institutions, services to government agencies, and bank supervision and regulation. These lines of business account for over 90 percent of all Federal Reserve Bank expenses, as shown in table 4.1. Except for bank supervision, most of this workload is production-oriented, whether paper driven, such as processing currency for banks and clearing checks, or electronic in nature, such as running the automated clearinghouse and funds transfer systems. In these areas, employees often work in shifts, under fairly rigid deadlines and production expectations. These three lines of business are precisely the areas subject to an increasing variety of external and internal environmental pressures and challenges.

In providing services to depository institutions, the Federal Reserve faces its most immediate and significant challenges to its mission. The Monetary Control Act of 1980 requires that the Federal Reserve base its fees for certain services—check processing, automated clearinghouse (ACH) transactions, Fedwire, securities transfers, and other priced services—on, among other things, the costs of providing such services. At the same time, the Federal Reserve is required to promote the accessibility and efficiency of the nation's payments system, a role that may make it difficult for the Federal Reserve to raise prices sufficiently to recover its costs.
Because services to depository institutions represent over 61 percent of all Federal Reserve Bank expenses and employ the largest part of Reserve Bank staffing, these changes are likely to have a dramatic effect on the size of the Reserve Banks’ expenses, workload, and staffing needs. The Federal Reserve faces intense competition in check clearing. In 1993, for the first time in a number of years, the actual volume of checks handled by the Federal Reserve declined, albeit by a modest 0.2 percent. The Federal Reserve reported that the total volume of commercial checks for 1994 declined by almost 15 percent from 1993 levels. The implementation of same-day settlement rules by the Federal Reserve, beginning on January 3, 1994, is partly responsible for this declining trend. Federal Reserve officials told us they expect further declines in the years ahead. A significant factor in the Federal Reserve’s loss of volume and market share in check clearing is the growth of private clearinghouses. The nation’s check-clearing volume is still growing slowly, but on a per capita basis, the volume is stagnant. At the same time, private clearinghouses competing with the Federal Reserve have grown. The California Bankers Clearing House, the Chicago Clearing House, and the Clearing House Association of the Southwest reported increases in the numbers of member banks in 1994. The California Bankers Clearing House also reported that it is delivering checks to 200 nonmember banks for same-day settlement and, in the process, saving its member banks $3.2 million a year in fees these banks would have had to pay the Federal Reserve for these services. Other factors promise even further reductions in check-clearing volume for the Federal Reserve. 
These factors include electronic check presentment, in which only the essential check data are recorded and transmitted to the payor bank so that payment or return decisions can be accelerated; check imaging, which involves the use of digitized images of entire checks to perform processing operations; banking consolidation and increased interstate banking, which result in an increase in "on us" checks that do not need to go through a clearinghouse; and electronic banking, which is now being offered by some banks and could, in the long term, make paper checks an anachronism. In combination, these factors indicate a continued and perhaps accelerating decline in the Federal Reserve's check-clearing business. About 22 percent of all Reserve Bank employees were involved in check clearing in 1994. As volume declines, the Federal Reserve will need to prepare for reductions in the staff required for cost-competitive services.

In other priced services, the Federal Reserve is also likely to face increased competition. The market share of private ACH providers, such as the New York Automated Clearinghouse, the Arizona Clearinghouse Association, and VISA, will likely increase. Even in book-entry securities transfer services, an area where the Federal Reserve currently faces only nominal competition, the Federal Reserve anticipates that future developments could lead to increased competition.

The Federal Reserve is facing increased difficulty in recovering its costs for priced services. As shown in table 4.2, costs have outpaced revenues since 1990 in three of the Federal Reserve's priced services. Some of the difficulties in recovering costs stem from higher than anticipated automation consolidation costs associated with Federal Reserve Automation Services (FRAS). (See chs. 2 and 5 for details on the FRAS project.) These costs have been a particular problem in check clearing.
In recent years, the Federal Reserve has been able to mitigate the effects of these trends in several ways. The Federal Reserve has simply deferred certain automation consolidation costs to future years. It has also reduced its targeted return on equity: in 1993 and 1994, the target rate of return was about 5 percent, a historically low rate, primarily because of losses that bank-holding companies experienced in 1989 and 1991. In addition, past overfunding of the Federal Reserve's pension plans has enabled the Federal Reserve to offset some of the costs of providing priced services by allocating a portion of the overfunding to priced services, resulting in a decrease in expenses for those services. In 1993, for example, the amount of the overfunded plan allocated to priced services was $36.7 million. Even so, the impact of the overfunded pension plan was not sufficient to enable the Federal Reserve to meet its targeted return on equity in 1994, and the overfunding will be completely amortized in the year 2002.

These conditions are all temporary, and the Federal Reserve will be faced with increasing pressures on its pricing policies. For example, with regard to the return on equity, median rates of return on equity among large bank-holding companies are now in the 15- to 16-percent range, so the Federal Reserve's target rate may have to move toward that level. Meeting a 15-percent target rate of return on equity would require the Federal Reserve to increase its revenue by about $50 million, which amounts to about a 7-percent across-the-board price increase.

Several changes in services to the Department of the Treasury and other government agencies, and to depository institutions, could have a significant impact on Federal Reserve costs as well as on staffing levels and alignment. These changes include consolidation of U.S. savings bonds operations, increased government use of electronic benefit transactions, and changes in the U.S. currency.
Treasury, which directs the U.S. government's savings bonds program, ordered the Federal Reserve to consolidate its savings bonds operations at five locations. This consolidation has resulted in the need to relocate staff at Reserve Banks that were losing savings bonds operations. Most of the savings bonds employees at nonconsolidation Reserve Bank locations have been relocated to other departments at their respective Reserve Banks. However, one Reserve Bank could not relocate all of its savings bonds employees to other departments and was forced to lay off some of those employees.

Increased use of electronic payments in services provided to Treasury and other government agencies may also result in realignments or reductions in staff at Reserve Banks. The National Performance Review's (NPR) recommendation that the U.S. Department of Agriculture distribute food stamp benefits through Electronic Benefits Transfer (EBT) may result in the realignment of Reserve Bank staff. EBT uses an automated financial transaction process and card access technologies to electronically deliver federal and state benefits to recipients via point-of-sale (POS) terminals and automated teller machines (ATM). Currently, the Federal Reserve receives the paper coupons deposited by merchants at their financial institutions, confirms the totals, checks for counterfeit coupons, destroys the coupons, credits the sending institution's account, and debits the U.S. Treasury account for the value of the food coupons. Under the EBT system, funds would be transferred electronically from the U.S. Treasury's bank account to the retailer's depository account via the automated clearinghouse (ACH). Recently, Texas converted its food stamp operations to an EBT arrangement, which led the Dallas Federal Reserve Bank to eliminate 22 positions in its food stamp processing area.
Introduction of a 1-dollar coin, which is currently being considered by Congress, could result in dramatic staffing reductions in Reserve Banks' currency processing operations. Many nations use a coin for monetary transactions at, and in many cases well above, the value for which the United States uses a paper dollar. Although the Susan B. Anthony 1-dollar coin was not accepted by the public when it was introduced in 1979, a switch to a 1-dollar coin, particularly if the paper dollar were withdrawn from circulation, could nevertheless reduce Federal Reserve expenses and result in savings to the taxpayers. One-dollar paper notes make up approximately 40 percent of the currency processed by Federal Reserve Banks. Officials told us that if the 1-dollar coin were introduced and the 1-dollar bill were removed from circulation, substantial reductions in currency processing staff would be needed, perhaps resulting in the elimination of second-shift processing at many Reserve Banks.

The continuing, intense consolidation of the banking industry would likely affect both the need for, and the locations of, Federal Reserve bank examination staff. As banks merge or are acquired, the Federal Reserve will face the need to reexamine its current distribution of examination staff. Some Reserve Banks may see a need for increased staffing; others may find that they must radically reduce their examination staffs. As an example, figure 4.1 shows the percentage changes in the number of state-member banks by Federal Reserve district for the period of 1990 to 1995.

Less certain are the potential effects of any bank regulatory consolidation Congress may enact. Various proposals have been made to consolidate federal financial institutions' regulatory responsibilities. Some proposals would provide for the complete consolidation of all regulation into a single federal regulator.
Other proposals envision retaining or even increasing the responsibilities of the Federal Reserve in bank supervision. Some proposed changes to the banking regulatory structure have raised policy issues about the Federal Reserve’s role in bank regulation. The Federal Reserve has raised strong objections to a new regulatory system in which its role in direct bank supervision would be eliminated or substantially reduced. Federal Reserve officials argue that the System’s ability to conduct monetary policy and operate the payments system and the discount window would be greatly impaired by the removal of its responsibilities for regulating and supervising bank-holding companies and state-member banks. Likewise, those who support maintaining the Federal Reserve’s involvement in bank regulation argue that if the Federal Reserve is to be responsible for forestalling financial crises and effective as the “lender of last resort,” the Federal Reserve must have direct experience with at least a portion of the depository institutions. On the other hand, others argue that the Federal Reserve can obtain information needed for monetary control through other means, such as reports from other agencies or Board representation on other agencies. Because supervision and regulation activities account for approximately 20 percent of Federal Reserve Bank operating expenses, a reduction in the central bank’s direct role in bank supervision and regulation could have a significant impact on the Reserve Banks. Conversely, if the Federal Reserve were given responsibility for some or all of the largest banks, the percentage of the banking system assets for which the Federal Reserve would be the primary regulator could increase. While assigning large banking organizations to the Federal Reserve would address concerns about systemic risk, this could change the geographic distribution of Federal Reserve supervisory responsibilities. 
Such a redistribution would, of course, affect expenditures at individual Reserve Banks. The Federal Reserve's revenues, and hence its return to the taxpayers, would be enhanced by charging fees for bank examinations. Federal bank regulators differ in their policies regarding the assessment of fees for bank examinations. The Office of the Comptroller of the Currency (OCC) charges national banks for the examinations that it conducts. In contrast, state-chartered banks, which are supervised by either the Federal Reserve or the Federal Deposit Insurance Corporation (FDIC) in conjunction with state banking agencies, are charged fees by those state banking agencies but not by their federal regulator. Thus, the costs of the Federal Reserve's bank examinations—$368 million in 1994—are borne by the taxpayers, while for national banks, the costs of examinations are borne by the banks that are examined.

The Federal Reserve Act authorizes the Federal Reserve to charge fees for bank examinations, but the Federal Reserve has not done so, either for the state-member banks it examines or for the bank-holding company examinations it conducts. Similarly, FDIC is authorized to charge for bank examinations but does not do so. The administration's fiscal year 1996 budget includes provisions for both FDIC and the Federal Reserve to charge for bank examinations. The Federal Reserve is concerned that if it instituted charges for its bank examinations, it could create incentives for state-member banks, which are already charged for state examinations, either to change their charters to national charters or to resign membership in the Federal Reserve (opting to be supervised by FDIC as state-nonmember banks) to avoid paying fees for both state and federal examinations. Such incentives, the Federal Reserve believes, would have major disruptive effects on the dual banking system. We believe any disruption would be small.
At the end of 1994, there were 3,078 national banks with 56 percent of total bank assets, 6,398 state-chartered nonmember banks with 23 percent of total bank assets, and only 974 state-chartered member banks with 21 percent of total bank assets. Thus, the number of banks that would be affected is relatively small. If FDIC also adopted examination fees, incentives for banks to become state-nonmember banks to avoid such fees would be eliminated. With respect to double charges for bank examinations, we believe an equitable fee-sharing arrangement with state agencies, based on the division of supervisory responsibility, would be feasible. Moreover, charging for bank holding company examinations would not present such possible disruptions because the Federal Reserve is their federal regulator, regardless of whether the subsidiary banks are chartered by OCC or the states. Charging holding company examination fees might also encourage greater efficiency in supervising banking organizations.

Addressing the challenges discussed above will likely result in dramatic changes in staffing and how work is done at the Reserve Banks. In addition, continuing pressures to contain costs, fueled in part by increasing competition from the private sector in priced services, may result in changes in how Federal Reserve programs are managed. Taken together, such changes will likely call into question the continuing appropriateness of the Federal Reserve’s current structure. Changes that affect many of the Federal Reserve’s lines of business—particularly those concentrated at the Reserve Banks, such as check clearing, currency processing, and bank supervision—may result in substantial reductions in staffing at the Reserve Banks in the years ahead. These trends are already beginning to occur. Overall staffing at the Reserve Banks declined modestly, by 1.4 percent, from the first quarter of 1994 to the first quarter of 1995.
Staffing in the line of business serving financial institutions and the public, which includes priced services, declined somewhat more—by 2.2 percent. During 1988 to 1994, some Reserve Banks offered “early out” retirements to some employees to encourage staffing reductions.

As Reserve Banks contract in size, the continuing justification for the overhead structure, replicated at 12 Reserve Banks, will be called into question. Federal Reserve overhead expenses rose from $355 million in 1988 to $564 million in 1994, an increase of about 59 percent. This is one of the greatest increases among the Federal Reserve’s lines of business for this period. As the Federal Reserve faces the challenges we have just described, it will have significant opportunities to reduce staffing and, therefore, costs, particularly at the Reserve Banks. As this occurs, the Federal Reserve should plan to reduce overhead expenses comparably. Increased competition from the private sector and the continuing need to make governmental functions as cost efficient as possible will likely require that the Federal Reserve achieve significantly greater efficiencies in its operations—for example, in personnel pay and benefits, travel costs, procurement, and other areas.

Systemwide management of many Federal Reserve activities has the potential to reduce costs to taxpayers, the government, and financial institutions. The Federal Reserve has often chosen in the past to manage programs on a systemwide basis for reasons of efficiency and to ensure effective operations of Reserve Bank programs. For example, some Federal Reserve benefits are established systemwide and are available at the same levels to all employees, regardless of where they work. In this regard, the Board of Governors sets benefits for all Federal Reserve employees.
These systemwide benefits include retirement plans, thrift savings plans, business travel/accident insurance, life and survivor insurance, and a long-term disability income plan. Benefits that are not established systemwide include health benefits and various types of leave, such as marriage leave and bereavement leave (see app. III).

For large System projects, the Federal Reserve has often taken a systemwide approach to procurement and management. When the Federal Reserve determined the need for a new generation of currency processing equipment, a single contract was used to purchase all 132 machines from a single vendor. According to the Cash Manager in the Board’s Division of Reserve Bank Operations and Payment Systems (DRBOPS), this helped ensure a better price than the Reserve Banks would have obtained by purchasing the machines individually. When the Federal Reserve determined that its data processing and communications needed improved reliability, risk management, and security, among other things, Reserve Bank and Board decisionmakers chose to centralize those operations at three centers rather than continue separate operations at each of the Reserve Banks. Finally, when the Office of the Inspector General (OIG) criticized the individual ethics programs at the Reserve Banks, the Federal Reserve responded by establishing uniform ethics standards (the Uniform Code of Conduct) and standardizing financial disclosure and other ethics-related forms throughout the Federal Reserve.

We have also identified several opportunities for the Federal Reserve to better control costs and increase efficiencies through increased systemwide management.
These include the Federal Reserve’s taking the following steps:

- review benefits programs at the 12 Reserve Banks to reduce or eliminate benefits that are not necessary to attract and retain a quality workforce;
- manage other benefits—such as health plans—on a cost-effective systemwide basis, utilizing the combined bargaining power of the 12 Reserve Banks;
- standardize travel policies and procedures to eliminate anomalies among the Reserve Banks that may result in unnecessary expenditures; and
- review contracting and procurement practices at the 12 Reserve Banks to (1) eliminate practices that could result in excessive costs and (2) promote and publicize “best practices” that are identified.

As more centralized management is instituted, the continuing need for separate management structures at the 12 Reserve Banks may increasingly be called into question. For example, increasingly uniform Reserve Bank personnel policies would reduce the need for 12 separate Reserve Bank personnel departments. Similarly, if travel policies are made more consistent, travel may be managed more efficiently on a systemwide basis.

The structure of the Federal Reserve was shaped when the U.S. economy was much more regional in nature. For example, during congressional debate on establishing the Reserve Banks, a Member of Congress said that the numbers and locations of the Reserve Banks should be such that “. . . no bank be more than an overnight’s train ride from its Reserve Bank.” Today, the increased use of electronic funds and securities transfers makes the geographical location of Reserve Banks irrelevant for many functions. Demographics that shaped decisions about the location of Reserve Banks have also changed profoundly. Except for minor boundary changes, the geographical structure of the Federal Reserve has remained unchanged since 1914, while the nation’s population has shifted dramatically.
Although population statistics are an inexact proxy for all matters considered in the original decisionmaking, they have rough parallels in bank assets, check-clearing volume, currency needs, and other factors that have an impact on the Federal Reserve’s lines of business. Since 1914, population growth and shifts have resulted in increasing disparities in population in the 12 Reserve districts, which were fairly similar in size in 1914. For example, the San Francisco Reserve Bank in 1914 served 6 percent of the nation’s population; the St. Louis Reserve Bank served almost 10 percent. As of 1990, the San Francisco Bank served almost 20 percent of the population, while the St. Louis Bank served just 5 percent. Overall, in 1914, the populations served by the Reserve Banks represented a range of 5 to 14 percent of the nation’s population. By 1990, the range had spread to 3 to 19 percent of the nation’s population, as shown in table 4.3.

Further changes in the nation’s population, coupled with reduced staffing at the Reserve Banks and increasing systemwide management of the Federal Reserve, call into question the continuing need for 12 Reserve Banks. In addition, an examination of the continuing need for maintaining 25 branch banks may be appropriate. Although the Board has authority to open or close branch banks, it has not done so frequently. Twenty-four of the current 25 branch banks were established by 1927. Since then, the Board has opened only one additional branch bank—the Miami branch of the Atlanta Federal Reserve Bank in 1975. The Board has closed only one branch bank in the Federal Reserve’s history: the Spokane branch of the San Francisco Reserve Bank, in 1938. Considering the substantial changes in the nation and its financial system since most of the branches were established, an overall review of the branch bank structure would seem appropriate.
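The widening geographic disparity can be summarized by comparing the largest and smallest district population shares cited in this section:

```latex
% Ratio of largest to smallest district population share
\text{1914: } \frac{14\%}{5\%} = 2.8 \qquad\qquad \text{1990: } \frac{19\%}{3\%} \approx 6.3
```

That is, by 1990 the most populous Reserve district served more than six times the share of the nation’s population served by the least populous district, up from less than three times in 1914.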
The Federal Reserve’s structure, established in 1913, was the end result of many compromises designed to promote Federal Reserve accountability to the public, and, at the same time, to maintain Federal Reserve independence from the nation’s political processes. The importance of the banking industry was acknowledged by establishing member banks as owners of Reserve Bank stock. At the same time, representation from the public was ensured through the membership of the Reserve Banks’ Boards of Directors, which are chosen to include a diverse representation from agriculture, commerce, industry, services, labor, and consumers across each Reserve Bank’s district. The importance of money centers, such as New York and San Francisco, was geographically balanced through the creation of 12 Reserve Banks—the maximum allowed under the Federal Reserve Act—thus ensuring that both rural and urban interests would be represented in the work and the deliberations of our central bank. In the same way, the power of the Board was tempered by establishing the Reserve Banks as independent entities subject only to the “general supervision” of the Board. Finally, while the Federal Reserve was created by an act of Congress and is required to report periodically to Congress, its actions do not need to be ratified by Congress or the president and, as explained in previous chapters, it is funded independently from the congressional appropriations process.

In many ways, these compromises have served the nation well and have created additional benefits for the Federal Reserve perhaps not fully envisioned when the Federal Reserve Act was passed. Federal Reserve officials believe that the broad geographic diversity represented by the Reserve Banks aids in the conduct of monetary policy by ensuring that various regional perspectives on the nation’s economy are heard.
A total of 281 individuals, many of whom are prominent leaders of industry, the financial services community, labor groups, and consumer interests, serve as directors of the Reserve Banks and their branches. These directors provide both a sounding board for Federal Reserve policies and an established “community of interest” to support the Federal Reserve when challenges to its independence arise, as they have from time to time in the past. Federal Reserve officials also believe that this community of directors provides a very useful network of relationships for the nation’s economy during times of financial crisis.

We are not in a position to fully evaluate the merits of these benefits for the Federal Reserve or the nation. If, because of the major challenges facing the Federal Reserve, changes to the Federal Reserve’s structure are contemplated, these issues would need to be carefully evaluated. As to the benefit of having diversity of economic information for monetary policy purposes, in today’s information age, it is likely that sufficient quality economic information could be gathered in some manner, even if the number of Reserve Banks were reduced. As to the benefits of its directors’ network of support, the effects of a reduction in the numbers of Reserve Banks or a diminution of their responsibilities are less clear. If some Reserve Banks were to become, in effect, merely payments system processing centers, for example, the ability of these banks to attract prominent directors might be jeopardized. Any actual or perceived effects this might have on the independence of the Federal Reserve would need to be weighed carefully against any potential improvements in efficiency and cost savings that such changes would yield.

In this and previous chapters we have discussed a number of changes facing the Federal Reserve. These are summarized below in table 4.4.
Taken together, these changes will likely result in substantial reductions in staffing at the Reserve Banks, which will likely call into question the continued appropriateness of the Federal Reserve’s current structure. We believe that responding to these challenges and making any accompanying structural changes that may become desirable can best be accomplished through strategic management and planning by both the Reserve Banks and the Board working together for the System. In chapter 5, we focus on strategic planning and how the Federal Reserve can take steps to proactively manage for these current and future challenges.

If the Federal Reserve is to effectively meet the challenges it faces and streamline operations, the Board and the Reserve Banks must work together to strategically plan for the future. Our prior work in public- and private-sector management reform showed that organizations that have been successful in improving their efficiency have done so by effectively implementing initiatives to focus on their primary missions and business lines, realign their structures to fit their mission, and apply technology to their work processes. Without strong external pressure to minimize overall costs, the Federal Reserve must create the necessary self-discipline for the institution to adequately control its costs and respond effectively to future challenges. However, we found weaknesses in the planning, budgeting, and internal oversight processes that are key mechanisms for helping accomplish these goals. A fundamental review of the Federal Reserve’s missions, structure, and use of technology would present the Federal Reserve with profound cultural challenges; however, the Federal Reserve has begun to show that it can address operational issues strategically and work in a systemwide manner when necessary.
As the Federal Reserve enters the next century, it is vital that both the Board and the Reserve Banks continue to foster a systemwide focus so that the Federal Reserve can fulfill its mission in an efficient and effective manner.

On the basis of our earlier work in public- and private-sector management reform, we found that leading organizations were able to effectively adapt to changes and challenges in their environment by planning strategically for the future. These organizations had the management processes in place—strategic planning, budgeting, and performance measurement—that supported their top leadership in setting strategic direction and establishing organizationwide priorities. Through strategic planning, organizations were able to better identify emerging issues and challenges and posture themselves to address these changes proactively. Successful organizations also integrated their planning processes with budgeting and performance management. With sound budgeting processes, these organizations were better able to weigh the priorities of the moment against those of the future. These organizations were also able to identify mistakes and make the appropriate adjustments by linking their budgeting processes to performance management.

Our work has also shown that public- and private-sector organizations that were able to achieve significant cost reductions while improving performance and service delivery did so by fundamentally rethinking their mission, strategic goals, lines of business (products and services), and customer needs. As a result of these reassessments, organizations sometimes found it necessary to redefine all or part of their missions, set new strategic goals, and modify their lines of business. In redefining their missions and strategic goals, organizations sometimes found that a fundamental rethinking and radical redesign of their key business and work processes was needed.
Known as business process reengineering, this fundamental rethinking seeks to achieve dramatic improvements in critical performance measures. In reviewing their core management and business processes, these leading organizations identified those that were highest in cost, were most customer sensitive, and presented the most significant opportunities and risks for improvement. They then considered the full range of information technology alternatives and information needs to determine how information technology could simplify and reduce the time and cost of carrying out these work processes. After considering the range of needs and available alternatives, these organizations radically redesigned these work processes to better carry out their core missions.

As discussed in chapter 4, the Federal Reserve faces major challenges to its business lines, particularly in the delivery of priced services to financial institutions. To effectively address these challenges, the Board and Reserve Banks need to work together to strategically plan for the future. We found that the Federal Reserve had a range of strategic plans and strategic planning initiatives in place or under development. For example, Board divisions and Reserve Banks had strategic planning processes that supported the formulation of strategic plans. According to Federal Reserve planning documents, the strategic planning process is to be linked to the Federal Reserve’s budgeting and resource allocation process. In addition to these strategic plans, strategic plans at the System level had been adopted, or were being developed, for Financial Services and Information Technology. However, the Federal Reserve did not have a process for integrating these individual planning processes and providing a systemwide focus for assumptions involving the future environment and relationships among functions.
As a result, the Federal Reserve may not be making the best use of its many strategic planning processes to prepare for the future and undertake the bold thinking that is needed to address current and future challenges.

Strategic planning within the Federal Reserve is carried out by the Chairman, the Board, and the Reserve Banks. As the chief executive officer of the Board, the Chairman is responsible for, among other things, providing (1) overall leadership and organizational direction to help establish major policy goals of the Federal Reserve and (2) administrative direction to the other Governors, the Board staff, and Reserve Banks. In his leadership capacity, the Chairman is involved in key decisions relating to major organizational structure changes that are designed to achieve strategic goals. The Chairman also conveys his views on the future direction, goals, and objectives of Federal Reserve policy through participation in meetings with the chairmen of Reserve Banks’ boards of directors and various Federal Reserve conferences.

The Board, which sets policy for the Federal Reserve, also has a role in strategic planning. The Board carries out its work through regular meetings and is assisted by standing committees and ad hoc committees. The standing committees perform a range of functions. The committees help formulate policy, review annual budgets for the relevant Board staff units, and monitor the performance of Board staff units or Reserve Banks against the approved budget. One of the standing committees, the Committee on Reserve Bank Activities, is responsible for overseeing the administrative operations of the Federal Reserve. Its purview includes general supervision over Reserve Bank operations, budgets, and planning activities and oversight of DRBOPS.

Each of the Reserve Banks has a strategic planning process that establishes goals and direction for the Reserve Bank.
Because of the independent structure of the Reserve Banks and shared supervisory authority within the Federal Reserve, the Reserve Banks have established a conference structure, composed of the Conference of Presidents (COP) and the Conference of First Vice Presidents (COFVP), to help develop systemwide consensus on issues and proposals that affect all Reserve Banks. COP, representing the Reserve Bank presidents, focuses on issues related to discounts and credits, management systems, strategic planning, personnel, legislation and regulations, supervision, and research. COFVP, representing the Reserve Bank first vice presidents, focuses on operational issues affecting the Reserve Banks. The conferences are supported by committees and subcommittees that administer the bulk of the conferences’ work and often initiate projects. The organizational structures of COP and COFVP are shown in figures 5.1 and 5.2.

In late 1994, a new management structure was installed to streamline the decisionmaking process and increase the accountability of Reserve Bank first vice presidents for strategic planning of financial services—which are priced services and other services, such as cash operations—provided to financial institutions. Under the new structure, the Financial Services Policy Committee, which is composed of two presidents and three first vice presidents, is responsible for the overall direction of financial services and related support functions. Furthermore, the committee serves as the vehicle for conveying major issues to the Board for discussion and actions. The new structure has dramatically altered the responsibilities of COFVP. COFVP maintains responsibility over the budget process. The Financial Services Management Committee is composed of six first vice presidents—the chairperson, four product group directors, and the director of automation services.
The management committee is responsible for developing and implementing business plans for the financial services and monitoring budgets and projects. The Financial Services Operations Council is responsible for coordination and provides advice to the management committee. The product offices are responsible for planning the future direction of each service area and receive support from their respective advisory groups. To carry out the Board’s supervisory role, DRBOPS staff serve as liaisons to the various groups in the new structure. Figure 5.3 illustrates the new structure for financial services management.

Although the Federal Reserve has a range of strategic planning processes or programs in place or under development, we found these processes were not designed to address, on behalf of the Federal Reserve, the critical challenges raised by an increasing need to constrain costs, likely changes for System business lines, or the possible implications of those changes on the Federal Reserve’s structure. In reviewing the Federal Reserve’s strategic plans and strategic programs under development, we found that they were generally focused on the strategic goals and objectives of individual divisions, Reserve Banks, or functions. While we believe these plans serve an important purpose in defining the direction of these Federal Reserve entities, we believe that the emerging issues and challenges facing the Federal Reserve will necessitate bold strategic planning focused on the System as a whole. For example, the Federal Reserve may find the System’s long-term interest better served, both from a cost-reduction and performance perspective, by a review of (1) the System’s mission and business lines; (2) the need for all Reserve Banks to perform many of the same functions; and (3) the potential for further consolidation or centralization of certain missions and functions.
Determining the future direction of the Federal Reserve and what is best for the System overall will require the Chairman, the Board, and the Reserve Banks to make hard decisions that will raise further issues and concerns regarding their impact on the Federal Reserve’s system of shared leadership and control.

The Federal Reserve recently took action toward achieving greater integration of its strategic planning processes. Recognizing the need for a more systemwide focus, the Board, in mid-1995, chartered the establishment of a new planning entity known as the Federal Reserve System Strategic Planning Coordination Group (SPCG). In assembling SPCG, the Federal Reserve put together an organizationally diverse group whose membership includes the Chairman of the Board (who serves as an ex officio member) and representatives of the Board, the Reserve Banks, and all major functional and support areas. SPCG is to provide a common framework for the development and refinement of the many individual strategic plans and action plans within the Federal Reserve. According to Federal Reserve planning documents, several Board members and Reserve Bank presidents believed that the discrete strategic planning processes within the Federal Reserve would benefit from greater coherence, especially in terms of assumptions about the future environment and interrelationships among functions. While we believe the establishment of SPCG is a positive step for the Federal Reserve, we are concerned that SPCG’s scope of responsibility and authority may be too limited.
Specifically, SPCG was tasked to develop for senior management (governors, presidents, first vice presidents, and certain Board division directors) a document setting forth a common

- view of the mission, vision, values, and priorities of the Federal Reserve;
- view of, and assumptions about, the future environment in which the Federal Reserve will operate;
- understanding of the strengths and opportunities, as well as the weaknesses and vulnerabilities, of the Federal Reserve; and
- recognition of major challenges or redirections facing the Federal Reserve.

In describing the scope of SPCG’s work, the Board also identified the following four important issues that the group might address:

- How can the Federal Reserve Board and Reserve Banks work better as a System rather than as 13 separate entities?
- How can the Board and Reserve Banks achieve better coordination across functional areas within units and within the Federal Reserve?
- How can the Board and Reserve Banks achieve better coordination across units within functional areas?
- Are there changes or innovations in the structure or governance of the Federal Reserve that would make it work better?

If the Federal Reserve is to more fully use SPCG, it may need to (1) broaden the group’s responsibilities to specifically include a fundamental review of Federal Reserve operations, focusing on the primary mission, business lines, and structure that would best support the Federal Reserve’s overall mandate in an environment of an increasingly constrained federal budget, and (2) better empower the group to have an impact by changing expectations throughout the Federal Reserve about the nature of the changes that could result from the group’s work. The SPCG Chairman and Vice Chairman have stated that SPCG is not intended to develop new specific action plans or objectives or to override plans or objectives already in place, for either functional areas or organizational units.
Rather, the results of the planning coordination process would be the common framework for developing and refining constituent strategic plans and action plans. Minutes of a September 1995 SPCG meeting indicated the group’s concern about its limited authority. The minutes identified several important questions as being planned to be addressed by the group. Two of these questions were (1) how the group could guide organizational decisionmaking, help set priorities for the Federal Reserve, and drive the System’s budget processes and (2) how the group could strike an appropriate balance between a system framework and the system strength derived from district/functional autonomy.

Beginning in the late 1980s, information technology within the Federal Reserve underwent a profound change. From 1988 through 1994, the Federal Reserve spent hundreds of millions of dollars on information technology. By late 1995, according to Federal Reserve planning documents, most mission-critical applications had been or were being completely rewritten; a new network, FEDNET, had been built and was being deployed; and the FRAS organization, established to consolidate the mainframe processing function, had assumed responsibility for most mainframe processing. While we did not do an in-depth review of FRAS, we believe that such an approach makes sense. However, Reserve Banks have remaining concerns about the spillover implications of a systemwide approach to mainframe processing consolidation for the System’s future. Because of the size of the information technology investment and the potential that such technology holds for providing higher quality services at a faster and lower cost, it is critical that the Federal Reserve ensure that its strategic information technology planning is an integral part of the Federal Reserve’s strategic planning process and business planning and that assumptions about the future environment are fully considered.
In the 1980s, several Reserve Banks, primarily seeking cost efficiencies, proposed consolidating their mainframe processing operations. On the basis of this effort, the Board later established a committee to study the feasibility of consolidation for the Federal Reserve as a whole. This committee proposed that the Federal Reserve replace the independent mainframe operations of the 12 Reserve Banks and consolidate these operations into 3 automation centers. This proposal also included a unique organizational structure for overseeing mainframe computer operations, placing the responsibility for the consolidated operations under a Senior Automation Executive located within the Richmond Reserve District as a separate organizational entity called FRAS. In 1990, the Board and the Reserve Banks adopted this proposal, setting a new precedent for a systemwide approach to an important operational function.

The Federal Reserve’s approach to implementing FRAS represented a major departure from the decentralized approach traditionally used by the Federal Reserve to carry out its operational functions. The objectives of automation consolidation, in descending order of importance, were to improve reliability and disaster recovery, increase control of payment system risk in a national banking environment, improve security of the total automation environment, enhance responsiveness to changing business requirements, and improve efficiency. The Federal Reserve anticipates that FRAS will be responsible for operating mission-critical systems, such as Fedwire (which handles more than $1 trillion in transactions each business day from almost every U.S. financial institution) and key information systems, such as the Federal Reserve’s bank statistics database and payroll system.
The systemwide approach to automation consolidation prompted concerns about the control of automation resources and the impact of this approach on Reserve Bank autonomy and the future of the Federal Reserve. These concerns were twofold: (1) that the consolidation of this activity would lead to the consolidation of other activities and (2) that the Reserve Banks would lose control of the automation resources. The Reserve Banks worried whether they would continue to manage the automation resources or whether the Board’s staff would become more involved in the planning and day-to-day management of automation resources. Concerns were also expressed that, as consolidation progresses, a few “significant” Reserve Banks would emerge. The emergence of such Reserve Banks could cause other Reserve Banks to have a harder time recruiting prestigious directors, thereby diminishing the regional character and local support of the Federal Reserve.

As originally conceived, FRAS was to be a system to provide cheaper mainframe processing support for the delivery of services to Treasury and financial institutions. However, as is often the case with major information technology projects, the scope of the project grew to include applications not envisioned in the original plans for FRAS. Planning for FRAS could have taken greater account of the needs of the Reserve Banks. Some Reserve Bank officials told us that the growth in scope of FRAS, particularly to include check processing, had made it difficult for them to comply with the requirement in the Monetary Control Act that service fees should recover the costs of priced services.

As the Federal Reserve proceeds in the implementation of FRAS, it needs to better identify the Federal Reserve’s overall mission needs, the needs of the Reserve Banks, and those work processes that hold the most promise for improved service delivery through information technology.
While we did not do an in-depth review of FRAS, it appears that the design of FRAS assumes the retention of all key missions and business lines. Furthermore, we did not observe an identification of those work processes that could be reengineered and that hold the most promise, and risk, for the application of information technology. If the Federal Reserve revises its assumptions about the future environment and its core missions and business lines, it must ensure that these decisions are well integrated with its information technology strategic planning. The Federal Reserve is currently working on a strategic plan for its information technology. The plan seeks to lay out a planning horizon for the Federal Reserve through the year 2000. As of February 1996, the strategic plan was still in draft. In reviewing the draft plan, we observed that it lays out strategic goals and strategies by mission. The draft plan also assumed the retention of all missions, business lines, and operating structures. As the Federal Reserve refines its information technology strategic plan, it is vital that it continually check its key strategic assumptions and make sure that its information technology strategic goals keep pace with key strategic decisions. An effective budget process should support top management in constraining costs, weighing current priorities against future priorities, and allocating resources according to organizational priorities. For an institution such as the Federal Reserve, it is especially important that a rigorous budget formulation and execution process be in place to constrain costs and foster the internal self-discipline necessary to periodically reassess strategic goals and priorities. The Federal Reserve's budget process seeks to ensure that overall Federal Reserve objectives are accomplished efficiently and effectively. 
In reviewing the budgeting process for both the Board and the Reserve Banks, we found that the Federal Reserve had a budgeting process that imposed some discipline, in that there was no material overspending of approved budgets. However, we found that the budget process had a weakness: it used a current services approach that assumed existing functions would be retained and that budgets would continue to grow incrementally. Such an approach, we believe, did not adequately support top management in constraining costs and imposing the internal self-discipline necessary for the Federal Reserve to respond effectively to future priorities. In reviewing the budgeting process for 1988 to 1994, we found that the operating budgets of the Board and the Reserve Banks were formulated on the assumption that existing units would generally continue to perform their required functions and that their budgets would increase from year to year to account for expected increases in inflation and salaries. With no formal constraints on overall spending, the extent of increases in unit budgets was left ultimately to the discretion of the Board. The formulation of the Board's budget was overseen by the administrative Governor, under authority delegated by the Chairman of the Board, and managed by the Board's Office of the Controller. The process began in the spring of each year with the development of a budget guideline and extended through November. In the spring, each Board division developed a strategic plan, which identified and prioritized objectives, and a proposed budget. Next, the Board Governor (or Governors) with administrative responsibility for the division reviewed the plans and commented on the merit of the proposed budget. The divisions had the opportunity to revise their strategic plans on the basis of those comments. 
The Program Analysis and Budgets section of the Controller's Office then developed a proposed budget guideline, or acceptable percentage increase in Board expenses for the upcoming year. According to officials we interviewed, the percentage increases were based on such factors as inflation and the expected cost of programs and initiatives identified in strategic planning sessions. The proposed percentage increases were first reviewed by the administrative Governor and the Board Chairman; if they were satisfied, the Board received the proposal for approval during the summer. Each Board division used the approved percentage increase to prepare a revised budget proposal that it submitted to the Controller in the fall. After reviewing the budget proposals and making any necessary adjustments, the Controller coordinated meetings to discuss the budget proposal with each division and the appropriate administrative Governor(s). On the basis of these meetings, the Controller could make additional adjustments before consolidating the division budgets. The consolidated budget was then given to the administrative Governor for review and presentation to the Board Chairman. After all appropriate adjustments had been made, the administrative Governor presented the consolidated budget to the full Board for approval at a public meeting shortly before the new budget year, which began in January. Percentage increases and proposed Reserve Bank budgets were formulated and approved in a process separate from the formulation and approval of the Board's increases and budget. The Reserve Banks' process generally took 6 months, culminating in the Board's approval of the proposed increase in late spring of the year before the subject budget year. 
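The "current services" formulation described above, in which each unit's prior-year budget is carried forward and grown by an approved percentage guideline, can be sketched in a few lines. This is purely an illustration of the arithmetic; the division names and dollar figures below are hypothetical, not actual Federal Reserve data.

```python
# Hypothetical sketch of current-services budgeting: every unit's budget is
# the prior year's budget grown by the approved percentage guideline.

def apply_guideline(unit_budgets, guideline_pct):
    """Grow each unit's budget by the approved percentage increase."""
    return {unit: round(amount * (1 + guideline_pct / 100), 2)
            for unit, amount in unit_budgets.items()}

# Illustrative division budgets (millions of dollars)
budgets_1993 = {"Division A": 40.0, "Division B": 25.0, "Division C": 12.5}

# A guideline of, say, 4 percent yields next year's proposed budgets
budgets_1994 = apply_guideline(budgets_1993, 4.0)
print(budgets_1994)  # {'Division A': 41.6, 'Division B': 26.0, 'Division C': 13.0}
```

Note that under this approach every unit's budget grows whenever the guideline is positive; nothing in the calculation forces a zero-based reexamination of whether a function is still needed, which is the weakness the report identifies.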
During 1988 to 1994, the Federal Reserve's conferences—COP and COFVP—along with their supporting committees, subcommittees, and task forces, provided a systemwide mechanism for the development and sequential review, at many System levels, of budgetary proposals and objectives that affected all Reserve Banks. Various data were considered in developing the percentage increase proposal, including volume and cost projections for priced services, Federal Reserve project cost projections, and information on Reserve Bank initiatives affecting expenses. Shortly after the Board's approval of the allowed increase in Reserve Bank budgets, each Reserve Bank developed budget documents and materials, including a proposed budget. These proposals were initially reviewed by COFVP and then forwarded to COP for review. COP's budget recommendations were then reviewed and approved, in turn, by DRBOPS, the Board's Reserve Bank Activities Committee, and, finally, the full Board shortly before the start of the budget year. The budgets for the Reserve Banks and the Board were monitored throughout the year. For example, the Board's actual expenditures were compared to the budget plan throughout the year to ensure compliance with approved budget and program plans. The Office of the Controller had lead responsibility for monitoring the Board's budget. The Controller submitted quarterly reports to the Board that compared each division's actual expenses with its budget and conducted midyear reviews with each division to control costs and provide a baseline for analyzing the upcoming year's budget request. Generally, if the Reserve Banks and the Board did not deviate from their respective budgets by more than 1 percent, they were allowed to reprogram funds from one spending category to another without seeking Board approval. Reserve Bank budgets were also monitored throughout the year—both at the Board and at the Reserve Banks. 
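The 1 percent reprogramming threshold mentioned above lends itself to a short illustration. The function below is a hypothetical sketch of the rule as the report describes it, not an actual Federal Reserve procedure, and the dollar figures are invented.

```python
# Sketch of the monitoring rule: reprogramming between spending categories
# was allowed without Board approval as long as the entity did not deviate
# from its approved budget by more than 1 percent.

def needs_board_approval(approved_budget, actual_spending, threshold_pct=1.0):
    """Return True if the deviation from the approved budget exceeds the threshold."""
    deviation_pct = abs(actual_spending - approved_budget) / approved_budget * 100
    return deviation_pct > threshold_pct

# Illustrative amounts (millions of dollars)
print(needs_board_approval(200.0, 201.5))  # 0.75% deviation -> False
print(needs_board_approval(200.0, 203.0))  # 1.5% deviation -> True
```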
Reserve Bank budgets were monitored mainly through the Reserve Banks' cost-accounting system by the individual Reserve Bank controllers and the staff of DRBOPS. The cost-accounting system facilitated the comparison of financial and operating performance at the Reserve Banks individually and as a whole. In exercising its statutory authority to generally supervise the Reserve Banks, the Board required the Reserve Banks to submit budgets annually and to seek approval, on an ad hoc basis, for large purchases (capital acquisitions). In addition to the budget approval process, the Board established various levels of approval for Reserve Bank expenditures related to buildings, equipment acquisitions, and price changes for Reserve Bank services. Above certain dollar amounts, these proposed expenditures had to be approved by the Board. For proposals that fell below the specified thresholds, the Board delegated its approval authority to DRBOPS or the Reserve Banks. In addition, DRBOPS could forward Reserve Bank proposals with possible systemwide policy implications to the Board. According to a DRBOPS official, proposals approved at the Reserve Bank level were routinely forwarded to the Board as an information item. In reviewing the execution of the Federal Reserve's budget between 1988 and 1994, we observed that the budget processes of the Board and the Reserve Banks resulted in budgets that increased each year. However, the amounts finally approved were generally lower than those initially requested. As a whole, we found that the Reserve Banks and the Federal Reserve sometimes exceeded their initially approved operating budgets, but generally by amounts that were less than 1 percent of the approved operating budget. Concerning the Federal Reserve's capital budget, we found that in every year except 1992, the Federal Reserve spent less than was budgeted. In most years, the underspending was primarily related to data processing and data communications equipment. 
However, in 1992, the Federal Reserve overspent its data processing and data communications budget by almost $52 million. In that year, the Federal Reserve's initial capital budget did not call for purchasing any computer equipment for FRAS. However, in 1992, the Federal Reserve began FRAS-related acquisition and development; by year-end, it had spent nearly $96 million on computer equipment. Internal oversight processes, such as performance measurement, internal audits, and financial audits, can and should play key roles in helping management achieve its strategic vision for the organization. The Federal Reserve had many oversight mechanisms in place. However, we found that these mechanisms either did not support performance evaluation from a systemwide perspective or were becoming increasingly inappropriate in the changing environment. As a result, the Federal Reserve may not be making the best use of the resources it devotes to oversight. The Board and the Reserve Banks had a variety of mechanisms to oversee many activities. Oversight of Board programs and operations is provided by the Board's OIG. The various oversight mechanisms of the Federal Reserve are summarized in table 5.1. By involving key stakeholders in developing performance measurement systems keyed to organizational goals, performance measurement can be used to assess how all parts of the organization are contributing to overall effectiveness in achieving the organization's key goals. In conducting our work, we noted that the evaluation and assessment of Reserve Bank performance had received considerable attention from both Reserve Bank management and the Board. DRBOPS conducted annual assessments of Reserve Bank operations in various areas. Reserve Bank management tracked performance on a variety of measures on an ongoing basis. 
And other oversight mechanisms—internal and external financial examinations, operations reviews, and OIG evaluations—provided additional information on performance. However, many of these performance measures were too narrowly focused on such Bank-specific measures as the number of checks processed or the amount of fees collected for ACH processing. In the context of the Federal Reserve's new efforts at systemwide planning for the Board and the Reserve Banks together, the Federal Reserve appears to lack major systemwide benchmarks for measuring how effectively the Federal Reserve—as a whole—is meeting its new challenges. Concerning systemwide goals and objectives, it may now be appropriate for the Federal Reserve to redesign its key performance indicators to more accurately reflect overall organizational goals and objectives. As part of this new strategy, outcome-linked performance measures should be developed, for both the Board and the Reserve Banks, that show how organizational components can best contribute to overall organizational effectiveness. Even given the number of oversight mechanisms available to the Federal Reserve, we identified specific problems—gaps in audit and evaluation coverage, a potential lack of independence, and possible audit reporting problems—that could all be addressed with certain changes in Federal Reserve oversight. These problems stemmed in part from the unique structure of the Federal Reserve and the authority provided to the entities supporting the Board. For example, the Inspector General is authorized to review only the activities of the Board, while DRBOPS is responsible for overseeing the Reserve Banks and for developing policies. As the Federal Reserve increases systemwide projects and consolidations, the need for stronger, more comprehensive Federal Reserve oversight is likely to increase. 
With improved oversight, the Federal Reserve can better identify areas where efficiencies can be achieved, particularly areas with reengineering potential, and ensure that organizational results are both outcome-linked and responsive to multiple organizational priorities that may cut across various parts of the organization. The lack of a systemwide perspective has affected audit and evaluation coverage within the Federal Reserve. Until recently, the Federal Reserve's oversight mechanisms did not include an independent audit of the combined financial statements of the Reserve Banks. DRBOPS, which lacks clear independence, conducted individual financial examinations of each Reserve Bank on behalf of the Board. In November 1994, the Board awarded a contract to have an independent public accounting firm audit the combined financial statements of the Federal Reserve Banks for the years 1995 through 1999. We believe this will help improve financial auditing within the Federal Reserve. However, we also believe that a permanent policy requiring an annual independent financial audit of the Reserve Banks' combined financial statements is needed. We recommended that this be done in some of our previous work. Government experience has shown that emphasis on financial management and oversight can change with agency leadership. Therefore, legislating an annual audit requirement, as was done by the Government Management Reform Act of 1994, which expanded the Chief Financial Officers Act's requirement for annual financial statement audits to the 24 largest executive agencies, would ensure that the emphasis on financial management continues. We also noted in our review that some areas were the subject of possibly redundant audit attention. 
For example, at the time of our review, we observed separate evaluations of various aspects of the Federal Reserve Automation Services project at the Richmond Reserve Bank being conducted by the Richmond General Auditor, the OIG, and DRBOPS staff. While we did not do an in-depth analysis of these audits, we nevertheless identified apparent areas of overlap. At the same time, in our review of a sample of contracting and procurement practices at selected Reserve Banks, we found potential conflicts of interest within the bid selection processes and some lax practices in ensuring that correct payments were being made on contracts. Yet although contracting received some audit attention at the Reserve Banks we visited, these problems were not identified. The use of the existing oversight structure to conduct systemwide audits may not be appropriate because the general auditors do not report to a systemwide board of directors. At the time of our review, one General Auditor was serving as the head of the systemwide audit of the ISS-3000 currency processing equipment. The General Auditor was to report the audit findings to that Reserve Bank's Audit Committee even though the review was conducted for the Federal Reserve as a whole. In our view, the findings of an audit of a major systemwide project should be reported directly to the Board, which has direct fiscal responsibility for the project. We believe that the Federal Reserve could alleviate some, if not all, of these problems by providing a more focused and efficient approach to Federal Reserve oversight. It could accomplish this by taking steps to better ensure the independence of its internal audit function and by expanding the scope of the OIG's authority to include responsibility for auditing the Reserve Banks and systemwide projects. 
As the Reserve Banks move toward more systemwide projects and more centralized decisionmaking, the Federal Reserve's fragmented oversight structure is increasingly ill suited to providing adequate oversight of centralized Reserve Bank operations. If the OIG's authority were expanded, the problem of redundant audits would be addressed. The expansion of the OIG's authority would necessitate increases in both staffing and spending for the OIG. However, it may be possible to simultaneously reduce staffing in other oversight mechanisms. We believe that the Federal Reserve, to effectively plan for the future, needs to conduct a fundamental assessment of its operations focusing on its missions, strategic goals, and structure. Such an assessment should also include a review of the Federal Reserve's strategic management processes. We believe that the Federal Reserve faces some difficult constraints in conducting such an effort. For example, the Board will need to work with the Reserve Banks to rethink their mutual roles in the shared leadership of the System. Furthermore, they will face profound challenges in planning for and confronting possible changes. Planning deliberations related to redefining core missions and business lines and realigning the Federal Reserve's structure and governance would require strategic planners to "think beyond" the statutory powers of the Board and the Reserve Banks. The essential missions, as well as the locations of the Federal Reserve's Reserve Banks, are set by law, and the autonomy of the Reserve Banks generally necessitates consensus-oriented decisionmaking in systemwide planning. For example, the Federal Reserve is required by law to develop and implement monetary policy, supervise and regulate banks, regulate and provide payments system services, and provide fiscal agency services to government agencies upon request. 
In rethinking its missions and business lines, the Federal Reserve may face conflicts and difficult policy choices, which may require that it consult with Congress for help in resolving them. For example, the Federal Reserve is required to base check-clearing fees on the recovery of its costs; at the same time, it must also function as the "clearer of last resort" and promote the safety and soundness of financial institutions. In addition, neither the Board nor the Reserve Banks is authorized to change the number or locations of Reserve Banks or the essential elements of Federal Reserve governance. Changes that might be considered in the context of a fundamental assessment of Federal Reserve operations could therefore require legislative action to accomplish. Because it lacks the cost-minimization pressures common to most public and private entities, the Federal Reserve must work especially hard to overcome internal pressures for budgetary increases. As discussed in chapter 1, the Board is a government agency and provides Congress with an annual report of the Federal Reserve's operations; however, the Federal Reserve is not subject to the congressional appropriations process that serves as a constraint on spending by federal entities. Furthermore, because the Federal Reserve Act sets dividends to member banks at 6 percent and prohibits them from selling their shares, the shareholders, who are member banks, do not have the usual financial incentives to encourage cost-efficient operations. Additionally, the amount of interest the Federal Reserve receives on securities acquired through the issuance of Federal Reserve notes is so great that it tends to mask the net decline in all other revenue sources that occurred over the 1988 to 1994 period. 
Therefore, it is especially important for the Federal Reserve to have management processes that support top management in constraining costs and that instill a high level of internal self-discipline that would allow the Federal Reserve to overcome institutional resistance to major management reform. However, despite its unique structure, the Federal Reserve has begun to show that it can address operational issues strategically and work in a systemwide manner when necessary, as evidenced by the recent establishment of a new Financial Services Committee to examine priced services and by the consolidation of its data-processing facilities. The Board and the Reserve Banks must work together to meet the emerging challenges and to ensure that the nation’s central bank keeps pace with the changing environment and remains a strong and competitive institution. In analyzing opportunities to reduce the cost of Federal Reserve operations to the taxpayer, any potential adverse impact on the independence of monetary policy or on the Federal Reserve’s ability to meet its key responsibilities should be considered carefully. However, we see no inherent conflict between the Federal Reserve’s independence or effectiveness and efforts to improve efficiency. Many of the functions performed by the Federal Reserve have little direct relation to monetary policy, and the Board, working with the Reserve Banks, has the authority and ability to take many cost-saving actions without jeopardizing its mission effectiveness. However, any decision to close Reserve Banks or establish a separate corporation for priced services would require congressional approval. Thus, we make recommendations to the Board and suggest several matters for congressional consideration. We recommend that the Board of Governors undertake a fundamental review of Federal Reserve operations focusing on the primary mission, business lines, and structure that would best support its overall mandate. 
Such an organizational review should include an assessment of the following: the Federal Reserve’s role in providing financial services to banks and government agencies and an analysis of the costs and benefits to the Federal Reserve and the taxpayers of various options for delivering such services (such options could include discontinuing delivery of certain priced services to financial institutions, privatizing the delivery of other services by establishing a private corporation for delivering such services, or retaining responsibility for being the primary service provider); cost-saving opportunities that could result from streamlining the Federal Reserve’s existing management structures and consolidating Federal Reserve operations, including possible mergers among the 12 Reserve Banks and 25 branches; and the potential for technology to support streamlined work processes in the Reserve Banks and to reduce costs and improve quality. In addition, we recommend that the Board strengthen its existing control and oversight mechanisms by, among other things, (1) reviewing the appropriateness of current budget assumptions, which assume steady annual growth; (2) taking steps to better ensure the independence of the Federal Reserve’s internal audit function and to expand the scope of its OIG’s authority; and (3) ensuring that an independent financial audit of the Reserve Banks’ combined financial statements is conducted every year. Congress should consider the results of the Federal Reserve’s assessments and determine whether it would be desirable to merge or close any of the 12 Reserve Banks or 25 branches and which of the various options for delivering priced services to financial institutions are in the best interests of public policy and represent the best balance between achieving cost savings and serving the nation’s financial interests. 
Congress may also wish to consider requiring an annual independent audit of the Reserve Banks' combined financial statements; requiring the Federal Reserve to charge for bank examinations; and establishing a statutory requirement that the Federal Reserve annually transfer its remaining revenues to the Treasury. The Federal Reserve's Board of Governors did not agree with any of our recommendations to the Board or with our suggestions to Congress. The Board did not agree to undertake a fundamental review of the Federal Reserve System's operations because it believes such reviews are an ongoing and integral part of the Board's oversight of the System. The Board stated that the Federal Reserve's role in providing financial services to depository institutions is constantly being tested in the marketplace, and the Board noted that the System is consolidating the management of some financial services. The Board stated its belief that most savings from such consolidation efforts would be possible in electronic payment functions, such as Fedwire, with lesser savings possible in paper-based financial services, such as check clearing. The Board also did not agree to consider alternatives to the current way the System provides priced services. Concerning merging or closing any of its 12 Reserve Banks or 25 branches, the Board stated that, while the Federal Reserve's structure would likely be different if established today, any such realignments or relocations would have to yield substantial long-term savings to offset the transition costs. Concerning the potential for technology to support streamlined work processes in the Reserve Banks, the Board stated that the Federal Reserve routinely assesses technologies for their ability to reduce costs and improve the quality of its services. Concerning our recommendations to improve the Federal Reserve's control and oversight mechanisms, the Board did not agree with our characterization of the System's budget process as one that assumed continuous growth. 
The Board also did not agree that the independence of its internal oversight would be strengthened by expanding the authority of the Board’s OIG to the Reserve Banks. The Board believed that the current audit process ensured adequate independence and that expanding the OIG’s authority could integrally involve the inspector general in the Board’s oversight process and raise questions about the inspector general’s “arm’s length” ability to audit such processes. The Board did not comment on our recommendation to institutionalize an annual external audit of the combined financial statements of the Reserve Banks. Finally, the Board did not agree with our suggestion that Congress may want to consider requiring the Federal Reserve to charge for bank examinations. The Board noted that, currently, the states charge examination fees that, on average, are approximately half of those charged by OCC for national bank examinations. The Board believed that if the Federal Reserve and FDIC were to charge for their examinations of state-chartered banks, such fees could tip the scales toward national charters and call into question the long-term viability of a valuable dual banking system. We continue to believe that the major technological and marketplace developments that are currently affecting the financial services industry have profound implications for the activities and operations of the Federal Reserve and require the System to have a strong, systemwide strategic management process. We acknowledge that the Federal Reserve has a range of strategic planning processes and programs in place or under development. And we recognize and commend the Federal Reserve’s efforts to provide a more systemwide focus for its strategic planning efforts through the recent creation of the Federal Reserve System Strategic Planning Coordination Group. 
However, we are concerned that these strategic planning efforts are not sufficiently integrated and thus may be too limited to effectively address the major challenges the Federal Reserve is facing, given the potential implications of these developments for the Federal Reserve's business lines and organizational structure. Leading private and public institutions have found that truly significant savings often come only when, as part of a comprehensive strategic planning process, they have rethought their basic missions and lines of business and reengineered their work processes to streamline operations. The Federal Reserve's plans to consolidate some of its operations in financial services, while commendable, fall far short of the broad rethinking that we believe is necessary if the Federal Reserve is to be as efficient and cost effective as it can be in fulfilling its critical role as our nation's central bank. As part of this broad rethinking, we also believe the Federal Reserve should consider consolidating some Reserve Banks and branches. We agree with the Board that such consolidation would entail transition costs, but we believe that these costs could be offset by longer-term savings. We also note that consolidating banks and branches is not without precedent among central banks. For example, before the reunification of the former East and West Germany, the German central bank, the Deutsche Bundesbank (which was established by the Allies after World War II and modeled on the Federal Reserve System), had a presence in the form of Landesbanks in each of the 11 West German states. If it chose to keep the same structure intact after reunification, the Bundesbank faced the possibility of establishing five additional Landesbanks, one in each of the states of the former East Germany. 
Instead, the German government reduced the total number of Landesbanks serving the 16 reunified states to 9 and significantly reduced the number of central bank branches as well. The chief reasons given for these consolidation efforts were to promote efficiency and cost savings. Between January 1, 1995, and January 1, 1996, the Bundesbank reported that it was able to reduce its staff by 6 percent. Our recommendation that the Federal Reserve consider alternatives to the current way it delivers priced services to depository institutions is another example of the broad rethinking of mission and lines of business that we believe the Federal Reserve should undertake. When institutions carefully reexamine their missions and lines of business, they often determine that some lines of business are no longer profitable or no longer fit with the strategic direction they wish to take. For example, observers in the private sector have questioned whether it is appropriate for the Federal Reserve to continue to be both a provider and a regulator of priced services, particularly in light of the growth of private-sector service providers. Some top officials within the Federal Reserve have, in the past, also suggested alternative ways to provide these services, such as by establishing a separate corporation. Regarding the use of technology to streamline work processes, our reviews of leading organizations that have sought to improve performance through strategic information management and technology have shown that accomplishing order-of-magnitude improvements in performance nearly always requires streamlining or redesigning critical work processes. Consequently, we believe information systems initiatives must be focused on process improvement. 
Using business process reengineering to drive information systems initiatives can lead to these order-of-magnitude savings, rather than the marginal efficiency gains normally associated with initiatives that use technology to do the same work, the same way, only faster. We acknowledged in several sections of this report that the Federal Reserve's automation consolidation efforts (under FRAS) were designed to promote more efficient operations and to ensure increased security in the nation's payments system. However, we are concerned that the Federal Reserve's automation consolidation efforts may not have involved sufficient reengineering of existing work processes. Because of the size of the information technology investment and the potential that such technology holds for providing higher quality services faster and at lower cost, we believe it is critical that the Federal Reserve ensure that its strategic information technology planning is an integral part of its strategic and business planning processes. We continue to believe that the concerns we raised about the Federal Reserve's oversight and control mechanisms are valid. Although the Federal Reserve does not view its budget process as having a built-in assumption of annual growth, we note that, for each year from 1988 to 1994 and for each Reserve Bank, annual budget targets were expressed as percentage increases over the previous year's budgets. The budget did not reflect a decrease in budget authority in any year for any Reserve Bank, despite the fact that during this period many Reserve Banks consolidated their savings bonds programs and mainframe computer operations. With regard to expanding the OIG's authority to directly audit the Reserve Banks, we believe the Inspector General can perform these functions while also retaining the ability to provide arm's length reviews of the Board's oversight processes. 
In an increasingly consolidated Federal Reserve System, retaining the Reserve Banks’ general auditors to do systemwide reviews seems increasingly inappropriate. And reliance on DRBOPS to do such reviews leads to questions about the independent nature of such reviews, particularly since this division also sets policy for the Reserve Banks and has approval authority over certain Reserve Bank purchases and decisions. Such problems and questions could be resolved by expanding the OIG’s authority and by taking steps to better ensure the independence of the Federal Reserve’s internal audit function. In addition, centralizing reviews of Reserve Bank programs would make more apparent any overlapping and redundant reviews and would more clearly highlight areas receiving insufficient audit attention. Finally, we found no reason to suggest that having the Federal Reserve charge for its bank examinations would threaten our valuable dual banking system. Currently, the Federal Reserve is the only one of five federal regulators of depository institutions where taxpayers, and not the industry, bear the cost of supervision. As we noted in this report, the Federal Reserve supervises less than 1,000 state-member banks, or about 9 percent of all banks, and evidence from recent mergers indicates that state charters are being considered more desirable than national charters. The Federal Reserve could also take steps through arrangements with state banking regulators to reduce any undue competitive effects of charging for bank examinations. In addition, our recommendation is not meant to be limited to charging state-member banks. The Federal Reserve’s response does not address charging for its other examinations—those for foreign banks and bank-holding companies—where the possibility of charter switching is not an issue. 
Pursuant to a congressional request, GAO reviewed the operations of the Federal Reserve System, focusing on: (1) its finances and levels of spending; (2) areas where spending could be reduced; and (3) actions the Federal Reserve could take to meet future challenges in systemwide management. GAO found that: (1) Federal Reserve operating expenses increased from $1.36 billion in 1988 to $2 billion in 1994; (2) the most significant operating cost increases were for bank supervision and regulation, personnel pay and benefits, and extensive modernization and consolidation of information systems; (3) operating costs vary among reserve banks because the Federal Reserve has not established consistent policies; (4) the Federal Reserve could reduce its personnel benefits and travel-related reimbursements, and realign its contracting and procurement practices; (5) a reduction or elimination of the Federal Reserve surplus account, which increased from $2.1 billion in 1988 to $3.7 billion in 1994, would increase federal budgetary receipts in the year that the reduction or elimination occurs; (6) major developments such as increased competition from private-sector suppliers, use of electronic banking, and consolidation of the banking industry, are likely to affect the Federal Reserve's operations, future role, and management structures; and (7) the Federal Reserve must eliminate the weaknesses in its planning, budgeting, oversight, and audit processes that impede its cost control efforts.
In a March 1999 white paper detailing modernization plans for the bomber fleet, the Air Force advised Congress that it needed 93 B-1Bs, including 70 combat-coded aircraft by the end of fiscal year 2004, to meet DOD’s strategy of being prepared to win two nearly simultaneous major theater wars. In June 2001, the Air Force proposed reducing the fleet from 93 to 60 aircraft and reducing the number of combat-coded aircraft to 36. Table 1 compares the force structure before and after OSD’s June 2001 decision to reduce and consolidate the B-1B fleet. Partly in response to concerns expressed by Members of Congress about OSD’s June 2001 decision to eliminate the B-1B mission at Mountain Home, McConnell, and Robins Air Force Bases, the Air Force identified and announced new missions for these locations in September 2001. Planning for the new missions is well underway, and the units are expected to transition to their new missions in the fourth quarter of fiscal year 2002. Mountain Home’s current F-15E squadron will be increased from 18 to 24 aircraft, and its 7 KC-135 tankers will be relocated to the Air National Guard unit at McConnell Air Force Base. The Air National Guard unit at McConnell will be redesignated as the 184th Air Refueling Wing and will have 10 KC-135R tankers. The Guard unit at Robins Air Force Base will transition to the 116th Air Control Wing and have 19 Joint Surveillance Target Attack Radar System aircraft. As you know, we have issued numerous reports on the B-1B bomber in response to a variety of congressional concerns. In February 1998, for example, we reported that the Air Force could save millions of dollars without reducing mission capability by assigning more B-1Bs to the reserve component. A list of related GAO products can be found at the end of this report. The decision to reduce the B-1B force was not based on a formal analysis of how a smaller B-1B force would impact DOD’s ability to meet wartime requirements. 
Air Force officials explained that their decision to reduce the fleet from 93 to 60 was made in response to an OSD suggestion to eliminate the entire B-1B fleet and to address significant funding shortages in the B-1B modification program. Furthermore, the decision was not vetted through established DOD budget processes that normally involve a wider range of participants and generally allow more time for analysis of proposed changes. Senior Air Force officials believe, based on their military judgment, that the decision will not adversely affect DOD’s ability to implement the national security strategy. With regard to the Air Force’s analysis of basing alternatives, a lack of Air Force guidance led Air Force officials in the Office of the Deputy Chief of Staff for Plans and Programs to develop their own methodology to determine where to base the reduced B-1B fleet. These officials did not document their methodology at the time the decision was made and could not replicate the calculations used to make the basing decision. However, our review of documents prepared (at our request) after the decision was made suggests the Air Force used an inconsistent methodology and incomplete costs when estimating the savings generated from the consolidation. As a result, it is not clear whether they chose the most cost-effective alternative. In May 2001, as it considered changes to the fiscal year 2002 DOD budget previously submitted to Congress by the prior administration, senior OSD officials suggested eliminating the entire B-1B fleet, which had experienced long-standing survivability and reliability problems. OSD officials in offices such as Program Analysis and Evaluation did not undertake any analysis of the impact of the proposed B-1B retirements on the Air Force’s ability to meet war-fighting requirements. At that time, the OSD Comptroller estimated that eliminating the B-1B would save approximately $4.5 billion in fiscal years 2002 through 2007.
The savings would be achieved by eliminating the B-1Bs from both the active and Guard fleets and canceling the B-1B modernization program, according to an official in the Comptroller’s office. Acknowledging that it lacked sufficient funding to complete planned upgrades to all 93 aircraft, but at the same time believing that some B-1Bs should be retained, the Air Force proposed reducing the size of the fleet from 93 to 60 and reinvesting the savings in upgrades to the remaining 60 aircraft. According to the Secretary of the Air Force and the Chief of Staff of the Air Force, the proposal was budget-driven. The Chief of Staff told us that if the Air Force reduced the number of aircraft to 60, it would have sufficient funds to upgrade the remaining aircraft to make them more usable in combat. The Air Force did no formal analysis of the impact of a smaller B-1B fleet on its ability to meet current and future war-fighting requirements when it proposed this reduction. Senior Air Force leaders told us that they are comfortable with the proposed reduction because they believe that 60 upgraded aircraft will provide significantly more capability in terms of effectiveness, survivability, and maintainability than is available today. The Under Secretary of Defense (Comptroller) included the reduction in the amended 2002 DOD budget request after discussions with the Secretary of Defense and the Deputy Secretary of Defense. The decision to reduce the number of B-1Bs was not fully vetted through the DOD Planning, Programming, and Budgeting System. Under established DOD procedures, the service sends its budget to the Office of the Under Secretary of Defense (Comptroller) where issue area experts review it. 
Potential changes in the form of draft program budget decisions are circulated to the services, the Joint Staff, the Director of Program Analysis and Evaluation, and various assistant secretaries of defense who are in a position to evaluate the impact of the potential budget decisions on the national military strategy and the objectives of the Secretary of Defense. Their comments are provided to the Comptroller who considers them, finalizes the program budget decision, and forwards it to the Deputy Secretary of Defense for signature. According to an official in the Comptroller’s office, in this instance, the Comptroller approved the program budget decision that reduced and consolidated the B-1B fleet after discussions with the Secretary of Defense and the Deputy Secretary of Defense. A draft program budget decision was not circulated to the Office of the Director, Program Analysis and Evaluation, the Joint Staff, or the Air Force according to representatives of these offices. Air Force officials told us they were surprised when senior OSD officials decided to implement the B-1B fleet reduction in June 2001. While Air Force officials recommended reducing the fleet, they did not know that the recommendation was to be included in the fiscal year 2002 amended budget until just a few days before OSD officials transmitted the budget to Congress and made it public. These same officials told us that they were also surprised that the consolidation was to be implemented by October 1, 2001. They believed that they needed about 1 year to implement the decision. As a result of the short time frame between the OSD decision to implement the Air Force’s proposal to reduce and consolidate the B-1Bs and the release of the amended fiscal year 2002 budget, Air Force officials told us the Air Force did not have time to determine if the Guard units would get new missions, identify those missions, or meet with Members of Congress from the affected states to explain the decision.
The decision to reduce the fleet and complete the consolidation by October 1, 2001, concerned Members of Congress. As a result, Congress delayed implementation until the Air Force completed a review of bomber force structure and provided information on alternative missions and basing plans. According to the legislation, the Air Force could begin implementing the fleet reduction and consolidation 15 days after providing the required report to Congress. The report was delivered to Congress in February 2002. Among other things, the report provided a summary of the (1) Air Force’s reasons for reducing the B-1B fleet, (2) follow-on missions for the affected units, and (3) details of the B-1B modernization program. The Air Force began relocating and retiring B-1Bs in July 2002. Air Force officers in the Office of the Deputy Chief of Staff for Plans and Programs said they considered a number of basing options before recommending that the remaining aircraft be consolidated at two active duty bases. However, they did not document the options considered at the time the decision was made and could not provide a comprehensive list of options considered. In early 2002, at our request, they prepared a paper that outlined some of the options they believed were considered. According to the paper, the Air Force considered options that would have consolidated the aircraft at two active bases and one Guard base, one Guard base and one active base, or two active duty bases. The option selected continues to house B-1Bs at two active duty bases—26 at Ellsworth Air Force Base and 32 at Dyess Air Force Base. According to Air Force officials, they selected this option because they believed it was the most cost-effective option available. Specifically, they believed they would achieve significant economies of scale by consolidating the aircraft at the two largest B-1B bases, which were located in the active component.
Air Force headquarters staff told us they had no written guidance or directives to assist them when they completed the cost analysis for assessing where to locate the aircraft, and the officers at Air Force headquarters responsible for evaluating basing options said they received no guidance from their senior leaders. Consequently, they developed their own approach for determining where the B-1Bs should be retained. The Air Force did not document its methodology at the time the consolidation decision was made but attempted to reconstruct it in early 2002 at our request. At that time, however, Air Force officials were unable to replicate the savings estimates they had developed or provide a complete explanation of the methodology used to make the basing decision. Our review of the documentation provided to us by Air Force officials in early 2002 suggests the Air Force may have used an inconsistent methodology and incomplete costs when estimating potential savings for various basing options. According to Air Force officials, for options that stationed aircraft at both active and Guard locations, the potential savings estimates were based solely on the anticipated reductions in the cost of flying hours that would result from the smaller B-1B force. Other operations and maintenance costs that would have been saved by reducing the number of B-1Bs or eliminating a B-1B unit were not included in the estimates for these options. Such costs include depot maintenance, travel, and contractor logistics support. However, for options that stationed the aircraft at active bases only and eliminated both Guard units, Air Force officials included the projected flying hour savings from the smaller fleet and the Guard’s B-1B nonflying hour operations and maintenance costs in the savings estimates. Air Force officials could not explain why they estimated the cost savings in this manner. 
However, they noted that while they obtained complete operations and maintenance data for the Guard units, they did not obtain similar data for the active units. Using this methodology, the Air Force estimates for options that included a mix of active and Guard units understated the savings that could result from reducing and consolidating the fleet. Air Force officials said they considered other factors when they assessed the basing options. One factor was the impact that the consolidation might have on the individual B-1B bases. Air Force officials told us that they realized that they would have to select an option that included Ellsworth Air Force Base because, without the B-1B, Ellsworth would have no mission and the Air Force had no authority to close the base. A second factor was the need to avoid generating requirements for construction of new facilities since this would reduce the potential savings from the consolidation and might require the Air Force to seek construction funds from Congress. Several other factors could have been considered but were not: actual flying hour costs, mission capable rates, and aircrew experience levels for the active and Guard units. According to Air Force officials, the Air Force did not consider these factors because they believed the active and Guard units had similar capabilities. In comparing their assigned missions, flying hour costs, mission capable rates, aircrew experience, and operational readiness inspections, we found that Guard units (1) were assigned responsibility for substantially the same types of missions as their active duty counterparts, (2) had lower flying hour costs and higher mission capable rates during fiscal years 1999-2001, and (3) had more experienced crewmembers than the active duty units in terms of hours flown. We also found that active and Guard units received similar scores in their most recent operational readiness inspections.
With the exception of an additional 24 hours to recall and mobilize Guard personnel prior to deployment, the kinds of missions assigned to Guard B-1B units and their active duty counterparts are substantially the same. For example, the Guard and active duty units have similar wartime mission responsibilities, and each of the B-1B units is assigned to support either Central or Pacific theater commanders during wartime. Additionally, during peacetime, both active and Guard B-1B units are scheduled to be available to meet ongoing contingency operation needs for 90 days every 15 months under the Air Force’s Aerospace Expeditionary Force concept. In the past, however, the two Guard B-1B units have worked together to support operational requirements during this period so that each unit is responsible for a 45-day period rather than the full 90-day period, which places less strain on volunteer Guardsmen and their employers. We compared the flying hour costs between active duty and Guard B-1B units for fiscal years 1999-2001 and found that Guard costs averaged about 27 percent lower than active duty costs. The Air Force calculates flying hour costs by dividing the cost of fuel and parts by the number of hours each unit flies the aircraft. Specifically, the Air Force considers the cost of aviation fuel, oil, and lubricants; depot-level reparables, which are expensive parts that can be fixed and used again, such as hydraulic pumps, navigational computers, engines, and landing gear; and consumable supplies, which are inexpensive parts, such as nuts, bolts, and washers, which are used and then discarded. Table 2 shows the cost per flying hour for active and Air National Guard B-1B units for fiscal years 1999-2001. Our analysis showed that the Guard’s lower cost per flying hour was due in large part to its significantly lower costs for depot-level reparables (see table 3). 
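As described above, the Air Force's cost-per-flying-hour figure is simple arithmetic: the cost of aviation fuel, depot-level reparables, and consumable supplies divided by the hours flown. A minimal sketch of that calculation follows; the dollar amounts and hour totals are hypothetical, chosen only for illustration and not drawn from the report's tables:

```python
# Sketch of the Air Force cost-per-flying-hour formula described in the text:
# (fuel + depot-level reparables + consumables) / hours flown.
# All figures below are hypothetical, for illustration only.

def cost_per_flying_hour(fuel, reparables, consumables, hours_flown):
    """Return the cost per flying hour in dollars."""
    if hours_flown <= 0:
        raise ValueError("hours_flown must be positive")
    return (fuel + reparables + consumables) / hours_flown

# Hypothetical unit: $4.2M fuel, $6.5M reparables, $0.8M consumables
# over 2,000 flying hours.
print(round(cost_per_flying_hour(4_200_000, 6_500_000, 800_000, 2_000)))  # 5750
```

Because depot-level reparables are typically the largest term in the numerator, a unit whose mechanics repair rather than replace parts lowers its cost per flying hour substantially, which is consistent with the pattern the report attributes to the Guard.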
The Guard attributed its lower reparables costs to the higher experience levels of its maintenance personnel. Apprentice mechanics in the Guard averaged over 10 years of military experience compared to slightly more than 2 years of military experience among apprentice active duty mechanics. Officials said that more experienced maintenance personnel are often able to identify a problem part and fix it at the unit, when appropriate, instead of purchasing a replacement part from the Air Force supply system. Our analysis also showed that the lower costs of consumables in the Guard also contributed to the lower flying hour costs (see table 4). Guard officials said that they are able to keep the costs of consumables down because their experienced maintenance crews are often able to isolate, identify, and fix malfunctioning parts without pulling multiple suspect parts off the aircraft. As a result, fewer consumable supplies are used. Flying hour costs represent only a portion of the overall costs of operating and maintaining B-1B bombers. Costs such as pilot training, test equipment, and depot maintenance are not included in the flying hour cost. As we noted earlier, the Air Force did not consider these costs or the historical flying hour costs detailed previously when it made its basing decision. The Guard’s reported mission capable rates were higher than the active duty’s reported rates between fiscal years 1999 and 2001. The Air Force designates a weapon system as mission capable when it can perform at least one of its assigned combat missions. The mission capable rate specifically measures the percentage of time a unit’s aircraft are available to meet at least one of its missions. On average, the Guard units’ B-1B fleet was available between 62 and 65 percent of the time during fiscal years 1999 through 2001. During those same years, the active duty mission capable rate performance averaged between 52 and 60 percent (see fig. 1).
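The mission capable rate discussed above is likewise a simple ratio: the share of time during which a unit's aircraft can perform at least one assigned mission. A brief sketch, using hypothetical hour totals rather than the units' actual data:

```python
# Sketch of a mission capable (MC) rate as described in the text: the
# percentage of time a unit's aircraft are available to perform at least
# one assigned mission. Hour totals below are hypothetical.

def mission_capable_rate(mc_hours, possessed_hours):
    """MC rate as a percentage of total possessed hours."""
    if possessed_hours <= 0:
        raise ValueError("possessed_hours must be positive")
    return 100.0 * mc_hours / possessed_hours

# Hypothetical unit: aircraft possessed for 8,760 hours, mission capable
# for 5,430 of them.
print(round(mission_capable_rate(5_430, 8_760), 1))  # 62.0
```

A rate computed this way falls as aircraft spend more time in depot maintenance or awaiting spare parts, which is why the modification program and the parts shortage discussed below depressed the active units' reported rates.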
According to Air Combat Command officials, the active duty units’ mission capable rate gain in fiscal year 2001 was primarily due to (1) increased aircraft availability following completion of an extensive modification program and (2) improvements in the Air Force’s inventory of spare parts. In the 2 years prior to fiscal year 2001, both active and Guard B-1Bs underwent extensive modifications at the depot that improved the aircraft’s survivability and equipped the aircraft with more advanced munitions. During this time, large portions of the B-1B fleet were at the depot for extended periods. According to Air Combat Command officials, active duty B-1B units experienced reduced mission capable rates because maintaining a normal operating tempo with fewer aircraft required each aircraft to be flown more frequently and resulted in more wear and tear to each aircraft. However, the Guard units’ mission capable rates were less affected. Both Air Combat Command and Guard unit officials agreed that during this time the higher experience levels of Guard maintenance personnel and the Guard’s lower operating tempo lessened the impact of having large portions of the fleet at the depot on the Guard units’ mission capable rates. The Air Force completed most of these modifications by the end of fiscal year 2000, and the gap between the active and Guard mission capable rates narrowed in fiscal year 2001. According to Air Combat Command officials, the overall shortage of spare parts experienced by the Air Force during the late 1990s also negatively affected the active duty B-1B units’ mission capable rates prior to fiscal year 2001. Although increased funding led to an improvement in the spare parts inventory by fiscal year 1999, it took about 12 to 24 months for the improvements to be reflected in the units’ reported mission capable rates according to Air Staff officials. 
According to Guard officials, Guard B-1B units were less affected by the shortage of spare parts because their more experienced maintainers could sometimes repair rather than replace problematic components. The Air National Guard’s B-1B pilots and weapon systems officers are generally more experienced than their active duty counterparts. The Air Force designates aircrew members as “experienced” based on the total number of flying hours they have accumulated both overall and in the B-1B. A crewmember’s experience level determines the amount of training (i.e., flying hours and sorties) he or she is required to complete each year, which, in turn, drives the unit’s overall flying hour program. For example, units with a higher number of inexperienced aircrew members would require a higher allocation of flying hours to meet training requirements each year. In comparing the Guard and active B-1B aircrew experience levels, we found that the majority of Air National Guard pilots were designated as experienced. However, this was not the case for pilots assigned to active operations squadrons at Dyess and Ellsworth. Table 5 shows the percentage of experienced pilots by unit location. Many of the Guard pilots also had other flying experience that enhanced their ability to pilot the B-1B. For example, many had prior active duty flying experience or flew other aircraft for the Guard. This experience contributed to the pilots’ overall qualifications, thereby permitting them to be designated as experienced more quickly than their active duty counterparts. The picture was similar for the B-1B’s weapon systems officers. For example, more than 80 percent of the Air National Guard’s weapon systems officers were considered experienced, while in the active Air Force only about 40 percent were considered experienced. Like the Guard pilots, most of the Guard’s weapon systems officers also had experience flying other military aircraft that enhanced their ability in the B-1B.
Table 6 shows the percentage of experienced weapon systems officers by unit location. The Air Force conducts periodic inspections of each of its operational units to evaluate the unit’s readiness to perform its wartime mission. The readiness inspections, conducted by the Air Combat Command Inspector General staff, are intended to create a realistic environment for evaluating the units’ sustained performance and contingency response. The bomb units are evaluated in four major areas: initial response, employment, mission support, and ability to survive and operate in a hostile environment. The Guard B-1B units scored as high or higher than did the active duty units in the most recent readiness inspections. Specifically, the B-1B bomb units at two active locations (Dyess and Ellsworth) and one Guard location (McConnell) each received excellent ratings overall in their most recent inspections. The Inspector General completed an inspection of Robins’ initial response capabilities in July 2001 and rated the unit as excellent. However, the Inspector General did not complete its inspection of the three remaining areas since the Air Force had already decided to remove the B-1Bs from Robins. Additionally, the Mountain Home wing, which includes B-1Bs, had not undergone an operational readiness inspection at the time of our review. Major decisions involving force structure need to be supported by solid analysis to document that a range of alternatives has been considered and that the decision provides a cost-effective solution consistent with the national defense strategy. DOD’s Planning, Programming, and Budgeting process establishes a consultation process for civilian and military leaders to use in reviewing alternatives to the services’ current force structure. 
However, the decision to reduce the B-1B did not fully adhere to this process because key offices such as the Office of Program Analysis and Evaluation and the Joint Staff did not have an opportunity to review and comment on the proposal and conduct analysis before it was approved. Moreover, although Air Force and OSD officials are comfortable with the decision, based on their military judgment, neither the Air Force nor OSD conducted any formal analysis to provide data on how a range of B-1B force size alternatives would impact DOD’s ability to meet potential wartime requirements. By following its established budget process more closely in the future and allowing experts from various offices to review and analyze force structure proposals, DOD could provide better assurance to Congress that future force structure decisions are well-supported and are in the nation’s long-term interest. In addition, the lack of an established Air Force methodology for assessing the costs associated with potential basing options led officials to use incomplete costs when estimating the projected savings for some of the basing options considered. By focusing solely on flying hour costs for some basing options, Air Force officials did not consider other operations and maintenance savings that a reduction in the number of aircraft or the number of B-1B units would generate. As a result, the Air Force may have understated the cost savings of the options that retained B-1Bs in both the Air National Guard and the active components. A more structured cost estimating methodology would ensure that the Air Force considers all appropriate costs in calculating the savings for future aircraft realignments.
To provide an analytical basis for future aircraft realignment decisions, we recommend that the Secretary of Defense direct the Secretary of the Air Force to develop a methodology for assessing and comparing the costs of active and reserve units so that all potential costs are fully considered when evaluating potential basing options and making future basing decisions. In written comments on a draft of this report, DOD did not agree with our recommendation that the Air Force develop a methodology for assessing and comparing costs to evaluate basing options because it believes that such a methodology exists. Furthermore, DOD believes that the Air Force used a methodology that considered all costs as well as noncost factors when it made its basing decision and that cost-effectiveness, while an important criterion, should not be the sole consideration in making basing decisions. DOD’s comments are included in this report as appendix II. After we received DOD’s comments, we asked the department to provide documentation describing its methodology for comparing active and reserve unit costs. DOD referred us to the instruction that outlines its Planning, Programming and Budgeting System. This instruction describes DOD’s process for developing the department’s overall plans and budget; however, it does not identify a methodology for assessing and comparing the costs associated with active and reserve units. DOD also noted that the Air Force’s Total Ownership Cost database encompasses all cost factors related to active and reserve costs and ensures that any comparison of active and reserve units is done equitably. During our audit work, we assessed whether the Total Ownership Cost database could be used to compare total operations costs for B-1Bs located at Guard and active duty units. 
We determined, however, that not all indirect costs for B-1B units in the Guard were included in the database, making it impossible to compare the operating costs of Guard and active units equitably. Therefore, we are retaining our recommendation that the Secretary of Defense direct the Secretary of the Air Force to develop a methodology. In commenting on our presentation of flying hour costs, DOD acknowledged the Guard’s lower flying hour costs, but it stated that including additional costs would result in more comparable flying hour costs for Guard and active duty units. DOD suggested using the direct and indirect costs included in the Air Force’s Total Ownership Cost database to calculate flying hour costs. In conducting our analysis of flying hour costs, we relied on the Air Force’s definition. The Air Force defines flying hour costs as the cost of fuel, depot-level reparables, and consumable parts divided by the number of hours flown. The Air Force does not include other costs such as software maintenance costs, contractor support costs, or military personnel costs when it calculates the cost per flying hour. DOD is correct when it states that there are other costs associated with operating a B-1B and our report recognizes that fact. In commenting on our analysis of active and Guard mission capable rates, DOD noted that the difference between active and Guard mission capable rates is not solely attributable to the experience level of Guard personnel. The department also noted that the Guard operates newer model B-1B aircraft while the active duty units operate older model aircraft and identified this as a factor contributing to lower active duty rates. 
While we recognize that the oldest aircraft in the fleet (1983 and 1984 models) are concentrated in the active units at Dyess Air Force Base, our analysis shows that those aircraft constitute only about one-third (33 percent) of Dyess’ fleet and only about one-fifth (or 20 percent) of the active B-1B fleet overall. Active units at Ellsworth and Mountain Home operate newer 1985 and 1986 model aircraft—the same models as those operated by the two Guard units. While aircraft age may have some effect on mission capable rates, we do not believe, based on our analysis, that this effect is significant for the B-1B force. We are sending copies of this report to the Secretary of Defense, the Secretary of the Air Force, and interested congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on this report or wish to discuss these matters further, please call me on 202-512-4300. Key contacts and staff acknowledgments are listed in appendix III. To determine what types of analyses the Department of Defense (DOD) and the Air Force did of wartime requirements and basing options before deciding on the number of aircraft to retain and where to base them, we obtained and analyzed the only contemporaneous documents that were available from the Air Force—a briefing presented to the Secretary and the Chief of Staff of the Air Force on the fleet reduction and consolidation and a memorandum from the Secretary of the Air Force to the Under Secretary of Defense (Comptroller) outlining the Air Force’s proposal to reduce and consolidate the fleet. Additionally, because the Air Force had no documents explaining its methodology for evaluating the various basing options it considered, we asked the Air Force, in early 2002, to document its methodology. 
The Air Force provided us with a statement explaining the methodology; however, it was unable to provide us with the cost figures used to estimate the savings. As a result, we could not verify the savings estimates that the Air Force attributed to each option. To supplement our document review, we interviewed several Air Force officials who were located in the Washington, D.C., area to determine what role each may have played in the decision to reduce the fleet and consolidate it at two bases. The officials were the Chief of Staff, U.S. Air Force; the Deputy Chief of Staff, Plans and Programs, U.S. Air Force; the former Director of the Air National Guard; and the Assistant for Operational Readiness, Air National Guard. We also spoke with officials responsible for overseeing the B-1B program in the Office of the Deputy Chief of Staff, Air and Space Operations, U.S. Air Force, Washington, D.C., to determine if any analysis of current and future wartime requirements had been completed prior to the decision. We also had numerous meetings with officials in the Office of the Deputy Chief of Staff, Plans and Programs, who were responsible for developing the basing options and estimating the savings that would result from each option, to discuss their methodology and the decision process. We also met with representatives of the Air Combat Command at Langley Air Force Base, Virginia, to determine if they had any role in the decision to either reduce the number of B-1Bs or consolidate them at two active duty bases. Finally, we met with representatives of the Under Secretary of Defense (Comptroller); the Director, Program, Analysis, and Evaluation; and the Joint Staff to determine if they had completed any analysis of the Office of the Secretary of Defense suggestion to eliminate the B-1B fleet or the Air Force's proposal to reduce the fleet.
In addition, we reviewed the 1999 and 2001 Air Force bomber white papers and the 2001 Quadrennial Defense Review to gather insight on current and future B-1B wartime requirements, and we reviewed copies of congressional testimonies by the Secretary of Defense, the Deputy Secretary of Defense, the Secretary of the Air Force, and the Chief of Staff of the Air Force to document DOD's rationale for making the B-1B decision. To compare the flying hour costs and the capabilities of the various active and Air National Guard B-1B units, we collected and analyzed flying hour cost data for fiscal years 1999-2001 from the five B-1B units. To verify the data from these sources, we collected and analyzed cost data for the same years from the Air Combat Command; the Air Force Cost Analysis Agency, Washington, D.C.; and the Directorate of Logistics, Air National Guard, Andrews Air Force Base, Maryland. We also collected and analyzed (1) mission capable rate data for fiscal years 1999-2001 from the Air Combat Command and the two Guard units and (2) data on aircrew and maintenance crew experience from the Air Combat Command and the Air National Guard Bureau. To determine if there were significant differences in active and Guard units' wartime missions, we reviewed the wartime taskings of all five B-1B units. To compare the units' participation in peacetime activities, we reviewed documents provided by officials at the Air Expeditionary Force Center, Langley Air Force Base, Virginia. We reviewed and compared the operational readiness inspections for the bomb wings located at Dyess, Ellsworth, and McConnell Air Force Bases and the partial operational readiness inspection for the bomb wing at Robins Air Force Base. The Inspector General staff completed an inspection of Robins' initial response capabilities in July 2001 but did not complete its inspection of the three remaining areas since the Air Force had already decided to remove the B-1Bs from Robins.
The wing at Mountain Home had not undergone an operational readiness inspection at the time we completed our review. Our work also included visits to three B-1B units to interview officials and obtain documents: 184th Bomb Wing, McConnell Air Force Base, Kansas; 7th Bomb Wing, Dyess Air Force Base, Texas; and 116th Bomb Wing, Robins Air Force Base, Georgia. In addition to those named above, Sharron Candon, Judith Collins, Penney Harwell, Jane Hunt, Ken Patton, and Carol Schuster made key contributions to this report.

Air Force Bombers: Moving More B-1s to the Reserves Could Save Millions without Reducing Mission Capability. GAO/NSIAD-98-64. Washington, D.C.: February 26, 1998.
Air Force Bombers: Options to Retire or Restructure the Force Would Reduce Planned Spending. GAO/NSIAD-96-192. Washington, D.C.: September 30, 1996.
Embedded Computers: B-1B Computers Must Be Upgraded to Support Conventional Requirements. GAO/AIMD-96-28. Washington, D.C.: February 27, 1996.
B-1B Conventional Upgrades. GAO/NSIAD-96-52R. Washington, D.C.: December 4, 1995.
B-1B Bomber: Evaluation of Air Force Report on B-1B Operational Readiness Assessment. GAO/NSIAD-95-151. Washington, D.C.: July 18, 1995.
Air Force: Assessment of DOD's Report on Plan and Capabilities for Evaluating Heavy Bombers. GAO/NSIAD-94-99. Washington, D.C.: January 10, 1994.
Strategic Bombers: Issues Relating to the B-1B's Availability and Ability to Perform Conventional Missions. GAO/NSIAD-94-81. Washington, D.C.: January 10, 1994.
The U.S. Nuclear Triad: GAO's Evaluation of the Strategic Modernization Program. GAO/T-PEMD-93-5. Washington, D.C.: June 10, 1993.
Strategic Bombers: Adding Conventional Capabilities Will Be Complex, Time-Consuming, and Costly. GAO/NSIAD-93-45. Washington, D.C.: February 5, 1993.
Strategic Bombers: Need to Redefine Requirements for B-1B Defensive Avionics System. GAO/NSIAD-92-272. Washington, D.C.: July 17, 1992.
Strategic Bombers: Updated Status of the B-1B Recovery Program. GAO/NSIAD-91-189. Washington, D.C.: May 9, 1991.
Strategic Bombers: Issues Related to the B-1B Aircraft Program. GAO/T-NSIAD-91-11. Washington, D.C.: March 6, 1991.

The B-1B began operations in 1986 as a long-range heavy bomber designed primarily to carry nuclear munitions. Although the B-1B's nuclear mission was withdrawn in October 1997, the Air Force continues to rely on the B-1B to support conventional wartime missions. The B-1B has the largest payload of the Air Force's three bombers, and recent modifications have provided the capability to deliver near precision munitions. Future upgrades to the B-1B are expected to provide greater flexibility by enabling it to carry three different types of bombs simultaneously and eliminate some of its long-term survivability and maintainability problems by improving its radar warning systems, jamming ability, and other electronic countermeasures. In May 2001, the Office of the Secretary of Defense suggested retiring the entire B-1B fleet by October 2001. In June 2001, the Air Force proposed an alternative that reduced the B-1B fleet from 93 to 60 aircraft and consolidated them at two active duty locations instead of the three active duty and two National Guard locations that housed the aircraft. Congress delayed implementation of the fleet reduction until the Air Force completed a review of bomber force structure and provided a report on alternative missions and basing plans. The Air Force began consolidating the fleet in July 2002. GAO found that Air Force officials did not conduct a formal analysis to assess how a reduction in B-1B bombers from 93 to 60 would affect the Department of Defense's ability to meet wartime requirements. Nor did they complete a comprehensive analysis of potential basing options to know whether they were choosing the most cost-effective alternative.
A comparison of active and Guard units' missions, flying hour costs, and capabilities showed that active and Guard units were responsible for substantially the same missions but Guard units had lower flying hour costs and higher mission-capable rates than their active duty counterparts. Additionally, the Guard's B-1B aircrew members were generally more experienced, in terms of the number of hours flown, than the active duty B-1B aircrews because most Guard aircrew members served on active duty prior to joining the Air National Guard.
The Coast Guard, an Armed Service of the United States housed within the Department of Homeland Security, is the principal federal agency responsible for maritime safety, security, and environmental stewardship through multimission resources, authorities, and capabilities. According to the Coast Guard, the greatest threat to mission performance is the deteriorating condition and increasing technological obsolescence of its legacy assets. These assets—such as vessels, aircraft, and shore facilities—are, according to the Coast Guard, essential to its homeland security missions, as well as to sustaining other mission areas, such as search and rescue, law enforcement, and environmental protection. Because many of the Coast Guard's assets were reaching the end of their expected service lives and were in deteriorating condition, the Coast Guard began the 25-year, more than $24 billion Deepwater program in the mid-1990s to upgrade or replace vessels and aircraft and to acquire other capabilities, such as improved communications systems. The Coast Guard has taken more direct responsibility for the Deepwater program acquisition strategy and management in recent years. At the start of the Deepwater acquisition, the Coast Guard chose a system-of-systems strategy that was to replace the legacy assets with an integrated package of assets, rather than using a traditional acquisition approach of replacing individual classes of legacy assets through a series of acquisitions. To carry out this acquisition, the Coast Guard awarded a competitive contract to a systems integrator, which for the Deepwater program was a contractor composed of two major companies—Lockheed Martin Corporation and Northrop Grumman Corporation. Acting as a joint venture called "Integrated Coast Guard Systems" (the contractor), these companies were responsible for designing, constructing, deploying, supporting, and integrating the various assets to meet projected Deepwater operational requirements.
However, after experiencing a number of management challenges under the system-of-systems approach, the Coast Guard recognized that it needed to increase government oversight and transferred Deepwater system integration and program management responsibilities, including logistics planning, back to the Coast Guard in April 2007. Furthermore, when the Coast Guard assumed the lead role for Deepwater program management, it decided to consider future work and potential bids on these assets outside of the existing Deepwater contract. By taking this action, the Coast Guard in some cases decided to restart the planning and design of the individual assets. In addition, the Coast Guard took over logistics planning for some assets from the contractor. For example, the Coast Guard, rather than the contractor, is now developing the NSC logistics planning documents including the key logistics document—the Integrated Logistics Support Plan. The Deepwater program represents the largest acquisition in the Coast Guard's history, and the program has experienced some serious performance and management problems, such as cost overruns, schedule slippages, and assets designed and delivered with significant defects. Since 2001, we have reviewed the Deepwater program and informed Congress, the Department of Homeland Security, and the Coast Guard of the risks and uncertainties inherent with the system-of-systems approach. In March 2004, we made recommendations to the Coast Guard to address three broad areas of concern: improving program management and oversight, strengthening contractor accountability, and promoting cost control through greater competition among potential subcontractors.
In April 2006, June 2007, and March 2008, we issued follow-on reports describing the Coast Guard's efforts to address these recommendations and provided information on the status of various Deepwater assets, including that the Coast Guard's increased management and oversight of the Deepwater acquisition had resulted in improvements to the program. In June 2008, we reported on additional changes in Deepwater management and oversight that resulted in improvements to the program and that the Coast Guard's mitigating strategies for the loss of patrol boats were achieving results in the near term. Since the Coast Guard took over the acquisition and management responsibilities for the Deepwater program from the contractor in 2007, it has realized that its knowledge of how the various proposed assets would work together to help meet mission needs was limited because the contractor, in some cases, had developed the plans for these assets without using all of the input from the Coast Guard. In 2001, the contractor completed a study documenting the capabilities, types, and mix of assets the Coast Guard needed to fulfill its Deepwater missions, referred to as the Fleet Mix Study. The Coast Guard has initiated a follow-on study to update the work originally completed by the contractor. The goals of this study include validating mission performance requirements and revisiting the number and mix of assets to be procured. The results of this study are expected in the summer of 2009, at which time Coast Guard leadership will assess the results and plan for future asset procurement decisions. According to Coast Guard officials, the Coast Guard plans to update the Fleet Mix Study every 4 years and, as a result, the Deepwater program may change in terms of the numbers and types of specific assets needed. While the final number may change as a result of the Fleet Mix Study, the Coast Guard currently is projected to take delivery of a total of eight NSCs between 2008 and 2017.
In May 2008, the contractor delivered the first-in-class NSC, Bertholf, to the Coast Guard. The Bertholf is undergoing testing and is planned to be fully operational in the fourth quarter of fiscal year 2010. According to the Coast Guard, as of May 2009, the second NSC, Waesche, was 83 percent complete and is scheduled to be delivered in late 2009, while the third NSC, Stratton, was 11 percent complete and is scheduled for a late 2011 delivery. The Coast Guard plans to have each NSC fully operational once testing—which ranges from less than 1 year to 2 years after delivery—is completed. Coast Guard officials stated that the Coast Guard has awarded the contract to begin purchasing materials for the fourth NSC, but the Coast Guard has not awarded a contract for construction of the fourth NSC. Neither materials purchases nor production has begun on the fifth through eighth NSCs because funds for these cutters have not yet been appropriated. According to the Coast Guard, the NSC is designed to be capable of helping it execute the most challenging of maritime security mission needs and represents a giant leap forward in capability for the Coast Guard's vessel fleet. The Coast Guard further states that the NSC is to be the largest and most technologically advanced class of cutter in the Coast Guard, with robust capabilities for maritime homeland security, law enforcement, and defense readiness missions. The NSC class is to replace the Coast Guard's aging HEC class and is to provide several capabilities that the HECs do not have, such as the ability to collect, analyze, and transmit classified information; carry, launch, and recover unmanned aircraft, thereby increasing the cutter's surveillance capabilities and range; more easily and safely launch small boats from and return them to the cutter; and travel away from shore for longer time periods.
In 2007, the Commandant of the Coast Guard stated that the NSC will be the most sophisticated and capable cutter the Coast Guard has ever operated, with vastly improved capabilities over legacy HECs. The more capable NSCs, for example, are designed to enable the Coast Guard to screen and target vessels faster, and more safely and reliably before they arrive in U.S. waters. As a result of the increased capabilities of the NSCs, the Coast Guard plans to replace 12 HECs with 8 NSCs. Figure 1 provides a comparison of some key operational capabilities between the HEC and its replacement, the NSC. In addition to the capabilities described in figure 1, according to the Coast Guard, the NSC also has the following capabilities that go beyond those of an HEC: the NSC's engine and propulsion systems are more efficient than the HEC's, allowing the NSC to transit faster while burning less fuel; the higher transit speed of the NSC allows it to maximize the time that it operates inside of the mission area; the NSC has the ability to conduct missions in rougher seas than the HEC; and the NSC has more comfortable accommodations for the crew, with larger sleeping and living areas that include many modern conveniences, such as computers, entertainment systems, and exercise facilities. The primary missions the Coast Guard assigns to its HECs include drug interdiction, fisheries patrols, and defense readiness. Together these missions account for over 70 percent of HEC mission assignments. Although the NSC is a multimission cutter that is to help the Coast Guard conduct its full range of missions, the Coast Guard plans to assign the NSC the same mission assignments as the HEC. Figure 2 shows the percentage of time the HEC conducted Coast Guard missions for fiscal years 1999 through 2008.
The mission categories shown in figure 2 are defined as follows:
Defense Readiness: Participation with the Department of Defense in global military operations.
Support: Training; public affairs; and cooperation with federal, state, and local agencies.
Other Law Enforcement: Protection of U.S. fishing grounds from illegal harvest by foreign fishermen.
Other: Migrant interdiction; ports, waterways, and coastal security; search and rescue; and marine environmental protection.

In conducting missions, Coast Guard vessels log the number of operational hours deployed by mission while on patrol. However, the Coast Guard's system for tracking operational hours captures only the hours logged in support of the primary mission that a vessel conducts while on patrol; thus, any secondary missions performed on a patrol by these multimission vessels would not necessarily be reflected in the operational hour data. Prior to fiscal year 2005, the Other Law Enforcement mission area contained the Enforcement of Laws and Treaties-Other employment category, which captured those law enforcement activities that did not fall under drug interdiction, fisheries enforcement, or migrant interdiction operations. There are currently 12 HECs in the Coast Guard, with 2 of them based on the East Coast and the other 10 on the West Coast and in Hawaii. To accomplish its missions, cutters like the HEC typically deploy and operate with support assets that aid the cutter in performing its mission requirements. These may include small boats, cutter-based air assets (such as helicopters), or land-based aircraft (such as fixed-wing aircraft or helicopters). According to Coast Guard officials, pairing support assets with a cutter increases its surveillance and intelligence gathering range and improves its search and rescue capabilities.
To maximize the time that the NSC can operate at sea each year without requiring its crews to be away from their home port more than allowed with the HEC, the Coast Guard plans to use a "crew rotational concept." Under this concept, the Coast Guard plans to have four crews staff and operate three cutters on a rotating basis. By using the crew rotational concept, the Coast Guard hopes that each NSC will be able to provide 230 days away from home port per year as compared to the 185 days away from home port per year provided by each HEC. Days away from home port is a Coast Guard measure that reflects the level of operations for a cutter. The measure represents the days the cutter is not at the port where it is based, including days the cutter is en route to and conducting missions. For purposes of this report, we refer to days away from home port as operational days. Delays in the delivery of the NSC and its associated support assets—primarily unmanned aircraft and small boats—have created an anticipated loss of cutter operational days and delays in achieving certain other operational capabilities. Enhancements to the NSC's capabilities following the 9/11 terrorist attacks, as well as damage to the shipyard and the exodus of workers as a result of Hurricane Katrina, contributed to these delays. These delays will require the Coast Guard to continue to rely on its aging HECs to provide cutter operational days and to use existing aircraft and small boats to support the new NSC. Also, certain systems on NSC-Bertholf are currently not functioning as planned, but the Coast Guard plans to resolve these deficiencies before NSC-Bertholf is certified as fully operational, scheduled for the fourth quarter of fiscal year 2010.
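The crew rotational concept described above implies some straightforward operational-day arithmetic. In the sketch below, the per-cutter figures (185 and 230 days, 12 HECs, 8 NSCs) come from this report; the derived totals are illustrative only.

```python
# Operational-day arithmetic behind the crew rotational concept described in
# this report. Per-cutter and fleet-size figures are the report's; the
# derived totals are illustrative.

HEC_DAYS = 185          # days away from home port per HEC per year
NSC_DAYS = 230          # planned days per NSC per year (4 crews rotating across 3 cutters)
HEC_FLEET_SIZE = 12     # current HECs
NSC_FLEET_SIZE = 8      # planned NSCs

gain_per_cutter = NSC_DAYS - HEC_DAYS          # 45 more operational days per cutter
hec_fleet_days = HEC_FLEET_SIZE * HEC_DAYS     # 2,220 -- the annual HEC fleet target cited in this report
print(gain_per_cutter, hec_fleet_days)         # 45 2220
```

Note that 12 HECs at 185 days apiece is exactly the 2,220-day annual fleet target cited in this report, which is why shortfalls in HEC availability are measured against that figure.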
Because the Coast Guard plans to deploy the first NSC without the planned unmanned aircraft and new small boats, and because on-board deficiencies still exist, the NSC will not initially operate with the full complement of its originally planned capabilities. As a result, the Coast Guard cannot determine at this time the extent to which the NSC's final capabilities will exceed those of the HECs, and it may take several years before some of these capabilities are realized. A comparison of the 2007 and 2008 delivery schedules shows an anticipated loss of thousands of NSC operational days. It also shows that the first NSC will likely be 1 year behind schedule when it is certified as fully operational, now scheduled for the fourth quarter of fiscal year 2010. Further, the eighth and final NSC was to be fully operational in 2016, but is currently projected to be fully operational by the fourth quarter of calendar year 2018. The first NSC was initially projected for delivery in 2006, but delivery slipped to August 2007 after requirements changes made following the 9/11 terrorist attacks. These new requirements, intended to enhance the NSC's capabilities, contributed to the delays and include the following: expanded interoperability with the Department of Defense, DHS, and local first responders; increased self-defense and survivability, including chemical, biological, and radiological measures; increased flight capability via a longer and enhanced flight deck; upgraded weapon systems; and improved classified communication capabilities. In addition to the delays brought about by the post-9/11 requirements changes and the associated enhancements to NSC capabilities, delivery of the NSC was further delayed until May 2008 because of substantial damage to the shipyard and an exodus of some of the experienced workforce as a result of Hurricane Katrina.
If the Coast Guard maintains its 2008 acquisition schedule, the most recent acquisition schedule available to us, it will face a projected loss of thousands of cutter operational days available from the NSC class for calendar years 2009 through 2017 from what was originally planned. Specifically, as shown in figure 3, comparing the number of operational days that were expected to be available from the NSC fleet under the 2007 schedule to what is expected under the updated 2008 delivery schedule shows a cumulative projected loss of 3,080 operational days (an "operational gap"). Figure 3 represents the loss of operational capabilities as a result of delivery delays with the NSC, but does not directly translate into lost cutter operational days for the Coast Guard as a whole because it does not take into account any operational days that the Coast Guard anticipates can be provided through continued use of its HECs. Coast Guard officials emphasized that the agency plans for the HECs to continue to serve until the NSCs become operational. As a result, these officials state that they do not anticipate a gap in operational days, even though they acknowledge that the HECs have fewer capabilities than the NSCs. While continued operation of the HECs should at least partially mitigate the operational gap shown in figure 3, we believe that this analysis is useful to demonstrate the amount of time that the Coast Guard will be without the enhanced operational capabilities that the NSCs are expected to provide once they are deployed with their full complement of support assets. The Coast Guard is unable to quantify the gap in operational capabilities that it will actually experience, though, because it has not yet completed the HEC decommissioning schedule, which, according to Coast Guard officials, is to be completed in late 2009 at the earliest. The Coast Guard is also not able to estimate the impact of these lost operational days on specific future missions.
However, given the enhanced capabilities that NSCs have over the HECs, a loss in NSC operational days could negatively impact the Coast Guard's ability to more effectively conduct missions, such as migrant and drug interdiction, enforcement of domestic fishing laws, and participation in Department of Defense operations. Delays in delivery of the NSCs have required the Coast Guard to develop plans to rely on its aging fleet of HECs to continue to perform missions that the NSCs were to take over. However, Coast Guard metrics show that the HECs are becoming increasingly unreliable and, as a fleet, have not met their target number of cutter operational days in each of the past 6 fiscal years. Specifically, the fleet of 12 HECs lost between 118 and 390 operational days each fiscal year from 2003 through 2008. This accounts for 5 to 18 percent of the Coast Guard's annual target of 2,220 days for the HEC fleet. According to the Coast Guard, this loss occurred because of a combination of unscheduled maintenance and additional planned maintenance beyond the 143 maintenance days allotted for each HEC annually, and averaged about 260 lost operational days per year. Coast Guard officials told us that this additional maintenance was the result of the HECs' deteriorating condition. Table 1 shows the actual operational days provided by the HECs from fiscal years 2003 through 2008, and the gap between the days provided and the Coast Guard's annual target of 2,220 days. Another measure of the condition of the HEC fleet is the percentage of time fully mission capable (PTFMC). This metric reflects the percentage of time that the cutters operate without a major equipment failure or loss in mission capabilities. For example, a PTFMC of 50 percent indicates that the cutter had one or more major equipment failures (or casualties) that degraded or forced the termination of missions for half of the cutter's operational days in a given year.
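The PTFMC metric just described reduces to a simple ratio; the sketch below illustrates it with hypothetical day counts.

```python
# Sketch of the percent of time fully mission capable (PTFMC) metric as
# described in this report: the share of a cutter's operational days free of
# a major equipment failure (casualty). Day counts below are hypothetical.

def ptfmc(operational_days, casualty_days):
    """Percentage of operational days without a major equipment failure
    that degraded or forced the termination of missions."""
    return 100.0 * (operational_days - casualty_days) / operational_days

# Hypothetical: a casualty degrades or terminates missions on half of
# 180 operational days, giving a PTFMC of 50 percent
print(ptfmc(180, 90))  # 50.0
```

Under this definition, the HEC fleet's reported PTFMC of 59 percent or less means major casualties affected roughly two of every five operational days, well short of the 86 percent goal.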
From fiscal years 2004 through 2008, the HECs' PTFMC was 59 percent or less, while the Coast Guard's PTFMC goal for the HEC class was 86 percent. Figure 4 shows the PTFMC for the HECs during that period. Coast Guard officials said that because of the age and condition of the HECs, they anticipate that the maintenance needs of the cutters will continue to increase over time. According to Coast Guard officials, the loss of cutter operational days and the gap between the actual PTFMC of the HEC class and the Coast Guard's goal of 86 percent would negatively impact their drug interdiction, defense readiness, alien migrant interdiction, and living marine resource missions. The HECs were commissioned between 1967 and 1972 and have an estimated service life of about 40 years, extended in part by a rehabilitation and service life extension program that began in the late 1980s and ended in 1992. As part of this program, each cutter received an overhaul, costing from $70 million to $90 million per cutter. Many major propulsion and hull systems, however, were overhauled but not upgraded or replaced, and these systems are now at or near the end of their useful service life. The Coast Guard plans to deploy the first NSC, scheduled to become fully operational in the fourth quarter of fiscal year 2010, without its planned support assets of unmanned aircraft and new small boats. In addition, based on our review of a Coast Guard study, future NSCs may begin missions without the originally planned unmanned aircraft. The Coast Guard plans to draft operational specifications for the unmanned aircraft in 2010, and to acquire new small boats that will be deployed with the first NSC by the end of calendar year 2010. Because the Coast Guard has not determined the needed specifications, the extent of the operational gap created by the lack of these assets is not known at this time.
In particular, a Coast Guard acquisition official said that the Coast Guard has not yet selected the type of unmanned aircraft that is to be deployed with the NSC, but plans to do so by the third quarter of fiscal year 2012. After the unmanned aircraft is selected, the Coast Guard must contract for the acquisition and production of the aircraft, accept delivery of it, and test its capabilities before deploying it with the NSC—activities that can take several years. The NSCs are designed to be deployed with one of two combinations of support aircraft: 1 helicopter and 2 unmanned aircraft, or 4 unmanned aircraft. The helicopter may be used for surveillance, rescue operations, or airborne use of force, whereas the unmanned aircraft is intended to increase the NSC's surveillance capabilities. In addition to the support aircraft, the NSC is intended to be deployed with three new small boats, rather than the two small boats on the HECs, and, according to the Coast Guard, will be able to launch and recover small boats in rougher seas than the HEC. The small boats are designed to assist the Coast Guard in conducting vessel boardings, pursuing and interdicting vessels suspected of unlawful behavior, and conducting search and rescue operations. The Coast Guard currently operates the helicopters that can be deployed with the NSC, but has restarted the acquisition of the small boats and is in a pre-acquisition process for the unmanned aircraft because the operational requirements for the unmanned aircraft and small boats, as set forth by the contractor, did not meet the Coast Guard's needs. These support assets are to provide the NSC with surveillance and other capabilities beyond those of the HECs. However, until operational requirements are completed and the unmanned aircraft and small boats are delivered, these increased capabilities of the NSC will not be realized by the Coast Guard.
Coast Guard officials acknowledged that the lack of unmanned aircraft would create a gap between the NSC's actual and planned capabilities, but noted that deployment of existing small boats with the NSC would mitigate any capability gap created by the absence of the new small boats, as discussed later in this report. The Coast Guard has not finalized the operational requirements or acquisition schedule for the unmanned aircraft to be deployed with an NSC, making it difficult for the Coast Guard to quantify the expected operational gap. Acquisition of the unmanned aircraft was discontinued by the Coast Guard in 2007. According to Coast Guard officials, the Coast Guard discontinued this acquisition because the technology was unproven and the projected costs were greater than those originally planned. According to a Coast Guard acquisition official, the Coast Guard will assess alternative aircraft platforms and plans to select one by the third quarter of fiscal year 2012 for acquisition. Having assumed responsibility for the acquisition of the unmanned aircraft from the contractor, the Coast Guard is to follow the processes set forth in its acquisition guidance. However, because the acquisition program is in its early stages, the Coast Guard has not yet determined a date for the deployment of an NSC-based unmanned aircraft. The capabilities of the small boats that are to be deployed with the NSCs are also not currently defined. According to Coast Guard officials, the original small boat capabilities as planned by the contractor were not realistic. For example, Coast Guard officials told us that operational requirements—such as the inclusion of gun mounts, a top speed of 45 knots, and communication suite requirements—may have been achievable individually, but were not feasible when taken together.
Coast Guard officials said that they do not yet know what the new operational requirements will be, but that they plan for the new small boats to have greater capabilities than the legacy small boats, which will further enhance the capabilities of the NSC. The Coast Guard planned to finalize the operational requirements by summer 2009, and Coast Guard officials anticipate deployment of the small boats by the end of calendar year 2010. However, until these operational requirements and a delivery schedule are in place, the Coast Guard is unable to quantify the operational gap that will be created by the absence of the new small boats that were to have been deployed on the NSC. In addition to the gaps created by lost operational days and the absence of the unmanned aircraft and small boats, the Coast Guard has identified several operational deficiencies onboard NSC-Bertholf that it plans to address by the end of calendar year 2010. In particular, according to Coast Guard officials, three deficiencies are to be addressed before the cutter is certified as fully operational in the fourth quarter of fiscal year 2010. Details on these three deficiencies are as follows: First, NSC-Bertholf currently lacks a shipboard sensitive compartmented information facility required for participation in certain Department of Defense missions and exercises. Coast Guard officials told us that building such a facility was a post-9/11 requirement the manufacturer did not have time to integrate into NSC-Bertholf. This facility is to improve communication of sensitive and classified information with other Coast Guard and Department of Defense assets and shore facilities. Work on the facility is underway and the Coast Guard plans to complete the installation and testing in February 2010.
According to Coast Guard officials, the Coast Guard will also be responsible for installing similar facilities on the future NSCs, as they will not be installed by the contractor during construction for security reasons. Second, full installation of technology that aids the movement of helicopters into the NSC’s two hangars is not yet complete, because the helicopters that are to be deployed with the NSC have not yet been modified to use this technology. NSC-Bertholf is equipped with a system designed to automatically secure helicopters after landing and then move them into a hangar. According to Coast Guard officials, this system reduces the number of crew members needed to assist in landing the helicopter and increases the safety of the landing process. The system has been installed on NSC-Bertholf, but the Coast Guard has not yet completed the modification of the helicopters to enable them to integrate with the system. Therefore, the Coast Guard plans to manually tie down and move the helicopters until the modification is complete, which, according to Coast Guard officials, is planned for March 2010. Coast Guard officials stated that the system is to be included during construction of all future NSCs. Third, the functionality of the stern ramp and doors used to launch small boats on NSC-Bertholf is limited. Coast Guard officials reported that the doors do not open and close as expected and that the doors are safe to operate only when the NSC is moving at speeds of 5 knots or less, because sections of the doors protrude into the water at the edge of the cutter when they are opened. The stern launch system facilitates the launch and recovery of small boats and requires fewer crew to operate than traditional side-launch systems that rely on cranes to both lower the small boats into the water and then raise them onto the cutter when their missions are completed.
Replacement doors have been designed that angle up, away from the water, and are equipped with a mechanism that will better handle their weight to enable them to operate more reliably and safely. According to the Coast Guard, the new doors are to be retrofitted to NSC-Bertholf when the cutter goes in for a maintenance period, planned for March 2010, and are to be installed on future NSCs during their construction. Until these onboard deficiencies are addressed and the NSC’s unmanned aircraft and new small boats are delivered, the NSC will be operating without planned assets that would enhance its capabilities over those of an HEC. Coast Guard officials stated, though, that even without the planned unmanned aircraft and new small boats, NSC-Bertholf’s capabilities will be greater than those of an HEC when it is certified as fully operational at the end of fiscal year 2010. In particular, the officials stated that, among other things, the NSC will have improved habitability, increased transit speeds, better fuel efficiency, and a superior weapons system. However, some of these improvements have not been fully tested and the NSC will initially not have other key capabilities, such as the unmanned aircraft, which will require several years of construction and testing after its planned selection in fiscal year 2012. To mitigate the operational gaps identified to date that have been created by delays in deployment of the NSC and its associated support assets, the Coast Guard plans to keep the HECs operational and to use existing air assets and small boats until new assets are acquired. However, the costs of these plans and the extent to which these plans will successfully mitigate gaps caused by delivery delays cannot be fully determined at this time.
The Coast Guard plans to perform a series of upgrades and maintenance procedures on its HECs to help mitigate the loss of NSC operational days, but the complete costs of these improvements cannot be determined because the Coast Guard has not finalized its plans for completing these tasks, nor has funding been provided. The Coast Guard has also begun a management initiative to increase the number of operational days available from the HECs, given delays in deploying the NSCs. However, because these plans have not yet been finalized and the Coast Guard could not provide estimated completion dates, the extent to which these plans will help mitigate the loss of cutter operational days faced by the Coast Guard cannot be fully determined at this time. More specifically, the Coast Guard’s mitigation plans include three key elements, as follows: First, the Coast Guard plans to overhaul or replace equipment on selected HECs through an HEC sustainment program. According to Coast Guard officials, the purpose of the program is to replace obsolete or increasingly unsupportable parts and equipment to lower the cost of future HEC maintenance and increase the number of days that the HECs are able to operate each year. Depending on the state of each individual HEC, the sustainment program could include repairs or upgrades to the hull and propulsion machinery, fire alarm systems, air-conditioning and refrigeration systems, or other equipment that has become difficult to maintain. According to Coast Guard officials, they do not expect that all of the HECs will receive these upgrades; rather, the selection of the cutters to be upgraded is to be based on an analysis of their condition. Coast Guard officials stated that the analysis of the condition of the HECs is expected to begin in 2011, and that the work to overhaul the selected cutters is to begin in 2015, with work on the first selected HEC to be completed in 2016.
Based on these time frames, there will be a loss of cutter operational days resulting from the deteriorating condition of the HECs for at least the next 7 years, until 2016. During the years in which the Coast Guard carries out the sustainment program, the operational gap created by lost cutter operational days could widen because each HEC selected for upgrade is to be taken out of service for 1 year while the necessary work is completed. Coast Guard officials noted that the sustainment program is required for the HECs to continue operations until the NSCs are deployed and that they intend to coordinate the HEC upgrades, the HEC decommissioning schedule, and the deployment of the NSCs to ensure that a combination of 12 HECs and NSCs are available for operations while HECs are removed from service for upgrades. The Coast Guard officials said that they have drafted the sustainment program proposal, but it was not finalized at the time of our review and the Coast Guard does not have an estimated date for when it will be completed. The officials added that they could not predict whether this program would be funded. Second, in 2007, the Coast Guard implemented a management initiative to improve the readiness of the HECs based on the West Coast and in Hawaii; the initiative was designed to (1) clearly define HEC maintenance goals, (2) enumerate tasks to achieve those goals, (3) assign personnel responsible for each goal, and (4) provide a means of measuring whether each goal had been achieved. For example, the Coast Guard personnel responsible for the HECs’ maintenance were assigned the goal of improving HEC engineering equipment readiness, including tasks such as reducing the time taken to address failures in essential equipment to less than 15 days. Similarly, the commanding officers of each HEC were assigned the goal of improving scheduled preventive maintenance completion rates and keeping records to measure how much of this maintenance was completed.
Through regular analysis of the measures associated with each goal or task, the responsible personnel are to identify issues that may impact mission readiness, develop and implement corrective actions, and evaluate the effectiveness of those actions. While this management initiative is still ongoing, Coast Guard officials stated that they believe it has been successful. For example, the officials told us that from 2006—the year before the initiative began—through 2008, the number of HEC equipment failures that impacted missions declined by over 50 percent. Third, in advance of the HEC sustainment program, the Coast Guard intended to increase funding for HEC maintenance by $10 million during fiscal year 2010. However, Coast Guard officials reported that their request for the funding—intended to enable the Coast Guard to complete HEC maintenance that had been deferred over time and address the near-term maintenance needs of the HECs until the sustainment program begins—was not included in the fiscal year 2010 budget. According to the Coast Guard, operational gaps caused by delays in the delivery of unmanned aircraft and small boats are to be addressed through the use of existing aircraft and small boats, and thus the Coast Guard likely would not incur new costs. The unmanned aircraft is intended to increase the NSC’s surveillance capabilities, while the small boats are designed to assist the Coast Guard in conducting vessel boardings, pursuing and interdicting other vessels, and conducting search and rescue operations. The Coast Guard has not yet finalized the operational requirements of these assets; therefore, it is not yet able to quantify the gap in aircraft surveillance and small boat missions created by their absence. Manned aircraft currently provide surveillance support to the HECs and other Coast Guard vessels and could be assigned to support NSC missions, as needed.
While existing aircraft would provide the NSCs with a level of air support comparable to that currently provided to the HECs, a Coast Guard study found that manned aircraft cannot provide the same level of surveillance capabilities that would be provided by a cutter-based unmanned aircraft. Because the NSCs are to replace decommissioned HECs, Coast Guard officials told us that the level of support provided by the manned aircraft to the NSCs is not expected to be greater than that currently provided to the HECs. Therefore, the Coast Guard would, theoretically, not incur new costs in assigning existing air assets to the NSC as the HECs are decommissioned and no longer need air support. According to Coast Guard officials, the Coast Guard plans to deploy the first NSC with existing small boats until new small boats are acquired. During its operational testing period, NSC-Bertholf is using a prototype small boat delivered by the contractor, as well as small boats used on the HEC class. According to Coast Guard officials, there is no additional cost to use these small boats beyond the funds already allocated for small boat operations. Furthermore, Coast Guard officials told us that the configuration of the small boats on the NSC will enhance its small boat capabilities relative to the HECs. In particular, the NSC will be equipped with three small boats, rather than the two small boats on the HECs, and will be able to launch and recover small boats in rougher seas than the HEC. Nevertheless, the lack of operational requirements and a delivery schedule for new small boats precludes the Coast Guard from quantifying the gap between the capabilities of the existing small boats and those that it intends to acquire. 
As a result, the Coast Guard has not determined the extent to which existing small boats will help mitigate the operational gap between the existing small boats that will be initially deployed on the NSC and the new small boats with which the NSC will deploy in the future. The Coast Guard has begun planning for the logistics support transition to the NSC from the HEC, and is working to finalize its key NSC logistics support plan by October 2009, but the Coast Guard cannot determine the complete logistics transition costs. While the Coast Guard is generally following the process established in its acquisition guide and is developing logistics plans to support the NSC, the key logistics support plan has not been finalized and approved within required time frames. In particular, to meet the near-term logistics needs of NSC-Bertholf, the Coast Guard has developed and is using an interim support plan, but this plan does not include the requisite descriptions of the detailed documents that the Coast Guard plans to use to provide logistics support to the NSC or time frames for completing these documents. Further, according to its acquisition guide, the Coast Guard’s key logistics support plan—the Integrated Logistics Support Plan—for the NSC should have been finalized prior to the start of production on the first NSC in June 2004, but the Coast Guard has not finalized or approved this plan. In addition, the Coast Guard cannot fully estimate the costs of the transition from the HECs to its NSCs. The Coast Guard is developing logistics plans to support the NSC as required by its Major Systems Acquisition Manual (MSAM), but the key plan has not been finalized and approved in accordance with the time frames required by the MSAM. The Coast Guard is required to follow the MSAM when designing and producing new assets.
Specifically, the MSAM requires a management approach that begins with the identification of deficiencies in overall Coast Guard capabilities and then proceeds through a series of structured phases and decision points to (1) identify requirements for performance, (2) develop and match these requirements with a proposed solution (e.g., asset needed), (3) demonstrate the feasibility of the proposed asset, and (4) produce the desired asset. The MSAM process provides a number of benefits that have the potential to improve acquisition outcomes, such as ensuring that the new systems and equipment are optimally supportable and the necessary logistics support resources are in place and acquired at an optimal cost. Primarily, it requires event-driven decision making by high-ranking Coast Guard acquisition personnel at a number of key points in an asset’s life cycle. At each decision point, or “milestone,” the MSAM requires the Coast Guard to prepare certain documents or plans that capture the information needed for decision making and approval of acquisition activities. The MSAM-required documents or plans also guide the transition to a new asset (e.g., NSC) from a legacy asset (e.g., HEC), and the MSAM provides criteria for the Coast Guard to follow when preparing each of these documents. Required logistics support documents include the Integrated Logistics Support Plan, the Logistics Readiness Review, and the NSC Deployment Plan. The Integrated Logistics Support Plan, which should have been finalized and approved by the time production of the first NSC was started in June 2004, is expected to be completed by October 2009. According to Coast Guard officials, the Coast Guard contracted for the Logistics Readiness Review and the Coast Guard expects to complete the Deployment Plan within the time frame required by the MSAM, which is 2012. Table 2 describes and provides the status of these plans for the NSC acquisition.
Appendix I includes a list of the Coast Guard documents necessary for NSC operations and logistical support, as well as the status of the documents. In 2007, the Coast Guard contracted with the Department of the Navy to conduct a Logistics Readiness Review of NSC logistics, which identified gaps in logistics planning and recommended corrective actions that the Coast Guard has begun to address. The Deepwater contractor developed the initial NSC logistics plans, but in 2007, the Coast Guard assumed responsibility for NSC logistical planning because, according to Coast Guard officials, the contractor’s plans were deficient. Coast Guard officials stated that they were concerned that the contractor was not completing NSC logistics plans quickly enough and the plans had insufficient detail. For example, Coast Guard officials said that the contractor’s logistics plans did not include the necessary details, such as how the contractor would support the NSC after it becomes fully operational. As part of the logistics shift from the contractor to the Coast Guard, in 2007, the Coast Guard contracted with the Department of the Navy to assess the logistics readiness level of NSC-Bertholf. While the review was not required by the MSAM at the time it was contracted for, Coast Guard officials said that the review helped them focus on areas where logistics planning for the NSC was lacking. Coast Guard officials added that the review proved to be very useful for logistics planning and, as a result, they revised the MSAM to now require this review before new assets transition to fully operational status. Published in May 2008, the Logistics Readiness Review focused on nine areas of logistics readiness and identified logistics gaps in those areas. The areas of logistics readiness included the adequacy of the spare parts and supplies available to support NSC-Bertholf, the adequacy of technical support documents and plans, and the adequacy of the NSC logistical support facilities, among others.
In total, the Navy identified 34 gaps within the 9 logistics areas and developed recommendations on how the Coast Guard could take appropriate action to address those gaps. The Navy identified 18 of the 34 gaps as “high priority,” which means that the gap introduces significant risk to near-term supportability and workarounds either do not exist or they introduce additional risk. For example, the review found that the Coast Guard had not conducted a sufficient number of analyses to determine NSC crew training needs. According to Coast Guard officials, the Coast Guard generally agreed with the Logistics Readiness Review’s findings and has made some progress in addressing the recommendations identified. According to Coast Guard officials, the Coast Guard plans to address 31 of the 34 recommendations, but has decided not to address the remaining three because the costs of addressing these recommendations outweighed the benefits. For example, the review found that the lifting capability of the crane used to hoist items from the pier onto the NSC was insufficient and made a recommendation to address this deficiency. Coast Guard officials stated the Navy’s finding was based on the projected capability of the crane and countered that its actual lift capabilities are sufficient to meet the needs of the NSC. Coast Guard officials stated that the NSC logistics transition from the contractor to the Coast Guard either created or increased the significance of several of the gaps identified. For example, under the contractor-supported model, the Coast Guard would have been responsible for a limited amount of NSC maintenance. However, because the Coast Guard now plans to support the NSC with its own staff, it must train personnel and upgrade facilities.
Appendix II provides more detail on the review’s findings and the status of the Coast Guard’s progress in implementing the recommendations made to address the gaps identified. Coast Guard officials noted that the Navy does not plan to validate the actions the Coast Guard has taken. Table 3 shows the Coast Guard’s assessment of the status of the 34 gaps identified by the Navy’s review. According to the Coast Guard officials, the Coast Guard has completed work to address six recommendations, such as revising the NSC Configuration Management Plan, which the Navy found to be inadequate and considered a high-priority gap. Regarding the 25 recommendations in process or not yet started, Coast Guard officials stated the Coast Guard has made some progress in addressing these recommendations. For example, one high-priority gap cited the lack of training for Coast Guard personnel who will be supporting NSC-Bertholf, so, according to Coast Guard officials, the Coast Guard is training these personnel as needs arise. Despite progress, more work needs to be done. For example, the review concluded that facility budgets are insufficient and are not aligned with asset deliveries, and that the Coast Guard has not developed plans for either home ports or facilities for all NSCs. The review recommended developing these documents to address these high-priority gaps. Coast Guard officials stated that the Coast Guard is in the process of addressing the home port recommendation, but has not started to address the facility recommendation. The NSC’s Integrated Logistics Support Plan—the key logistics planning document that is to describe the necessary logistics support activities— has not been completed and approved as required by the MSAM. The MSAM requires that this plan assign responsibility to a Coast Guard unit for the planning of each logistics area and establish a schedule with time frames for completing these activities. 
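The recommendation counts reported for the Navy's review can be reconciled with a short sketch. All figures come from this report; the variable names and dictionary layout are purely illustrative, not an official Coast Guard data structure.

```python
# Status of the 34 logistics gaps identified by the Navy's
# Logistics Readiness Review, per the Coast Guard's assessment.
total_gaps = 34
high_priority = 18  # gaps rated as introducing significant near-term risk
status = {
    "completed": 6,                   # e.g., revised Configuration Management Plan
    "in_process_or_not_started": 25,
    "not_addressed": 3,               # costs judged to outweigh benefits
}

# Gaps the Coast Guard plans to address: completed plus in process/not started.
to_be_addressed = status["completed"] + status["in_process_or_not_started"]
print(to_be_addressed)                     # 31, matching "31 of the 34 recommendations"
print(sum(status.values()) == total_gaps)  # True: the three categories cover all 34 gaps
```

The check simply confirms that the three status categories in table 3 account for every gap the Navy identified.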
According to the MSAM, each of the 10 logistics areas should have a section in the Integrated Logistics Support Plan that identifies and describes the detailed documents the Coast Guard intends to use to support the project in that area, with the details to be provided separately. Moreover, the plan is to identify what details will be provided, who will provide them, and when. Table 4 describes the 10 logistics areas. According to the MSAM, the Coast Guard is to prepare and approve the Integrated Logistics Support Plan before production is started on the first asset in a class. Although the NSC acquisition passed this phase in June 2004, as of May 2009, the Coast Guard had not completed and approved this plan. Coast Guard officials said that the Coast Guard initially required the contractor to develop the Integrated Logistics Support Plan, but when the Coast Guard assumed responsibility for NSC logistics in 2007, it determined that the contractor’s plan did not meet the Coast Guard’s needs and began to update it. According to Coast Guard officials, they expect to complete the plan by October 2009. To meet the near-term logistics needs of the NSC and guide logistics planning until the Integrated Logistics Support Plan is complete, the Coast Guard developed an Interim Support Plan. According to the Coast Guard, the interim plan is to provide information about how the Coast Guard would sustain NSC-Bertholf and to identify the personnel responsible for maintaining the NSC. Our review of the Interim Support Plan, however, found that while the plan assigns responsibility to a Coast Guard unit for activities in all 10 logistics areas, it does not provide the level of detail that would be required by the MSAM for an NSC Integrated Logistics Support Plan.
In particular, as shown in table 5, we found that 5 of the 10 areas covered in the Interim Support Plan do not contain a planning section that describes the detailed documents the Coast Guard plans to use to support the NSC in each logistics area. In addition, none of the 10 logistics areas contain detailed time frames for when the planning information is to be developed and finalized. For example, while the interim plan makes note of the “Training” logistics area, the plan does not contain any dates to guide the Coast Guard’s planning of this area. The five areas that lack a planning section, such as “Maintenance Planning” and “Supply Support,” therefore also lack the required time frames for completing documents. Coast Guard officials stated that while the Interim Support Plan was developed using the MSAM-mandated Integrated Logistics Support Plan structure as a guide, the interim plan does not meet MSAM requirements. Further, Coast Guard officials did not commit to including all the required items, such as details of documents to be used and time frames for completing these documents, when revising the final Integrated Logistics Support Plan because they are still in the process of determining how to proceed with finalizing the plan. Including these details and time frames for the completion of logistics planning documents could strengthen the Coast Guard’s efforts to support the NSC in the 10 logistics areas by providing a roadmap to guide its personnel regarding actions to take and when to take them. For example, the interim plan lacks MSAM-required details on maintenance planning and supply support, which are critical in determining the number of people and supplies for supporting the NSC.
In addition, providing details and time frames for the other logistics areas, as noted in table 5, would help ensure such actions are conducted in accordance with management’s directives and better position the Coast Guard to more effectively support the NSCs as they are deployed. The Coast Guard has made some progress in developing a deployment plan that is to address the logistics transition from the HEC to the NSC and some of the costs of this transition and expects to complete this plan by 2012, as required by the MSAM. Specifically, the MSAM requires the Coast Guard to develop an asset deployment plan that includes items such as the timing of deliveries, the decommissioning of legacy assets, and the selection of locations where the new assets will be based. In addition, the Deployment Plan is to identify any costs that will be incurred as part of (1) NSC deployment, (2) new or modified facilities requirements, (3) staffing issues, and (4) plans for disposal of HECs. For the NSC, the MSAM requires an approved plan be in place by 2012, prior to full production. The Coast Guard anticipates it will complete the NSC Deployment Plan to satisfy this requirement within the time frame established by the MSAM. Some parts of the Deployment Plan currently under development include the following:

Delivery schedule: The Coast Guard has developed an NSC delivery schedule. The first NSC was delivered in 2008 and the final NSC is expected to be delivered in 2017.

Home port locations: According to Coast Guard officials, the Coast Guard plans to base the first three NSCs in Alameda, California, and continues to develop home port plans for the other five cutters and determine the facilities upgrades needed at these ports. According to the MSAM, both the home port and facility plans are to be completed by 2012, and Coast Guard officials stated the Coast Guard is on track to meet this requirement for both plans.
Specifically, Coast Guard officials stated that the Coast Guard expects to decide the home port locations for the fourth through sixth NSCs by the end of fiscal year 2009, and it plans to decide the home port locations for the seventh and eighth NSCs by fiscal year 2011. According to Coast Guard officials, facility planning is to begin after home port locations are determined.

Decommissioning schedule: Coast Guard officials stated that they continue to work on a decommissioning schedule and have determined that the Coast Guard will decommission HEC-Hamilton shortly after NSC-Bertholf becomes fully operational. According to Coast Guard officials, the order in which the other HECs are to be decommissioned is to be determined in 2009, although the order may change after the completion of an analysis of the condition of HECs. A critical component of this analysis is an assessment of HEC hulls. According to Coast Guard officials, saltwater corrodes a cutter’s hull over time, and the studies are to determine the extent to which the hulls are degraded on HECs. Studies of two HEC hulls have been completed, and the Coast Guard expects to complete five more in 2009, and then complete the remaining five by 2011. Ultimately, the Coast Guard plans to use these studies to inform its decision about which HECs to decommission first and which to sustain longer. According to Coast Guard officials, the time frames the Coast Guard develops to implement its HEC sustainment plan may also impact the decommissioning schedule, as the Coast Guard may delay the decommissioning of an HEC until it completes sustainment upgrades on another HEC to minimize any operational gaps. To further minimize any operational gaps, the Coast Guard plans to schedule HEC decommissioning dates to coincide with NSCs becoming operational.
The Coast Guard has incurred some costs and developed cost estimates related to the logistics transition from the HEC to the NSC, such as NSC maintenance personnel salaries at Alameda, but other costs related to this transition, such as facilities upgrades for ports other than Alameda, cannot be fully determined at this time. According to Coast Guard officials, the primary cost drivers of the logistics transition are (1) maintenance planning, (2) maintenance training, (3) facilities upgrades, and (4) maintenance execution. These officials stated that the cost drivers they identified contained both transition and life-cycle logistics costs, and that it was difficult to differentiate between these costs. For example, Coast Guard officials stated that the maintenance execution cost driver—the actions taken to maintain an asset—does not distinguish between transition and life-cycle costs. A discussion of the transition component of each cost driver, the costs incurred to date, and any estimated future costs follows. Coast Guard officials said that the first cost driver for the logistics transition from HECs to NSCs is the development of maintenance planning documents and schedules. According to Coast Guard officials, most maintenance planning is complete, and as of May 2009, the Coast Guard has spent an estimated $2.5 million on these efforts. More specifically, the Coast Guard spent about $1.1 million on contracting, primarily for maintenance plan development and management, while the remaining $1.4 million represents the amount paid to Coast Guard personnel working on maintenance planning. Coast Guard officials estimated that as of May 2009, the Coast Guard had completed at least 90 percent of the needed NSC maintenance planning. Coast Guard officials stated that the second cost driver for the logistics transition from HECs to NSCs is the preparation of the crew and shore-side maintenance personnel to support the NSC.
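The maintenance-planning expenditures cited for the first cost driver can be tallied the same way. The dollar amounts are the report's figures as of May 2009; the variable names are illustrative only.

```python
# Maintenance-planning costs for the HEC-to-NSC logistics transition,
# as reported by the Coast Guard (May 2009 estimates).
contracting = 1_100_000  # contracting, primarily plan development and management
personnel = 1_400_000    # amount paid to Coast Guard personnel on planning work
total = contracting + personnel

print(total)  # 2500000, matching the reported $2.5 million total
```

The sum simply confirms that the two reported components account for the full $2.5 million estimate.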
As of June 2008, the Coast Guard estimated that it needed about $7 million for training. According to Coast Guard officials, the Coast Guard continues to develop training programs and further work remains to be done. For example, the Logistics Readiness Review recommended completing additional training analyses on 30 equipment systems unique to the NSC, but Coast Guard officials stated that as of February 2009, only 4 analyses of these systems were under way. Additionally, the Coast Guard has not decided the extent to which it will develop its own training courses—which require more upfront costs—as opposed to contracting with equipment manufacturers for the training. The costs incurred for this driver as well as the overall logistics transition costs may increase if the Coast Guard decides to develop more training. Coast Guard officials told us that the third cost driver for the logistics transition from HECs to NSCs includes the modifications to the port and its associated buildings to accommodate the new NSCs. By June 2008, the Coast Guard had completed about $12.5 million of the facility upgrades needed at the Alameda, California port where at least three NSCs are to be based. These modifications included pier upgrades to accommodate the larger NSC as well as dredging the channel to accommodate the NSC’s deeper draft. Because of these logistics improvements, the Coast Guard port at Alameda can now accommodate NSC-Bertholf, as shown in figure 5. While certain facility upgrades have been completed in Alameda, other upgrades have not been completed. For example, the Coast Guard believes it will need a building to house those crew members who are part of the new rotational crewing concept for the NSC, but as of June 2009, construction of the estimated $22.4 million facility has not started. According to Coast Guard officials, the Coast Guard also has not begun facility upgrades at other locations because the Coast Guard has not finalized the NSC Home Port Plan. 
Coast Guard officials stated that the Coast Guard expects to decide the home port locations for the fourth through sixth NSCs by the end of fiscal year 2009, and it plans to decide the home port locations for the seventh and eighth NSCs by fiscal year 2011. Coast Guard officials stated that the Coast Guard may select home ports for NSCs in locations that could require more significant upgrades than Alameda, an outcome that would increase costs.

Coast Guard officials said that the fourth cost driver for the logistics transition from HECs to NSCs is maintenance activities to support the NSCs, which include (1) the cost of purchasing agreements and other commercial contracts to supply and maintain the NSCs and (2) salaries for Coast Guard shore-side maintenance personnel. According to Coast Guard officials, as of May 2009, the Coast Guard had spent $550,000 on purchasing agreements it developed with equipment manufacturers to help bridge the gap between contractor-supported and Coast Guard-supported logistics, and it plans to allocate $5.6 million for these agreements from 2008 through 2011. Coast Guard officials stated the Coast Guard has used these agreements to purchase parts and extend equipment warranties, among other things. Additionally, Coast Guard officials stated that the Coast Guard plans to enter into other commercial contracts for NSC maintenance from 2008 through 2011 but cannot estimate the costs of those contracts because it does not have the historical maintenance data on the NSC’s new equipment that are needed to estimate the frequency of equipment failures and the costs of repairing them. Coast Guard officials stated that the Coast Guard currently has a 5-year study under way to develop more accurate maintenance cost estimates.
Regarding maintenance personnel salaries, Coast Guard officials said that separating the personnel costs for the logistics transition from HECs to NSCs is difficult because maintenance execution costs are determined based on the service life of the cutters, and transition costs are not accounted for separately. As such, these officials could not estimate the maintenance personnel cost component of the logistics transition. Although the Coast Guard has estimated shore-side maintenance costs for NSCs that are to use Alameda as a home port, Coast Guard officials stated that they have not determined how quickly the support needs for HECs will diminish as NSCs begin conducting missions and HECs are decommissioned. With this in mind, Coast Guard officials stated that the Coast Guard plans to phase out personnel positions currently dedicated to supporting HECs and replace them with personnel dedicated to supporting NSCs. According to Coast Guard officials, the Coast Guard currently has 79 maintenance personnel positions in Alameda to support four HECs and could not estimate the cost for these positions. These officials stated that the Coast Guard has added 11 NSC maintenance positions in Alameda, at a cost of $940,000 per year, and estimated that it will need 108 additional maintenance personnel to support the first three NSCs, at a cost of about $9 million per year for all three combined. Furthermore, Coast Guard officials stated that they expect the maintenance execution cost estimates to change after the Coast Guard completes a study to determine the number of shore-side personnel needed to support the NSC—the lack of such a study was identified in the Logistics Readiness Review as a high priority.

The NSC, the first cutter class delivered to the Coast Guard under the Deepwater program, is to be instrumental in carrying out the Coast Guard’s missions as it replaces the aging and increasingly unreliable HEC class.
Although the Coast Guard assumed responsibility for NSC logistical planning in 2007 because it believed that the contractor’s plans did not contain sufficient details, the Coast Guard has yet to complete the Integrated Logistics Support Plan, as required by the MSAM. The Coast Guard has developed an interim support plan to guide logistics planning for the NSC until the Integrated Logistics Support Plan is finalized, but the interim plan lacks MSAM-required details, such as maintenance planning and supply support, that are critical in determining the number of people and supplies the Coast Guard will need to support the NSC. Further, while the Coast Guard expects to complete the Integrated Logistics Support Plan by October 2009, the plan may not include the required details of the logistics support documents to be used and the time frames for completing them because the Coast Guard is still determining how to proceed with finalizing the plan and did not commit to including these details. Identifying these details and time frames for the completion of logistics planning documents could strengthen the Coast Guard’s efforts to support the NSC in the 10 logistics areas by providing a roadmap that guides its personnel on what actions to take and when to take them. It could also better position the Coast Guard to transition more effectively to the NSC, better ensure that the Coast Guard’s cost estimates are reasonable, and reduce uncertainties for the Coast Guard (which must budget for such costs in advance) and Congress (which must appropriate the funds).

To meet MSAM requirements and aid the Coast Guard in making operational decisions, GAO recommends that the Commandant of the Coast Guard ensure that, as the Coast Guard finalizes the Integrated Logistics Support Plan for the NSC, the plan includes the required logistics support documents to be used and the time frames for completing them.
In June 2009, we requested comments on a draft of this report from the Department of Homeland Security and the Coast Guard. The Coast Guard provided technical comments, which we have incorporated into the report, as appropriate. In addition to the technical comments, the Department of Homeland Security and the Coast Guard jointly provided an official letter for inclusion in this report. In the letter, the agencies noted that they generally concur with our findings and recommendation. A copy of this letter can be seen in appendix III.

We are providing copies of this report to the Secretary of DHS, the Commandant of the U.S. Coast Guard, and interested congressional committees. The report will also be made available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9610, or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

This appendix lists the Coast Guard’s National Security Cutter (NSC) operations and logistics documents that are incomplete or under development. The Coast Guard uses many documents to guide the acquisition and logistical support of its assets. The Coast Guard documents related to the NSC, their expected completion dates, and their purposes are listed in table 6.

This appendix describes the results of the Navy’s Logistics Readiness Review (LRR) and the Coast Guard’s efforts to address identified gaps, as of May 2009. The MSAM requires the completion of an LRR as a part of the acquisition process. The Coast Guard contracted with the Department of the Navy to conduct an LRR, which assessed the adequacy of the Coast Guard’s readiness to support the NSC based on logistics plans provided by the contractor.
Specifically, the LRR determined the logistics readiness level of NSC-Bertholf, identified gaps in support, assessed potential impacts on mission performance, and recommended remediation for identified gaps. This appendix provides details on the review’s findings and the status of the recommendations made to address the gaps identified. The LRR focused on nine areas of logistics readiness, including supply support, technical documents, facilities, and aviation, among others. Table 7 provides the review’s findings in the nine areas.

Support equipment is all the equipment needed to support the operation and maintenance of a system, including tools; ground support equipment, such as generators and service carts; and calibration equipment, among others. Systems include such areas as propellers, guns, and the rudder. A review of 197 NSC systems identified incomplete and inconsistent support equipment documentation. For example, 22 percent of the items needed to support the NSC systems had complete support equipment data, while the remaining 78 percent had either partial or no data. Additionally, numerous support equipment items were referenced multiple times for the same systems. For example, a system that should require only one 2,000-pound chain hoist had documents that listed a 2,000-pound chain hoist 15 times.

Configuration management is the process used to understand the important components of an asset and to manage any changes to these components that might be made over the asset’s service life. This process includes identifying components that require management, controlling changes to these components, and recording the changes made. The LRR concluded that there was limited capacity within the Coast Guard to address near-term configuration management processes and that the working-level details in the draft configuration management plan were not adequate to support the NSC.
For example, the Navy identified more than 13,700 NSC equipment and system records from databases and site inspections, but the contractor’s databases included only 5,600 records.

The Navy also reviewed the NSC Capstone documents, which are the documents normally required for major milestone decisions. The Navy found that several logistics documents needed to be updated, such as the Configuration Management Plan and the Interim Logistics Support Plan. The Configuration Management Plan provides the process the Coast Guard uses to control changes to NSC components, while the Logistics Support Plan serves as the master logistics support document. Other documents—including the Home Port Plan and Facilities Plan—need to be developed. The Home Port Plan is to outline where all eight NSCs are to be permanently stationed, and the Facilities Plan is to describe the changes needed at those home ports to accommodate NSCs.

Manpower and personnel is the identification and acquisition of personnel (military and civilian) with the skills and grades required to operate, support, and maintain a system over its life cycle. Training is the processes, procedures, techniques, training devices, equipment, and materials used by personnel to operate and support a system throughout its life cycle. Overall, the Navy found that this area had minor problems but identified some areas of concern. For example, the personnel evaluation identified several administrative findings the Coast Guard needed to resolve, including filling three vacant NSC-Bertholf crew positions. Additionally, the training evaluation found that NSC training requirements are “significantly greater” than for legacy cutters and determined that 137 systems require additional formal training. For example, the LRR found that the average number of training days needed for an HEC crew member is 23, but NSC crew members need an average of 61 days of training.
The aviation logistics area was found to have minor problems, and the small boats area was categorized as having moderate problems. The review identified two aviation Priority 3 gaps and found, for example, that the wind-indicating system that pilots use to land helicopters on the NSC was inadequate. According to the LRR, the NSC does not have a system certified by the Navy, but Coast Guard officials stated that the Coast Guard has received interim approval from the Navy to use the current system. The review also found that the Coast Guard had not made a final decision regarding the small boat package required for the NSC. The review recommended conducting a small boat LRR once the Coast Guard decided on the small boat package.

Technical documentation is the information needed to translate system and equipment design requirements into discrete engineering and logistics considerations, such as manuals and maintenance procedures. The Navy compared technical documentation data from different Coast Guard sources and found a number of discrepancies. The baseline documentation lists were inconsistent and did not provide the desired level of logistics information as compared with the documentation found on other vessel classes. For example, the review identified about 300 document duplications and discrepancies in Coast Guard data. Moreover, the review determined that the Coast Guard was unable to effectively identify and track these documents.

Supply support is all the management actions, procedures, and techniques necessary to acquire, catalog, receive, store, transfer, issue, and dispose of secondary items (piece and repair parts below the major system level). The review found that the contractor did not include maintenance requirements in the spares determination process; of the 316 items the Navy reviewed, 55 had sufficient spares ordered, 127 had insufficient spares, and 134 had either incomplete or no data.
The review also examined all planned, ongoing, and completed shore-side facility projects to gauge the potential impact on the delivery of NSC-Bertholf to the Coast Guard’s Alameda, California, location. The review found numerous logistics gaps—such as an expired certification for a crane used to maintain NSC small boats—but none introduced significant risk to the near-term supportability of the NSC.

Maintenance planning is the analytical methodology used to establish the maintenance philosophy of a system and answers such questions as: What can go wrong? Who will fix it? Where will it be fixed? How will it be fixed? And how often will it need to be fixed? The LRR for the NSC did not review the detailed maintenance procedures needed to support the hull, mechanical, electrical, and communications systems because Coast Guard officials told the Navy that the procedures in place at the time of the LRR did not contain the information needed. The review identified the inadequacy of maintenance procedures as a significant gap.

The Coast Guard has addressed some of the gaps identified by the Logistics Readiness Review. The Navy categorized the gaps it identified in the LRR and developed recommendations to address those gaps, ranking each gap as Priority 1, 2, or 3. Priority 1 gaps are defined as those that introduce significant risk to near-term supportability, and workarounds either do not exist or introduce additional risk. Priority 2 gaps do not introduce significant risk to near-term supportability, but workarounds are likely to increase the cost or reduce the efficiency of maintenance or operations. Priority 3 gaps do not introduce significant risk to near-term supportability, and workarounds exist that do not introduce additional risk. Of the 34 gaps, the Navy identified 18 as Priority 1, 8 as Priority 2, and 8 as Priority 3.
As of May 2009, Coast Guard officials stated that the Coast Guard had addressed 7 recommendations (3 of which pertain to Priority 1 gaps), was in the process of addressing 21 (13 of which pertain to Priority 1 gaps), had not started 3 (2 of which pertain to Priority 1 gaps), and had decided not to address 3 (none of which pertain to Priority 1 gaps). Table 8 provides a list of the 34 gaps the LRR identified and the progress the Coast Guard has made in addressing these gaps.

In addition to the contact named above, Christopher Conrad, Assistant Director, and Ellen Wolfe, Analyst-in-Charge, managed this review. Christoph Hoashi-Erhardt and Paul Hobart made significant contributions to the work. Geoffrey Hamilton provided legal and regulatory support; Adam Vogt provided assistance in report preparation; Michele Fejfar assisted with design, methodology, and data analysis; and Karen Burke helped develop the report’s graphics.

Coast Guard: As Deepwater Systems Integrator, Coast Guard Is Reassessing Costs and Capabilities but Lags in Applying Its Disciplined Acquisition Approach. GAO-09-682. Washington, D.C.: July 14, 2009.

Coast Guard: Observations on the Fiscal Year 2010 Budget and Related Performance and Management Challenges. GAO-09-810T. Washington, D.C.: July 7, 2009.

Coast Guard: Observations on the Genesis and Progress of the Service’s Modernization Program. GAO-09-530R. Washington, D.C.: June 24, 2009.

Coast Guard: Update on Deepwater Program Management, Cost, and Acquisition Workforce. GAO-09-620T. Washington, D.C.: April 22, 2009.

Coast Guard: Change in Course Improves Deepwater Management and Oversight, but Outcome Still Uncertain. GAO-08-745. Washington, D.C.: June 24, 2008.

Coast Guard: Strategies for Mitigating the Loss of Patrol Boats Are Achieving Results in the Near Term, but They Come at a Cost and Longer Term Sustainability Is Unknown. GAO-08-660. Washington, D.C.: June 23, 2008.

Status of Selected Aspects of the Coast Guard’s Deepwater Program.
GAO-08-270R. Washington, D.C.: March 11, 2008.

Coast Guard: Observations on the Fiscal Year 2009 Budget, Recent Performance, and Related Challenges. GAO-08-494T. Washington, D.C.: March 6, 2008.

Coast Guard: Deepwater Program Management Initiatives and Key Homeland Security Missions. GAO-08-531T. Washington, D.C.: March 5, 2008.

Coast Guard: Challenges Affecting Deepwater Asset Deployment and Management and Efforts to Address Them. GAO-07-874. Washington, D.C.: June 18, 2007.

Coast Guard: Status of Efforts to Improve Deepwater Program Management and Address Operational Challenges. GAO-07-575T. Washington, D.C.: March 8, 2007.

Coast Guard: Preliminary Observations on Deepwater Program Assets and Management Challenges. GAO-07-446T. Washington, D.C.: February 15, 2007.

Coast Guard: Coast Guard Efforts to Improve Management and Address Operational Challenges in the Deepwater Program. GAO-07-460T. Washington, D.C.: February 14, 2007.

Homeland Security: Observations on the Department of Homeland Security’s Acquisition Organization and on the Coast Guard’s Deepwater Program. GAO-07-453T. Washington, D.C.: February 8, 2007.

Coast Guard: Status of Deepwater Fast Response Cutter Design Efforts. GAO-06-764. Washington, D.C.: June 23, 2006.

Coast Guard: Changes to Deepwater Plan Appear Sound, and Program Management Has Improved, but Continued Monitoring Is Warranted. GAO-06-546. Washington, D.C.: April 28, 2006.

Coast Guard: Progress Being Made on Addressing Deepwater Legacy Asset Condition Issues and Program Management, but Acquisition Challenges Remain. GAO-05-757. Washington, D.C.: July 22, 2005.

Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-651T. Washington, D.C.: June 21, 2005.

Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges. GAO-05-307T. Washington, D.C.: April 20, 2005.
Coast Guard: Deepwater Program Acquisition Schedule Update Needed. GAO-04-695. Washington, D.C.: June 14, 2004.

Coast Guard: Progress Being Made on Deepwater Project, but Risks Remain. GAO-01-564. Washington, D.C.: May 2, 2001.

As part of its more than $24 billion Deepwater program to replace aging vessels and aircraft with new or upgraded assets, the Coast Guard is preparing the National Security Cutter (NSC) for service. GAO previously reported on Deepwater assets’ deployment delays and the Coast Guard’s management of the Deepwater program. GAO was legislatively directed to continue its oversight of the Deepwater program. As a result, this report addresses (1) the operational effects, if any, of delays in the delivery of the NSC and its support assets of unmanned aircraft and small boats; (2) Coast Guard plans for mitigating any operational effects and any associated costs of these plans; and (3) the extent to which the Coast Guard has plans, including cost estimates, for phasing in logistics support of the NSC while phasing out support for the High Endurance Cutter (HEC) it is replacing. GAO’s work is based on analyses of (1) the operational capabilities and maintenance plans of the NSC and its support assets and (2) data on the HECs’ condition; a comparison of an NSC and an HEC; and interviews with Coast Guard officials.

Delays in the delivery of the NSC and the support assets of unmanned aircraft and small boats have created operational gaps for the Coast Guard, including the projected loss of thousands of days in NSC availability for conducting missions until 2018. Enhancements to the NSC’s capabilities following the 9/11 terrorist attacks and the effects of Hurricane Katrina were factors that contributed to these delays. Given the delivery delays, the Coast Guard must continue to rely on HECs that are becoming increasingly unreliable.
Coast Guard officials said that the first NSC's capabilities will be greater than those of an HEC; however, the Coast Guard cannot determine the extent to which the NSC's capabilities will exceed those of the HECs until the NSC's support assets are operational, which will take several years. To mitigate these operational gaps, the Coast Guard plans to upgrade its HECs and use existing aircraft and small boats until unmanned aircraft and new small boats are operational, but because the mitigation plans are not yet finalized, the costs are largely unknown. Also, the Coast Guard has not yet completed operational requirements for the unmanned aircraft or new small boats. As a result, the Coast Guard has not determined the cost of the HEC upgrade plan or of the operational gap created by the delay in fielding new support assets for the NSC.

The Coast Guard's logistics support plans for its transition from the HEC to the NSC are not finalized, and it has not yet fully determined transition costs. The contractor developed the initial NSC logistics plans, but Coast Guard officials said the plans lacked needed details, such as how the contractor would support the NSC after it becomes fully operational, so in 2007 the Coast Guard took over logistics planning. Coast Guard acquisition guidance states that an Integrated Logistics Support Plan should be completed by the time production of an asset starts. Although the first NSC has already been delivered, the Coast Guard has not yet finalized this plan but expects to do so by October 2009. While the Coast Guard has developed an interim plan, it did not commit to including in the Integrated Logistics Support Plan the required logistics support documents to be used or the time frames for completing them, because it is still determining how to finalize the plan.
Ensuring the plan includes these documents and time frames would better prepare the Coast Guard to support the NSC and aid it in making operational decisions, given that the Coast Guard has not yet developed a deployment plan or completed cost estimates of the logistics transition from the HEC to the NSC.
Property owners in certain coastal regions subject to hurricanes and flooding may have to purchase at least two, and sometimes more, different types of insurance policies. Flood insurance is offered by NFIP, while insurance for wind-related damages is generally offered by private insurance companies or state-sponsored insurers. NFIP was established in 1968 in part to provide some insurance protection for flood victims because private insurers were, and still are, largely unwilling to insure flood risks. The National Flood Insurance Act of 1968, as amended, allows homeowners to purchase up to $250,000 of NFIP coverage on their dwellings and up to an additional $100,000 for personal property such as furniture and electronics. Business owners may purchase up to $500,000 of coverage for buildings and $500,000 for contents. Exclusions under the flood policy include damages caused by wind or a windstorm. FEMA, which administers NFIP, is responsible for the management and oversight of the program and is assisted in performing these functions by a program contractor.

While NFIP provides the flood insurance policy and holds the risk, private property-casualty insurers, known as WYO insurers, sell and service approximately 95 percent of NFIP’s flood policies. WYO insurers retain a portion of the premium for selling flood policies and receive fees for performing other administrative services for NFIP, but they do not have any exposure to claims losses. A WYO insurer may or may not also provide coverage for wind-related risks on the same property. After an event occurs, policyholders normally contact a WYO insurer to initiate a flood damage claim. If the claimant also has a policy for wind damage from the same WYO insurer, the company generally adjusts losses pertaining to both types of damages, those caused by wind and those caused by flooding.
In such cases, the WYO insurer must determine and apportion the damages caused by wind, which it insures, and those caused by flooding, which NFIP insures. To settle flood claims, insurance companies work with certified flood adjusters. When flood losses are reported, the WYO insurers assign flood adjusters to assess damages. The WYO insurers may use their own staff adjusters or contract with independent adjusters or adjusting firms to perform the flood adjustments. These adjusters are responsible for assessing damage, estimating losses, and submitting required reports, work sheets, and photographs to the insurance company, where the claim is reviewed and, if approved, processed for payment.

Both the insurance industry and NFIP incurred unprecedented storm losses from the 2005 hurricane season. State insurance regulators estimated that property-casualty insurers had paid out approximately $22.4 billion in claims tied to Hurricane Katrina (excluding flood) as of December 31, 2006. However, industry observers estimate that insured losses tied to Hurricane Katrina alone (other than flood) could total more than $40 billion, depending on the outcome of outstanding claims and ongoing litigation. NFIP estimated that it had paid approximately $15.7 billion in flood insurance claims as of January 31, 2007, encompassing approximately 99 percent of all flood claims received.

For hurricane-damaged properties, NFIP does not know whether both wind and flooding contributed to the damages or how the damages were apportioned between the two perils, limiting its ability to monitor the accuracy of flood payments and to address potential conflicts of interest that may arise in certain damage scenarios. Based on our preliminary review, we found that NFIP did not systematically collect and analyze data on wind-related damage when collecting flood claims data on properties subjected to both high winds and flooding, such as those damaged in the aftermath of Hurricanes Katrina and Rita.
Further, such information is not sought even when the same insurance company serves as both the NFIP WYO insurer and the insurer for wind-related risks, posing a potential conflict in certain damage scenarios where properties are subjected to both types of perils. Without information on both wind and flood damages to a property, NFIP may not know, for certain hurricane-damaged properties, whether the amount it paid for a claim was limited to flood damage. As mentioned earlier, NFIP’s WYO insurer may also insure the same property for wind-related damages. In this situation, a potential conflict of interest can materialize because the WYO insurer has a financial interest in the outcome of the claims adjustment it performs on behalf of NFIP. Conversely, if the policy for wind-related risks were issued by another insurer, the same potential conflict of interest would not exist because the flood and wind damages would be assessed and determined separately by different insurers.

WYO insurers are required to submit flood damage claims data in accordance with NFIP’s Transaction Record Reporting and Processing (TRRP) Plan for inclusion in NFIP’s claims database. In our review of data elements in NFIP’s claims database, we found that NFIP does not require WYO insurers, which are responsible for adjusting the flood claim, to report information on property damages in a manner that could allow NFIP to differentiate how these damages (to the building or its contents) were divided between wind and flooding, even when the WYO insurer is also the wind insurer for the property. Specifically, the TRRP Plan instructs WYO insurers to include only flood-related damages in the data fields on “Total Building Damages” and “Total Damage to Contents.” Further, the “Cause of Loss” data field does not incorporate an option to explicitly identify property damages caused or partially caused by wind (e.g., combined wind and flood, hurricane, or windstorm).
As a result, WYO insurers do not report total property damages in a manner that (1) identifies the existence of wind damage or (2) discerns how damages were divided between wind and flooding for properties that were subjected to a combination of both perils. Further, NFIP program contractors stated that they do not systematically track whether the WYO insurer processing a flood claim on a property is also the wind insurer for that property. This lack of transparency over both the wind and flood damages on hurricane-damaged properties limits NFIP’s ability to verify that damages paid for under the flood policy were caused only by the covered loss of flooding.

NFIP’s normal claims processing activities, which do not incorporate a means to systematically collect information on wind-related damages, were further stressed during the 2005 hurricane season. For Hurricanes Katrina and Rita, FEMA estimates that it has paid approximately $16.2 billion in claims, with average payments of over $95,000 and $47,000, respectively. As we reported in December 2006, in an effort to assist policyholders, NFIP approved expedited claims processing methods that were unique to Hurricanes Katrina and Rita. Some expedited methods included the use of aerial and satellite photography and flood depth data in place of a site visit by a claims adjuster for properties where it was likely that covered damages exceeded policy limits. Under other expedited methods, FEMA also authorized claims adjustments without site visits where only foundations were left and the square-foot measurements of the dwellings were known. Such expedited procedures facilitated the prompt processing of flood claims payments to policyholders following the unprecedented damage of the 2005 hurricanes.
However, once these flood claims were processed, as was the case for other flood claims on hurricane-damaged properties, NFIP did not systematically collect wind damage claims data tied to the flood-damaged properties on an after-the-fact basis. Hence, NFIP does not know the extent to which wind contributed to total property damages. FEMA officials stated that they do not have access to wind damage claims data from the WYO insurers. Specifically, a letter from FEMA to GAO stated:

“FEMA’s opinion is that, where flood insurance payments have been made, FEMA is permitted to review the background claims data in order to ensure that insurance claims payments are appropriately allocated to flood losses as opposed to wind-related losses. Such data may include the adjuster’s report(s) and any engineering reports that support (or fail to support) the allocation of loss to flood versus wind damage. FEMA may request summaries and analyses of this information at any time to ensure proper processing of flood claims. Conversely, claims paid by a WYO company that do not involve flood insurance proceeds (and the data related thereto) are not accessible by FEMA, and indeed, do not need to be, as there would have been no improper allocation of flood insurance proceeds for wind losses. Moreover, the attempt to access this unrelated data may be found to violate various privacy protections.”

Hence, NFIP does not systematically collect data on wind damages for properties for which a flood claim has been received. As a result, for hurricane-damaged properties subjected to both high winds and flooding, NFIP may not have all the information it needs to ensure that its claims payment was limited to only flood damage.

FEMA’s reinspection program, which helps validate the adjustment process and flood payments made, provides limited information that could enable FEMA to better validate the claims payments it makes for flood damage when wind is also a factor.
Based on our preliminary review, the reinspection program does not systematically evaluate the apportionment of damages between wind and flooding, even when a potential conflict of interest may arise with the WYO insurer. Along with flood claims data collected from WYO insurers that service flood policies, FEMA, through its program contractor, operates a reinspection program to monitor and oversee claims adjustments and address concerns about flood payments. The stated purpose of the reinspection program is to reevaluate the flood adjustment and claim payment made on a given property to determine whether or not NFIP paid the correct amount for flood-related damages. This is accomplished through on-site reinspections and reevaluations of a sample of flood claim adjustments. However, we found that FEMA’s reinspection program did not systematically incorporate a means for identifying whether or the extent to which wind-related damages contributed to the losses. Without the ability to examine damages caused by both wind and flooding, the reinspection program is limited in its ability to assess whether NFIP paid only the portion of damages it was obligated to pay under the flood policy. During our study, we reviewed hundreds of reinspection files for properties with flood claims tied to Hurricanes Katrina and Rita. We found that the reinspection files did not confirm that the claim paid actually reflected only the damage covered by the flood insurance policy versus damage caused by other perils not covered by that policy, such as wind. Rather, the reinspection files generally contained limited and inconsistent documentation concerning the presence or extent of wind-related damages on properties without additional documentation that would enable FEMA to evaluate both the wind and flood damage information together. Specifically, the reinspection files reviewed did not consistently document whether or not damages were caused by a combination of both wind and flooding.
The reinspection activities focused on reevaluating the extent to which building and content damages were caused by flooding. While some of the reinspection files included documentation as to whether or not damage was caused by a combination of wind and flooding, most did not. Information reviewed from 740 reinspection files revealed that nearly two-thirds of these reinspection reports did not include documentation to indicate whether damages were caused by a combination of both wind and flooding or only flooding. We found that approximately 26 percent included documentation indicating damage was caused only by flooding, while approximately 8 percent of the reinspection files included documentation that damages were caused by a combination of wind and flooding. In cases where reinspectors indicated that damages were caused by a combination of wind and flooding, insufficient data existed to assess the extent to which wind contributed to the damages. That is, information about the wind damage during the reinspection process was not documented or analyzed in a systematic fashion. Hence, the reinspection activities did not systematically document or validate the presence or extent of wind damage in combination with flood damage in order to verify that flood payments were limited to flood damage. Moreover, as we have previously reported, FEMA does not choose a statistically valid sample for its reinspection process. Therefore, the results could not be projected to the universe of properties for which flood claims were made. We also noted that on-site reinspections of properties with flood claims tied to Hurricanes Katrina and Rita were generally conducted several months after the event.
Such delays, while understandable considering the scope and magnitude of devastation resulting from these hurricanes in 2005, further limited NFIP’s ability to reevaluate the quality and accuracy of the initial damage determination, given the ongoing natural and manmade events that continued to alter the damage scene. Finally, we explored whether NFIP could use data collectively gathered by state insurance regulators on property-casualty claims resulting from the 2005 hurricane season to match with NFIP flood claims data. We found that while Florida, Mississippi, Louisiana, Alabama, and Texas collected some aggregate information about claims from the property-casualty insurers, such data would have been of limited value to NFIP to evaluate the accuracy of its flood claims payments in any systematic way. Except for Florida, which had previously collected aggregate claim data from property-casualty insurers for major hurricane events, the other states used a special data call, based on Florida’s system, to collect this aggregate claims data from the property-casualty insurers. However, the information collected was not in sufficient geographic detail to allow a meaningful evaluation of wind versus flood damage assessments and apportionments made by insurers. That is, claims data reported by property-casualty insurers through this mechanism were either reported on a statewide or county/parish-level basis that did not allow them to be matched with corresponding flood claims data on a community-level (e.g., zip code) or a property-level basis. In summary, based on our preliminary review, NFIP does not collect the information it needs to help evaluate whether it has paid only what it is obligated to pay under the flood policy for properties subjected to both high winds and flooding, such as those damaged by Hurricanes Katrina and Rita.
For these properties, NFIP did not systematically collect enough information to know whether there was wind damage, much less enough to understand how much of the damage was determined to have been caused by wind and how much was caused by flooding. Without the ability to collect information that documents both the flood and wind damage, NFIP’s capacity to evaluate the accuracy of its payments is limited. As mentioned earlier, this is particularly important in situations where the WYO insurer also insures the property for wind damages. This creates a potential conflict of interest when the same insurer makes both the wind and flood damage assessments, because the insurer is effectively apportioning losses between itself and NFIP. Obtaining both the flood and wind adjustment claims data, whether from the same WYO insurer that services both or from different insurers, would be necessary for NFIP to verify the accuracy of the payments made for flood claims. Information collected and assessed through FEMA’s claims reinspection program is also of limited usefulness in confirming or validating the accuracy of flood payments made by NFIP on properties damaged by both wind and flooding. Without the additional information about wind damage on properties for which flood claims were also filed, NFIP may not be certain whether it has paid only for the flood damages to these properties. Finally, we determined that using hurricane claims data collected by state insurance regulators would not have provided data on a property- or community-level basis to help NFIP determine how much damage was caused by wind versus flooding, and how these damages were apportioned between the two perils. The lack of both wind and flood claims data limits NFIP’s ability to assess whether payments made on flood claims from the 2005 hurricane season were accurate. Mr. Chairmen, this concludes my prepared statement.
I would be pleased to respond to any questions that you or other members of the Subcommittees may have. For additional information about this testimony, please contact Orice M. Williams at (202) 512-8678 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Lawrence D. Cluff, Assistant Director; Tania Calhoun; Emily Chalmers; Rudy Chatlos; Chir-Jen Huang; Barry Kirby; and Melvin Thomas. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Disputes between policyholders and property-casualty insurers over coverage from the 2005 hurricane season highlight challenges in determining the appropriateness of claims for multiple-peril events. In particular, events such as hurricanes that can cause both wind and flood damages raise questions about the adequacy of steps taken by the Federal Emergency Management Agency (FEMA) to ensure that claims paid by the National Flood Insurance Program (NFIP) covered only damages caused by flooding. As a result, the Subcommittees asked GAO to provide preliminary views on (1) the information available to and obtained by NFIP through its claims process in determining flood damages for properties that sustained both wind and flood damages, and (2) the information collected by FEMA as part of the NFIP claims reinspection process. GAO collected data from FEMA, reviewed reinspection reports, reviewed relevant policies and procedures, and interviewed agency officials and others knowledgeable about NFIP.
NFIP does not collect and analyze both wind and flood damage claims data in a systematic fashion, which may limit FEMA's ability to assess whether flood payments on hurricane-damaged properties are accurate. Instead, NFIP focuses only on the flood claims data to determine whether the amount actually paid on a claim reflects the damages caused by flooding. Flood claims data, collected by NFIP through the write-your-own (WYO) insurers—including those that sell and service both the wind and flood policies—do not include information on total damages to the property from all perils. That is, NFIP does not systematically collect information on wind damages from the WYO insurer when a flood claim is received. FEMA officials state that they do not have authority to collect wind damage claims data from WYO insurers, even when the insurer services both the wind and flood policies on the same property. As a result, for hurricane-damaged properties, such as those damaged by Hurricanes Katrina and Rita, NFIP does not have all the information it needs to ensure that its claims payments were limited to damage caused by flooding. Concerns over the processing of these flood claims are heightened when the same insurance company serves as both NFIP's WYO insurer and the property-casualty (wind) insurer for a given property. In such cases, the same company is responsible for determining damages and losses to itself and to NFIP, creating a potential conflict of interest. The lack of both flood and wind damage data also limits the usefulness of FEMA's quality assurance reinspection program for NFIP flood claims. GAO found that the NFIP reinspection program did not incorporate a means for collecting and analyzing both the flood and wind damage data together in a systematic fashion to reevaluate the extent to which wind and flooding were deemed to have contributed toward damages to the property.
Further, we explored whether the wind-related claims data collectively gathered by state insurance regulators would be useful to NFIP to reevaluate damage assessments. We determined that this information would be of limited value to NFIP in reevaluating wind versus flood damage determinations made because such data is not collected in enough geographic detail to match with the corresponding flood claims data on a property- or community-level basis. Without the ability to examine damages caused by both wind and flooding, the reinspection program is limited in its ability to confirm whether NFIP paid only for losses caused by flooding.
The Flood Control Act of 1944 established a comprehensive plan for flood control and other purposes, such as hydroelectric power production, in the Missouri River Basin. The Pick-Sloan Plan—a joint water development program designed by the U.S. Army Corps of Engineers (the Corps) and the Department of the Interior’s (Interior) Bureau of Reclamation—included the construction of five dams on the Missouri River, including the Garrison Dam in North Dakota and the Oahe, Fort Randall, Big Bend, and Gavins Point Dams in South Dakota. The construction of the Fort Randall Dam, located 7 miles above the Nebraska line in south-central South Dakota, began in May 1946 and was officially dedicated in August 1956. The dam is 160 feet high, and the reservoir behind it, known as Lake Francis Case, stretches 107 miles to the northwest. (See fig. 1.) In September 1959, the Corps began work on the Big Bend Dam, which is about 100 miles northwest of the Fort Randall Dam on land belonging to both the Crow Creek Sioux and Lower Brule Sioux tribes. The Big Bend Dam is 95 feet high and was completed in September 1966. The reservoir behind the dam, known as Lake Sharpe, is 20 miles long. (See fig. 2.) The Crow Creek Sioux and Lower Brule Sioux tribes reside on reservations located across the Missouri River from one another in central South Dakota. The Crow Creek reservation includes about 225,000 acres, 56 percent of which is owned by the tribe or individual Indians. According to the 2000 Census, the Crow Creek reservation has 2,199 residents, with the majority residing in the community of Fort Thompson. The Lower Brule reservation includes about 226,000 acres, 60 percent of which is owned by the tribe or individual Indians. According to the 2000 Census, the Lower Brule reservation has 1,355 residents, including several hundred who reside in the community of Lower Brule. Both reservations include some non-Indians, and both tribes have several hundred members who do not live on the reservations.
The major economic activities for both the Crow Creek Sioux and Lower Brule Sioux tribes are cattle ranching and farming, and both tribes provide guided hunting for fowl and other game. Each tribe also operates a casino and a hotel. Both tribes are governed by a tribal council under their respective tribal constitutions, and each tribal council is led by a tribal chairman. The major employers on the reservations are the tribes, the casinos, the Bureau of Indian Affairs, and the Indian Health Service. In addition, the Lower Brule Sioux tribe provides employment through the Lower Brule Farm Corporation, which is the nation’s number one popcorn producer. See appendix II for a map of the Crow Creek and Lower Brule reservations and the locations of the previously mentioned dams and reservoirs. The construction of the Fort Randall Dam caused the flooding of more than 17,000 acres of Crow Creek and Lower Brule reservation land and the displacement of more than 100 tribal families. After these two tribes sustained major damage from this project, the construction of the Big Bend Dam inundated over 20,000 additional acres of their reservations. This flooding displaced more families, some of whom had moved earlier as a result of flooding from the Fort Randall Dam. (See table 1.) Flooding from the installation of both dams resulted in the loss of valuable timber and pasture and forced families to move to less desirable land, which affected their way of life. During the early 1950s, the Corps; Interior, through its Missouri River Basin Investigations Unit (MRBI); and the tribes—represented through tribal negotiating committees—developed their own estimates of the damages caused by the Fort Randall Dam. Discussions and informal negotiating conferences were held among the three parties in 1953 to try to arrive at acceptable compensation for damages. At that point, the Fort Randall Dam had been closed since July 1952 and portions of the reservations were underwater. 
The MRBI’s appraisal of damages was about $398,000 for Crow Creek and about $271,000 for Lower Brule, which was higher than the Corps’ proposal. Both the MRBI appraisal and the Corps’ proposal were substantially lower than the tribes’ settlement proposals, and the parties were unable to reach settlement. The Corps planned to take the land by condemnation, but in July 1954 decided against that action when the Congress authorized and directed the Corps and Interior to jointly negotiate separate settlements with the tribes. Meanwhile, the tribes arranged to have settlement bills introduced in July 1954. These bills requested $1.7 million for damages for the Crow Creek Sioux tribe and $2.5 million for damages for the Lower Brule Sioux tribe. Both of these bills also contained requests for about $2.5 million each for rehabilitation funds. The first formal negotiating conference was held among the parties in November 1954, and further discussions continued over several more years after the bills were introduced, but, again, the parties could not reach settlement. In 1955, with negotiations stalled, the Corps requested and obtained an official declaration of taking. The tribes—with their lands now flooded—received funds based on the earlier MRBI appraisal figures, with the understanding that negotiations for additional funds would continue. The tribes continued to insist on receiving substantially higher compensation amounts for damages, and additional funds for rehabilitation, as part of the settlement. The amounts the tribes requested for rehabilitation fluctuated in tribal settlement proposals between 1954 and 1957, but both the Corps and the MRBI maintained that rehabilitation funding was not within the scope of the negotiations. In March 1958, each tribe’s negotiating committee submitted new proposals at compensation hearings for the Fort Randall Dam. 
The Crow Creek Sioux tribe proposed compensation of about $2.2 million for damages and administrative expenses related to the settlement, and the Lower Brule Sioux tribe proposed compensation of about $1.8 million for damages and administrative expenses. Neither proposal included funds for rehabilitation because both tribes agreed with the government’s request to wait to procure these funds in the Big Bend Dam compensation request. In May 1958, bills were introduced in the Congress with amounts that were less than the tribes had proposed through their negotiating committees, with the amount for direct damages from Fort Randall Dam construction being substantially reduced. According to House reports, both the tribes and the Corps agreed to the amounts proposed for damages. Later that summer, amendments to the bills reduced the amount for indirect damages for both tribes. In September 1958, the Congress authorized a payment of about $1.5 million to the Crow Creek Sioux tribe, and almost $1.1 million to the Lower Brule Sioux tribe. See table 2 for a summary of selected settlement proposals related to the Fort Randall Dam. In contrast to the Fort Randall negotiations, the compensation for the construction of the Big Bend Dam was granted quickly. In bills introduced in March 1961, the Crow Creek Sioux tribe requested over $1 million for damages and administrative expenses as a result of the Big Bend Dam construction. The Lower Brule Sioux tribe requested close to $2.4 million for damages, administrative expenses, and a new school. In addition, both tribes requested the rehabilitation funds that had not been included in the Fort Randall Dam settlement—that is, the Crow Creek Sioux tribe requested more than $4 million and the Lower Brule Sioux tribe requested about $2.7 million. 
In June 1961, the government and the tribes agreed to a reduction in direct damages, while the tribes requested an increase to the amount for indirect damages, bringing the total amount of compensation, including rehabilitation, requested by the Crow Creek Sioux and Lower Brule Sioux tribes to about $4.9 million for each tribe. In subsequent bills over the next year, however, the Congress lowered indirect damages considerably and dropped the amount requested for a new school for Lower Brule. The amounts requested for administrative expenses and rehabilitation were also reduced. In October 1962, the Congress authorized a payment of $4.4 million to the Crow Creek Sioux tribe and about $3.3 million to the Lower Brule Sioux tribe. See table 3 for a summary of selected settlement proposals related to the Big Bend Dam. See appendixes III and IV for a timeline summary of the settlement negotiations and compensation for the two dams for the Crow Creek Sioux and Lower Brule Sioux tribes, respectively. Tribes at five other reservations affected by flood control projects along the Missouri River incurred losses ranging from about 600 acres to over 150,000 acres. These tribes received some compensation, primarily during the 1950s, for the damages they sustained. However, beginning in the 1980s, some of these tribes began requesting additional compensation. The Congress responded to their requests by authorizing the establishment of development trust funds. (See table 4.) The tribes at the Fort Berthold, Standing Rock, and Cheyenne River reservations received compensation within the ranges we had suggested the Congress consider in our reviews of the tribes’ additional compensation claims. The ranges were based on the current value of the difference between each tribes’ final asking price and the amount that the Congress authorized. 
We were not asked to review the additional compensation claims for the Crow Creek Sioux and Lower Brule Sioux tribes in the 1990s or for the Santee Sioux and Yankton Sioux tribes in 2002. The approach used by the Crow Creek Sioux and Lower Brule Sioux tribes’ consultant differed from the approach we used in our prior reports. The consultant used a variety of settlement proposals, instead of consistently using the tribes’ final asking prices, in calculating the difference between what the tribes asked for and what the Congress authorized. As a result, the consultant’s proposed compensation estimates are higher than if he had consistently used the tribes’ final asking prices. In addition, the consultant provided only the highest additional compensation value, rather than a range of possible additional compensation from which the Congress could choose. To arrive at an additional compensation estimate, the consultant did not consistently use the tribes’ final asking prices when calculating the difference between what the tribes asked for and what they finally received. In determining possible additional compensation for the tribes at the Fort Berthold and Standing Rock reservations in 1991, and the Cheyenne River reservation in 1998, we used the tribes’ final asking prices to calculate the difference between what the tribes asked for and what they received. In our prior reports, we used the tribes’ final position because we believed that it represented the most up-to-date and complete information, and that their final position was more realistic than their initial asking prices. In contrast, the consultant used figures from a variety of settlement proposals—several of which were not the tribes’ final asking prices—to estimate additional compensation for damages (including direct and indirect damages), administrative expenses, and rehabilitation.
As a result, the consultant’s estimate of the tribes’ asking prices in the late 1950s and early 1960s was about $7.7 million higher than it would have been if he had consistently used the tribes’ final asking prices. Choosing which settlement proposal to use to calculate the difference between what the tribe asked for and what it finally received is critically important, because a small numerical difference 50 years ago can result in a large difference today, once it is adjusted to reflect more current values. With respect to the Fort Randall Dam, the consultant used amounts from a variety of settlement proposals for damages and administrative expenses. To determine additional compensation, the consultant used a $2.2 million settlement proposal by the Crow Creek Sioux tribe and a $2.6 million settlement proposal by the Lower Brule Sioux tribe. (See table 5.) The Crow Creek proposal was from May 1957, and was the same as the tribe’s final asking price requested about 1 year later, in February 1958. However, the Lower Brule proposal was from the first compensation bill introduced in the Congress in July 1954, almost 4 years before the tribe’s final asking price of about $1.8 million in March 1958—a difference of more than $850,000. For the Big Bend Dam, the consultant also used amounts from different settlement proposals for damages and administrative expenses. To determine additional compensation, the consultant used amounts from congressional bills introduced in March 1961 for direct damages, but used amounts from proposed amendments to the bills in June 1961 for indirect damages. The tribes’ asking prices from June 1961 can be considered their final asking prices because the proposed amendments are the last evidence of where the tribes requested specific compensation (indirect damages) or agreed to a compensation amount (direct damages).
The consultant would have been more consistent had he used both the indirect and direct damage settlement figures in the proposed amendments from June 1961, rather than a mixture of these figures. As a result, the total amount for damages the consultant used to calculate the difference between what the tribes requested and what they finally received is about $427,000 (in 1961 dollars) higher than if the tribes’ final asking prices from June 1961 had been used consistently. (See table 6.) Lastly, the consultant did not use the tribes’ final asking prices for the rehabilitation component of the settlement payment. The consultant used a $6.7 million rehabilitation figure that the Crow Creek Sioux tribe’s negotiating committee proposed in May 1957 and a $6.3 million rehabilitation figure that was proposed in congressional bills in 1955 and 1957 for the Lower Brule Sioux tribe. (See table 7.) Both of these figures were developed during the negotiations for the Fort Randall Dam. However, the tribes agreed in their February and March 1958 proposals—their final asking prices for the Fort Randall Dam—to defer consideration of their rehabilitation proposals until after land acquisitions were made for the construction of the Big Bend Dam. The Big Bend Dam’s installation would once again result in the flooding of their lands. In our view, the consultant should have used the final rehabilitation figures proposed by the tribes in 1961—that is, $4 million for the Crow Creek Sioux tribe and $2.7 million for the Lower Brule Sioux tribe. While rehabilitation was the largest component of the tribes’ settlement proposals, we believe it should be considered separately from the comparison for damages because rehabilitation was not directly related to the damage caused by the dams. Funding for rehabilitation, which gained support in the late 1940s, was meant to improve the tribes’ social and economic development and prepare some of the tribes for the termination of federal supervision.
Funding for these rehabilitation programs came from both the government and from the tribes themselves. From the late 1940s through the early 1960s, the Congress considered several bills that would have provided individual tribes with rehabilitation funding. For example, between 1949 and 1950, the House passed seven bills for tribes totaling more than $47 million in authorizations for rehabilitation funding, and considered other bills, one of which would have provided $50 million to several Sioux tribes, including Crow Creek and Lower Brule. Owing to opposition from tribal groups, the termination policy began to lose support with the Congress in the late 1950s, and rehabilitation funding for individual tribes during this time was most often authorized by the Congress in association with compensation bills for dam projects on the Missouri River. However, the granting of rehabilitation funding for these tribes was inconsistent. Some tribes did not receive rehabilitation funding along with compensation for damages, while others did. (See table 8.) In our two prior reports, we suggested that, for the tribes of Fort Berthold, Standing Rock, and Cheyenne River, the Congress consider a range of possible compensation based on the current value of the difference between the final asking price of each tribe and the amount that it received. In calculating the current value, we used two different rates to establish a range of additional compensation. For the lower end of the range, we used the inflation rate to estimate the amount the tribes would need to equal the purchasing power of the difference. For the higher end, we used an interest rate to estimate the amount the tribes might have earned if they had invested the difference in Aaa corporate bonds as of the date of the settlement. The consultant did not follow this approach when he calculated the compensation estimates for the Crow Creek Sioux and Lower Brule Sioux tribes.
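The two-rate range construction described above can be sketched in a few lines. The dollar amounts, time span, and flat annual rates below are hypothetical placeholders; the actual calculations compounded each tribe's shortfall using the historical year-by-year inflation and Aaa corporate bond rates from the settlement date forward.

```python
# Sketch of the two-rate range calculation; all figures are hypothetical.

def compound(amount, annual_rates):
    """Compound a dollar amount through a sequence of annual rates."""
    for r in annual_rates:
        amount *= 1 + r
    return amount

# Hypothetical: a $1 million shortfall (final asking price minus the
# amount authorized) from a 1962 settlement, carried forward 35 years.
shortfall = 1_000_000
years = 35
inflation_rates = [0.04] * years   # assumed flat inflation rate
aaa_bond_rates = [0.07] * years    # assumed flat Aaa corporate bond rate

# Low end: the amount needed to match the shortfall's purchasing power.
low_end = compound(shortfall, inflation_rates)
# High end: what the shortfall might have earned if invested in Aaa bonds.
high_end = compound(shortfall, aaa_bond_rates)

print(f"Possible additional compensation: ${low_end:,.0f} to ${high_end:,.0f}")
```

In the actual analyses the per-year rates varied, so historical rate series, rather than the flat placeholder rates above, would be substituted for the two lists.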
Instead, he used the corporate bond rate to develop a single figure for each tribe, rather than a range. The consultant justified using only the corporate bond rate to calculate the compensation figures for the Crow Creek Sioux and Lower Brule Sioux tribes by pointing out that the Congress authorized additional compensation of $149.2 million for the tribes of Fort Berthold and $290.7 million for the Cheyenne River Sioux tribe in 1992 and 2000, respectively, by using our estimates of the high end of the range for these tribes. The consultant contended that if the Congress also uses the corporate bond rate for the Crow Creek Sioux and Lower Brule Sioux tribes to determine compensation, it would ensure parity with the amounts the tribes of Fort Berthold and the Cheyenne River Sioux received. However, the Congress has not always chosen to use the highest value in the ranges we estimated. For example, in the case of the Standing Rock Sioux tribe, the Congress chose to provide additional compensation of $90.6 million in 1992—an amount closer to the lower end of the range we estimated. Using the approach we followed in our prior reports, which was based on the tribes’ final asking prices, we found that the additional compensation the Crow Creek Sioux and Lower Brule Sioux tribes received in the 1990s was either at the high end or above the range of possible additional compensation. For both tribes, we calculated the difference between the final asking prices and the compensation authorized in 1958 and 1962. We then took the difference and adjusted it to account for the inflation rate and the Aaa corporate bond rate through either 1996 or 1997 to produce a possible range of additional compensation to compare it with the additional compensation the Congress authorized for the tribes in 1996 and 1997. 
For the Crow Creek Sioux tribe, we estimated that the difference adjusted to 1996 values for both dams would range from $6.5 million to $21.4 million (see table 9), compared with the $27.5 million the Congress authorized for the tribe in 1996. The $27.5 million in additional compensation already authorized for the Crow Creek Sioux tribe is therefore higher than the amount that we would have proposed in 1996 using our approach. For the Lower Brule Sioux tribe, we estimated that the difference adjusted to 1997 values for both dams would range from $12.2 million to $40.9 million (see table 10), compared with the $39.3 million the Congress authorized for the tribe in 1997. The $39.3 million falls toward the high end of the range that we would have proposed in 1997 using our approach. Our estimates of additional compensation for the two tribes vary significantly from the amounts calculated by the tribes’ consultant. Our estimated range for the two tribes combined is from about $18.7 million to $62.3 million. The consultant calculated an additional compensation figure for the two tribes of $292.3 million (in 2003 dollars)—that is, $105.9 million for the Crow Creek Sioux tribe and $186.4 million for the Lower Brule Sioux tribe—before subtracting the amounts received by the tribes in 1996 and 1997, respectively. There are two primary reasons for the difference between our additional compensation amounts and the consultant’s amounts. First, most of the difference is due to the different rehabilitation cost figures that were used. For the difference between the tribes’ asking prices for rehabilitation and the amounts they actually received, we used $901,450 and the consultant used about $7.3 million (in 1961 and 1957 dollars, respectively). Once the $901,450 is adjusted to account for inflation and interest earned through 1996 and 1997, it results in a range of additional compensation for rehabilitation for the two tribes combined of about $4.8 million to $15.1 million.
If the consultant’s rehabilitation figure of about $7.3 million is adjusted through 1996 and 1997, his total for the two tribes is $120.9 million, or more than $105 million above our high estimate. Second, our dollar values were adjusted to account for inflation and interest earned only through 1996 and 1997 to compare them with what the two tribes received in additional compensation at that time. The consultant, however, adjusted for interest earned up through 2003. In addition, he then incorrectly adjusted for the additional compensation the tribes were authorized in the 1990s. Specifically, the consultant subtracted the $27.5 million and $39.3 million authorized for the Crow Creek Sioux and Lower Brule Sioux tribes in 1996 and 1997, respectively, from his additional compensation totals without first making the different estimates comparable. Since these amounts were in 1996 and 1997 dollar values, versus the 2003 dollar values for his current calculations, it was incorrect to subtract one from the other without any adjustment. In our view, the consultant should have adjusted his current calculations through 1996 and 1997, depending on the tribe, and then should have subtracted the additional compensation provided the tribes at that time. If there was any remaining compensation due the tribes, the final step then would have been to adjust it to reflect 2003 dollar values. Using this approach, the additional compensation provided to the tribes in the 1990s would have been subtracted from comparable dollar values. The additional compensation already authorized for the Crow Creek Sioux and Lower Brule Sioux tribes in 1996 and 1997, respectively, is consistent with the additional compensation authorized for the other tribes on the Missouri River. Rather than bringing the Crow Creek Sioux and Lower Brule Sioux tribes into parity with the other tribes, the two bills under consideration in the 109th Congress—H.R. 109 and S. 374—would have the opposite effect. 
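The dollar-year ordering problem described above can be illustrated with a short sketch. The base amount, 1997 payment, and constant growth rate below are hypothetical placeholders, not figures from the report.

```python
# Hypothetical sketch of the ordering issue described above; all amounts
# and the constant annual rate are placeholders, not report figures.

def compound(amount, rate, years):
    """Grow amount at a constant annual rate for the given number of years."""
    return amount * (1.0 + rate) ** years

base_1962 = 2_000_000      # assumed difference, in 1962 dollars
payment_1997 = 4_000_000   # assumed additional compensation paid in 1997
rate = 0.06                # assumed constant annual adjustment rate

# Correct ordering: adjust the claim through 1997, subtract the 1997
# payment in matching dollars, then carry any remainder forward to 2003.
claim_1997 = compound(base_1962, rate, 1997 - 1962)
remainder_1997 = max(claim_1997 - payment_1997, 0.0)
remainder_2003 = compound(remainder_1997, rate, 2003 - 1997)

# Incorrect ordering: adjust the claim through 2003 and then subtract the
# unadjusted 1997 payment, mixing dollar values from different years.
claim_2003 = compound(base_1962, rate, 2003 - 1962)
mixed_dollars = claim_2003 - payment_1997  # overstates the remainder
```

The gap between the two results equals the 1997 payment compounded over the extra 6 years, which is why subtracting amounts stated in different dollar years always overstates any remaining compensation due.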
Providing a third round of compensation to the Crow Creek Sioux and Lower Brule Sioux tribes, in the amounts proposed in the bills, would catapult them ahead of the other tribes and set a precedent for the other tribes to seek a third round of compensation. Our analysis does not support the additional compensation amounts contained in H.R. 109 and S. 374. Notwithstanding the results of our analysis, the Congress will ultimately decide whether additional compensation should be provided and, if so, how much it should be. Our analysis will assist the Congress in this regard. Because the consultant’s analysis was the basis for the tribes’ additional compensation claims and the consultant had asserted that the additional compensation amounts were based on a methodology deemed appropriate by GAO, we chose to provide the tribes’ consultant with a draft of this report for review and comment. In commenting on the draft, the tribes’ consultant (1) acknowledged that he had made a calculation error in his analysis, (2) proposed a range of additional compensation based on four different alternatives, and (3) discussed the complex issues of “asking price” in the context of the particular set of facts for the Crow Creek Sioux and Lower Brule Sioux tribes. In addition, the consultant commented “…that there has been no uniform or consistent approach, method, formula, or criteria for providing additional compensation. . .” to the seven tribes affected by Pick-Sloan dam projects on the Missouri River. Specifically, the consultant pointed out that the Congress has provided additional compensation to four tribes based on a per-acre analysis, while only three tribes have received additional compensation within the ranges we calculated in our two prior reports. As a result, the consultant believes that there is a wide disparity in the total compensation that the seven tribes have received from the Congress. 
As discussed in detail below, we believe that our approach is reasonable, and we did not make any changes to the report based on the consultant’s comments. The tribes’ consultant provided written comments that are included in appendix V, along with our specific responses. To address the perceived disparity in the total compensation amounts provided by the Congress, the consultant proposed four different alternatives for calculating additional compensation for the Crow Creek Sioux and Lower Brule Sioux tribes: (1) on a per-acre basis compared with the Cheyenne River Sioux tribe, (2) the consultant’s original proposal (amended to correct for the calculation error), (3) on a per-acre basis compared with the Santee Sioux tribe, and (4) calculations based on using the tribes’ highest asking prices. We do not believe that either the consultant’s amended original proposal or the three new alternatives represent a sound approach for establishing the range of additional compensation. Our approach is to provide the Congress with a range of possible additional compensation based on the difference between the amount the tribes believed was warranted at the time of the taking and the final settlement amount. We then adjusted the differences using the inflation rate for the lower end of the range and the corporate bond rate for the higher end. The ranges of additional compensation in this report were calculated in exactly the same way as in our 1991 and 1998 reports, and we believe our approach is reasonable. In our view, trying to compare the total compensation for the tribes on a per-acre basis—which are two of the consultant’s proposed alternatives—does not take into account the differences in what each tribe lost.
For example, even if the individual resources such as timber, wildlife, and wild products would have all been valued the same for all of the tribes, if one tribe lost more of one resource than another, then their per-acre compensation values would be different. Also, about half of the payments to four of the tribes were for rehabilitation, which had no direct correlation to the acreage flooded by the dams, and the consultant did not make the different dollar amounts comparable before performing his per-acre calculations. The tribes’ consultant disagreed with our assumption that the tribes’ final asking prices were based on the most up-to-date and complete information and that they were more realistic than their initial asking prices. Specifically, the consultant noted that the tribes’ final asking prices “were made under conditions of extreme duress.” We agree with the consultant that the tribes were not willing sellers of their land at the initial price that the government offered for their land. However, we disagree that this factor invalidates the use of the tribes’ final asking prices. The drawn out negotiations for the Fort Randall Dam and the amounts of the tribes’ final asking prices do not support the conclusion that the tribes simply capitulated and accepted whatever the government offered. For example, for 12 of the 15 compensation components shown in tables 5, 6, and 7 of our report, the tribes’ final asking prices were equal to, or higher than, their initial settlement proposals. We used a clearly defined and consistent approach, whereas, in his analysis, the consultant selected only certain numbers from a variety of tribal settlement proposals without providing any justification. While the tribes’ consultant chose to use the Crow Creek Sioux tribes’ offer from May 1957, he did not use the Lower Brule Sioux tribes’ offer from the same time. 
Instead, the consultant chose to use the Lower Brule Sioux tribes’ initial offer from 3 years earlier—July 1954— without any explanation. Furthermore, rather than consistently using the Lower Brule Sioux tribes’ July 1954 offer, the consultant used the tribes’ rehabilitation offer from April 1957, again without any explanation. The tribes’ consultant correctly points out that only three of the seven tribes have received additional compensation consistent with the ranges calculated in our two prior reports. Until this report, the Congress had only asked us to review these three tribes’ additional compensation requests, and, each time, the Congress provided additional compensation within the ranges we calculated. Furthermore, our two prior reports dealt with the three highest tribal claims for additional compensation—all over $90 million—whereas, the four tribes that obtained additional compensation based on a per-acre calculation were all less than $40 million, and we were not asked to review those requests. As noted in this report, although the additional compensation already provided to the tribes in 1996 and 1997 was calculated on a per-acre basis, by coincidence, for the Lower Brule Sioux tribe it was within the range we would have proposed and for the Crow Creek Sioux tribe it was above our range. As such, should the Congress rely on our analysis in this report and not provide these two tribes a third round of compensation, then the additional compensation provided to five of the seven tribes would generally be within the ranges we have calculated, leaving only two tribes that would have had their additional compensation calculated based on a per-acre analysis and not analyzed by GAO. Accordingly, we believe our approach would provide more consistency among the tribes. 
It is important to note that both the consultant’s analysis and the two bills pending in the 109th Congress state that the additional compensation amounts for the Crow Creek Sioux and Lower Brule Sioux tribes are based on a methodology deemed appropriate by GAO. We do not believe our analysis supports the additional compensation claims. We recognize that compensation issues can be sensitive, complex, and controversial. Ultimately, it is up to the Congress to make a policy determination as to whether additional compensation should be provided and, if so, how much it should be. We amended our observations to reflect this reality. We are sending copies of this report to interested congressional committees, the Secretary of the Interior, the tribes’ consultant, the Crow Creek Sioux and Lower Brule Sioux tribes, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To assess the consultant’s methods and analysis for determining additional compensation for the Crow Creek Sioux and Lower Brule Sioux tribes as a result of the flooding of 38,000 acres of their land and resources by the installation of the Fort Randall and Big Bend Dams, we used standard economic principles and the analysis we conducted in our two prior reports on additional compensation. We met with the tribes’ consultant to determine how he used the method that we suggested the Congress adopt as the basis for granting additional compensation to other tribes and reviewed additional information he provided on how he arrived at his proposed compensation amounts.
In order to ensure that we obtained and reviewed all relevant data, we conducted a literature search for congressional, agency, and tribal documents at the National Archives and the Department of the Interior’s (Interior) library. We used original documents to learn about the negotiation process and to identify the appraised land prices and various proposed settlement amounts. As a result, we determined that these data were sufficiently reliable for purposes of this report. Specifically, from the National Archives, we reviewed legislative files containing proposed House and Senate bills, public laws enacted, House and Senate reports, and hearings held on compensation for the tribes. In addition, from Interior’s library, we obtained Missouri River Basin Investigations Unit documents to review information on early damage estimates as a result of installation of the Fort Randall Dam and on details regarding both informal and formal negotiations between the federal government and the two tribes. We also met with representatives of the two tribes on their reservations in South Dakota to (1) discuss the analysis, the actions taken with the compensation previously obtained, and plans for the additional compensation amounts requested and (2) review any records they might have on earlier compensation negotiations. The tribes, however, did not have any documentation on tribal discussions or decisions regarding either compensation negotiations or offers that took place in the 1950s and 1960s. We performed our work from October 2005 to April 2006 in accordance with generally accepted government auditing standards. January 4 and February 14: Tribal Parity Act reintroduced as H.R. 109 and S. 374. The following are our comments on the Crow Creek Sioux and Lower Brule Sioux tribes’ consultant’s letter dated April 27, 2006. 1.
The tribes’ consultant did not calculate a range of additional compensation as we suggested in our report. Our approach is to provide the Congress with a range of possible additional compensation based on the difference between the amounts the tribes believed were warranted at the time of the taking and the final settlement amounts. We then adjusted the differences using the inflation rate for the lower end of the range and the corporate bond rate for the higher end. In deciding not to calculate a low-end value using the inflation rate, the consultant stated that “…there is no precedent for Congress using the inflation rate as a basis for any additional compensation it has awarded to the seven Tribes since 1992.” While the consultant is correct in stating that the Congress has not provided any tribe with additional compensation at the lowest value in the ranges we have calculated, there is a precedent for the Congress providing an amount less than the highest value. In 1992, the Congress authorized $90.6 million in additional compensation for the Standing Rock Sioux tribe, which was toward the low end of the possible compensation range we calculated of $64.5 million to $170 million. Although the Congress did not select the lowest value, having a lower value provided the Congress with a range from which to select. We did not suggest that the consultant should propose a range of additional compensation using four different approaches. 2. Determining whether additional compensation is warranted is a policy decision for the Congress.
Nonetheless, if the Congress relies on our analysis in this report and does not provide a third round of compensation to the Crow Creek Sioux and Lower Brule Sioux tribes, the additional compensation provided to five of the seven tribes—the Cheyenne River Sioux tribe, the Crow Creek Sioux tribe, the Lower Brule Sioux tribe, the Standing Rock Sioux tribe, and the Three Affiliated Tribes of the Fort Berthold Reservation—would generally fall within the ranges we calculated using our approach, thereby leaving only two tribes—the Santee Sioux tribe and the Yankton Sioux tribe— that would have had their additional compensation calculated on a per- acre basis and not reviewed by GAO. As a result, we believe using our approach, which is based on the amounts that the tribes believed were warranted at the time of the taking, would provide more consistency among the tribes, rather than less. 3. The tribes’ consultant did not make the amounts from different years comparable before making his per-acre calculations. The consultant did not adjust the original compensation amounts from 1947 through 1962 before adding them with the additional compensation amounts from 1992 through 2002. As a result, any comparisons made between the compensation amounts of the Crow Creek Sioux and Lower Brule Sioux tribes and other tribes, such as the Cheyenne River Sioux tribe or the Santee Sioux tribe, would be inaccurate. For example, for the Lower Brule Sioux tribe, the consultant added three amounts from 1958, 1962, and 1997 for a total of $43.6 million, without first adjusting the individual amounts to constant dollars. More importantly, we do not believe that an aggregate per-acre comparison among the tribes is appropriate. We agree with the tribes’ consultant that the tribes all suffered similar damages, but similar does not mean exactly the same. 
Damages would have to be exactly the same among all tribes for there to be equal total compensation on a per-acre basis, and this was not the case. Products, such as buildings, timber, and wildlife, were valued differently depending on type, and some tribes lost more of one resource than other tribes. As a result, their per-acre compensation values would be different. Also, about half of the payments to four of the tribes were for rehabilitation that was not directly linked to the acreage flooded by the dams. 4. We disagree that the additional compensation authorized for the Cheyenne River Sioux tribe in 2000 had a “skewing” effect on the additional compensation provided to the four other tribes prior to that time. The additional compensation authorized for the Cheyenne River Sioux tribe fell within the range we calculated, as did the additional compensation authorized for the Three Affiliated Tribes of the Fort Berthold Reservation and the Standing Rock Sioux tribe. Our range was based on the amount the Cheyenne River Sioux tribe believed was warranted at the time of the taking. Furthermore, as our analysis in this report demonstrates, although the Crow Creek Sioux and Lower Brule Sioux tribes were provided with additional compensation in 1996 and 1997 based on a per-acre analysis, the amounts were consistent with, or higher than, the ranges we calculated in this report. 5. As the tribes’ consultant noted in his comments, he did not use the tribes’ highest offers in every case in his original analysis because he believed that some of those offers, such as the $16 million rehabilitation figure requested by the Lower Brule Sioux tribe, were skewed by special circumstances. However, the consultant uses these same highest asking prices in his fourth alternative, even though he believed them to be too unreasonable to include in his original analysis. 6.
The tribes’ consultant is correct in pointing out that we did not use the exact phrase “final asking price” in our two prior reports. However, the ranges we calculated in our 1991 and 1998 reports were based on the final asking prices of the tribes and their final settlements. We used the phrase “at the time of the taking” as a general phrase to denote the time period when the tribes were negotiating with the government for compensation for the damages caused by the dams. It is not intended to refer to a specific date. 7. We disagree with the tribes’ consultant that tribal members were forced to relocate without funds for moving expenses. The tribes did receive initial funds based on the Missouri River Basin Investigations Unit appraisals to help cover relocation expenses 3 years before they made what we refer to as their final asking prices in March 1958. In March 1955, the Crow Creek Sioux tribe received $399,313 and the Lower Brule Sioux tribe received $270,611 from the court, with the understanding that negotiations between the tribes, the U.S. Army Corps of Engineers, and Interior would continue until settlements were achieved. Tribal committees were formed to plan relocation activities with these funds. 8. We disagree with the tribes’ consultant regarding his characterization of the rehabilitation portion of the payment the tribes received. We state in this report that it should be considered separately from the compensation for the dams because it was not directly related to the damage caused by the dams. The tribes’ consultant states that “…the Congress has consistently demonstrated the understanding that funds for rehabilitation were directly linked to the damages caused by the dams.” We agree that funding for rehabilitation became intertwined with compensation for the dams, and we included rehabilitation in our analysis in this report, as shown in tables 9 and 10, as we did for the Cheyenne River Sioux tribe and the Standing Rock Sioux tribe.
However, we disagree that rehabilitation is directly linked to the damages caused by the dams for the following three reasons. First, other tribes not affected by dam projects were also provided with rehabilitation funding. Second, rehabilitation funding was intended to improve the economic and social conditions of all tribal members; it was not limited to only those members directly affected by the dams. Third, it was clear during the negotiations that the government did not consider rehabilitation funding to be compensation for the damages caused by the dams. In addition, in this report, as in our 1998 report, we show the breakout of each component in our analysis to provide the Congress with the most complete information. In addition to the individual named above, Jeffery D. Malcolm, Assistant Director; Greg Carroll; Timothy J. Guinane; Susanna Kuebler; and Carol Herrnstadt Shulman made key contributions to this report. Also contributing to the report were Omari Norman, Kim Raheb, and Jena Y. Sinkfield.

From 1946 to 1966, the government constructed the Fort Randall and Big Bend Dams as flood control projects on the Missouri River in South Dakota. The reservoirs created behind the dams flooded about 38,000 acres of the Crow Creek and Lower Brule Indian reservations. The tribes received compensation when the dams were built and additional compensation in the 1990s. The tribes are seeking a third round of compensation based on a consultant's analysis. The Congress provided additional compensation to other tribes after two prior GAO reports. For those reports, GAO found that one recommended approach to providing additional compensation would be to calculate the difference between the tribe's final asking price and the amount that was appropriated by the Congress, and then to adjust it using the inflation rate and an interest rate to reflect a range of current values. GAO was asked to assess whether the tribes' consultant followed the approach used in GAO's prior reports.
The additional compensation amounts calculated by the tribes' consultant are contained in H.R. 109 and S. 374. The tribes' consultant differed from the approach used in prior GAO reports by (1) not using the tribes' final asking prices as the starting point of the analysis and (2) not providing a range of additional compensation. First, in calculating additional compensation amounts, GAO used the tribes' final asking prices, recognizing that their final settlement position should be the most complete and realistic. In contrast, the consultant used selected figures from a variety of tribal settlement proposals. For example, for the rehabilitation component of the tribes' settlement proposals, the consultant used $13.1 million from proposals in 1957, rather than $6.7 million from the tribes' final rehabilitation proposals in 1961. Second, the tribes' consultant calculated only the highest additional compensation dollar value rather than providing the Congress with a range of possible additional compensation based on different adjustment factors, as in the earlier GAO reports. Based on calculations using the tribes' final asking prices, GAO's estimated range of additional compensation is generally comparable with what the tribes were authorized in the 1990s. By contrast, the consultant estimated about $106 million and $186 million for Crow Creek and Lower Brule, respectively (in 2003 dollars). There are two primary reasons for this difference. First, GAO used the tribes' final rehabilitation proposals from 1961, rather than the 1957 proposals used by the consultant. Second, GAO's dollar amounts were adjusted only through 1996 and 1997 to compare them directly with what the tribes received at that time. The consultant, however, adjusted for interest earned through 2003, before comparing it with the payments authorized in the 1990s. 
The additional compensation already authorized for the tribes in the 1990s is consistent with the additional compensation authorized for other tribes on the Missouri River. GAO's analysis does not support the additional compensation amounts contained in H.R. 109 and S. 374. |
General aviation encompasses all civil aviation except scheduled passenger and cargo operations (i.e., commercial) and excludes military operations. It includes air medical-ambulance operations, flight schools, corporate aviation, and privately owned aircraft. Altogether, more than 200,000 aircraft—from small aircraft with minimal payload capacities to business jets and large jets typically used by commercial airlines, such as the Boeing 747—operate at more than 19,000 facilities, including heliports. The sole common characteristic of general aviation operations is that flights are on demand rather than routinely scheduled. General aviation operations take place at more than 5,000 public use airports, almost all of which serve general aviation exclusively. However, general aviation operations may also take place alongside scheduled airline operations at larger commercial airports. TSA, part of the Department of Homeland Security (DHS), is the primary agency responsible for civil aviation security, which includes general aviation operations. TSA provides the general aviation community with guidance on threats and vulnerabilities, and enforces regulatory requirements for specific airports with general aviation operations. However, because of competing needs for commercial aviation security funding and the vastness and diversity of the general aviation network, the bulk of the responsibility for assessing and enhancing security at the general aviation airports falls on airport operators. In 2004, TSA issued voluntary Security Guidelines for General Aviation Airports. These guidelines are intended to provide general aviation airport owners, operators, and users with recommendations for security concepts, technology, and enhancements. In addition, airport operators are encouraged to perform a self-administered risk assessment of their airports based on a measurement tool provided by TSA. 
TSA recommends that general aviation airports use this tool to determine what security enhancements may be most appropriate to make given the airport’s location, number of based aircraft, runway length, and number of annual operations. Based on the results of these self-assessments, the operators can decide whether to implement the appropriate countermeasures suggested, such as fencing; perimeter controls; locks on aircraft, hangars, or both; closed-circuit television (CCTV); lighting; access control systems; and other security features. In addition to issuing suggested security guidelines, TSA has implemented security requirements that are typically related to an airport’s location and size of aircraft. For example, pilots flying to and from general aviation airports within Washington, D.C., airspace must follow security measures including background checks and adherence to specific security procedures. For general aviation flights to and from Ronald Reagan Washington National Airport, TSA officials also inspect crew members and passengers, including performing background checks, and their baggage. In addition, TSA requires private charter services using aircraft that either (1) have a maximum takeoff weight greater than 100,309 pounds (45,500 kilograms) or (2) have 61 or more passenger seats to implement a security program that includes passenger screening through metal detection devices, X-ray screening for carry-on and checked baggage, and hiring a certified passenger and baggage screening workforce. Individual operators are generally responsible for conducting these requirements rather than airport officials. In addition, TSA encourages the general aviation community and the public to be vigilant about general aviation security by suggesting specific security awareness and measures for reporting suspicious activity and securing aircraft and aircraft facilities. 
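The private charter thresholds described above can be expressed as a simple check. This is an illustrative sketch based only on the two numeric thresholds stated here, not TSA's regulatory text, and the function name is our own.

```python
# Illustrative check of the private charter thresholds described above:
# a maximum takeoff weight greater than 100,309 pounds, or 61 or more
# passenger seats, triggers the security program requirement. This is a
# simplified sketch, not TSA's actual regulatory language.

MAX_TAKEOFF_WEIGHT_LBS = 100_309
SEAT_THRESHOLD = 61

def requires_security_program(max_takeoff_weight_lbs, passenger_seats):
    """Return True if either threshold triggers the security program."""
    return (max_takeoff_weight_lbs > MAX_TAKEOFF_WEIGHT_LBS
            or passenger_seats >= SEAT_THRESHOLD)
```

Because the two conditions are joined by "or," an aircraft need exceed only one threshold: a 61-seat charter below the weight limit is covered, as is a heavier aircraft with fewer seats.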
Examples include aircraft with unusual modifications or activity; pilots appearing to be under the control of others; unfamiliar persons loitering around the field; suspicious aircraft lease or rental requests; anyone making threats; and unusual, suspicious activities or circumstances. The TSA program also advises aircraft operators to (1) always keep their aircraft locked, (2) refrain from leaving keys in unattended aircraft, (3) use secondary locks or aircraft disablers, and (4) lock hangars when they are unattended. The Implementing Recommendations of the 9/11 Commission Act of 2007 requires TSA to develop a standardized threat and vulnerability assessment program for general aviation airports and to implement a program to perform such assessments on a risk-managed basis at general aviation airports. From January through April 2010, TSA invited approximately 3,000 general aviation airport operators to complete its online General Aviation Airport Vulnerability Assessment Survey. The survey was intended to highlight the security conditions and vulnerabilities of the general aviation community. According to TSA, the results of the survey were calculated to discover the general strengths and weaknesses in the general aviation community, and to show an overall picture of general aviation security measures at a national level and by regions. In addition, TSA stated that the survey results may be used to show a need to develop grants or other means of funding to improve general aviation security measures. The 13 airports we visited had multiple security measures in place to protect against unauthorized access. The 3 airports that handle commercial flights in addition to general aviation flights (airports 11, 12, and 13 in figure 1) had implemented nearly all of the security measures we assessed. These 3 airports are required to follow TSA regulations because of their commercial flights. 
However, we identified potential vulnerabilities at the 10 general aviation airports that could allow unauthorized access to aircraft or airport grounds, facilities, or equipment. These vulnerabilities include security measures discussed specifically in TSA’s 2004 Security Guidelines for General Aviation Airports, which offered suggestions for general aviation airports to voluntarily enhance their security. Security measures varied across the airports we visited, as well as by the type of security measure. Of the 10 general aviation airports, nearly all had in place or partially in place the following security measures: perimeter fencing or natural barriers, lighting around hangars, aircraft and hangars locked and secured, and CCTV cameras in areas related to unauthorized access. None of the 10 general aviation airports had perimeter lighting in place, and only 1 of the general aviation airports had an intrusion detection system, as discussed below. Figure 1 shows the security measures we observed during our on-site assessments at 13 selected airports. In their technical comments, officials from some airports mentioned security measures that were implemented after we conducted our assessments or that we did not observe in place during our assessments; as such, we were unable to verify that these security measures are in place at the airports in question. For example, an official from airport 1 informed us that the airport has implemented sign-in and sign-out procedures for tracking transient pilots. In addition, an official from airport 9 stated that law enforcement officers provide training on aircraft and hangar security to operators and tenants at the airport. Fencing. All but one airport had complete or partial perimeter fencing or was protected in part by a natural barrier, such as a body of water. 
TSA’s guidelines suggest that fencing, natural barriers, or other physical barriers can be used to deter and delay the access of unauthorized persons onto sensitive areas of airports—such as terminal areas, aircraft storage, and maintenance areas—and can also serve as a visual and psychological deterrent as well as a physical barrier. One airport had no perimeter fencing in place. While we did not seek to systematically test the effectiveness of security measures in place at all the airports we visited, at this airport our investigators were able to freely drive onto the runway and bring their car next to a jet aircraft. They were not stopped or approached by airport security, management, or other personnel, or by any other individuals, while they approached and drove around near the aircraft. According to an official from this airport, it is one of many open field airports located in the United States. He added that pilot vigilance plays a key role in the airport’s security, as pilots are responsible for maintaining awareness of suspicious individuals on airport grounds. Figures 2 and 3 show our investigators driving their car onto the runway of this airport and approaching the jet aircraft mentioned above. Although 12 of the 13 airports had full or partial perimeter fencing, or other barriers in place, the fencing at 6 airports was partially bordered by bushes or trees, partially obstructed from view, or located next to a parking lot. TSA’s suggested guidelines caution that such factors may limit the effectiveness of perimeter fencing. For example, bushes or other growth can obstruct surveillance of the surrounding areas, and a parking lot may enable someone to use a vehicle to crash through the fence. According to TSA’s suggested security guidelines, such incidents have occurred. Figures 4 and 5 show perimeter fencing located next to trees or a parking lot. Lighting. 
All 13 airports we visited had lighting around their hangars, and all but 3 airports had lighting at designated access points. Ten of the airports we visited—the 10 airports that handle general aviation but not commercial aviation—did not have lighting along their outer perimeters. TSA’s suggested guidelines note the effectiveness of lighting in deterring and detecting individuals seeking unauthorized access to airports, but caution that such lighting should not interfere with aircraft operations. The 3 airports we visited that did have perimeter lighting in place serve a combination of commercial and general aviation traffic. Perimeter lighting provides both a real and psychological deterrent, and allows security personnel to maintain visual-assessment capability during darkness. At several airports we visited, airport managers or other officials stated that streetlights in the neighborhoods surrounding their airports—lights that are not operated or controlled by airport management—provided lighting of the perimeter. Secured aircraft. All 13 airports we visited had measures at least partially in place so that aircraft and hangars were locked and secured. The 3 airports that serve a combination of commercial and general aviation traffic all had these measures in place. At several general aviation airports, we found that keeping aircraft (4 of 10), hangars (7 of 10), or both locked and secured was the responsibility of individual aircraft or facility operators, owners, or tenants rather than airport management. Two of the airports we visited are located in New Jersey; at these airports, officials informed us that state law requires all aircraft to be secured through the use of two locks. TSA’s suggested guidelines note that securing aircraft is the most basic method of enhancing airport security, and that employing multiple methods of securing aircraft makes it more difficult for unauthorized individuals to gain access to aircraft. On-site security. 
While most of the airports we visited had on-site law enforcement or other security—such as private security guards—in place, several airports either had no on-site security at all or had on-site security present only during certain times of day, usually in the late evening and early morning. However, the 3 airports we visited that serve a combination of commercial and general aviation traffic all had this measure in place. Officials from several airports we visited stated that law enforcement officers conduct regular patrols of their airports or respond to emergencies within 3 to 5 minutes; however, these law enforcement officers are not on-site at these airports at all times. The presence of on-site security helps to prevent or impede attempts at unauthorized access, and could include inspection of vital perimeter and access points. TSA’s guidelines suggest that airports consider having local law enforcement officers regularly or randomly patrol ramps and aircraft hangar areas, potentially with increased patrols during periods of heightened security. Detecting intruders. Nearly all of the airports we visited—12 of 13—had CCTV cameras installed to monitor for unauthorized access; at 2 of these 12 airports, the CCTV cameras were monitored by individual operators. At one airport, the CCTV cameras were aimed at the administration building and other areas, but not at the perimeter or designated access points. Most of the airports we visited (9 of 13) lacked an intrusion detection system, which may consist of building alarms, CCTV monitoring, or both. TSA guidance states that such systems can replace the need for physical security personnel to patrol an entire facility or perimeter. For example, if an intrusion is detected, the system administrator could notify police, airport management, and other officials. The 3 airports that serve a combination of commercial and general aviation traffic all had CCTV cameras and intrusion detection systems in place. 
At the time of our visit, an official from airport 4 stated that his airport would soon have an intrusion detection system. Designated access point controls. Eleven of the 13 airports we visited had controls in place or partially in place at designated access points. All 3 airports that serve commercial and general aviation flights had designated access controls in place. TSA’s suggested guidelines note that access point controls should be able to differentiate between an authorized and an unauthorized user, and may be the determining factor in the overall effectiveness of perimeter security in the area of the access point. The airport mentioned above without any perimeter fencing effectively has no access point controls, aside from gates being closed overnight, as was demonstrated when our investigators drove onto the runway unchallenged. An official from this airport informed us that there are gates at the main access points, which are shut when security personnel are on-site from 10:00 p.m. to 6:00 a.m. Another airport had access gates, but they open for all visitors through the use of motion detectors. According to an official from this airport, the motion-controlled access gates allow individuals to access the airport while keeping wildlife out. At a third airport, there were vehicle access controls that required a code to enter the airport, but there were no pedestrian access controls to prevent individuals from entering onto the ramp area of the airport. An official from this airport told us that individual operators are primarily responsible for controlling those access points. Effective access controls at dedicated vehicle and pedestrian access points help to detect threats and to reduce the possibility that unauthorized individuals will gain access to airports or aircraft. Screening. 
Most of the airports we visited did not implement physical screening of passengers and their baggage (8 of 13) or of packages and cargo (11 of 13) on general aviation flights. However, officials at multiple airports told us that pilots typically are familiar with their passengers and may escort them to the aircraft. TSA’s suggested guidelines related to passengers on general aviation flights state that prior to boarding, the pilot in command should ensure that the identity of all occupants is verified, all occupants are aboard at the invitation of the owner or operator, and all baggage and cargo is known to the occupants. Further, TSA notes that passengers on general aviation flights are generally better known to airport personnel and aircraft operators than most passengers on commercial flights. Two of the 3 airports providing combined commercial and general aviation services implement screening of passengers and their baggage and of cargo and packages. At the third airport, although passenger and baggage screening is conducted, airport officials stated that because they do not perform significant handling of cargo and packages, they do not screen these items. According to airport officials, several incidents of unauthorized access have occurred within approximately the past 10 years at three of the airports we visited. One airport provided documentation detailing two incidents. According to a local police report supplied by airport management and information provided by an airport official, in June 2002 an airline security guard observed a suspicious individual outside the airport’s perimeter, near a hangar being constructed. When airport security personnel spotted the individual, he jumped over the perimeter fence onto the airport grounds, and fled into a wooded area covering parts of the perimeter. Local police were called but could not locate the individual after an extensive search. 
In a 2004 incident, an intoxicated man drove his car onto airport grounds and down a taxiway at high speeds before airport authorities and law enforcement officials apprehended him. While the airport had vehicle access controls, the driver circumvented the controls by following closely behind an authorized vehicle that entered the airport through a gate. Neither of these incidents involved unauthorized individuals accessing aircraft. According to an official from this airport, corrective measures were put in place after each incident. The 2002 incident assisted airport management in developing new security procedures and policies, and the 2004 incident resulted in security training related to vehicle access point controls, among other improvements. Officials from two other airports described incidents of unauthorized access but did not provide documentation. One airport had two incidents in which aircraft were stolen or removed from the airport without approval: one aircraft was flown to another city in the same state by a teen who knew the combination to the locked hangar in which the aircraft was stored, and the second aircraft was recovered in Mexico. According to an official from this airport, no corrective actions were taken in response to the incident with the teen because he was well known to the aircraft owner and had actually received the combination to the lock from the aircraft owner. The airport also did not implement any corrective actions in response to the incident in which a stolen aircraft was flown to Mexico. However, the airport official stated that the absence of additional aircraft thefts since this incident demonstrates the effectiveness of the airport’s existing security measures. At a second airport, unauthorized individuals drove two Corvettes onto the taxiway after obtaining the security code for the vehicle access gate. 
An official from this airport informed us that the airport requested that local police conduct more frequent patrols in response to this incident. Officials from 7 of the 13 airports indicated that there were no incidents of unauthorized access at their airports within the past 10 years. We did not receive information about incidents of unauthorized access from officials at the 3 airports with both commercial and general aviation operations. We did not pursue this inquiry because it was not our primary objective. We met with TSA officials in January 2011 to brief them on the results of our assessments. These officials generally agreed with our findings. According to TSA officials, improvements in general aviation security as a result of TSA’s vulnerability assessment surveys will need to be narrowly focused on security measures that can be implemented at a large number of airports yet still prove effective, given the limited resources that may be made available. In written comments on our report, DHS generally concurred with the overall content and results of our report and indicated that TSA will work in partnership with the general aviation community to support their efforts to address the issues we identified. However, DHS noted TSA security requirements that are not discussed in the Background section of our report. Specifically, TSA requires certain operators of aircraft weighing over 12,500 pounds maximum takeoff weight, based on the type of operation, to adopt a security program and perform security measures, such as checking passenger names against the No-Fly and Selectee Lists, designating security coordinators, and having crewmembers undergo security threat assessments. 
While our report focused on the physical security measures in place at the specific airports we visited and was not intended to include a comprehensive discussion of all TSA general aviation security initiatives, we acknowledge that TSA has additional security initiatives in place beyond those discussed in our report. DHS also stated that TSA is in the process of issuing a rulemaking for additional security requirements for large general aviation aircraft. According to DHS, TSA expects the release of this rulemaking to further enhance aviation security and codify many of the best practices already implemented by the general aviation industry. In addition, DHS stated that while most airports would readily implement the security measures recommended by TSA, they are unable to put additional security measures in place primarily because of a lack of funding. DHS comments are reprinted in appendix III. As mentioned above, we provided officials from all 13 airports an opportunity to comment on our findings as they related to their specific airports. As appropriate, we incorporated their technical comments into our report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security, the Assistant Secretary of the Transportation Security Administration, selected congressional committees, and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are provided in appendix IV. 
To determine what physical security measures selected airports with general aviation operations have to prevent unauthorized access, we performed on-site assessments at a nonrepresentative selection of 13 airports that exhibit at least two of the following characteristics that potentially affect an airport’s security posture under TSA guidelines: (1) airport is a public use airport, (2) airport location is within 30 nautical miles of a mass population center of at least 1 million people, (3) based aircraft over 12,500 pounds are located at the airport, (4) airport has at least one runway with a length of at least 5,000 feet, and (5) over 50,000 annual aircraft operations—takeoffs and landings—occur at the airport. We selected airports from a variety of geographic locations and in clusters that would allow us to combine multiple on-site assessments on each visit, and that represented a range in the number of annual aircraft operations. Our selection also includes 3 airports that have both commercial and general aviation operations and that operate under Transportation Security Administration (TSA) security requirements. We traveled to each of the 13 airports we selected and conducted an assessment of the physical security in place to prevent unauthorized access to the airports and aircraft located at the airports. We assessed each airport’s security measures against TSA’s 2004 voluntary security guidelines and other criteria based on our expertise in performing security assessments and a review of industry guidance. The security measures we assessed are primarily focused on outer airport perimeter security and curbside-to-planeside security. Physical security is just one aspect of overall security provisions. For the purposes of this report, we defined physical security as the combination of operational and security equipment, personnel, and procedures used to prevent unauthorized individuals from gaining access to aircraft or airport facilities and grounds. 
We did not test the effectiveness of the security, nor did we assess measures not directly related to physical security, such as pilot background checks or other intelligence-gathering activities. Although we focused on measures implemented by airports and therefore under direct control of airport management, we gave partial credit when individual aircraft or facility operators, owners, or tenants were responsible for implementing certain security measures. At each airport we visited, we interviewed airport management and other officials with knowledge of the security measures. We conducted our on-site assessments with advance notice to airport officials; we did not conduct any undercover testing on this engagement. During our visits, we also obtained photographic evidence of security measures; requested documentation related to any specific incidents of unauthorized access at each airport; and attempted to obtain information on each airport’s procedures, if any, for screening passengers, their carry-on items, and packages or cargo by requesting documentation pertaining to their security procedures and measures. Since TSA does not require the implementation of security measures for airports with only general aviation operations, our assessments are not meant to imply that any of the general aviation airports we visited have failed to implement required security measures. Rather, our assessments are meant to illustrate the variation in security conditions at the selected general aviation airports. We acknowledge that the specific security measures we selected for the purpose of our assessments are not the only security measures that general aviation airports can implement to attempt to prevent unauthorized access. 
For example, a state government can also impose requirements on general aviation operations within its jurisdiction; however, the examination of specific state laws, regulations, or other requirements applicable to general aviation operations was not part of our methodology. Moreover, fixed-base operators at these 13 airports may have additional security measures in place to prevent unauthorized access that we did not observe during our visits. We generally did not attempt to interview officials from individual operators. We did not test the effectiveness of the security measures that we found in place at the airports we visited. The results of our assessments cannot be projected to all general aviation airports nationwide. We received technical comments from officials representing the 13 airports we visited and incorporated these comments into our report as appropriate. We conducted work for this engagement from April 2010 to May 2011 in accordance with standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. To perform our security assessment of general aviation airports, we identified 14 key security measures that we determined would help airports to protect against the risk of unauthorized access. The security measures we assessed are primarily focused on outer airport perimeter security and curbside-to-planeside security. We based their selection on our expertise in performing security assessments, a review of security features described in TSA’s 2004 Security Guidelines for General Aviation Airports, and a review of industry guidance. A strong physical security system uses layers of security to deter, detect, delay, and deny intruders: Deter. Physical security measures that deter an intruder are intended to reduce the intruder’s perception that an attack will be successful—an armed guard posted at airport access gates, for example. Detect. Measures that detect an intruder could include video cameras and alarm systems. 
They could also include roving guard patrols. Delay. Measures that delay an intruder increase the opportunity for a successful security response. These measures include barriers such as perimeter fences. Deny. Measures that can deny an intruder include vehicle and pedestrian screening that only permits authorized individuals to access sensitive areas of the airport. Some security measures serve multiple purposes. For example, a perimeter fence is a basic security feature that can deter, delay, and deny intruders. However, a perimeter fence on its own will not stop a determined intruder. This is why, in practice, layers of security should be integrated in order to provide the strongest protection. Thus, a perimeter fence should be combined with an intrusion detection system that would alert security officials if the perimeter has been breached. A strong system would then tie the intrusion detection alarm to the closed-circuit television (CCTV) network, allowing security officers to immediately identify intruders. Table 1 shows the security measures we focused on during our assessment work. In addition to the contact named above, the following staff members made significant contributions to this report: Gregory D. Kutz, Director; Cindy Brown-Barnes, John Cooney, and Andy O’Connell, Assistant Directors; John R. Ahern; Christopher W. Backley; Betsy Isom; Maria Kabiling; Barbara Lewis; Olivia Lopez; Steve Martin; Flavio J. Martinez; George Ogilvie; Barry Shillito; and Tim Walker.

General aviation accounts for three-quarters of U.S. air traffic, from small propeller planes to large jets, operating among nearly 19,000 airports. While most security operations are left to private airport operators, the Transportation Security Administration (TSA), part of the Department of Homeland Security (DHS), provides guidance on threats and vulnerabilities. In 2004, TSA issued suggested security enhancements that airports could implement voluntarily. 
Unlike commercial airports, in most cases general aviation airports are not required to implement specific security measures. GAO was asked to perform onsite assessments at selected airports with general aviation operations to determine what physical security measures they have to prevent unauthorized access. With advance notice, GAO investigators overtly visited a nonrepresentative selection of 13 airports, based on TSA-determined risk factors. Three of the airports also serve commercial aviation and are therefore subject to TSA security regulations. Using TSA's voluntary recommendations and GAO investigators' security expertise, GAO determined whether certain security measures were in place. GAO also requested documentation of incidents of unauthorized access. Results of GAO's assessments cannot be projected to all general aviation airports and are not meant to imply that the airports failed to implement required security measures. The 13 airports GAO visited had multiple security measures in place to protect against unauthorized access, although the specific measures and potential vulnerabilities varied across the airports. The 3 airports also supporting commercial aviation had generally implemented all the security measures GAO assessed, whereas GAO identified potential vulnerabilities at most of the 10 general aviation airports that could allow unauthorized access to aircraft or airport grounds, facilities, or equipment. For example, 12 of the 13 airports had perimeter fencing or natural barriers as suggested by TSA; but at 6 of the airports fencing was partially bordered by bushes or trees or located next to a parking lot, which can obstruct surveillance or allow someone to scale or topple the fence. GAO found that none of the 10 general aviation airports had lighting along their perimeters. Perimeter lighting provides both a real and psychological deterrent, and allows security personnel to maintain visual assessment during darkness. 
However, officials at several airports stated that neighborhood street lights provided perimeter lighting, and all 13 airports had lighting around their hangars. The 10 general aviation airports' use of intrusion monitoring varied, with closed-circuit TV (CCTV) cameras and onsite law enforcement being more prevalent than an intrusion detection system, which can consist of multiple monitors including building alarms and CCTV. TSA guidance states that such systems can reduce or replace the need for physical security personnel to patrol an entire facility or perimeter. According to airport officials, several incidents of unauthorized access have occurred within approximately the past 10 years at three of the airports, though they were unable to provide documentation in all cases. Three incidents did not involve access to aircraft, but rather to airport grounds. In separate incidents, two airplanes were stolen or taken from one airport but later recovered. Airport officials informed GAO that they took corrective actions in response to these incidents as appropriate. DHS generally concurred with GAO's findings and indicated that TSA will work in partnership with the general aviation community to address vulnerabilities. DHS also noted that a lack of funding will be a challenge for most airports. GAO shared its findings with officials at the 13 airports it visited and incorporated their comments as appropriate. |
The public faces a risk that critical services could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, airline flights grounded, and national defense affected. The many interdependencies that exist among governments and within key economic sectors could cause a single failure to have adverse repercussions. While managers in the government and the private sector are taking many actions to mitigate these risks, a significant amount of work remains, and time frames are unrelenting. The federal government is extremely vulnerable to the Year 2000 issue due to its widespread dependence on computer systems to process financial transactions, deliver vital public services, and carry out its operations. This challenge is made more difficult by the age and poor documentation of the government’s existing systems and its lackluster track record in modernizing systems to deliver expected improvements and meet promised deadlines. Unless this issue is successfully addressed, serious consequences could ensue. For example: Unless the Federal Aviation Administration (FAA) takes much more decisive action, there could be grounded or delayed flights, degraded safety, customer inconvenience, and increased airline costs. Payments to veterans with service-connected disabilities could be severely delayed if the system that issues them either halts or produces checks so erroneous that it must be shut down and checks processed manually. The military services could find it extremely difficult to efficiently and effectively equip and sustain their forces around the world. Federal systems used to track student loans could produce erroneous information on loan status, such as indicating that a paid loan was in default. Internal Revenue Service tax systems could be unable to process returns, thereby jeopardizing revenue collection and delaying refunds. 
The Social Security Administration process to provide benefits to disabled persons could be disrupted if interfaces with state systems fail. In addition, the year 2000 also could cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years that contain embedded computer systems to control, monitor, or assist in operations. For example, heating and air conditioning units could stop functioning properly and card-entry security systems could cease to operate. Year 2000-related problems have already been identified. For example, an automated Defense Logistics Agency system erroneously deactivated 90,000 inventoried items as the result of an incorrect date calculation. According to the agency, if the problem had not been corrected (which took 400 work hours), the impact would have seriously hampered its mission to deliver materiel in a timely manner. In another case, the Department of Defense’s Global Command and Control System, which is used to generate a common operating picture of the battlefield for planning, executing, and managing military operations, failed testing when the date was rolled over to the year 2000. Our reviews of federal agency Year 2000 programs found uneven progress. Some agencies are significantly behind schedule and are at high risk that they will not fix their systems in time. Other agencies have made progress, although risks remain and a great deal more work is needed. Our reports contained numerous recommendations, which the agencies have almost universally agreed to implement. Among them were the need to complete inventories of systems, document data exchange agreements, and develop contingency plans. Audit offices of some states also have identified significant Year 2000 concerns. Risks include the potential that systems supporting benefit programs, motor vehicle records, and criminal records (i.e., prisoner release or parole eligibility determinations) may be adversely affected. 
These audit offices have made recommendations including the need for increased oversight, Year 2000 project plans, contingency plans, and personnel recruitment and retention strategies. Data exchanges between the federal government and the states are also critical to ensuring that billions of dollars of benefits payments are made to millions of recipients. Consequently, in October 1997 the Commonwealth of Pennsylvania hosted the first State/Federal Chief Information Officer (CIO) Summit. Participants agreed to (1) use a four-digit contiguous computer standard for data exchanges, (2) establish a national policy group, and (3) create a joint state/federal working group. America’s infrastructures are a complex array of public and private enterprises with many interdependencies at all levels. Key economic sectors that could be seriously affected if their systems are not Year 2000 compliant are information and telecommunications; banking and finance; health, safety, and emergency services; transportation; utilities; and manufacturing and small business. The information and telecommunications infrastructure is especially important because it (1) enables the electronic transfer of funds, (2) is essential to the service economy, manufacturing, and efficient delivery of raw materials and finished goods, and (3) is basic to responsive emergency services. Illustrations of Year 2000 risks follow. According to the Basle Committee on Banking Supervision—an international committee of banking supervisory authorities—failure to address the Year 2000 issue would cause banking institutions to experience operational problems or even bankruptcy. Moreover, the Chair of the Federal Financial Institutions Examination Council, a U.S. 
interagency council composed of federal bank, credit union, and thrift institution regulators, stated that banking is one of America’s most information-intensive businesses and that any malfunctions caused by the century date change could affect a bank’s ability to meet its obligations. He also stated that of equal concern are problems that customers may experience that could prevent them from meeting their obligations to banks and that these problems, if not addressed, could have repercussions throughout the nation’s economy. According to the International Organization of Securities Commissions, the Year 2000 presents a serious challenge to the world’s financial markets. Because they are highly interconnected, a disruption in one segment can spread quickly to others. FAA recently met with representatives of airlines, aircraft manufacturers, airports, fuel suppliers, telecommunications providers, and industry associations to discuss the Year 2000 issue. Participants raised the concern that their own Year 2000 compliance would be irrelevant if FAA were not compliant because of the many system interdependencies. Representatives went on to say that unless FAA were substantially Year 2000 compliant on January 1, 2000, flights would not get off the ground and that extended delays would be an economic disaster. Another risk associated with the transportation sector was described by the Federal Highway Administration, which stated that highway safety could be severely compromised because of potential Year 2000 problems in operational transportation systems. For example, date-dependent signal timing patterns could be incorrectly implemented at highway intersections if traffic signal systems run by state and local governments do not process four-digit years correctly. One risk associated with the utility sector is the potential loss of electrical power. 
For example, Nuclear Regulatory Commission staff believe that safety-related safe shutdown systems will function but that a worst-case scenario could occur in which Year 2000 failures in several nonsafety-related systems could cause a plant to shut down, resulting in the loss of off-site power and complications in tracking post-shutdown plant status and recovery. With respect to the health, safety, and emergency services sector, according to the Department of Health and Human Services, the Year 2000 issue holds serious implications for the nation’s health care providers and researchers. Medical devices and scientific laboratory equipment may experience problems beginning January 1, 2000, if the computer systems, software applications, or embedded chips used in these devices contain two-digit fields for year representation. In addition, according to the Gartner Group, health care is substantially behind other industries in Year 2000 compliance, and it predicts that at least 10 percent of mission-critical systems in this industry will fail because of noncompliance. One of the largest, and largely unknown, risks relates to the global nature of the problem. With the advent of electronic communication and international commerce, the United States and the rest of the world have become critically dependent on computers. However, there are indications of Year 2000 readiness problems in the international arena. In September 1997, the Gartner Group surveyed 2,400 companies in 17 countries and concluded that “Thirty percent of all companies have not started dealing with the year 2000 problem.” Although there are many national and international risks related to the year 2000, our limited review of these key sectors found a number of private-sector organizations that have raised awareness and provided advice.
For example: The Securities Industry Association established a Year 2000 committee in 1995 to promote awareness and since then has established other committees to address key issues, such as testing. The Electric Power Research Institute sponsored a conference in 1997 with utility professionals to explore the Year 2000 issue in embedded systems. Representatives of several oil and gas companies formed a Year 2000 energy industry group, which meets regularly to discuss the problem. The International Air Transport Association organized seminars and briefings for many segments of the airline industry. In addition, information technology industry associations, such as the Information Technology Association of America, have published newsletters, issued guidance, and held seminars to focus information technology users on the Year 2000 problem. As 2000 approaches and the scope of the problem becomes clearer, the federal government’s actions have intensified, at the urging of the Congress and others. The amount of attention devoted to this issue has increased in the last year, culminating with the issuance of a February 4, 1998, executive order establishing the President’s Council on Year 2000 Conversion. The Council Chair is to oversee federal agency Year 2000 efforts as well as act as spokesman in national and international forums, coordinate with state and local governments, promote appropriate federal roles with respect to private-sector activities, and report to the President on a quarterly basis. This increased attention could help minimize the disruption to the nation as the millennium approaches. In particular, the President’s Council on Year 2000 Conversion can initiate additional actions needed to mitigate risks and uncertainties. These include ensuring that the government’s highest priority systems are corrected and that contingency plans are developed across government.
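The defect driving the risks described throughout this statement, two-digit year fields in date arithmetic, and the "windowing" repair that many remediation teams applied can be illustrated with a minimal sketch. This code is not drawn from any agency system; the pivot year of 1950 is a hypothetical choice.

```python
# Minimal illustration of the two-digit-year defect and a common repair.
# Hypothetical logic, not from any system discussed in this statement.

def naive_elapsed(start_yy: int, current_yy: int) -> int:
    """Pre-remediation arithmetic on two-digit years."""
    return current_yy - start_yy

def windowed_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year with a fixed window: values at or above
    the pivot are read as 19xx, values below it as 20xx."""
    return 1900 + yy if yy >= pivot else 2000 + yy

# A record dated '65 (1965) processed in year '00 (2000):
print(naive_elapsed(65, 0))                  # -65 years: the failure mode
print(windowed_year(0) - windowed_year(65))  # 35 years: correct
```

Windowing was only one remediation strategy; full expansion to four-digit years, as the State/Federal CIO Summit participants agreed to use for data exchanges, avoids the pivot-year assumption entirely.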
Agencies have taken longer to complete the awareness and assessment phases of their Year 2000 programs than is recommended. This leaves less time for critical renovation, validation, and implementation phases. For example, the Air Force has used over 45 percent of its available time completing the awareness and assessment phases, while the Gartner Group recommends that no more than about a quarter of an organization’s Year 2000 effort should be spent on these phases. Consequently, priority-setting is essential. According to OMB’s latest report, as of February 15, 1998, only about 35 percent of federal agencies’ mission-critical systems were considered to be Year 2000 compliant. This leaves over 3,500 mission-critical systems, as well as thousands of nonmission-critical systems, still to be repaired, and over 1,100 systems to be replaced. It is unlikely that agencies can complete this vast amount of work in time. Accordingly, it is critical that the executive branch identify those systems that are of the highest priority. These include those that, if not corrected, could most seriously threaten health and safety, the financial well-being of American citizens, national security, or the economy. Agencies must also ensure that their mission-critical systems can properly exchange data with other systems and are protected from errors that can be introduced by external systems. For example, agencies that administer key federal benefits payment programs, such as the Department of Veterans Affairs, must exchange data with the Department of the Treasury, which, in turn, interfaces with financial institutions, to ensure that beneficiary checks are issued. As a result, completing end-to-end testing for mission-critical systems is essential. OMB’s reports on agency progress do not fully and accurately reflect the federal government’s progress toward achieving Year 2000 compliance because not all agencies are required to report and OMB’s reporting requirements are incomplete. 
For example: OMB had not, until recently, required independent agencies to submit quarterly reports. Accordingly, the status of these agencies’ Year 2000 programs has not been monitored centrally. On March 9, 1998, OMB asked 31 independent agencies, including the Securities and Exchange Commission and the Pension Benefit Guaranty Corporation, to report on their progress in fixing the Year 2000 problem by April 30, 1998. OMB plans to include a summary of those responses in its next quarterly report to the Congress. However, unlike its quarterly reporting requirement for the major departments and agencies, OMB does not plan to request the independent agencies to report again until next year. Since the independent agencies will not be reporting again until April 1999, it will be difficult for OMB to be in a position to address any major problems. Agencies are required to report their progress in repairing noncompliant systems but are not required to report on their progress in implementing systems to replace noncompliant systems, unless the replacement effort is behind schedule by 2 months or more. Because federal agencies have a poor history of delivering new system capabilities on time, it is essential to know agencies’ progress in implementing replacement systems. OMB’s guidance does not specify what steps must be taken to complete each phase of a Year 2000 program (i.e., assessment, renovation, validation, and implementation). Without such guidance, agencies may report that they have completed a phase when they have not. Our enterprise guide provides information on the key tasks that should be performed within each phase. Mr. Chairman, in your December 1997 letter to OMB, you expressed similar concerns that OMB reports be more comprehensive and reliable. In January 1998, OMB asked agencies to describe their contingency planning activities in their February 1998 quarterly reports.
These instructions stated that contingency plans should be established for mission-critical systems that are not expected to be implemented by March 1999, or for mission-critical systems that have been reported as 2 months or more behind schedule. Accordingly, in their February 1998 quarterly reports, several agencies reported that they planned to develop contingency plans only if they fall behind schedule in completing their Year 2000 fixes. Agencies that develop contingency plans only for systems currently behind schedule, however, are not addressing the need to ensure the continuity of a minimal level of core business operations in the event of unforeseen failures. As a result, when unpredicted failures occur, agencies will not have well-defined responses and may not have enough time to develop and test effective contingency plans. Contingency plans should be formulated to respond to two types of failures: those that can be predicted (e.g., system renovations that are already far behind schedule) and those that are unforeseen (e.g., a system that fails despite having been certified as Year 2000 compliant or a system that cannot be corrected by January 1, 2000, despite appearing to be on schedule today). Moreover, contingency plans that focus only on agency systems are inadequate. Federal agencies depend on data provided by their business partners as well as on services provided by the public infrastructure. One weak link anywhere in the chain of critical dependencies can cause major disruptions. Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. In its latest governmentwide Year 2000 progress report, issued March 10, 1998, OMB clarified its contingency plan instructions. OMB stated that contingency plans should be developed for all core business functions. 
Today, we are issuing an exposure draft of a guide to help agencies ensure the continuity of operations through contingency planning. The CIO Council worked with us in developing this guide and intends to adopt it for federal agency use. OMB’s assessment of the current status of federal Year 2000 progress has been predominantly based on agency reports that have not been consistently verified or independently reviewed. Without such independent reviews, OMB and others, such as the President’s Council on Year 2000 Conversion, have no assurance that they are receiving accurate information. OMB has acknowledged the need for independent verification and asked agencies to report on such activities in their February 1998 quarterly reports. While this has helped provide assurance that some verification is taking place through internal checks, reviews by Inspectors General, or contractors, the full scope of verification activities required by OMB has not been articulated. It is important that the executive branch set standards for the types of reviews that are needed to provide assurance regarding the agencies’ Year 2000 actions. Such standards could encompass independent assessments of (1) whether the agency has developed and is implementing a comprehensive and effective Year 2000 program, (2) the accuracy and completeness of the agency’s quarterly report to OMB, including verification of the status of systems reported as compliant, (3) whether the agency has a reasonable and comprehensive testing approach, and (4) the completeness and reasonableness of the agency’s business continuity and contingency planning. The CIO Council’s Subcommittee on the Year 2000 has been useful in addressing governmentwide issues. 
For example, the Year 2000 Subcommittee worked with the Federal Acquisition Regulation Council and industry to develop a rule that (1) establishes a single definition of Year 2000 compliance in executive branch procurement and (2) generally requires agencies to acquire only Year-2000 compliant products and services or products and services that can be made Year 2000 compliant. The subcommittee has also established subgroups on (1) best practices, (2) state issues and data exchanges, (3) industry issues, (4) telecommunications, (5) buildings, (6) biomedical and laboratory equipment, (7) General Services Administration support and commercial off-the-shelf products, and (8) international issues. The subcommittee’s effectiveness could be further enhanced. For example, currently agencies are not required to participate in the Year 2000 subcommittee. Without such full participation, it is less likely that appropriate governmentwide solutions can be implemented. Further, while the subcommittee’s subgroups are currently working on plans, they have not yet published these with associated milestones. It is important that this be done and publicized quickly so that agencies can use this information in their Year 2000 programs. It is equally important that implementation of agency activities resulting from these plans be monitored closely and that the subgroups’ decisions be enforced. Another governmentwide issue that needs to be addressed is the availability of information technology personnel. In their February 1998 quarterly reports, several agencies reported that they or their contractors had problems obtaining and/or retaining information technology personnel. Currently, no governmentwide strategy exists to address recruiting and retaining information technology personnel with the appropriate skills for Year 2000-related work. 
To date, the CIO Council has not addressed this issue although it is considering asking the Office of Personnel Management to review the possibility of obtaining waivers to rehire retired federal personnel. Given the sweeping ramifications of the Year 2000 issue, other countries have set up mechanisms to solve the Year 2000 problem on a nationwide basis. Several countries, such as the United Kingdom, Canada, and Australia, have appointed central organizations to coordinate and oversee their governments’ responses to the Year 2000 crisis. In the case of the United Kingdom, for example, a ministerial group is being established, under the leadership of the President of the Board of Trade, to tackle the Year 2000 problem across the public and private sectors. These countries have also established public/private forums to address the Year 2000 problem. For example, in September 1997, Canada’s Minister of Industry established a government/industry Year 2000 task force of representatives from banking, insurance, transportation, manufacturing, telecommunications, information technology, small and medium-sized businesses, agriculture, and the retail and service sectors. The Canadian Chief Information Officer is an ex-officio member of the task force. It has been charged with providing (1) an assessment of the nature and scope of the Year 2000 problem, (2) the state of industry preparedness, and (3) leadership and advice on how risks could be reduced. This task force issued a report in February 1998 with 18 recommendations that are intended to promote public/private-sector cooperation and prompt remedial action. In the United States, the President’s recent executive order could serve as the linchpin that bridges the nation’s and the federal government’s various Year 2000 initiatives. While the Year 2000 problem could have serious consequences, there is no comprehensive picture of the nation’s readiness. 
As one of its first tasks, the President’s Council on Year 2000 Conversion could formulate such a comprehensive picture in partnership with the private sector and state and local governments. Many organizational and managerial models exist that the Conversion Council could use to build effective partnerships to solve the nation’s Year 2000 problem. Because of the need to move swiftly, one viable alternative would be to consider using the sector-based approach recommended recently by the President’s Commission on Critical Infrastructure Protection as a starting point. This approach could involve federal agency focal points working with sector infrastructure coordinators. These coordinators would be created or selected from existing associations and would facilitate sharing information among providers and the government. Using this model, the President’s Council on Year 2000 Conversion could establish public/private partnership forums composed of representatives of each major sector that, in turn, could rely on task forces organized along economic-sector lines. Such groups would help (1) gauge the nation’s preparedness for the year 2000, (2) periodically report on the status and remaining actions of each sector’s Year 2000 remediation efforts, and (3) ensure the development of contingency plans for the continuing delivery of critical public and private services. In conclusion, while the Year 2000 problem has the potential to cause serious disruption to the nation, these risks can be mitigated and disruptions minimized with proper attention and management. Continued congressional oversight through hearings such as this and those that have been held by other committees in both the House and the Senate can help ensure that the Year 2000 problem is given the attention that it deserves and that appropriate actions are taken to address this crisis. Mr. Chairman and Ms. Chairwoman, this concludes my statement.
I would be happy to respond to any questions that you or other members of the Subcommittees may have at this time. Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, Exposure Draft, March 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). 
Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
Pursuant to a congressional request, GAO discussed year 2000 risks and actions that should be taken by the President's Council on Year 2000 Conversion. GAO noted that: (1) the federal government is extremely vulnerable to the year 2000 issue due to its widespread dependence on computer systems to process financial transactions, deliver vital public services, and carry out its operations; (2) unless this issue is successfully addressed, serious consequences could ensue, for example: (a) unless the Federal Aviation Administration takes much more decisive action, there could be grounded or delayed flights, degraded safety, customer inconvenience, and increased airline costs; (b) payments to veterans with service-connected disabilities could be severely delayed if the system that issues them either halts or produces checks so erroneous that it must be shut down and checks processed manually; (c) the military services could find it extremely difficult to efficiently and effectively equip and sustain its forces around the world; (d) federal systems used to track student loans could produce erroneous information on loan status, such as indicating that a paid loan was in default; (e) Internal Revenue Service tax systems could be unable to process returns, thereby jeopardizing revenue collection and delaying refunds; and (f) the Social Security Administration process to provide benefits to disabled persons could be disrupted if interfaces with state systems
fail; (3) the year 2000 could also cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years that contain embedded computer systems to control, monitor, or assist in operations; (4) GAO's reviews of federal agency year 2000 programs found uneven progress; (5) one of the largest, and largely unknown, risks relates to the global nature of the problem; (6) agencies have taken longer to complete the awareness and assessment phases of their year 2000 programs than is recommended; (7) this leaves less time for critical renovation, validation, and implementation phases; (8) the Chief Information Officers Council's Subcommittee on the year 2000 has been useful in addressing governmentwide issues; (9) given the sweeping ramifications of the year 2000 issue, other countries have set up mechanisms to solve the year 2000 problem on a nationwide basis; and (10) there is no comprehensive picture of the nation's readiness and, as one of its first tasks, the President's Council on Year 2000 Conversion could formulate such a comprehensive picture in partnership with the private sector and state and local governments. |
SIPA established SIPC to provide certain financial protections to the customers of insolvent securities firms. As required under law, SIPC either liquidates a failed firm itself (in cases where the liabilities are limited and there are fewer than 500 customers) or a trustee selected by SIPC and appointed by the court liquidates the firm. In either situation, SIPC is authorized to make advances from its customer protection fund to promptly satisfy customer claims for missing cash and securities up to amounts specified in SIPA. Between 1971 and 2002, SIPC initiated a total of 304 liquidation proceedings and paid about $406 million to satisfy such customer claims. SIPC was established in response to a specific problem facing the securities industry in the late 1960s: how to ensure that customers recover their cash and securities from securities firms that fail or cease operations and cannot meet their custodial obligations to customers. The problem peaked in the late 1960s, when outdated methods of processing securities trades, coupled with the lack of a centralized clearing system able to handle a large surge in trading volume, led to widespread accounting and reporting mistakes and abuses at securities firms. Before many firms could modernize their trade processing operations, stock prices declined sharply, which resulted in hundreds of securities firms merging, failing, or going out of business. During that period, some firms used customer property for proprietary activities, and procedures broke down for proper customer account management, making it difficult to locate and deliver securities belonging to customers. The breakdown resulted in customer losses exceeding $100 million because failed firms could not account for their customers’ property. Congress became concerned that a repetition of these events could undermine public confidence in the securities markets.
SIPC’s statutory mission is to promote confidence in securities markets by allowing for the prompt return of missing customer cash and/or securities held at a failed firm. SIPC fulfills its mission by initiating liquidation proceedings when appropriate and transferring customer accounts to another securities firm or returning the cash or securities to the customer by restoring to customer accounts the customer’s “net equity.” SIPC defines net equity as the value of cash or securities in a customer’s account as of the filing date, less any money owed to the firm by the customer, plus any indebtedness the customer has paid back with the trustee’s approval within 60 days after notice of the liquidation proceeding was published. The filing date typically is the date that SIPC applies to a federal district court for an order initiating proceedings. SIPA sets coverage at a maximum of $500,000 per customer, of which no more than $100,000 may be a claim for cash. SIPC is not intended to keep firms from failing or to shield investors from losses caused by changes in the market value of securities. SIPC is a nonprofit corporation governed by a seven-member Board of Directors that includes two U.S. government, three industry, and two public representatives. SIPC has 31 staff located in Washington, D.C. Most securities firms that are registered as broker-dealers under Section 15(b) of the Securities Exchange Act of 1934 automatically become SIPC members, regardless of whether they hold customer property. As of December 31, 2002, SIPC had 6,679 members. SIPA excludes from membership securities firms whose principal business—as determined by SIPC subject to SEC review—is conducted outside of the United States, its territories, and possessions. 
Also, a securities firm is not required to be a SIPC member if its business consists solely of (1) distributing shares of mutual funds or unit investment trusts, (2) selling variable annuities, (3) providing insurance, or (4) rendering investment advisory services to one or more registered investment companies or insurance company separate accounts. SIPA, as recently amended, also exempts a certain class of firms that are registered with SEC solely because they may effect transactions in single stock futures. SIPA covers most types of securities such as notes, stocks, bonds, and certificates of deposit. However, some investments are not covered. SIPA does not cover any interest in gold, silver, or other commodity; commodity contract; or commodity option. Also, SIPA does not cover investment contracts that are not registered as securities with SEC under the Securities Act of 1933. Shares of mutual funds are protected securities, but securities firms that deal only in mutual funds are not SIPC members, and thus their customers are not protected by SIPC. In addition, SIPA does not cover situations where an individual has a debtor-creditor relationship, such as a lending arrangement, with a SIPC member firm. Investors who attain SIPC customer status are a preferred class of creditors compared with other individuals or companies that have claims against the failed firm and are much more likely to get a part or all of their claims satisfied. This is because SIPC customers share in any customer property that the bankrupt firm possesses before any other creditors may do so. Although bankers and brokers are customers under SIPA, they are not eligible for SIPC fund advances.
SIPA states that most customers are eligible for SIPC assistance, but SIPC funds may not be used to pay claims of any failed brokerage firm customer who is a general partner, officer, or director of the firm; the beneficial owner of 5 percent or more of any class of equity security of the firm (other than certain nonconvertible preferred stocks); a limited partner with a participation of 5 percent or more in the net assets or net profits of the firm; someone with the power to exercise a controlling influence over the management or policies of the firm; or a broker or dealer or bank acting for itself rather than for its own customer or customers. The SIPC fund, which SIPC uses to make advances to trustees for customer claims and to cover the administrative expenses of a liquidation proceeding, was valued at $1.26 billion as of December 31, 2002. Administrative expenses in a SIPA liquidation include the expenses incurred by a trustee and the trustee’s staff, legal counsel, and other advisors. The SIPC fund is financed by annual assessments on all member firms—periodically set by SIPC—and interest generated from its investments in U.S. Treasury notes. SIPC, after consultation with the SROs, sets the amount of member assessments based on the amount necessary to maintain the fund and repay any borrowings by SIPC. At different times during the 1970s, 1980s, and 1990s, members were assessed at a higher rate. Rates fluctuated depending on the level of expenses. SIPC’s board of directors attempted to match assessment rate increases with declines in the fund balance, so that years of high SIPC expenses were followed by periods of higher assessments. Since 1996, SIPC has charged each broker-dealer member an annual assessment of $150. If the SIPC fund becomes or appears to be insufficient to carry out the purposes of SIPA, SIPC may borrow up to $1 billion from the U.S. Treasury through SEC (i.e., SEC would borrow the funds from the U.S.
Treasury and then relend the funds to SIPC). In addition, SIPC has a $1 billion line of credit with a consortium of banks. SIPA gives SEC oversight responsibility over SIPC. SEC’s primary mission is to protect investors and the integrity of the securities markets. SEC seeks to fulfill its mission by requiring public companies to disclose financial and other information to the public. SEC is also responsible for conducting investigations of potential securities law violations and overseeing SROs such as securities exchanges, as well as broker-dealers (securities firms), mutual funds, investment advisors, and public utility holding companies. SEC may sue SIPC to compel it to act to protect investors. SIPC must submit all proposed changes to rules or bylaws to SEC for approval; and SEC may require SIPC to adopt, amend, or repeal any bylaw or rule. In addition, SIPA authorizes SEC to conduct inspections and examinations of SIPC and requires SIPC to furnish SEC with reports and records that it believes are necessary or appropriate in the public interest or to fulfill the purposes of SIPA. The law that created SIPC also required SEC to strengthen customer protection and increase investor confidence in the securities markets by increasing the financial responsibility of broker-dealers. Pursuant to this mandate, SEC developed a framework for customer protection based on two key rules: (1) the customer protection rule and (2) the net capital rule. These rules respectively require broker-dealers that carry customer accounts to (1) keep customer cash and securities separate from those of the company itself and (2) maintain sufficient liquid assets to protect customer interests if the firm ceases doing business. SEC and SROs, such as NYSE, are responsible for enforcing the net capital and customer protection rules. 
Under a typical SIPC property distribution process, SIPC customers are to receive any securities that the firm holds that are registered in their name or that are being registered in their name, subject to the payment of any debt owed to the firm. If some of the customer assets are missing and cannot be found by the trustee, each customer will receive a pro rata share of the firm’s remaining customer property. In addition, SIPC is required to replace missing securities and cash in an investor’s account up to the statutory limits. For firms with excess SIPC policies, this coverage would be available as well. For example, if a firm that should have $10 billion in customer assets is liquidated by a SIPC trustee, but the trustee can account for only $9.8 billion, or 98 percent, of the $10 billion in assets, each customer would receive 98 percent of his or her net equity (the pro rata share). A customer with net equity of $10 million would receive 98 percent, or $9.8 million, of the $10 million. In addition, the trustee may use up to $500,000 advanced from the SIPC fund to satisfy the customer’s claim, but no more than $100,000 may be advanced for cash. With a $200,000 advance from SIPC, the customer in this example would receive the entire $10 million in assets owed. To protect customers who have claims in excess of the SIPC limit, Travelers Bond first began offering excess SIPC coverage to brokerage firms in 1970, soon after SIPA was enacted. Other companies began to enter the market in the mid-1980s. However, claims above the SIPA limit are rare, and regulatory and industry officials confirmed that most customers would not be affected by such policies because their accounts are within the SIPA limits. As seen in table 1, the amount of customer funds recovered determines whether the investor will have a loss and whether excess SIPC coverage would be triggered.
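The distribution arithmetic described above can be sketched in a few lines of Python. This is a simplified illustration, not SIPC's actual methodology: the function name is our own, the sketch assumes a securities-only claim, and it ignores the separate $100,000 cap that applies to the cash portion of a claim; the dollar figures are the statutory limits cited in the text.

```python
def sipc_recovery(net_equity, pro_rata_share):
    """Illustrative recovery for a securities-only customer claim.

    Returns (pro_rata, sipc_advance, total_recovered, shortfall).
    Hypothetical helper for illustration only; omits the $100,000
    sublimit on cash claims described in the text.
    """
    SIPC_ADVANCE_LIMIT = 500_000  # statutory per-customer limit

    # Customer's share of whatever customer property the trustee recovered.
    pro_rata = net_equity * pro_rata_share

    # SIPC advances the remaining shortfall, up to the statutory limit.
    advance = min(net_equity - pro_rata, SIPC_ADVANCE_LIMIT)

    total = net_equity * pro_rata_share + advance
    shortfall = net_equity - total  # only excess SIPC coverage, if any, reaches this
    return pro_rata, advance, total, shortfall

# The $10 billion / 98 percent example from the text: a $200,000
# advance makes the $10 million customer whole.
print(sipc_recovery(10_000_000, 0.98))
```

Running the same function with a 50 percent recovery rate reproduces the table 1 logic: a $1 million account is made whole by the $500,000 advance, while a $5 million account is still $2 million short and would depend on any excess SIPC coverage.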
For example, if the trustee determined that 50 percent of the customer assets were missing, a customer who is owed $1 million in assets would receive a $500,000 pro rata share from the estate and an advance from SIPC at its statutory limit of $500,000. However, a customer with $5 million in assets and the same 50 percent pro rata share would still be missing $2 million after the $500,000 SIPC advance and could be eligible for excess SIPC coverage if offered by the securities firm. Conversely, a customer with $5 million in assets and a pro rata share of 90 percent or higher would be made whole by SIPC and would not have losses in excess of SIPC limits. In our 2001 report, we made seven recommendations to SEC to improve the information it provided to investors about SIPC’s policies and practices, particularly regarding the evidentiary standard for unauthorized trading claims, and to expand its review of SIPC operations, among other actions. SEC has taken action to address all of the recommendations, either directly or by delegating the implementation to the SROs. First, we recommended that SEC review sections of its Web site and, where appropriate, advise customers to complain promptly in writing when they believe trades in their account were not authorized. This advice should include an explanation of SIPC’s policies and practices regarding claims and a general warning about how to avoid ratifying potentially unauthorized trades during telephone conversations. In 2001, we found that SIPC liquidations involving unauthorized trading accounted for nearly two-thirds of all liquidations initiated from 1996 through 2000. SIPC’s policies and practices in these liquidation proceedings generated controversy, primarily because of the large numbers of claims that were denied and the methods used to satisfy certain approved claims.
In addition, we found that SIPC’s policies and practices were often not transparent to investors and that SEC had missed opportunities to provide investors with consistent information about SIPC’s evidentiary standard for unauthorized trading. For example, some sections of SEC’s Web site encouraged investors to call to complain about unauthorized trades, while other sections told investors to complain immediately in writing. Although the telephone-based approach SEC recommended was reasonable if the firm acted in good faith to resolve problem trades, fraudulently operated firms were known to have used high-pressure or fraudulent tactics to convince persons who called to complain about potentially unauthorized trades to ratify these trades. In response to our recommendation, SEC updated sections of its Web site to include consistent information on making unauthorized trading complaints in writing. In addition, it expanded the section entitled Cold Calling to include warnings about high-pressure sales tactics that some brokers may use. Second, we recommended that SEC require firms that it determines to have engaged in or to be engaging in systematic or pervasive unauthorized trading to prominently notify their customers about the importance of documenting disputed transactions in writing. In 2001, we found that although SEC may identify and impose sanctions on firms that have engaged in pervasive unauthorized trading long before they ever become SIPA liquidations, it does not routinely require such firms to notify their clients about documenting unauthorized trading claims. For example, between 1992 and 1997, one securities firm operated under intensive SEC and court supervision in connection with, among other violations, pervasive unauthorized trading and stock price manipulation. However, there was no requirement that the firm notify its customers to document their complaints in writing.
Imposing this requirement could help investors protect their interests and benefit unsophisticated investors who may not review the SIPC brochure or other disclosures made on account statements. At the time the report was issued, SEC had agreed to implement this requirement on a case-by-case basis. Since 2001, SEC officials said that they have not had a case that required this action. Moreover, SEC officials noted that their first course of action would be to shut down firms that engage in pervasive unauthorized trading. Third, we recommended that SEC update its Web site to inform investors about the frauds that may be associated with certain SIPC member firms and their affiliates, as well as the steps that can be taken to avoid falling victim to such frauds. SIPC’s policies and practices in liquidations of member firms that had nonmember affiliates have also been controversial because SIPC and trustees have denied many claims in such liquidation proceedings. In 2001, we found that SEC had missed opportunities to educate investors about the potential risks associated with certain nonmember affiliates. SEC’s Web site provided limited information about dealing with nonmember affiliates, and investors may not have been fully aware of the risks that can be associated with certain nonmember affiliates. In response to this recommendation, SEC updated an on-line publication called Securities Investor Protection Corporation, which discusses the problems that can occur when investors place their cash or securities with non-SIPC members. Investors are also told always to make sure that both the securities firm and the clearing firm are members of SIPC, because firms are required by law to disclose when they are not. Next, we recommended that SEC take several actions to improve its oversight of SIPC.
Specifically, we recommended that SEC implement the SEC IG’s recommendation that the Division of Market Regulation, the Division of Enforcement, the Northeast Regional Office (NERO), and the Office of Compliance Inspections and Examinations (OCIE) conduct periodic briefings to share information related to SIPC. In 2000, SEC’s IG found that communication among SEC’s internal units regarding SIPC could be improved. Although the SEC IG report found that SEC officials tried to keep each other informed about relevant SIPC issues, there was no formal procedure for doing so. At the time our report was issued, SEC had not yet implemented this recommendation, and we recommended that it do so. SEC officials said that they began to hold quarterly meetings but determined that more frequent, informal meetings were more effective. They said that they meet to discuss SIPC as issues arise, which is typically more than once every quarter. As long as SEC continues to meet frequently and share information among all the relevant units, this approach effectively responds to the concern our recommendation was intended to address. Fifth, we recommended that SEC expand its ongoing examination of SIPC to include a larger number of liquidations with claims involving unauthorized trading or nonmember affiliate issues. SEC periodically conducts examinations of SIPC’s operations to ensure compliance with SIPA. In May 2000, the Division of Market Regulation and OCIE initiated a joint examination of SIPC. As of March 2001, SEC had included four SIPA liquidations involving unauthorized trading in its sample but had not included any liquidations involving nonmember affiliate issues. Given the controversies over SIPA liquidations involving unauthorized trading and nonmember affiliates, we believed that including a larger number of liquidations with these types of claims was warranted.
SEC agreed with this recommendation and included a larger number of liquidations involving unauthorized trading or nonmember affiliate issues in the sample used for the review. Of the eight liquidations in SEC’s sample, five involved unauthorized trading and two involved nonmember affiliate issues. SEC completed its examination in January 2003 and issued its examination report in April 2003; the report assessed SIPC’s policies and procedures for liquidating failed securities firms and identified several areas for improvement that warrant SIPC’s consideration. SEC found that there was insufficient guidance for SIPC personnel and trustees to follow when determining whether claimants have established valid unauthorized trading claims. Although the evidentiary standards used were found to be reasonable, the standards differed among trustees. Therefore, SEC recommended that SIPC develop written guidance to help establish consistency across trustees and liquidations. SIPC agreed to adopt such written guidance for reviewing unauthorized trading claims. Concerning SIPC’s investor education programs, SEC found that SIPC should continue to review the information that it provides to investors about its policies and practices. For example, SEC found that some statements in SIPC’s brochure and Web site might overstate the extent of SIPC coverage and mislead investors. SIPC plans to continue to reexamine the adequacy of the information provided in its brochure and Web site to eliminate any potential confusion. SEC also found that SIPC should improve its controls over the fees awarded to trustees and their counsel for the services rendered and their expenses. SEC found that some descriptions of the work that the trustees performed were vague, making it difficult to assess whether the work was necessary or appropriate. SEC believed that SIPC could do a better job of reviewing and assessing the fees that were requested.
SIPC agreed to ask trustees and counsel in SIPC cases to submit invoices at least quarterly and to arrange billing records into project categories. SIPC also agreed to instruct its personnel to document discussions with trustees and counsel regarding fee applications and to note any differences between the amounts initially requested by trustees and counsel and the amounts recommended for payment by SIPC. In addition, SEC found that SIPC lacks a record retention policy for records generated in liquidations where SIPC appoints an outside trustee. SEC found that trustees had different procedures for the retention of records, and SEC was not able to review records from one liquidation because the trustee had destroyed the records. SIPC has agreed to develop a uniform record retention policy for all SIPA liquidations, following a cost analysis. SEC also found that the SIPC fund was at risk in the case of a failure of one or more of the large securities firms. SEC found that even if SIPC were to triple the fund in size, a very large liquidation could deplete the fund. Therefore, SEC suggested that SIPC examine alternative strategies for dealing with the costs of such a large liquidation. SIPC management agreed to bring this issue to the attention of the Board of Directors, which evaluates the adequacy of the fund on a regular basis. Also as part of SEC’s ongoing oversight effort, in September 2000, SEC’s Office of General Counsel (OGC) initiated a 1-year pilot program to monitor SIPA liquidations. According to SEC, the primary objective of the pilot program was to provide oversight of claims determinations in SIPA liquidation proceedings in order to make certain that the determinations were consistent with SIPA. According to SEC officials, this program has since been made permanent. SEC’s OGC now enters notices of appearance in all SIPA liquidation proceedings.
The cases are followed mostly by NERO and the Midwest Regional Office, given the significant numbers of SIPA liquidations in these locations. The staff can recommend that the Commission intervene in SIPA liquidations, if appropriate. Sixth, we recommended that SEC, in conjunction with the SROs, establish a uniform disclosure rule requiring clearing firms to put a standard statement about documenting unauthorized trading claims on their trade confirmations and/or other account statements. In 2001, we found that SEC, NASD, and NYSE did not have requirements that clearing firms notify customers that they should immediately complain in writing about allegedly unauthorized trades. A review of a judgmental sample of trade confirmations and account statements found that many firms voluntarily notify their customers to complain immediately if they experience any problems with their trades, but instructions about the next course of action varied and did not necessarily specify that the investor should complain in writing. Initially, SEC expressed concern about promulgating a rule itself. However, in 2003, SEC began to take steps to implement this recommendation. Specifically, SEC has asked NYSE and NASD to explore how this recommendation can be more fully implemented through SRO rulemaking and Notices to Members. As of June 9, the SROs were still evaluating how best to implement this recommendation. According to an SRO official, concern about potentially penalizing investors who may not complain in writing but may file claims in other forums, such as arbitration proceedings, will need to be resolved. However, SEC believes that it will be able to craft acceptable language that ensures that these investors are not harmed. Lastly, we recommended that SEC require SIPC member firms to provide the SIPC brochure to their customers when they open an account and encourage firms to distribute the brochure to existing customers more widely.
This recommendation was an additional step aimed at educating and better informing customers about how to protect their investments. The SIPC informational brochure called How SIPC Protects You provides useful information about SIPC and its coverage. However, SIPC bylaws and SEC rules do not require SIPC members to distribute the brochure to their customers. The authority lies with SEC or the SROs to require the firms to provide the brochure to their customers. To date, it is unclear what action will be taken. SEC officials expressed concern about imposing another rule on securities firms. Instead, SEC included this recommendation in its letter to NYSE and NASD to explore how this could be implemented through SRO rulemaking and Notices to Members. According to SEC and SRO officials, both NASD and NYSE are in the process of exploring how best to implement this recommendation. SEC officials said that they did not expect the SROs to have problems implementing this recommendation. In our 2001 report, we made three recommendations to SIPC to improve the information available to investors about its coverage, particularly with regard to unauthorized trading. In addition to taking steps to implement our recommendations, SIPC has continued a nationwide investor education program that addresses many of the specific issues raised in our 2001 report. SIPC has a responsibility to inform investors of actions they can take to protect their investments and help ensure that they are afforded the full protections allowable under SIPA. Our 2001 report found that investors might confuse the coverage offered by SIPC, Federal Deposit Insurance Corporation (FDIC), and state insurance guarantee associations and not fully understand the protection offered under SIPA. This was significant because the type of financial protection that SIPC provides is similar to that provided by these programs, but important differences exist. 
To address these and other investor education issues, SIPC began a major public education campaign in 2000. As part of the campaign, SIPC worked with a public relations firm to make its Web site and brochure more reader friendly and less focused on legal terminology. The changes were designed to ensure that the Web site is easy to use and written in plain English. In addition to revising its brochure and Web site, SIPC produced a series of audio and video public service announcements (PSA). From June 15, 2002, to November 15, 2002, the PSAs were aired over 76,000 times. According to SIPC’s 2002 annual report, the TV PSAs have appeared on 129 stations, in 106 cities, in 46 states; and the radio spots have aired on 415 stations, in 249 cities, in 49 states. They have also been aired nationally on CNBC and the Fox News Channel. SIPC and its public relations firm are continuing to work together to improve investor awareness of SIPC and its policies. They are developing a new television and radio campaign scheduled to begin in July 2003. They are also working to better explain the claims process through a new brochure and video. The claims process brochure will provide information to individuals that do not have access to the Internet. This investor education campaign has increased the amount and clarity of information available about SIPC and has provided investors who review it with important information. As mentioned, in addition to identifying investor education concerns in our 2001 report, we recommended that SIPC take three specific actions to improve its disclosure. First, we recommended that SIPC revise its brochure and Web site to include a full explanation of the steps necessary to document unauthorized trading claims. SIPC has determined, and courts have agreed, that an objective evidentiary standard, such as written complaints, is necessary to protect the SIPC fund from fraudulent claims. 
However, in our 2001 report, we found that SIPC had also missed opportunities to provide investors with complete information about dealing with unauthorized trading. For example, we found that claimants in 87 percent of the claims we reviewed telephoned complaints to their brokers. Given that many investment transactions are largely made by telephone, we were concerned that investors were not aware of the importance of documenting their complaints in writing if they were ever required to file a claim with SIPC. Furthermore, we found the SIPC brochure did not advise investors that SIPA covers unauthorized trading and that investors should promptly complain in writing about allegedly unauthorized trades. As previously mentioned, the brochure was revised as part of the investor education campaign and now includes the statement, “If you ever discover an error in a confirmation or statement, you should immediately bring the error to the attention of the [brokerage firm], in writing.” In addition, SIPC has created a Web page, entitled Documenting an Unauthorized Trade, which includes the same information on complaining in writing to the firm about any errors. SIPC’s revised materials also explain that market losses are not covered: “Most market losses are a normal part of the ups and downs of the risk-oriented world of investing. That is why SIPC does not bail out investors when the value of their stocks, bonds, and other investments fall for any reason. Instead, SIPC replaces missing stocks and other securities where it is possible to do so…even when investments have increased in value.” In addition, SIPC amended its advertising bylaws in 2002 to require firms that choose to make an explanatory statement about SIPC to include a link to the SIPC Web site. This will further enable the customer to access information about what SIPC does and does not cover. NASD and SEC have also begun to make disclosures about SIPC and market risk to investors.
For example, the NASD Web site says, “SIPC does not protect against market risk, which is the risk inherent in a fluctuating market. It protects the value of the securities held by the [firm] as of the time the SIPC trustee is appointed.” SEC informs investors that “SIPC does not protect you against losses caused by a decline in the market value of your securities.” Furthermore, many securities firms also include similar statements about SIPC protection on their Web sites. SIPC’s statement about market risk and amended bylaws, as well as the availability of other disclosures by the regulators and firms, effectively respond to the concern our recommendation was intended to address. Finally, we recommended that SIPC revise its brochure to warn investors to exercise caution in ratifying potential unauthorized trades in telephone discussions with firm officials. SIPC believes that the statement discussed above encouraging investors to complain in writing about unauthorized trades in its brochure and Web site will make oral ratification unlikely. SIPC officials also maintain that this type of information is best handled in those publications and Web pages that warn investors about securities fraud. Therefore, in its brochure, SIPC provides links to several Web sites, such as SEC’s, that have investor education information about investment fraud. However, SIPC provides links only to the main Web sites and not to the specific Web pages that contain the relevant information, so investors may have difficulty locating information about specific types of fraud, such as unauthorized trading. For example, based on the Web address provided in the brochure, investors searching SEC’s Web site for “fraud” would be linked to over 5,000 possible sites. SIPC also recommends the Securities Industry Association (SIA) Web site for information about investment fraud.
However, based on the information SIPC provided, a search for “unauthorized trading” on this Web site yields only three results, none of which send the investor to useful educational information contained on the Web site. Investors are also directed to NASD’s Web site, which has a page entitled Investors Best Practices, which includes detailed information on cold calling and unauthorized trading. However, an investor may not be able to find this useful information without specific links to the relevant Web pages for this and other Web sites listed in the brochure. For example, a search for “unauthorized trading” on NASD’s Web site only yields one result, which provides a link to a definition for unauthorized trading but no reference to the useful educational information. Excess SIPC coverage is generally offered by well-capitalized, large, and regional securities firms and is generally marketed by the firms as additional protection for their large account holders. Our review of the excess SIPC policies offered by the four major insurers found the policies varied by firm and insurer in terms of the amount of coverage offered per customer and in aggregate per firm. In our review of some of the policies, we found that excess SIPC coverage was not uniform and was not necessarily consistent with SIPC protection. Attorneys familiar with the policies also agreed that the disclosure of the coverage and the terms of coverage could be improved. During our review, three of the four major insurers that offered excess SIPC coverage in 2002 stopped underwriting these policies in 2003 for a variety of reasons. Consequently, as the policies expire, most insurers are not renewing their existing policies beyond 2003 and have stopped underwriting new policies in general. At this time, it is unclear what some of the securities firms that had excess SIPC coverage plan to do going forward. 
Excess SIPC is generally limited to certain well-capitalized, large, and regional firms that have a relatively low probability of being part of a SIPC liquidation. Moreover, the policies—usually structured as surety bonds—are generally purchased by clearing firms. The insurance underwriters of excess SIPC policies told us that they use strict underwriting guidelines and have minimum requirements for a firm requesting coverage. Most insurers evaluate a securities firm for excess SIPC coverage by reviewing its operational and financial risks. Insurers also consider the firm’s internal control and risk management systems, the type of business that the firm conducts, its size, its reputation, and the number of years in business. Some insurers also required the firms to annually submit information on the number and value of customer accounts above the $500,000 SIPC limit, to help gauge their maximum potential exposure in the unlikely event that the firm became part of a SIPC liquidation. Firms below a certain dollar net capital threshold were generally not considered for coverage. Although an excess SIPC claim has never been filed in the more than 30 years that the coverage has been offered, we identified several potential investor protection issues. Our review of excess SIPC policies, which included one from each of the four major insurers, revealed that excess SIPC coverage is not uniform and that some policies are not always consistent with SIPC coverage. Although the policies were advertised as covering losses (or losses up to an amount specified in the policy) that would otherwise be covered by SIPC except for the $500,000 limit, we found that claims under the policies could be subject to various terms and limitations that do not apply to SIPC coverage. Attorneys familiar with SIPA and excess SIPC have also raised questions about who is covered in the policies and how the claims process would work in the case of a firm’s bankruptcy.
These potential inconsistencies or concerns include the following:

Some policies included customers that would generally be ineligible under SIPA. The wording in some of the policies could be interpreted as protecting individuals who are not customers eligible for SIPC advances. Others contained specific riders that expanded the excess SIPC policy to include classes of customers beyond those covered by SIPC. For example, some policies have riders that extend coverage to officers and directors of the failed firm, as long as they are not involved with any fraud that contributed to the firm’s demise. As mentioned previously, SIPC coverage excludes certain customers, such as officers and directors of the failed firm and broker-dealers and banks acting on their own behalf.

Some policies limited the duration of coverage. Each policy we reviewed provided coverage only if SIPC were to institute judicial proceedings to liquidate the firm while the policy was in effect. Three of the four policies provided for specific periods of time during which they were in effect, as well as for cancellation by the insurer under specified conditions. Although each of the three policies required the securities firm to notify its customers of a cancellation, none of the policies required notification to customers when a policy expired. According to NYSE and NASD, there are no specific SRO rules that require these firms to notify their customers. However, NYSE said that it generally expects firms to notify investors of any changes in their excess SIPC protection under rules involving disclosure requirements for fee changes. NASD generally expects firms to notify their customers under NASD’s Just and Equitable Rule.

Some excess SIPC policies varied from SIPA in scope of coverage. Certain policies also differed from SIPA in terms of the scope of excess coverage. Specifically, customer cash, which would generally be covered under SIPA, was not covered by two of the policies we reviewed.
One of the policies specifically restricted coverage to lost securities; the other described coverage as pertaining only to a customer’s claim for “loss of securities.” Also, in addition to a cap on the amount of coverage per customer, one policy contained a cap on the insurer’s overall exposure—the policy established an aggregate cap of $250 million—regardless of the total amount of customer claims. SIPC has no such aggregate cap.

The mechanics of the claims process were unclear. In addition to limitations on coverage, at least one policy had other characteristics that could either restrict a customer’s ability to recover losses that exceed the amount covered under SIPA or delay a customer’s recovery until long after the net equity covered by the insurance has been determined. The policy conditioned the customer’s recovery upon the customer providing the insurer with a claim notice subject to specific time, form, and content specifications. Among other things, the customer was required to submit a written claim accompanied by evidence satisfactory to the insurer and an assignment to the insurer of the customer’s rights against the firm. The other policies did not address when a customer must file a claim.

The role of the trustee in the claims process was unclear. Another difference we found is the role of the trustee regarding customer claims under SIPA and excess SIPC coverage policies. Under SIPA, the trustee acts on behalf of customers who properly file claims to see that they recover losses as provided in SIPA. It is unclear whether the trustee could represent customers on claims for excess insurance because, in some cases, the policies indicate that only individual customers could bring claims and, in any case, the trustee may not have authority under the bankruptcy laws to do so. SIPC trustees and other attorneys experienced with SIPA liquidations also agreed that it was not clear who was responsible for filing the claim, the customer or the trustee.
The policies did not clearly state when a claim would be paid. The policies also differed from SIPC coverage regarding when customers could recover their losses. For purposes of SIPC coverage, the trustee discharges obligations of the debtor from available customer property and, if necessary, SIPC advances, without waiting for the court to rule on customer property and net equity share calculations. Under the excess coverage policies, it is unclear when customers would be eligible to recover assets in excess of those replaced by SIPC. Some of the policies provide for “prompt” replacement or payment of the portion of a customer’s covered net equity. In contrast to SIPC coverage, however, they specify that the insurer shall not be liable for a claim until the customer’s net equity has been “finally determined by a competent tribunal or by written agreement between the Trustee and the Company,” which could take years. Under another policy, the insurer could wait until after liquidation of the broker-dealer’s general estate before replacing a customer’s missing assets. The general creditor claims process could also take several years. An attorney knowledgeable about SIPC and excess SIPC said that some policies indicate that the insurance company has no liability until the customer claim is paid by SIPC. However, in many cases SIPC does not directly pay investors but does so through a trustee. Therefore, the policy, if taken literally, would preclude an investor from ever being paid through excess SIPC insurance.

Excess SIPC coverage appears to be limited to clearing firm failures. Most of the excess SIPC policies we reviewed provide that only the policy holder, usually a clearing firm, is covered under the policy. Introducing firms of clearing firms may advertise the coverage provided by their clearing firm. For example, we reviewed the Web sites of 53 introducing firms and found that about 25 percent advertised the excess SIPC protection provided by the clearing firm.
This creates the potential for investor confusion because the coverage would apply only in the case of the clearing firm’s failure. Because introducing firms do not clear securities transactions or hold customer cash or securities, the customer’s assets should be unaffected in the event of an introducing firm’s failure. However, there have been cases where customer funds were “lost” before they were sent to the clearing firm, typically due to fraudulent activity. If the introducing firm fails while the assets are still with the introducing firm but the clearing firm continues to operate, investors may not be aware that the excess SIPC protection would only apply in the event of the clearing firm’s failure. Conversely, SIPC will initiate liquidation proceedings against introducing firms and protect their investors in certain situations. During our review, three of the four major insurers that offered excess SIPC coverage in 2002 stopped underwriting these policies beyond 2003. The insurers provided various reasons for not continuing to underwrite excess SIPC policies, such as their concern about the complexity of quantifying their maximum probable loss. In addition, officials from securities firms and attorneys knowledgeable about excess SIPC had opinions about why the insurers are no longer underwriting excess SIPC policies. According to the insurers that have stopped offering excess SIPC, they made a business decision to stop offering the coverage after reviewing their existing product offerings. They said that this practice of periodically reviewing product lines and profitability is not uncommon. Most of the underwriters were property and casualty insurance companies, and the excess SIPC product was viewed as a relatively small part of their standard product line and provided low return in the form of premiums relative to the significant potential risk exposure. 
Some of the underwriters said that documenting and explaining the potential risk associated with excess SIPC policies is difficult. For example, the maximum potential loss for excess SIPC could be significant because it is simply the aggregate of all customer account balances over SIPC’s $500,000 limit. Quantifying the probability of loss, which would be significantly less, is much more difficult because insurers have never had a claims-related loss associated with the excess SIPC policies; therefore, no historical loss data exists. Another insurer said credit rating agencies began to ask questions about potential risk exposures from excess SIPC, and rather than risk a change to its credit rating, it opted to stop providing the coverage given the limited number of policies it underwrote. Others in the industry said that in light of the Enron Corporation failure and the losses experienced by the insurance underwriters that had exposure from Enron-related surety bonds, credit rating agencies have begun to more closely scrutinize potential losses and risk exposures of insurance companies overall. While surety bonds are still considered relatively low-risk products, insurers are more sensitive to their potential risk exposures. As mentioned, given the absence of actuarial data, it is difficult for insurers to quantify the maximum probable losses from excess SIPC. Securities firms and others also had opinions about why insurers stopped underwriting the policies. Some believed that a general lack of knowledge about the securities industry and SIPC, in particular, might have contributed to the products being withdrawn from the market. Many firms said that the risk of an excess SIPC claim ever being filed is low for two primary reasons. First, securities firms that carry customer accounts are required to adhere to certain customer protection rules.
Specifically, firms must keep customer cash and securities separate from those of the firm itself and maintain sufficient liquid assets to protect customer interests if the firm ceases doing business. Moreover, SEC and the SROs have established inspection schedules and procedures to routinely monitor broker-dealer compliance with customer protection (segregation of assets) and net capital rules. Firms not in compliance can be closed. Second, SIPA liquidations are rare in general, and claims in excess of the SIPA limit are even rarer. For example, since 1998, more than 4,000 firms have gone out of business, but less than 1 percent (37 firms) became part of a SIPA liquidation proceeding. This is consistent with historical data dating back to the 1970s. Moreover, of the almost 623,000 claims satisfied since 1971 in completed or substantially completed cases as of December 31, 2002, a total of 310 were for values in excess of SIPC limits (less than one-tenth of 1 percent). Of these 310 claims, 210 were filed before 1978, when the limit was raised to $500,000. Only two firms involved in a SIPA liquidation have offered excess SIPC, but no claims have been filed to date. According to officials knowledgeable about a 2001 proceeding, which included a firm with an excess SIPC policy, claims for excess SIPC are likely to be filed. However, the number of claims that will be filed is unclear at this time. Most of the six holders of the excess SIPC policies we contacted are currently exploring a number of options, but it is unclear what most will do. Although most said that the coverage is largely a marketing tool, some felt that the policies increased investor confidence in the firm because an independent third party (the insurance company) had examined the financial and operational risks of the firm prior to providing them coverage.
Several of the firms and those in the securities industry we contacted said that they were surprised to learn that the insurers planned to stop providing excess SIPC coverage. Therefore, most firms are still exploring a number of options on how best to proceed, including the following:

Self-insuring or creating a “captive” insurance company that would offer the coverage. However, firm officials involved in exploring the captive expressed concerns about whether they could establish the insurance company by the end of 2003. Others questioned whether this option was feasible given the competitive nature of the securities industry.

Purchasing policies from the remaining major insurer. While some have already chosen this option, officials from some of the larger firms said that this might not be an acceptable option because the remaining insurer generally limits the amount of the coverage per firm. Firms that currently offer net equity coverage were concerned that their high net worth customers may not be satisfied with a policy that has a cap on its coverage. Additionally, the policy of the remaining underwriter raised the most questions about its consistency with SIPC coverage.

Letting the policies expire and not replacing them. Some of the firms we spoke with said that the larger firms really do not need the excess SIPC because they are well capitalized and the existing customer protection rules offer sufficient protection. However, some officials said that if one larger firm continued to offer the coverage, they all would have to continue to offer the coverage in order to effectively compete for high net worth client business.

Other firm officials suggested that SIPA might need to be reexamined in light of the numerous changes that have occurred in securities markets since 1970. Some officials said that at a minimum, the SIPA securities limit of $500,000 should be raised to $1.5 million.
Another said that it is still possible that another insurance company may decide to fill the void left by the companies exiting the business. Other industry officials said that they were still in negotiations with the remaining insurer to increase the coverage limits, which was a concern for the larger firms. Many of the securities firms we spoke with had policies that will expire by the end of 2003. All planned to notify affected customers, but many had not developed specific time frames. Most firms said that they planned to have some type of comparable coverage, which could mitigate the importance of notifying customers. In the interim, several securities firms have asked SIA to produce information for the firms to use when talking to their customers about SIPA and the protections they have under the act. The information being developed for the securities firms is also to include information about SIPC, excess SIPC, and how securities markets work. As mentioned previously, NYSE officials said that there is no specific rule that requires securities firms to notify investors if excess SIPC coverage expires without being replaced. However, they generally expect firms to notify customers under rules concerning fee disclosure requirements. Likewise, NASD officials said that NASD had no specific rule requirements but would generally expect firms to notify affected investors under general rules concerning just and equitable principles. In March 2003, in response to concerns raised about excess SIPC coverage and the potential investor protection issues, SEC began its own limited review of these issues. Initially, SEC planned to collect information on the securities firms that offer the coverage, the major providers, and the nature of the coverage offered. Because most of the firms that have excess SIPC coverage are NYSE members, SEC asked NYSE to gather information about excess SIPC coverage and information about the policies.
In response, NYSE compiled information on its members with excess SIPC insurance policies and their insurers. NYSE also analyzed other data and descriptive statistics, such as assets protected under excess SIPC, and reviewed the coverage offered by the major insurers. Out of more than 250 NYSE members, NYSE determined that 123 had excess SIPC insurance coverage and that most of the members were insured by one of the four major insurance providers. However, when several underwriters decided to stop providing the coverage, SEC suspended most of its review activity and has not actively monitored the changes in the availability of the coverage or the firms’ plans going forward. Given the changes occurring in this market and the potential concerns about the policies, SEC officials agreed that they should continue to monitor these ongoing developments to ensure that investors are obtaining adequate and accurate information about whether excess SIPC coverage exists and what protection it provides. SEC and SIPC have taken steps to implement all of the recommendations made in our May 2001 report. However, SEC has some additional work to do with the SROs to implement two of our recommendations. Although SEC has asked the SROs to explore actions to encourage broader dissemination of the SIPC brochure to customers and to include information on periodic statements or trade confirmations to inform investors that they should document any unauthorized trading complaints, no final actions have been taken to implement these recommendations. We also found that SIPC has substantially revamped its brochure and Web site and continues to be committed to improving its investor education program to ensure that investors have access to information about investing and the role and function of SIPC. By doing so, SIPC has shown a commitment to making its operations more transparent.
We did note, however, that SIPC’s response to our recommendation about warning customers about unintentionally ratifying unauthorized trades has not completely addressed our concern that investors receive specific information about the risks of unintentionally ratifying trades when talking to brokers. In 2001, we recommended that SIPC revise its brochure to warn investors to exercise caution in discussions with firm officials. Rather than including this information in its brochure, SIPC revised its brochure to provide references or links to Web sites, such as SEC’s and NASD’s, but not to the specific investor education Web pages discussing ratifying potentially unauthorized trades or fraud. We found that these broad references make it difficult or virtually impossible for investors to find the relevant information. More specific links to investor education Web pages within each Web site would mitigate this problem. Concerning excess coverage, three of the four major insurance companies stopped underwriting excess SIPC policies in 2003 after reevaluating their potential risk exposures and product offerings. Although no excess SIPC claim has been filed to date, insurance companies have become more sensitive to potential risk exposures in light of their recent experience with Enron and other high-profile failures. Most made business decisions to stop offering this apparently low-risk product. Many of the firms appear to have been surprised by this decision and are exploring several options, including letting the coverage expire, purchasing coverage from the remaining underwriter, or creating a captive insurance company to provide the coverage.
Given the limitations and concerns we and others have raised about the protection afforded investors under excess SIPC, including limitations on the scope and terms of coverage and an overall lack of information on the claims process and when claims would be paid, SEC and the SROs have vital roles to play in ensuring that existing and future disclosures concerning excess SIPC accurately reflect the level of protection afforded customers. As SIPC continues to revamp and refine its investor education program, we recommend that the Chairman, SIPC, revise SIPC’s brochure to provide links to specific pages on the relevant Web sites to help investors access information about avoiding ratifying potentially unauthorized trades in discussions with firm officials and other potentially useful information about investing. Given the concerns that we and others have raised about excess SIPC coverage, we also recommend that the Chairman, SEC, in conjunction with the SROs, ensure that firms are providing investors with meaningful disclosures about the protections provided by any new or existing excess SIPC policies. Furthermore, we recommend that SEC and the SROs monitor how firms inform customers of any changes in or loss of excess SIPC protection to ensure that investors are informed of any changes in their coverage. SEC and SIPC generally agreed with our report findings and recommendations. However, SIPC said that providing more specific linkages in its brochure would prove problematic because of the frequency with which Web sites are changed. Rather, SIPC agreed to provide a reference in the brochure to the SIPC Web site, which will provide more specific links to the relevant portions of the cited Web pages. We agree that this alternative approach would implement the intent of our recommendation to provide investors with more specific guidance about fraud and unauthorized trading.
SEC agreed that securities firms have an obligation to ensure that investors are provided accurate information about the extent of the protection afforded by excess SIPC policies and that the policies should be drafted to ensure consistency with SIPC protection as advertised. SEC officials reaffirmed their commitment to work with the SROs to ensure that excess SIPC coverage, as advertised, is consistent with the policies. Moreover, SEC agreed that investors should be properly notified of any changes in the coverage. Finally, SEC reiterated the recommendations it made to SIPC in its 2003 examination report, which SEC describes as “important to enhance the SIPA liquidation process for the benefit of public investors.” Our objectives were to (1) discuss the status of the recommendations that we made to SEC in our 2001 report, (2) discuss the status of the recommendations that we made to SIPC in our 2001 report, and (3) discuss the issues surrounding excess SIPC coverage. To meet the first two objectives, we interviewed staff from SEC’s Market Regulation, OGC, OCIE, and the Division of Enforcement as well as SIPC officials to determine the status of the recommendations that we made in our 2001 report. We also reviewed a variety of SEC and SIPC informational sources, such as SIPC’s brochure and SEC’s and SIPC’s Web sites, to determine what SEC and SIPC disclosed to investors regarding SIPC’s policies and practices. We also reviewed the Web sites of the sources provided by SIPC, such as SIA, NASD, the National Fraud Information Center, Investor Protection Trust, Alliance for Investor Education, and the North American Securities Administrators Association.
To address the third objective—to discuss the issues surrounding excess SIPC coverage—we interviewed agency officials, regulators, SROs, and trade associations to determine what role, if any, they play in monitoring excess SIPC. We also interviewed representatives or brokers of the four major underwriters of excess SIPC policies to obtain information about the coverage, their claim history, and their rationale for discontinuing the excess SIPC product. In addition, we interviewed six securities firms that had excess SIPC policies to (1) obtain their views on the scope of coverage, (2) determine what they were told about the excess SIPC product being withdrawn, and (3) identify what they planned to do about replacing the coverage going forward. We also interviewed two SIPC trustees who had liquidated firms that had excess SIPC policies to obtain their views and opinions about the coverage. We also met with attorneys knowledgeable about SIPC and excess SIPC policies and coverage to obtain their views and perspectives on excess SIPC issues. Moreover, we reviewed sample policies from the four major excess SIPC providers to determine the differences and similarities among the policies as well as their consistency with SIPC’s coverage. We also reviewed a random sample of clearing and introducing firms’ Web sites to determine if they advertised excess SIPC protection on their Web sites and the nature of the protection. We conducted our work in New York, NY, and Washington, D.C., from October 2002 through July 2003 in accordance with generally accepted government auditing standards. As agreed with your office, we plan no further distribution of this report until 30 days from its issuance date unless you publicly release its contents sooner.
At that time, we will send copies of this report to the Chairman, House Committee on Energy and Commerce; the Chairman, House Committee on Financial Services; and the Chairman, Subcommittee on Capital Markets, Insurance and Government Sponsored Enterprises, House Committee on Financial Services. We will also send copies to the Chairman of SEC and the Chairman of SIPC and will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Orice Williams or me at (202) 512-8678. Other GAO contacts and staff acknowledgments are listed in appendix III. In addition to those individuals named above, Amy Bevan, Emily Chalmers, Carl Ramirez, La Sonya Roberts, and Paul Thompson made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence.
GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.

As a result of ongoing concerns about the adequacy of disclosures provided to investors about the Securities Investor Protection Corporation (SIPC) and investors' responsibilities to protect their investments, GAO issued a report in 2001 entitled Securities Investor Protection: Steps Needed to Better Disclose SIPC Policies to Investors (GAO-01-653). GAO was asked to determine the status of recommendations made to the Securities and Exchange Commission (SEC) and SIPC in that report. GAO was also asked to review a number of issues involving excess SIPC insurance, private insurance securities firms purchase to cover accounts that are in excess of SIPC's statutory limits. SEC has taken steps to implement each of the seven recommendations directed to SEC in GAO's May 2001 report. SEC has updated its Web site to provide investors with more information about SIPC's policies and practices, particularly with regard to unauthorized trading and nonmember affiliate claims. SEC has taken other steps consistent with our recommendations to improve its oversight of SIPC and is working with self-regulatory organizations (SRO) to increase investor awareness of SIPC's policies through distribution of the SIPC brochure and disclosures on account statements. Likewise, SIPC has taken steps to implement the three recommendations directed to SIPC in our 2001 report, but additional work is needed on one. SIPC has updated its brochure and Web site to clarify that investors should complain in writing to their securities firms about suspected unauthorized trades.
SIPC also expanded a statement in its brochure that discusses market risk and SIPC coverage and amended its advertising bylaws to require firms that display an expanded statement about SIPC to include a reference or link to SIPC's Web site. Moreover, SEC, the NASD, and many securities firms provide the recommended disclosures about the scope of SIPC coverage to investors on their Web sites. SIPC also added links to Web sites in its brochure that offer information about investment fraud. However, investors could benefit from more specific links to investor education information. Until this year, certain well-capitalized, large, and regional securities firms were able to purchase and provide excess SIPC coverage from four major insurers. The insurance policies varied by firm and insurer in terms of the amount of coverage offered per customer and in aggregate per firm. Attorneys familiar with the policies agreed that the disclosure of the coverage and the terms of coverage could be improved. During the review, GAO found that three of the four major insurers that offered excess SIPC coverage in 2002 stopped underwriting these policies in 2003. Consequently, as the policies expire in 2003, most insurers are not renewing their existing policies and have stopped underwriting new policies. At this time, holders of the insurance policies have not decided what to do going forward. However, several options are being explored, including self-insuring and purchasing policies from the remaining major insurer.
The Highway Trust Fund is a fund supported by taxes highway users pay on fuel, tires, truck purchases, and the use of heavy vehicles. The revenue from these taxes supports highway construction and maintenance, highway safety, and transit. FHWA administers the Federal-Aid Highway Program and apportions trust fund revenues to state highway departments or transportation authorities, which oversee the construction of the individual projects. Once FHWA notifies a state that a particular highway project has been approved, a state can submit receipts to FHWA after it has incurred expenses. FHWA approves reimbursements to the state for its expenses, usually for 80 percent of a project’s costs; the state and local governments are responsible for the other 20 percent. Federal reimbursements from the Highway Trust Fund have risen from around $15 billion in 1990 to more than $35 billion in 2006, the last year for which data are available. The federal reimbursements that states receive vary by state; however, in general, in fiscal year 2006, the federal government provided about 35 percent of the money that states and local governments spent on highway projects. While the federal government provides most funding from the Highway Trust Fund directly to the states and the states oversee the use of these funds, by statute, states must provide some trust fund revenues to other organizations, such as MPOs, for planning purposes. Federally funded highway projects are typically carried out in four phases: planning, preliminary design and environmental review, final design and right-of-way acquisition, and construction. In the planning phase, state and local highway planners look at transportation alternatives and work with the public to choose projects that make the most sense for their areas. According to FHWA, this phase can take up to 5 years for a major highway project. 
During the preliminary design and environmental review phase, states identify engineering issues, roadway alignment alternatives, transit options, project costs, and other details. In addition, the proposed project and any alternatives are examined for potential impacts on the environment, public health, and welfare. This process can take 1 to 5 years, according to FHWA, depending on the complexity of the design and the environmental issues that must be considered. During the final design and right-of-way acquisition phase, states develop detailed engineering plans consistent with the results of the environmental review phase and acquire the right-of-way needed to construct the project. This phase typically takes from 2 to 3 years for a major new highway construction project, according to FHWA. Finally, during the construction phase, the state evaluates bids from contractors and then oversees the selected contractor’s construction of the project. The construction of a major project typically takes, according to FHWA, 2 to 6 years. See figure 1 for a more detailed description of the types of activities and stakeholders included in the phases of a highway project. The federal, state, and local governments all have a role in the construction of federally financed highway projects. However, the state DOT is the focal point for these activities. It is responsible for setting a state’s transportation goals and for planning safe and efficient transportation between cities and towns in the state. The state DOT also designs most projects, acquires right-of-way for highway construction, and awards contracts to build projects. Local governments also carry out many transportation planning functions, such as scheduling improvements and maintenance for local streets and roads. At the federal level, FHWA is the primary agency involved in transportation project decision making and oversight.
FHWA oversees the transportation planning and project activities of state DOTs by approving state transportation plans and certifying that states have met all legal requirements associated with accepting federal funding. According to FHWA, over 70 requirements may apply to states that accept federal funding for highway projects. Some of these requirements are transportation specific, such as requirements under the DBE program and the Buy America program, while others, such as NEPA, are general requirements that can apply to other construction projects, such as federal building construction. FHWA officials stated that they identify all requirements that states must meet in the documentation FHWA provides to states when funding for a project is approved. States in turn communicate these requirements to potential bidders, so the contractors know, for example, what wages they must pay or whether they must buy American-made iron and steel. The requirements for analyzing the environmental impact of federally funded highway projects originated in NEPA, enacted in 1969. This legislation requires agencies to consider and, if possible, avoid or mitigate potential environmental degradation from federally funded infrastructure projects before these projects move forward. FHWA ensures that federally funded projects go through an environmental review process, as prescribed in NEPA and its implementing regulations. FHWA officials stated that under FHWA’s NEPA implementation process, the lead agency must demonstrate that it will implement the project consistently with several environmental laws. Laws under FHWA’s NEPA “umbrella” include, but are not limited to, the Clean Water Act, protecting water quality and ensuring protection of wetlands; the Clean Air Act, protecting air quality; the Endangered Species Act, protecting threatened and endangered species and their habitats; Section 138, Title 23 of the U.S.
Code, preventing the use of parkland or recreational areas in the development of highway projects, except where no feasible and prudent alternative exists; and the National Historic Preservation Act of 1966, identifying historic properties that may be damaged by the construction of infrastructure projects, and determining ways to avoid, minimize, or mitigate such damage. If no federal funds are used on a project or if a project does not require federal approval, NEPA is generally inapplicable; however, these projects still must comply with all applicable federal environmental laws, which can include the Clean Water and Clean Air Acts. While FHWA is generally the lead agency in ensuring that states comply with NEPA on federally financed highway projects, other federal agencies have responsibilities under these laws. These agencies include EPA (air and water quality, wetlands preservation); the Fish and Wildlife Service (terrestrial threatened and endangered species) within the Department of the Interior; the National Marine Fisheries Service (marine threatened and endangered species, effects on fish and spawning grounds) within the Department of Commerce; USACE (effects on U.S. waters, including wetlands); and ACHP (effects on historic properties). According to FHWA, under the NEPA process, FHWA decides how extensive an environmental review a federally funded highway project will undergo. This decision is based on the size and complexity of the project, as well as the project’s expected environmental impact. For example, FHWA may deem a project that is expected to have no significant environmental impact to be categorically excluded, meaning that the project will not need an environmental assessment (EA) or an environmental impact statement (EIS) to comply with NEPA. A project whose environmental impact is unknown or may be potentially significant will undergo an EA to determine if the impact could be significant and thus require an EIS. 
A project that is expected to have a significant environmental impact will require an EIS, which will determine the particular environmental impacts of the project and include plans for mitigating these impacts. States usually have only a few EISs under way at any one time, since EISs generally are performed for the largest highway projects, which pose significant impacts to the environment. For projects undergoing an EIS, FHWA issues a Record of Decision when the process is complete. The Record of Decision indicates whether a project complies with environmental laws and determines changes to the project for environmental mitigation, such as the creation of additional wetlands to mitigate the loss of wetlands or a change in route to avoid environmental impacts. EPA is responsible for reviewing and commenting on all major federal actions for which an EIS is required and for working with FHWA to ensure compliance with environmental statutes. FHWA has the final approval authority and determines when the EIS is in compliance with applicable environmental laws and other requirements. Outside the environmental arena, states must meet requirements for paying a prevailing wage for construction work when accepting federal highway funding. The Davis-Bacon prevailing wage requirement mandates that workers on all federal-aid highway projects receive at least the local prevailing wage for their work. The law stems from a Depression-era practice of transporting workers from a lower-paying area to bypass local workers who would demand a higher wage. The Davis-Bacon prevailing wage requirement prevented this practice by ensuring that workers on federal projects are paid at least the local prevailing wage. DOL sets the minimum wage that must be paid in each county in the United States for various job categories, such as sheet metal worker or concrete finisher. DOL sets these minimum wage rates based on periodic surveys it conducts of employers in each county.
To show they have paid the prevailing wage to their employees, highway contractors must provide their payroll data to the state DOT and certify that they have complied with the Davis-Bacon prevailing wage requirement. All subcontractors must provide this documentation to the lead contractor on a project, known as the prime contractor, who in turn provides it to the state DOT. The state then reviews the documentation to ensure compliance; if the state discovers noncompliance, the contractor must pay the employees supplemental wages to cover the difference between what was paid and the original agreed-to prevailing wage. If the contractor still does not comply with wage requirements, the state DOT may use contractual remedies, such as withholding progress payments, to ensure compliance. FHWA occasionally spot-checks the documentation to further ensure compliance with the Davis-Bacon prevailing wage requirement. The DBE program requires that states attempt to expend a portion of the funds they receive from U.S. DOT for highways, transit, and other transportation-related contracts to firms owned by members of disadvantaged populations. The intent of this program is to remove barriers to participation in federal contracting and ensure nondiscrimination in awarding federal contracts. Legislation, executive action, and judicial decisions have resulted in modifications to the initial program. U.S. DOT presumes disadvantaged population groups to include African-Americans, Hispanics, Asians, Native Americans, and other minorities found to be disadvantaged by the Small Business Administration. To be eligible to participate in the DBE program, firms must be at least 51 percent owned by a member or members of these groups. Where there is a contract goal on a particular contract (not all U.S. 
DOT-funded contracts must have contract goals), the state tells the prime contractor to subcontract a set percentage of the project’s work to a DBE subcontractor or, if unsuccessful, to demonstrate that a “good faith effort” was made to find a DBE subcontractor. Each state has a process for certifying firms that wish to participate in the DBE program. States use several criteria, established by U.S. DOT, to determine whether a firm can participate in the DBE program, including verifying that the owner of the DBE firm has a personal net worth under $750,000. Under the DBE rules, a DBE firm’s participation counts toward a goal only if the firm performs a “commercially useful function” to ensure that the firms are not hired simply to meet the program’s goals. FHWA works with states to ensure that they meet the program’s goals on highway projects and also periodically audits individual state programs to ensure that the programs are operating within the law. Other U.S. DOT entities, such as the Federal Transit Administration, ensure that the DBE program’s goals are met in other transportation areas. The U.S. DOT Inspector General investigates cases of possible fraud, such as where firms misrepresent themselves as minority-owned. Finally, FHWA’s Buy America program establishes requirements related to purchasing materials. Specifically, the Buy America program requires that federally funded highway projects use steel manufactured in the United States. FHWA officials said the goal of the program is to protect the U.S. steel industry from foreign competition. FHWA has the statutory authority to grant waivers to states when domestic iron or steel is unavailable or when there is another compelling public interest to use imported iron or steel, and FHWA has, through regulation, established a program threshold limiting the program to projects costing over $2,500. 
In addition, under an alternative bid procedure, states may use foreign iron and steel if the lowest total project bid using domestic materials exceeds the lowest total bid using foreign materials by 25 percent. Contractors working on federally funded highway projects must provide documentation and a certification regarding the country in which the iron and steel originated. All manufacturing of the iron and steel must take place in the United States. If any part of the manufacturing occurs outside the United States, the iron or steel is considered foreign. State DOTs spot-check iron and steel, and the appropriate certifications, to ensure compliance. FHWA must approve the procedures that states use to verify compliance and can also perform spot checks. If a state DOT or FHWA finds that foreign iron or steel was used in a highway project, the contractor must remove the offending iron or steel. This can delay the project and add costs, although in these cases, the contractor is responsible for the additional costs to correct the mistake. Many of the 30 studies we reviewed concluded that there are different types of benefits and costs linked to federal requirements for highway projects. However, only a few of these studies attempted to quantify these benefits or costs. For federal environmental requirements, the most visible and measurable benefits are fewer adverse impacts to the environment. The benefits also include improvements in air and water quality and preserving wetlands, among other things. While providing benefits, federal environmental requirements can also increase projects’ overall costs. Studies have quantified some of these costs, such as those for administering NEPA, but have not quantified other types of costs, such as those that occur when projects are delayed for environmental reviews. 
In general, quantitative information on environmental benefits and costs is limited because states have not tracked such information; however, some states are beginning to do so. The information on the benefits and costs of the Davis-Bacon prevailing wage requirement identifies benefits due to creating a level playing field for contractors and ensuring a prevailing wage for skilled workers and costs due to administering the requirement. However, the literature we reviewed is not exclusive to transportation or highway projects. Finally, although none of the studies we reviewed identified benefits of the DBE program, transportation officials identified some benefits of the program, such as providing greater opportunities for minority- and women-owned firms on federally funded projects. The studies we reviewed did identify benefits of the Buy America program, including protection against unfair competition from foreign firms, as well as costs of the DBE and Buy America programs, such as increased administrative costs to states and U.S. DOT due to participation in the DBE program and potentially higher iron and steel costs. However, none of the studies we reviewed separately estimated the costs of the Buy America program's requirements. Despite the potential for bias in studies with economic and political implications, such as those we reviewed, we concluded from our review of the studies' methodologies that the studies were sufficiently reliable for the purposes of our report. As noted, however, we did not independently verify the results of the studies. Several of the studies we identified described the benefits and costs of federal environmental requirements for highway projects. However, the studies generally did not attempt to quantify the benefits and only quantified some types of costs, such as mitigation costs and costs for administering NEPA.
An FHWA benefit-cost study is one of the few we found that attempted to describe the costs and benefits of environmental requirements. For example, it noted that federal environmental requirements, including those associated with NEPA, have benefits that can reduce adverse effects on the human and natural environment. These benefits can include measured improvements in air and water quality and noise pollution levels; the preservation of water supplies and of historic, cultural, park, and natural resources; and increased protection of wetlands. However, the FHWA benefit-cost study indicated that assessing these benefits in economic terms and measuring them in dollars is difficult because the valuation of environmental benefits is highly subjective. The study also indicated that government agencies are not required to track and quantify these benefits and, therefore, generally do not attempt to do so. Other studies we reviewed also found that, while federal environmental requirements produce benefits, these requirements also can cause states to incur costs. In their NEPA documents, state DOTs must include plans for complying with environmental laws, as well as consider mitigating any environmental damage. According to a study FHWA commissioned in 2006, these mitigation efforts—for example, replacing wetlands, building sound walls to insulate surrounding areas from highway noise, or changing the route of a project to avoid environmental damage—can create costs. Some of the studies that we reviewed attempted to quantify mitigation costs. A 2003 study by the Washington DOT evaluated a sample of 14 projects and concluded that mitigation efforts and costs vary from project to project. 
Furthermore, a 2003 study published by the National Cooperative Highway Research Program (NCHRP), an effort sponsored by the American Association of State Highway and Transportation Officials (AASHTO) in cooperation with FHWA, calculated that the environmental review process adds costs to highway projects for environmental mitigation activities and that more in-depth reviews add more costs than less detailed reviews. For example, categorical exclusions on average added 1.1 percent to a project's overall construction cost, EAs on average added 1.4 percent, and projects requiring EISs on average added 2.3 percent. The 2006 FHWA study, which was conducted by TransTech Management, a management consulting company, reached similar conclusions about environmental-related cost increases, including costs to process NEPA documents and mitigation costs. In this study, TransTech consultants conducted case studies of six highway and bridge projects in Maryland, Montana, New Jersey, Oregon, Utah, and Washington. The study concluded that overall environmental costs for these projects—which included replacing bridges and interchanges and widening and upgrading arterial highways from two lanes to four lanes—ranged from 2 to 12 percent of total project costs and accounted, on average, for 8 percent of total project costs. The study attributed some of this cost to requirements for completing NEPA documentation, which involves coordinating with other agencies, performing a detailed review of project alternatives, acquiring permits, and conducting public outreach. In addition, the study identified costs for the construction of stormwater facilities, mitigation of wetland losses, erosion control, and landscaping to mitigate likely harms to the environment from the projects. When a highway project is delayed, inflation and additional administrative and labor expenses increase its costs, and environmental requirements are one of several potential causes of project delays we identified.
A 2003 GAO study reported that according to FHWA, for projects requiring an EIS and for which FHWA approved the EIS in 2001, the environmental review took an average of approximately 5 years to complete. Furthermore, environmental reviews can take up a significant portion of projects’ overall time frames. For example, FHWA’s 2001 baseline report stated that for projects constructed in the last 30 years, environmental review for projects requiring an EIS accounted for an average of 3.6 years, or approximately 28 percent of the overall time for project completion. In addition, a study jointly sponsored by FHWA and AASHTO reported that right-of-way acquisition is a major cause of delay in highway projects, and where relocation is required, it takes an average of 1 to 2 years to purchase a right-of-way after negotiations have begun. Because states generally cannot begin to acquire right-of-way until the NEPA process is complete, the additional time needed for these purchases has the potential to further delay completion of a highway project. The study also cited efforts to accommodate and relocate utilities as another cause of delays during the design and construction phases of highway projects. While several state DOT officials told us that delays can increase the overall cost of a project, none could estimate how much they add to a project’s costs, and the studies we reviewed did not estimate the costs attributable to environmental-related project delays. In general, we found that environmental cost data are not routinely collected. For example, the 2003 NCHRP report found that (1) no complete and consistent data on environmental costs were available at the state level and (2) a majority of states do not track environmental costs separately from overall project costs and no state has an environmental accounting system that tracks these costs. 
Additionally, in its benefit-cost study, FHWA concluded that none of the 32 state DOT environmental officials who responded to a survey in the 2003 NCHRP report had studied or tracked planning, design, and environmental costs related to environmental review activities. According to its benefit-cost study, FHWA is taking steps to strengthen its own environmental cost tracking by conducting a multiphase effort to measure the impact of NEPA and identify trends. As noted above, in 2001, FHWA completed a comprehensive baseline study that assessed the impact of the NEPA process on the total time and costs involved in completing highway projects. Phase one of the study will be used to assess future environmental streamlining efforts, including an ongoing detailed analysis of the time required to complete FHWA's EIS documents. However, for phase two of the study, data limitations, such as a lack of centrally located official completion dates for projects that have gone through the NEPA process, have prevented FHWA from analyzing the costs associated with NEPA compliance efforts. Furthermore, recognizing the need to improve environmental cost estimating methodologies for transportation projects, including highway projects, NCHRP is creating guidelines for developing such improved methodologies. These guidelines are scheduled to be completed in late 2008. Additionally, four states (Montana, Washington, Oregon, and Wisconsin) have begun or plan to begin efforts to quantify the environmental costs associated with transportation project delivery. For example, an Oregon DOT official told us that his department has been tracking annual overall environmental costs for project development since 2000, as required by the Oregon legislature. These costs have consistently averaged 4.5 percent of overall project costs.
Several studies we reviewed attempted to quantify benefits and costs of the Davis-Bacon prevailing wage requirement, but these studies did not provide data exclusive to transportation or highway projects. According to FHWA's benefit-cost study, benefits associated with the Davis-Bacon prevailing wage requirement include (1) creating a level playing field for honest contractors, (2) ensuring that skilled workers are paid wages that prevail in the communities where the work is performed, and (3) minimizing predatory contracting practices that could undercut local contractors. FHWA's benefit-cost study also found that the requirement promotes more training for labor, resulting in more experienced and qualified contractors working on highway projects. In addition, the National Alliance for Fair Contracting, a labor-management organization, and the Construction Labor Research Council, an organization that researches construction labor costs, conducted studies in 1995 and 2004, respectively, which concluded that higher prevailing wages under the Davis-Bacon prevailing wage requirement contributed to higher productivity on federal highway projects. The studies concluded that the cost per mile for highway construction was inversely related to the hourly wage paid to contractors—specifically, that a higher wage rate resulted in a lower highway cost per mile—which could indicate a positive effect of the Davis-Bacon prevailing wage requirement. According to the report, higher wages attracted high-quality, highly skilled labor; enhanced productivity; and possibly offset potential labor cost savings from lower wages (A.J. Thieblot, "A New Evaluation of Impacts of Prevailing Wage Law Repeal," Journal of Labor Research (Spring 1996)). The studies we reviewed also identified benefits of the Buy America program through the continued use and development of certain industries within the U.S. economy, like the iron and steel industries. In terms of costs, a 2001 GAO report indicated that U.S.
DOT, states, and local transportation agencies incur costs in implementing and administering the DBE program. For example, U.S. DOT estimated that it incurred about $6 million in costs, including salaries and training expenses, to administer the DBE program for highway and transit authorities in fiscal year 2000. Sixty-nine percent of the states and transit authorities that responded to GAO’s survey for the 2001 report estimated that they incurred a total of about $44 million in costs to administer the DBE program in fiscal year 2000. For individual state respondents, these administrative costs ranged from a high of $4.5 million to a low of about $10,000. However, U.S. DOT, states, and local transportation agencies had not studied or analyzed other DBE-related program costs. For example, according to the 2001 GAO study, states and transit authorities had said that the DBE program increased project costs, but 99 percent of the states and transportation agencies surveyed for the report had not conducted a study or analysis to quantify whether the DBE program has an impact on their contract costs. We reported that U.S. DOT had also not conducted such an analysis. Finally, none of the studies we reviewed attempted to quantify the costs of Buy America program requirements. One study—FHWA’s benefit-cost study—identified higher iron and steel prices, higher overall project costs, reduced bidding competition, and project delays as the major types of costs that federally funded transportation projects could incur in complying with Buy America program provisions, but the study did not attempt to quantify these costs. According to our survey results, the federal requirements we reviewed are among the factors that influence states’ decisions to use nonfederal or federal funds for highway projects. 
Most state transportation officials we interviewed told us that the federal requirements may encourage them to use nonfederal funds for certain highway projects eligible for federal aid because they may be able to save time and costs, but they also told us that other factors influence their decisions to use nonfederal funds. Conversely, some state officials we interviewed told us they may use federal funds to avoid certain limitations associated with nonfederal funds or to obtain certain benefits associated with using federal funds. In general, the type of funding a state chooses to use—nonfederal or federal—varies and depends on circumstances in the state. Some states, for example, have requirements similar to the federal requirements we reviewed. This may reduce some of the time or cost savings states might otherwise realize by using nonfederal funds. Furthermore, a state's decision to use nonfederal or federal funds is generally influenced by the relative availability of these funds. Most state transportation officials told us that costs and delays associated with the federal requirements we reviewed have, in certain instances, encouraged them to use nonfederal funds for certain highway projects eligible for federal aid; however, other factors, such as a state legislature's requirements and the availability of nonfederal funds, also contribute to a state's decision to use nonfederal funds. More specifically, 39 of the 51 state DOTs we surveyed reported that, in the past 10 years, the federal requirements had, in at least one instance, influenced their decision to use nonfederal funds for highway projects that were eligible for federal aid. A majority (33 states) of these 39 states reported that the NEPA requirement factored into their decision to use nonfederal funds rather than federal funds for highway projects.
Some of the 39 states also reported that the other requirements we reviewed influenced their decision making: 5 states noted that the Davis-Bacon prevailing wage requirement factored into their decision to use nonfederal funds; 2 states noted that the DBE program factored into their decision to use nonfederal funds; and 5 states noted that the Buy America program factored into their decision to use nonfederal funds. See figure 2 for more information on how many states reported using nonfederal funds and the reasons behind these decisions. The survey used for this study is reproduced in appendix II. Some state DOT officials we interviewed stated that by using nonfederal funds instead of federal funds for certain projects, they avoided project delays and costs associated with the federal requirements. Maine DOT officials, for example, told us that if they had used federal funds for several particular state-only funded projects, the projects would have been delayed by one or more construction seasons due primarily to a requirement designed to protect parklands and recreational areas. Instead, Maine DOT used state resources and worked with the State Historic Preservation Officer to expedite critical bridge improvements through an accelerated review process. Maine DOT officials told us that although they cannot finance major EIS projects using only state funds, they are confident that if they used only state funds for these projects, planning studies at the EIS level could be expedited by a year or more without any major changes in the outcome. According to the Maine officials, state legislation outlines the steps necessary in a transportation decision-making process that considers impacts to the human, social, and natural environment as precisely as or more precisely than NEPA, but does not contain the added federal administrative responsibilities.
A few states reported in our survey that the Davis-Bacon prevailing wage requirement and Buy America program also factored into their decision to use nonfederal funds on certain projects. A New Hampshire DOT official we interviewed told us that the Davis-Bacon prevailing wage requirement can slow a project because it imposes payroll processing requirements that create additional administrative responsibilities, particularly for small highway contractors who may not understand what they must do to comply. As a result, the state official told us they use state funds for many small resurfacing projects to reduce the administrative responsibilities for contractors. Similarly, Washington DOT officials we interviewed said that they used nonfederal funds for the Tacoma Narrows Bridge project—which cost nearly $850 million—and saved $30 million to $35 million by purchasing foreign steel instead of domestic steel. Had they used federal funds for the project, they would have had to spend these funds for domestic steel under the Buy America program. Some states have minimized project delays by using nonfederal funds for certain aspects of a project. For example, some states have used nonfederal funds to acquire the right-of-way for a project—the rights to the land over which the highway will pass—so that they could conduct the NEPA review at the same time. Generally, federal funds cannot be used to acquire a right-of-way until FHWA completes the NEPA process. Some state DOT officials told us that because states cannot conduct certain NEPA activities concurrently with other project activities, such as developing an EIS and acquiring right-of-way, projects can face delays. Ohio DOT officials said that there are risks in acquiring right-of-way before the NEPA review has been finalized. For example, after the state acquires the right-of-way, the NEPA document may not be approved or may be significantly modified to require a right-of-way in a different location.
Ohio DOT officials said, however, that deciding on a right-of-way alternative after obtaining sufficient information and involving the public lessens this risk. By using state funds for right-of-way purchases, Ohio DOT officials said that they are able to reduce project costs because they avoid the impact of inflation (which would raise property and construction costs) and complete the project faster. However, these officials had not tracked or quantified the savings resulting from this practice. FHWA officials said that states have the option of acquiring right-of-way with nonfederal funds but that states that do this will not be eligible to have those acquisition costs reimbursed with federal funds. Agreeing with Ohio DOT officials, FHWA officials also said that the state bears the risk in acquiring right-of-way before the NEPA process is completed. In addition to the federal requirements, some state officials noted that other factors play a role in their decisions to use nonfederal funds for some projects. For example, Washington DOT officials informed us that their state legislature passed transportation revenue packages in 2003 and 2005 requiring them to use state funding for selected projects. They had wanted to use federal funding for some of these projects, particularly those that already have federal agency involvement due to environmental issues such as a need for permits to build in a wetland area, but the legislature denied the request. Sometimes, although not generally, a state may use nonfederal funds for projects because it has a significant amount of nonfederal funds available to it. For example, in California, more than 85 percent of funding available for transportation, including highways, originates from nonfederal sources. As a result, California funds many projects with state and local funds.
However, California DOT officials explained that they use state and local funds for these projects—not because of the federal requirements—but because state funds are more available than federal funds. States may face a number of limitations when they use nonfederal funding for highway projects and may use federal funds to avoid these limitations, or they may use federal funds because they can obtain certain benefits by using these funds. Some states told us that one limitation associated with using nonfederal funds for projects is that using these funds and not complying with certain federal requirements can preclude or delay states from obtaining federal funds later if needed. More specifically, if a state uses nonfederal funds for a specific highway project, this project is not required to meet certain federal requirements, such as the federal design standards. Consequently, if state officials need additional funding for the project during its later stages, they may find it difficult to obtain federal funds because federal requirements were not previously met. However, some state officials we interviewed said that they follow or try to follow federal requirements even if they use nonfederal funds for a project because they then have the flexibility to add federal funds to the project at any stage. Furthermore, Ohio DOT officials explained that using state funds for highway projects depletes state funds that could be used to match federal funds for other highway projects or other state priorities. Finally, according to Ohio DOT officials, if nonfederal funds are used on projects, public involvement in projects may be limited or environmental issues may not undergo systematic reviews since these projects do not have to comply with the public and environmental review processes under NEPA. However, as noted below, some states have environmental requirements that are similar to NEPA’s requirements, which could lessen the impact of this limitation. 
Transportation officials in two states told us they often prefer to use federal funds because they can obtain certain benefits associated with using these funds. Washington DOT officials told us, for example, that if they use federal funds for a highway project, FHWA serves as the lead agency under NEPA and is responsible for coordinating the many federal agencies that are responsible for the various federal environmental requirements. However, if they use only nonfederal funds, states still must comply with federal environmental laws (such as those involved with protecting air and water quality) but must coordinate directly with the federal agencies that are responsible for those requirements, and need not go through the NEPA process. In some instances, according to the Washington DOT officials, they preferred to partially fund a state project with federal funds because they have a good working relationship with FHWA. Furthermore, FHWA can be more effective than the state in coordinating environmental issues at the federal level. Also, Massachusetts DOT officials said that federal agencies are more inclined to cooperate with and respond to another federal agency, such as FHWA, than to the state DOT, and such cooperation and responsiveness can contribute to a project’s success. For example, FHWA can obtain Coast Guard permit exemptions that state DOTs cannot, allowing some federally funded projects to proceed faster than comparable nonfederal projects. Some states have requirements similar to the four federal requirements we reviewed, and some state officials told us that they consider the differences between these requirements when deciding whether to fund highway projects with nonfederal or federal funds. Furthermore, having state requirements that are similar to the federal requirements may reduce some of the time or cost savings states might otherwise gain by using nonfederal funds. 
According to the Council on Environmental Quality, a federal agency that oversees NEPA, 16 states and the District of Columbia have environmental planning requirements similar to NEPA requirements. Other states, including New Hampshire and Illinois, have state environmental requirements that address specific environmental issues, such as wetlands protection, but do not have an environmental planning law like NEPA that provides for an environmental review process. FHWA's benefit-cost study noted that the extent to which state environmental requirements overlap with federal requirements varies from state to state. The study also noted that state requirements that parallel NEPA requirements could be more stringent than, less stringent than, or just as stringent as the federal requirements. FHWA officials agreed, noting that, while some environmental processes—such as California's Environmental Quality Act—are fairly stringent like NEPA, other state environmental processes may not be. Furthermore, in some cases, a federal agency can authorize a state to use its own state environmental requirement to meet the federal requirement. For example, in the National Pollutant Discharge Elimination System (NPDES) stormwater permit program, EPA has approved most state NPDES permit programs and allows these approved states to administer permits, in lieu of EPA, to allow discharges into U.S. waters. In addition to state environmental requirements, some states have requirements that are roughly equivalent to the Davis-Bacon prevailing wage, DBE, and Buy America requirements we reviewed: According to FHWA's benefit-cost study, 32 states and the District of Columbia have active prevailing wage laws. State prevailing wage laws may require higher or lower wages than Davis-Bacon prevailing wages.
For example, state DOT officials told us that in certain portions of Utah and Oregon, the federal Davis-Bacon prevailing wage rate is higher than the state prevailing wage rate; however, Maryland officials told us that for many projects, Maryland's prevailing wage rate is higher than the federal Davis-Bacon prevailing wage rate. Furthermore, some contractors said that they pay their employees wages that are higher than the federal Davis-Bacon prevailing wage. Similarly, Hawaii DOT officials said that they, with little or no exception, award their federal-aid highway construction contracts to unionized contractors and that union wages in Hawaii are typically higher than Davis-Bacon prevailing wage rates. Some states have laws to encourage participation from minority-owned enterprises in transportation projects. For example, in Maryland, there are federal and state DBE programs. FHWA officials told us that state DBE programs may or may not mirror the federal DBE program and that state DBE programs vary. For example, some state programs have residency requirements to encourage local businesses, while other states do not. Some states have laws that require the use of domestically made steel and other materials. State requirements that are parallel to Buy America requirements are often noted in a state's standard specifications, which are included in the bid documents provided to highway contractors. For example, West Virginia has a standard specification that requires that projects use aluminum, glass, steel, and iron products that are domestically fabricated. Texas also has a steel preference provision. This provision notes that a contract awarded by Texas DOT that does not use federal aid must contain the same preference for steel and steel products as required by the federal Buy America program. In terms of environmental requirements, some state officials said they consider the differences between state and federal requirements, while other officials may not.
For example, Hawaii DOT officials told us that the differences between the state and federal environmental requirements may influence their funding decisions because the state process is less rigorous, less time-consuming, or both, and, as a result, less costly than the NEPA process. Washington DOT officials noted, however, that a project employing nonfederal funds may not realize time and cost savings because projects that use these funds still have to comply with a number of federal environmental laws that require coordination among and the involvement of federal agencies to, for example, provide permits to impact wetlands. Accordingly, Washington DOT officials may or may not consider the differences between federal and state environmental requirements when deciding whether to use nonfederal or federal funds for a highway project. In considering prevailing wage requirements, states may choose between nonfederal and federal funds for a project depending on whether federal Davis-Bacon prevailing wage rates are higher than the state’s prevailing wage rate. For example, in an interview, Utah DOT officials told us that Davis-Bacon prevailing wages are higher than market wages in portions of Utah. Consequently, Utah DOT tries to fund complete road reconstruction projects—which are labor-intensive—with nonfederal funds so that state dollars can be stretched further. Conversely, it uses federal funds—and, therefore, pays the Davis-Bacon prevailing wage rates—for smaller rehabilitation or preservation projects. This lowers Utah DOT’s overall costs, but Utah DOT officials were unable to quantify savings. Similarly, Oregon DOT officials noted that in some areas of Oregon, the federal Davis-Bacon prevailing wage is higher than the state prevailing wage.
However, the Oregon officials told us that state law requires that—when federal funds are involved—contractors compare the federal Davis-Bacon prevailing wage rates and the state prevailing wage rates and pay the higher of the two. Regardless of whether states decide to use nonfederal or federal funds for their highway projects, their decisions are generally influenced by the relative availability of these funds. Officials from many states told us that their nonfederal funds are more limited than their federal funds. Hence, the extent to which states use nonfederal funds to avoid the federal requirements is limited. Our survey responses indicate that 37 states did not often use nonfederal funds on highway projects to avoid federal requirements. More specifically, these 37 states reported that they used nonfederal funds to avoid the federal requirements less than 50 percent of the time. Officials from one of these 37 states, Hawaii, said that they have limited nonfederal funds available. As a result, the officials said that they do not often use nonfederal funds to avoid federal requirements and that they have to rely on federal funds to finance their highway projects. Similarly, other states we spoke with also rely on federal funds to finance their highway projects. In our interviews, officials from some states that rarely use nonfederal funds to avoid federal requirements told us that if they had more nonfederal funds available, they would use those funds for highway projects more frequently in order to expedite projects. Utah is one state that has a significant amount of nonfederal funds available for highway projects, and it uses these funds to expedite projects. Utah obtains about 75 percent of the funds for its highway program from the state and about 25 percent from the federal government. 
Because Utah has such a high proportion of state funds available, Utah officials reported on our survey that they use nonfederal funds to avoid the federal requirements more than 50 percent of the time, but not always. Officials we spoke with also told us that because Utah has abundant state funding, the state tries to fund its smaller projects with federal funds and its larger, more complex projects with nonfederal funds. Utah officials also noted that using state funds has the benefit of generally reducing the time and cost to complete a project, though they have not quantified or tracked this information. The federal, state, and local government agencies and contractors we interviewed said that they face a number of challenges complying with the federal requirements associated with federal highway projects and that these challenges contributed to increased project costs and delays. The challenges deal with (1) administrative requirements and coordination with multiple government agencies and (2) provisions that state transportation officials and contractors say make it difficult for them to implement the requirements as efficiently as possible. Officials are implementing a number of strategies to address these challenges, including federal-level programs that provide states with guidance and opportunities to participate in streamlining pilot programs, as well as state initiatives to make their compliance processes more efficient. Some state and local transportation officials and contractors stated that the federal requirements we reviewed add to their administrative requirements, such as preparing detailed documentation, which require substantial resources, adding to project costs and delays. They also claimed that coordinating with the multiple stakeholders involved in planning a highway project can be challenging because agencies may have competing interests and lack enforceable time frames. 
Some state and local transportation officials and contractors told us that the amount of documentation they prepare to comply with federal requirements can add to their administrative requirements. For example, state transportation officials we interviewed told us that lawsuits challenging environmental decisions can cause delays and increase costs, in part because they sometimes prepare more documentation to satisfy federal agencies that are taking precautions to avoid lawsuits. FHWA officials told us that documentation requirements are intended to enable time savings later in the highway project process. Additionally, at a September 2008 Transportation Research Board conference, several state and local transportation planners said that federal agencies encourage them to develop multiple alternative project designs that they think will never be selected just to satisfy specific federal agencies and environmental groups and to avoid lawsuits from opponents of the project. According to FHWA guidance, however, the identification, consideration, and analysis of alternatives are important components of the NEPA process and contribute to objective decision making. Furthermore, the guidance states that the consideration of alternatives leads to a solution that satisfies the transportation need, while at the same time protecting environmental and community resources. Separately, according to an AASHTO study, most EIS documents exceed 300 pages and some may even exceed 1,000 pages, even though federal regulations state that this document should normally be no more than 150 pages and those associated with complicated projects no more than 300 pages. 
Idaho DOT officials said that for some projects designated as categorical exclusions (projects expected to have no significant impact), they had to prepare the same amount and level of documentation as for projects requiring more complex EAs. An EA involves a longer and more detailed process than a categorical exclusion because the environmental impact, if any, must be determined. FHWA officials, however, said that they are not aware of any recent changes in documentation trends for categorical exclusions. According to many state transportation officials, redundancy in the requirements also increases the amount of documentation they must prepare. For example, Florida DOT officials told us that when states have requirements similar to the federal requirements, officials frequently must prepare separate documentation for both sets of requirements, raising administrative costs. However, Maryland DOT officials told us that their state and federal requirements are combined into one process that meets both obligations, with each set of requirements supporting the other. As a result, Maryland DOT officials indicated that there did not appear to be project delays or increased project costs due to redundancies. Separately, state DOT officials said that redundancies among the federal requirements themselves can also increase their administrative costs. For instance, officials from two states told us that the documentation required for a section of the National Historic Preservation Act is very similar to the documentation for the requirement aimed at protecting parklands and recreational areas, but the paperwork prepared for one does not always satisfy the other, potentially increasing the states’ administrative responsibilities. According to federal, state, and local transportation officials we spoke with, the Davis-Bacon prevailing wage requirement can also impose administrative responsibilities on states and contractors that can raise costs.
For example, under the Davis-Bacon prevailing wage requirement, contractors must submit all weekly payrolls for all employees, and any requests for new job classifications, to their state DOTs and ultimately to DOL. Contractors that we spoke with submit both by hard copy because they were under the impression that DOL requires a manual signature for payroll certification. As a result, according to officials from some state DOTs, states handle a large amount of related paperwork, which may add to project costs. Texas DOT, for example, estimates that it receives over 4,000 certified payrolls each week from its active contractors and subcontractors and is responsible for reviewing 10 percent of all payrolls submitted for each contract. State and local transportation officials said that electronic submission of weekly payroll statements and certifications would make Davis-Bacon prevailing wage paperwork processing more efficient and more thorough and would decrease administrative responsibilities. Recognizing that online processing would be useful, DOL created a pilot program for selected contracting agencies and contractors to submit Davis-Bacon prevailing wage payroll statements and certifications online, and FHWA encouraged contracting agencies, such as state DOTs, to participate in the program. (See app. III for more information.) Additionally, some state transportation officials told us that Davis-Bacon prevailing wage classifications have not been established for some common highway jobs, which contributes to additional paperwork. These officials and contractors also told us that even though the Davis-Bacon prevailing wage tables are outdated, they must still complete paperwork to comply with the requirement. For example, the heavy construction wage table for Tampa, Florida, does not include wages for two basic bridge construction job classes: concrete finisher and pile driver operator.
Some state transportation agency officials said that if a job classification is not listed on the wage tables, contractors submit requests for a wage determination to DOL for each contract that involves that type of work. State transportation officials said that this requirement increases their paperwork responsibilities, which in turn increase costs because fulfilling these responsibilities requires an extensive amount of staff resources. For example, officials at Florida DOT said that when a new classification is added on a contract, it applies only to that particular contract and that they process hundreds of these requests each year. In general, DOL officials stated that the job classifications are sufficient. Regarding the wage tables, officials from Florida also said that the Davis-Bacon prevailing wage surveys that DOL uses to develop the wage tables are outdated. For example, DOL bases Davis-Bacon prevailing wages for highway construction in some counties in Florida on 1993 wage surveys. As a result of the outdated surveys, Florida DOT officials said that contractors typically pay higher wages than the federal Davis-Bacon prevailing wages to attract and keep employees. The Florida officials said that although contractors pay higher wages than the Davis-Bacon prevailing wage, they still must show compliance with the Davis-Bacon prevailing wage requirement on federal projects. As such, Florida officials stated that the compliance process is an “exercise in paperwork.” Contractors in Idaho agreed with Florida DOT officials, stating that although they pay employees the market rate (which is higher than the Davis-Bacon prevailing wage rate), they still have to adhere to Davis-Bacon prevailing wage paperwork requirements, which are costly and time-consuming to complete and submit. DOL officials stated that the process they use for updating wage tables is appropriate.
These officials also said that they update the wage tables at their discretion, but not on a set schedule, and that they take into account the age of the previous survey, anticipated construction volume in a state, and other factors in deciding when to update a wage table.

Some state DOT officials said that interagency coordination is a challenge in the NEPA process—both in getting all the government agencies to coordinate on a project’s design and in obtaining necessary permits. FHWA, federal agencies with environmental review responsibilities (known as resource agencies), relevant state agencies, and other planning stakeholders participate in and review detailed assessments of environmental impacts, in accordance with their responsibilities under federal or state laws. Florida DOT officials noted that they may coordinate with as many as 23 different entities in planning, reviewing, and constructing highway projects. The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) amended the law to require transportation agencies to engage government agencies and other planning stakeholders to collaborate during initial project planning and throughout the NEPA process. However, numerous federal, state, and local transportation officials said that it is challenging to coordinate these government agencies and planning stakeholders because these entities (1) have limited funding and staff, (2) have responsibilities and priorities beyond transportation projects, and (3) may have competing interests and missions that can be difficult to resolve. Our previous report on highways and the environment found similar challenges. More specifically, that report found that resource agency officials viewed their core regulatory duties as their main responsibility and that resource constraints, according to these officials, hampered the resource agencies’ ability to take on extra responsibilities.
These constraints may limit the agencies’ ability to fully participate in the early stages of environmental reviews. Furthermore, competing interests and missions can lengthen a project’s time frame. For example, Florida DOT officials said that on a historic bridge project, the Coast Guard wanted to build a new bridge for navigational purposes, but other federal and state agencies that were responsible for historic bridges wanted to preserve the historic integrity of the bridge by rehabilitating it rather than constructing a new one. The disagreement between the two parties delayed development of the EIS, which took about 5 years to complete. Several state transportation officials and FHWA officials told us that while they collaborate with each other and the resource agencies to set deadlines once they have identified the agencies that need to be involved in the project, approval and permitting agencies routinely miss those deadlines, often delaying projects. For example, a project must receive a permit from USACE if the project involves the discharge of dredged or fill material into water. Several state DOT officials told us that this permitting process can be particularly time-consuming. One Idaho transportation official told us that for a bridge project in Idaho, Three Cities Rivers Crossing, it took an additional 1.5 years to review the EIS, partly because USACE missed its deadline for issuing comments. FHWA officials said that there is no consequence to resource agencies, or relief to transportation agencies, if the resource agencies fail to meet a deadline. USACE officials said that requests for highway project reviews are evaluated in a timely manner, given that USACE has many applicants requesting authorization to impact U.S. waters, including other state and federal agencies and the general public.
According to some state transportation officials and contractors, certain provisions within the federal requirements we reviewed appear to be outdated, narrowly defined, or unclearly defined, making the requirements difficult to implement and potentially increasing project costs and delays. In general, FHWA and other federal government officials did not agree with the state officials’ assessment that the provisions are outdated, narrowly defined, or unclearly defined. Several state DOT officials told us that, in their opinion, the $2,500 regulatory cost threshold for compliance with the Buy America program and the $750,000 regulatory personal net worth ceiling for the DBE program were outdated. FHWA established the Buy America threshold to avoid burdening states with administrative responsibilities for small projects but has not revised the threshold since 1983. FHWA officials said that they have not revised the threshold because limited staff resources and other potential statutory program changes have delayed scheduled revisions. State DOT officials said that the cost of steel for most projects, even small ones, exceeds this threshold, given recent increases in steel prices. As a result, states may not obtain the administrative relief the law intended for small projects. FHWA officials agreed that the threshold should be re-evaluated or updated, and officials at one state DOT suggested that the threshold be adjusted for inflation. Additionally, according to some state transportation officials we met with, the DBE program’s $750,000 ceiling on personal net worth is outdated. According to the state DOT officials, the ceiling does not meet current economic standards and has not kept up with inflation. U.S. DOT established this ceiling in 1999 to ensure that wealthy individuals are not allowed to participate in the program. U.S.
DOT established the $750,000 limit based on what they believed to be a well-established and effective part of the Small Business Administration’s (SBA) assistance programs for small disadvantaged businesses and because the $750,000 figure provided for a reasonable middle ground in view of the wide range of suggestions calling for higher or lower ceilings. However, U.S. DOT officials said that they have not revised this ceiling since 1999 because SBA has not adjusted the thresholds for its SBA programs. Furthermore, according to a U.S. DOT official, since courts look closely at whether the DBE program is “over-inclusive” (i.e., serving people that it is not intended for), the ceiling has become important to the constitutional defense of the program as several federal court decisions have cited the existence of the ceiling as one of the factors leading them to uphold the program’s constitutionality. California transportation officials said that one challenge with an outdated personal net worth ceiling for the DBE program is that the low ceiling makes it difficult to recruit new DBEs for certification and retain them in the DBE program. U.S. DOT reviewed the ceiling in 2005 when they reviewed the DBE airport concessions rule. At that time, the U.S. DOT concluded that the $750,000 cap was appropriate, as it ensured that wealthy individuals did not participate in the program. FHWA officials agreed, however, that the personal net worth ceiling should be adjusted for inflation. According to officials at some state government agencies and contractors, the Buy America program’s definition of foreign steel may be too narrowly defined, which they say has caused delays or has increased project costs. More specifically, FHWA regulations for the Buy America program state that all manufacturing processes that modify a product’s physical size, shape, or chemical content must occur in the United States. 
For example, if steel materials are sent to a foreign country to be rolled, or if a piece of machinery includes one small component of foreign steel, that product is considered to be foreign made and is not in compliance with Buy America. Florida DOT officials said one challenge with this definition is that it is difficult for them to find domestic manufacturers of mechanical systems for certain movable bridges. Florida DOT officials and contractors told us that the time they spend searching or waiting for domestic materials to be produced adds to project delays. State DOT officials also said that the Buy America provision can cause construction delays if it is discovered after construction begins that the requirement is not being met. Such delays generally result from the domestic product not being available in sufficient quantities to meet project schedules or not being regularly produced. Furthermore, Florida DOT officials also told us that movable bridges have many components that require some level of work in a foreign manufacturing shop, which renders the entire component foreign even though the majority of it was domestically produced. In such cases, a waiver can be requested from FHWA. However, FHWA officials said that domestic suppliers are found for the majority of waiver applications. If FHWA does not grant a waiver, the design of the project must be revised or the foreign components replaced with domestic components. Lastly, some state transportation agency officials also said that the waiver provisions in the Buy America program are not clearly defined and, as a result, the waiver process may be inconsistently interpreted or applied at the federal level. Some state transportation agency officials told us that they often do not apply for Buy America waivers because the process lacks defined criteria and has led to inconsistent FHWA approvals.
According to state transportation officials, waivers could help state transportation agencies reduce project costs by using potentially less expensive foreign steel. FHWA recently started posting notice of waiver requests on its Web site for public comment for a 15-day period and also published notices of findings on waiver requests in the Federal Register. These notices include more detailed justification for approving the waiver. Officials from the American Iron and Steel Institute, an industry trade association, told us they think these changes will result in more transparent approvals; however, FHWA officials said these new notification processes will add more time to projects because additional time is needed to receive and respond to public comments, especially when there are potential domestic manufacturers of products that oppose the waiver. In addition, FHWA officials said that the process of publishing a notice of findings in the Federal Register requires additional time and could delay a project if a waiver is requested after construction has already begun. Congress and federal and state government agencies have developed strategies to address many of the challenges federal and state transportation agencies and contractors face in completing highway projects and complying with federal requirements. According to various agency officials and highway contractors, some of these initiatives are resulting in decreased project costs and delays, though they could not quantify the cost savings or delay reductions. Specifically, Congress has attempted to improve project delivery time frames. As we have previously reported, with SAFETEA-LU, Congress made a number of changes to the environmental review processes required of state and local transportation agencies. For example, SAFETEA-LU Section 6004 amended title 23 of the U.S. 
Code to allow state DOTs to assume FHWA’s responsibility for determining whether certain highway projects can receive categorical exclusions, in accordance with criteria to be established by FHWA. If a state assumes this responsibility, FHWA would no longer approve categorical exclusions and would instead serve in more of a monitoring role. This change made by SAFETEA-LU was intended to facilitate more efficient reviews of transportation projects, expediting completion without diminishing environmental protections. Additionally, in 2002, the President issued an executive order for expedited environmental reviews. This executive order directs executive departments and agencies to accelerate their environmental reviews for permits and approvals for transportation infrastructure projects designated by the Secretary of Transportation to be “high priority.” FHWA and state transportation agency officials said that the executive order has helped expedite the NEPA process. Separately, FHWA has taken initiatives to provide guidance and opportunities to better streamline compliance with the federal requirements. For example, FHWA has developed a database—the State Environmental Streamlining and Stewardship Practices Database—that provides opportunities for states to share examples of streamlining and stewardship practices. This database is available to all state DOTs through FHWA’s Web site. EPA is also using electronic and online processes to streamline its work. For example, EPA uses systems that allow stormwater permittees to electronically file permitting information, which reduces the time EPA needs to receive and process this information. Separately, in 2003, DOL created, and FHWA is facilitating, a pilot program for selected state DOTs to test software that provides a Web-based format for the submission of Davis-Bacon prevailing wage payroll statements and weekly contractor certifications.
The software was designed to eliminate the paperwork burden associated with labor compliance requirements for contractors and state DOTs. Other federal agencies, together with industry associations, have also offered guidance and training to state and local transportation officials and contractors to help them build better practices to streamline compliance activities. According to some state transportation officials, some of these federal efforts have helped states reduce project costs and delays. FHWA has recognized that project delays impede transportation system improvements and that streamlining environmental reviews and documentation is essential to mitigate the delays and implement highway projects more quickly and cost-effectively. Accordingly, FHWA has developed a performance measure—known as the Vital Few Environmental Streamlining and Stewardship Goal (Environment VFG)— to track the time it takes for projects to go through EAs and EISs, so that FHWA can improve the timeliness of environmental review processes, and ultimately, reduce project delays. Furthermore, by tracking time frames for environmental reviews, FHWA should be able to develop a better understanding of the key impediments to, or shortcomings in, the environmental review process, and address congressional, state, and other concerns about the process. In fiscal years 2007 and 2008, the goal of the Environment VFG was to decrease the median time to complete EAs and EISs to 12 and 36 months, respectively. In developing these goals, FHWA advised state DOTs to establish deadlines, through negotiation with FHWA division offices and resource agencies, and track data to measure success through FHWA’s Environmental Document Tracking System (EDTS). Despite this framework, FHWA has not met its goals for the Environment VFG performance measure. 
As figure 3 illustrates, since fiscal year 2004, the median time for completing EISs has increased by almost 26 percent, while FHWA’s goal for completing EISs has decreased. Furthermore, in fiscal year 2007, the median time to complete EISs reached 68 months—almost 89 percent above FHWA’s goal of 36 months. The median time to complete EAs in the same fiscal year was about 67 percent greater than FHWA’s goal of 12 months. FHWA officials told us that progress in meeting their goals has been slow because delays arise from federal and state governments’ need to address issues that emerge during project development, such as those mentioned in this report. Some state DOT officials also said that environmental issues discovered during the environmental review, or changes in environmental rules established by EPA or other federal agencies, also contribute to delays. Furthermore, according to FHWA, the federal environmental review process, as well as state and local impediments such as funding and local controversy, can cause project delays. Additionally, as discussed earlier, FHWA officials noted that there are no legal consequences for missing deadlines. Nonetheless, to improve the time frames, FHWA has analyzed the reasons why environmental time frames have not been met and is attempting to shorten them by improving the environmental review process, as required by section 139 of title 23, U.S. Code, as modified by SAFETEA-LU Section 6002, and by developing additional streamlining initiatives.

In addition to the federal government, several state transportation agencies are implementing strategies to expedite compliance with the federal requirements we reviewed. These initiatives include streamlining agreements, called programmatic agreements, that state DOTs have reached with federal government agencies responsible for environmental approvals and permits.
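The Environment VFG figures above can be reproduced with simple arithmetic. The sketch below is purely illustrative and uses only the numbers cited in this report (a 36-month EIS goal, a 12-month EA goal, and a reported fiscal year 2007 EIS median of 68 months):

```python
# Percent by which reported median review times exceed FHWA's
# Environment VFG goals, using only figures cited in this report.

def pct_over_goal(median_months: float, goal_months: float) -> float:
    """Return how far the median exceeds the goal, as a percentage."""
    return (median_months - goal_months) / goal_months * 100

# EIS: fiscal year 2007 median of 68 months vs. a 36-month goal.
print(f"EIS median exceeds goal by {pct_over_goal(68, 36):.0f}%")  # ~89%

# EA: the report states the median was about 67 percent above the
# 12-month goal, which implies a median of roughly 20 months.
print(f"Implied EA median: about {12 * 1.67:.0f} months")  # ~20
```

Because the report gives the EA figure only as a percentage above the goal, the roughly 20-month EA median is an inference from that percentage rather than a number stated in the report.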
For example, Texas DOT has a programmatic agreement with FHWA, the Texas State Historic Preservation Officer, and ACHP to ensure that compliance with the National Historic Preservation Act is streamlined. Under this agreement, Texas DOT acts as FHWA’s agent to carry out its responsibilities under the National Historic Preservation Act, allowing the state to make findings and determinations on whether there is an adverse effect to historic properties and to complete the consultation requirements required by the act. Some state transportation officials told us that they can save time by entering into agreements with FHWA and resource agencies to spell out broad categories of projects that can be advanced under preagreed conditions, with little or no need for individualized review. Separately, to help resolve staffing shortages at resource agencies, some state DOTs fund positions for additional staff at federal and state agencies to perform environmental review activities, including approval and permitting actions for transportation projects. As we have previously reported, while some states approve of the practice of funding positions at federal and state resource agencies for environmental reviews, other states believe the resource agencies should fund their own activities. USACE officials said that it is helpful to them to have stable positions at their office, funded by a state, to focus specifically on transportation issues and permitting because such a strategy helps the permitting process move more quickly and consistently. Finally, some state DOTs have developed ways to streamline the processing of the federal requirements. For example, Florida DOT officials developed the Efficient Transportation Decision Making process to address challenges they were facing in coordinating resource agencies during the NEPA process. This process seeks input from the resource agencies through an online interactive database for major projects throughout the NEPA process. 
According to a Florida DOT review of the Efficient Transportation Decision Making process, the process has yielded improved decision making and improved interagency relationships, among other benefits. See appendix III for more information on the initiatives mentioned in this section. As the demand for highway capacity has increased and as project costs have risen, the demand for nonfederal and federal highway funds has grown, making it essential that states and localities use these funds as efficiently as possible. The four federal requirements we reviewed have important economic and environmental benefits, but the steps involved in compliance may add time and costs to projects. Federal and state strategies have helped to address some of the challenges involved in compliance. However, quantitative information is limited. For example, we found little information quantifying the benefits, delays, and costs of the requirements we reviewed, though some states are beginning to track environmental costs incurred during highway projects. Without quantitative information, agencies cannot compare costs and benefits or assess the impact of their actions on project time and costs. With state and local governments constructing and expanding roads at a time when transportation dollars are limited, it is critical that states use federal dollars efficiently to finance their highway projects. In addition, some outdated provisions in the federal requirements we reviewed can limit states’ ability to spend transportation dollars as effectively as possible. The $2,500 regulatory threshold for the Buy America requirement no longer serves its original purpose of exempting states from the administrative burden associated with this requirement for small projects. This administrative burden may increase the costs of small projects, and it reduces the resources available for other projects. 
Finally, the $750,000 regulatory personal net worth ceiling of the DBE program has not changed since 1999, and according to state transportation officials, increasing this threshold could facilitate the hiring of minority- and women-owned firms. To address the challenges associated with the federal requirements we reviewed, to better ensure that federal funds are used as efficiently as possible, and to assist states in minimizing project delays and costs associated with federal requirements, we recommend that the Secretary of Transportation re-evaluate the $2,500 regulatory threshold for the Buy America program and the $750,000 regulatory personal net worth ceiling of the DBE program, and modify them, if necessary, through appropriate rulemaking. We provided a draft of this report to USACE, ACHP, DOL, U.S. DOT, and EPA for their official review and comment. USACE, ACHP, U.S. DOT, and EPA provided technical comments, which we incorporated into the final report where appropriate. U.S. DOT took no position on our recommendation regarding the Buy America program threshold and DBE personal net worth ceiling. DOL officials notified us that they had no comments on this report. We are sending copies of this report to interested congressional committees, the Secretaries of Transportation and Labor, the Administrator of EPA, the Chief of Engineers at USACE, and the Executive Director of ACHP. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV. 
The objectives of this report were to review (1) the types of benefits and costs associated with selected federal requirements for federal-aid highway projects; (2) the influence of these federal requirements on states’ decisions to use nonfederal or federal funds for highway projects; and (3) the challenges associated with the federal requirements and strategies that federal, state, and local government agencies and contractors have used or proposed to address these challenges. Although many requirements apply to federally funded highway projects, our review focused on four federal requirements: the National Environmental Policy Act (NEPA), the Davis-Bacon prevailing wage requirement, the Disadvantaged Business Enterprises (DBE) program, and the Buy America program. We selected these four requirements for our review on the basis of (1) initial interviews with officials in the headquarters offices of the Federal Highway Administration (FHWA), the U.S. Army Corps of Engineers (USACE), the Environmental Protection Agency (EPA), the Department of Labor’s Wage and Hour Division, and the Advisory Council on Historic Preservation’s Office of Federal Agency Programs; and (2) interviews with experts at industry associations, including the National Conference of State Legislatures, the American Highway Users Alliance, the American Association of State Highway and Transportation Officials (AASHTO), the Associated General Contractors of America, the American Road and Transportation Builders Association, and the American Iron and Steel Institute. Furthermore, rather than focusing our review on broader requirements associated with transportation planning, such as requirements for developing a transportation improvement program, we focused our review on project-specific requirements. To identify the types of costs and benefits associated with these requirements for federal-aid highway projects, we reviewed published research and studies. 
We identified 30 relevant studies by searching bibliographic databases, using as our criteria studies or reports that identified benefits, costs, challenges, and strategies used to address the challenges of complying with the federal requirements. After identifying the studies, we reviewed each one to determine its relevance and applicability to our objectives. The studies we reviewed included reports on highway requirements issued by the Congressional Research Service and the Congressional Budget Office, as well as studies issued by state departments of transportation (DOT), AASHTO, and the National Cooperative Highway Research Program. We also reviewed GAO reports that addressed agencies’ tracking of costs and benefits of certain federal regulations. Finally, we reviewed an FHWA report entitled The Costs of Complying with Federal-aid Highway Regulations. For each of the studies we identify in this report, we reviewed its methodology, including the study’s datasets, sample size, and data collection techniques, and concluded that the methodology was sufficiently reliable for the purposes of our report; however, we did not independently verify the results of these studies. To determine the influence of these federal requirements on states’ decisions to use nonfederal or federal funds for highway projects, we surveyed state DOT officials in all 50 states and the District of Columbia. In the survey, which appears in appendix II, we asked the officials how the selected federal requirements factored into their funding decisions for highway projects eligible for federal aid. After we drafted the survey, we pretested it in one state to ensure that the questions were clear and unambiguous, the terminology was used correctly, the survey did not place an undue burden on agency officials, the information could be feasibly obtained, and the survey was comprehensive and unbiased. We found the results of the pretest sufficient to administer the survey to the other states. 
To administer the survey, we obtained from FHWA the appropriate points of contact for transportation officials at each state DOT. Beginning on March 31, 2008, we e-mailed the survey to these transportation officials. We received a survey response from every state DOT, thereby achieving a 100 percent response rate. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey, such as problems in interpreting a response to a particular question or entering data into a spreadsheet, may introduce nonsampling errors. To minimize such errors, we pretested the survey, as noted, and verified the accuracy of the data keyed into our data collection tool by comparing the data with the corresponding survey. The survey used for this study is reproduced in appendix II. To supplement the survey, and to give respondents an opportunity to elaborate on their survey responses, we selected 10 states for follow-up telephone interviews. In determining which states to select for interviews, we excluded the 5 states we used as case studies—California, Florida, Idaho, Maryland, and Texas—and chose our sample from the remaining states. We also based our selection of these 10 states on their responses to the survey, their funding levels, and geographic dispersion. The 10 states we selected for follow-up interviews with DOT officials were Hawaii, Illinois, Maine, Massachusetts, New Hampshire, Ohio, Oregon, Utah, Virginia, and Washington. To identify the challenges associated with the federal requirements and strategies that various highway project stakeholders have used or have proposed to address these challenges, we visited and interviewed officials in California, Idaho, Maryland, and Texas, and interviewed officials in Florida by telephone. To select these states, we considered a number of factors. 
We identified a nongeneralizable sample based on whether a state (1) participated in the Surface Transportation Project Delivery Pilot Program, which allowed states to assume NEPA review authority, or (2) had projects designated for streamlined environmental review, pursuant to Executive Order No. 13274. In addition, we interviewed officials from federal agencies and representatives from industry associations such as AASHTO. These agency officials and industry association representatives identified states that had initiated notable streamlined transportation planning and project development processes. Finally, we included in our sample states that had received varying levels of federal funding. At the five states in our sample, we interviewed officials from FHWA division offices; other federal organizations, such as USACE and EPA division offices; state and local transportation offices; and metropolitan planning organizations, as well as private industry contractors and consultants who worked on federally funded highway projects. To understand the strategies used to address challenges, we reviewed public and private sector research, studies, agreements, and proposals on methods and programs to streamline strategies at the federal, state, and local levels. We conducted this performance audit from October 2007 through November 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Federal and state agencies have implemented or proposed the following strategies to address the challenges associated with federal requirements for highway projects. 
The 2002 executive order authorizes the Secretary of Transportation to designate infrastructure projects for expedited environmental reviews. Since these reviews were authorized, 19 projects have been selected for expedited review, including 15 highway or bridge projects. Three of our case study states—California, Texas, and Maryland—have had projects designated for expedited reviews and placed on the order’s project list. State officials in California, Maryland, and Texas reported mixed results on the effectiveness of the executive order in expediting environmental reviews. Maryland transportation officials told us that placing their Inter-County Connector (ICC) project on the executive order’s project list helped move the project forward. Previously, during the 1980s and 1990s, the project was stalled by high levels of controversy over environmental issues, lack of support for the project from state government leaders, and difficulty in getting stakeholders to collaborate on the project. Putting the ICC on the project list in 2003 enabled Maryland DOT to build on renewed support for the project from state government leaders by formalizing a collaborative process among stakeholders that sped up the project’s delivery. This collaborative process involved the creation of an interagency workgroup through which staff-level stakeholders resolved disagreements over environmental issues before the issues were elevated to higher-level government agency authorities. Once placed on the executive order project list, the ICC moved from the planning stage to a final Record of Decision on environmental issues in 3 years, receiving that decision in 2006. By contrast, California DOT officials said that the executive order raised agency awareness of the projects placed on the list in their state.

The task force, which was formed in 2002, comprises representatives from a variety of federal agencies, such as the Environmental Protection Agency (EPA) and the U.S. Forest Service. 
This task force reviews current National Environmental Policy Act (NEPA) implementing practices and procedures and recommends improvements to make NEPA more effective, efficient, and timely. The task force developed several products, including a handbook, Collaboration in NEPA – A Handbook for NEPA Practitioners, published in October 2007, to improve the NEPA process through collaboration. State DOT officials were familiar with this task force; however, we heard varied responses from these officials on whether the products produced by the task force helped streamline environmental review processes.

SAFETEA-LU Section 6002 (23 U.S.C. §139) established a new process to promote efficient project management by federal agencies and enhanced opportunities for coordination with the public and other agencies. Several changes were made to the environmental review process, including a new requirement for a coordination plan for public and agency participation. We previously reported that changes in the review process can result in better project decisions; however, some state transportation officials told us that the process may not necessarily be more efficient, since the extra steps required to comply with the provision add time to the environmental review. Section 6002 also changed the law to establish a 180-day limit on lawsuits challenging final federal agency environmental decisions—such as the approval of an environmental impact statement—on highway projects, when notices of those decisions are published in the Federal Register. We previously reported state transportation officials’ opinions that this could lead to cost savings because it limits lawsuits to a period when it would not cost as much to change project plans and, after this period, work can proceed on a project without the risk of a lawsuit.

SAFETEA-LU Section 6004 (23 U.S.C. 
§326) changed the law to allow state DOTs to assume responsibility for determining whether certain highway projects can receive categorical exclusions, in accordance with criteria to be established through a Memorandum of Understanding between the U.S. Department of Transportation’s Federal Highway Administration (FHWA) and the state. If a state assumes this responsibility, FHWA does not approve categorical exclusions but does monitor whether the state is adequately applying FHWA’s criteria. A state can assume this responsibility after waiving its sovereign immunity. To date, only two DOTs, California DOT and Utah DOT, have assumed this authority, and only Alaska DOT is seeking it. Some state transportation officials we spoke with told us that they did not pursue this approval authority because they already have agreements in place with FHWA that streamline approvals for categorical exclusions. State officials also identified the requirement to waive sovereign immunity as an obstacle to their taking advantage of the categorical exclusions approval authority. California DOT reported to FHWA that, as a result of Section 6004, it saved a median of about 28 days and a mean of 7 days for categorical exclusion determinations statewide due to administrative efficiencies and time savings associated with consultations and coordination with federal resource agencies. FHWA officials said that, in general, agreements between state DOTs and FHWA division offices are valuable and can entail shorter or defined time frames for reviews and responses. However, we previously reported that, according to one resource agency, the state assumption of responsibility for categorical exclusion reviews could decrease the input from resource agencies in addressing environmental issues. Although overall interest in Section 6004 is limited, states may be experiencing similar time savings with their own streamlining agreements with FHWA.

SAFETEA-LU Section 6005 (23 U.S.C. 
§327) established a pilot program that gave five states—Alaska, California, Ohio, Oklahoma, and Texas—the opportunity to assume FHWA’s environmental responsibilities for highway projects under NEPA and other federal environmental laws, after waiving sovereign immunity and entering into a Memorandum of Understanding with FHWA. FHWA continues to provide oversight. This program is designed to provide information on whether delegating these responsibilities to the state will result in more efficient environmental reviews, while meeting all federal requirements for these reviews. California, whose state DOT assumed this responsibility in July 2007, is the only state participating in the program, although Alaska has expressed interest to FHWA in applying in the future. The other three states declined the opportunity for various reasons, including the restriction on using state funds to acquire right-of-way for highway projects prior to the NEPA decision and the inability of the states to obtain approval from their legislatures to waive sovereign immunity, which is required for the program. Furthermore, as we previously reported, states are concerned about the amount of work required to set up such a program and want to see how the program works in California. California DOT officials told us that the time to conduct environmental reviews has decreased for the projects that have undergone the NEPA process since California assumed this authority. They told us that the median review and approval times for draft environmental documents and final environmental documents were shorter for pilot program projects compared to prepilot program projects. For example, they said it took prepilot program projects a median time of 6.1 months to complete draft environmental documents, while it took pilot program projects a median of 1.6 months, a savings of 4.5 months. 
Additionally, they said it took a median time of 2.0 months to complete final environmental documents for prepilot program projects and 0.8 months for pilot program projects, a savings of 1.2 months. Part of the time savings, they say, occurs because FHWA has reduced its review. FHWA published its first audit on September 23, 2008. The audit reviewed the fundamental processes and procedures the state put in place to carry out its assumed roles and responsibilities, but it did not report on the program’s impact on environmental review time frames because it was the first audit FHWA had conducted. Overall, FHWA found that the California DOT has made reasonable progress in implementing the startup phase of pilot program operations and is learning how to operate the program effectively. FHWA’s second audit was held in July 2008, and the results are forthcoming in the Federal Register. In this audit, FHWA examined performance measures. Specifically, FHWA examined changes in the time states spent on completing environmental documents.

SAFETEA-LU Strategic Highway Research Program 2 (SHRP 2)

SAFETEA-LU authorized funding for SHRP 2, a research program that FHWA, AASHTO, and the Transportation Research Board jointly conduct to obtain information on highway safety, renewal, reliability, and capacity. Some of the research focuses on approaches and tools for systematically integrating environmental considerations into project analysis and planning. One project involves developing a collaborative decision-making framework for transportation planners to use to enhance collaboration from project planning through project development. The study includes 25 case studies of the challenges faced by state and local transportation officials when trying to manage multiple stakeholders. The report is currently in draft.

Refocus. Reform. Renew.: A New Transportation Approach for America (Proposed)

To better streamline federal requirements, in 2008, U.S. 
DOT proposed to (1) allow state requirements to satisfy “substantially similar” federal requirements, (2) exempt projects with less than 10 percent federal funding from federal requirements, and (3) pilot a project for some states to opt out of federal requirements under titles 23 and 49 (except the Davis-Bacon prevailing wage requirement) in exchange for a reduction in the percentage of federal funding they would otherwise receive. The proposal also includes several specific reforms to the NEPA process: clarifying what constitutes a reasonable alternative, reducing FHWA documentation requirements by allowing the final environmental impact statement to be combined with the Record of Decision into one document to simplify the process, and broadening categorical exclusion assignment authority to states. This proposal has not been finalized.

This database contains information on streamlining and stewardship practices used by states as ways to efficiently and effectively fulfill their NEPA obligations. FHWA officials said that they regularly update the database with state-nominated practices, all of which are available to the public on the Internet. For example, for Maryland, FHWA has 30 practices listed, including a workshop to address working relationships between participating agencies in environmental reviews, copies of various programmatic agreements (described later in this appendix), and templates for evaluating categorical exclusions. FHWA officials said that this database provides states with examples and enables states to share practices for streamlining the environmental review process. Maryland DOT officials said that the database helps them acquire information more efficiently and expands their thinking in the development of their environmental streamlining agreements—which can ultimately reduce project costs and delays. 
Officials at California DOT said, however, that it is difficult for them to implement solutions from another state that does not have to comply with as many state environmental laws and regulations as California. Other states say that the database has not helped them decrease project costs or delays.

To reduce the paperwork burdens described earlier in this report, in 2003 the Department of Labor (DOL) created, and FHWA is facilitating, a pilot program for selected state DOTs to submit Davis-Bacon prevailing wage weekly payroll statements and contractor certifications on the Internet. Software automatically downloads information from payroll processors and performs diagnostics (including issuing an alert if an employee rate is incorrect). Officials from the two participating state DOTs that we contacted—Arizona DOT and Wisconsin DOT—told us that the new process provided automatic electronic approval of payrolls and eliminated the need for staff to manually review payrolls. An Arizona DOT official told us that it reduced the amount of paperwork and repetitive steps and created consistency in payroll submission across contractors. Both state DOT officials told us that they have received positive feedback from some contractors. However, the Wisconsin DOT official told us that large contractors had challenges formatting their payroll systems to input data into the software and that small contractors had problems if they did not have access to computers. Other challenges both states mentioned focused on programming the software and educating contractors. Neither state DOT has tracked cost savings.

In consultation with FHWA, AASHTO developed the Center for Environmental Excellence to help streamline environmental review and transportation delivery processes and encourage environmental stewardship. It provides transportation professionals with guidance, training, and access to environmental tools, among other types of information. 
The center also provides practitioners with technical assistance, including on-call help. State DOTs have taken the lead to enter into these formal agreements with FHWA and federal and state agencies to establish actions and processes for streamlining compliance with environmental regulations. The agreements often identify categories of projects that can be advanced under preagreed conditions, with little or no need for individualized review by those agencies. Agreements address a number of issues, such as compliance with the National Historic Preservation Act, NEPA, the Endangered Species Act, and other requirements. For example, the Texas DOT entered into an agreement with FHWA and the State Historic Preservation Office (SHPO) in Texas. Under this agreement, Texas DOT acts as FHWA’s agent to carry out responsibilities under the National Historic Preservation Act, allowing the state to make findings and determinations on whether there is an adverse effect to historic properties and to complete the consultation requirements required by the act. According to FHWA officials, programmatic agreements have helped streamline the environmental review process. Officials at the Advisory Council on Historic Preservation also indicated that programmatic agreements have improved project delivery time frames. To help resolve staffing shortages at resource agencies, state DOTs began in the early 1990s to fund positions for additional staff at federal and state agencies to perform environmental review activities, including approval and permitting actions for transportation projects. Previous transportation legislation, the Transportation Equity Act for the 21st Century, gave states the option of using a portion of their federal-aid highway funds to pay for the positions to conduct environmental reviews and expedite NEPA activities. State DOTs must obtain the approval of FHWA division offices for such uses of these funds. 
According to a 2005 AASHTO survey, 68 percent of state DOTs (34 states) fund positions. Two-thirds of these positions were at state agencies and the remainder were at federal agencies. SAFETEA-LU amended the law to allow states to pay for positions whose activities extended beyond NEPA, including planning activities that precede NEPA. As we have previously reported, states have mixed views on using state funds for positions at federal agencies. While some states fund positions at the federal agencies for environmental reviews, other states believe the federal agencies should fund their own activities. USACE officials told us that it is helpful to them to have stable positions at their office—funded by a state—to focus specifically on transportation issues and permitting.

Efficient Transportation Decision Making (ETDM)

ETDM, established in 2004, provides an interactive online database for government agencies to provide and review environmental and other information on a project. Florida DOT and multiple federal and state agencies developed ETDM to address difficulties in getting involved federal and state agencies to coordinate and provide timely responses for highway projects. The database provides information that these agencies need to make decisions, such as project descriptions and geographic information system maps showing locations of resources. ETDM asks agencies to concur at certain points in the process to help ensure their involvement throughout the process and reduce the likelihood that they will challenge the project later. Since its implementation, 332 transportation projects have been screened through ETDM. Florida DOT conducted a review of how ETDM was functioning. 
District officials at Florida DOT reported several benefits from using ETDM, including that ETDM (1) provided them with a better understanding of environmental issues early in planning and project development, (2) improved decision making throughout the process, (3) improved interagency relationships, and (4) improved agency responsiveness. They also estimated that between October 2004 and March 2008, over 3 years, they saved 600 months in time to complete NEPA-related review activities and around $16 million in project costs for all of their projects combined. However, they said that because ETDM has not been in place long enough and most projects have not been through the full project development cycle, project cost and time savings may not be fully realized. District officials also reported some challenges with ETDM. For example, they reported that some agencies commented on environmental reviews outside their jurisdictional areas and that ETDM increased project scrutiny. Other government agencies have also reported ETDM’s benefits. For example, in 2004, the U.S. Fish and Wildlife Service and EPA reported that since the implementation of ETDM, they have been able to review more projects and at a higher level of review. USACE reported in 2006 that ETDM improved its staff’s knowledge of the various pieces of the transportation and planning process and, as a result, removed one of the barriers to communication between USACE and Florida DOT. However, some Florida contractors told us that ETDM is not as beneficial as it could be because not all Florida government agencies participate as actively as others.

The Standard Environmental Reference is an online guidance document available for California transportation agencies to help them comply with NEPA and the California Environmental Quality Act. The document provides users with information on what documentation is needed in a user-friendly format. 
California DOT officials told us that the Standard Environmental Reference enables local transportation agencies to focus their resources on necessary elements and helps them to avoid any potential revisions later in the NEPA process. In addition to the contact named above, Kate Siggerud (Managing Director), Ray Sendejas (Assistant Director), Tim Bober, Roshni Davé, Anne Dilger, Bess Eisenstadt, Denise Fantone, Dave Hooper, Bert Japikse, Alex Lawrence, Ashley McCall, Patricia McClure, Elizabeth McNally, Amanda Miller, SaraAnn Moessbauer, Revae Moran, Josh Ormond, and Amy Rosewarne made key contributions to this report.

As highway congestion continues to be a problem in many areas, states are looking to construct or expand highway projects. When a state department of transportation (DOT) receives federal funding for highway projects from the Federal Highway Administration (FHWA), the projects must comply with the National Environmental Policy Act (NEPA), the Davis-Bacon prevailing wage requirement, the Disadvantaged Business Enterprise (DBE) program, and the Buy America program. While complying with these requirements, states must use limited transportation dollars efficiently. As requested, GAO addressed (1) the types of benefits and costs associated with these requirements for federal-aid highway projects; (2) the influence of these federal requirements on states' decisions to use nonfederal or federal funds for highway projects; and (3) the challenges associated with the federal requirements and strategies used or proposed to address the challenges. To complete this work, GAO reviewed 30 studies, surveyed DOTs in all states and the District of Columbia, and interviewed transportation officials and other stakeholders. Several of the studies GAO reviewed describe the benefits of environmental requirements for highway projects, such as better protection for wetlands, but none attempted to quantify these benefits. 
Some studies quantified certain types of environmental costs, such as costs for administering NEPA. In general, however, quantitative information on environmental benefits and costs is limited because states do not generally track such information. Several studies attempted to quantify the benefits and costs of the Davis-Bacon prevailing wage requirement; however, these studies did not focus on transportation projects specifically. Furthermore, while the studies reviewed did not identify the benefits of the DBE program, transportation officials identified some benefits, such as providing greater opportunities for DBE firms. One study GAO reviewed identified the benefits of the Buy America program, including protecting against unfair competition from foreign firms. The studies reviewed also identified, and in some cases quantified, the costs of the DBE and Buy America programs, including administrative costs and the use of higher-priced iron and steel in projects. Of the 51 state DOTs GAO surveyed, 39 reported that, in the past 10 years, federal requirements had influenced their decision to use nonfederal funds for highway projects that were eligible for federal aid. Thirty-three of these state DOTs reported that NEPA factored into their decision to use nonfederal funds, while the other three requirements GAO reviewed were a factor in only a few states. State officials said that they use nonfederal funds for certain projects to avoid project delays or costs associated with the federal requirements or because of other factors, such as requirements imposed by a state legislature. A state's funding decision may depend on whether the state has requirements similar to these federal requirements. The decision may also take into consideration the availability of nonfederal and federal funds.
For example, officials from one state said that they have limited nonfederal funds available and, as a result, like other states GAO interviewed, rely on federal funds to finance their highway projects. According to transportation officials and contractors, administrative tasks associated with the federal requirements pose challenges. For example, analyzing impacts and demonstrating compliance with NEPA requires extensive paperwork and documentation. State officials also said that coordinating with multiple government agencies on environmental reviews is challenging, in part because these agencies may have competing interests. Furthermore, according to state DOTs, some provisions of the federal requirements may be outdated. For example, the $2,500 regulatory cost threshold for compliance with the Buy America program for purchasing domestic steel and the $750,000 regulatory personal net worth ceiling of the DBE program have not been updated since 1983 and 1999, respectively. All of these challenges may cause delays and increase project costs. Some government agencies have implemented strategies to address these challenges, and these strategies have had varied success in decreasing project costs and delays.
Head Start is administered by HHS and began in 1965 as part of the “War on Poverty.” The program was built on the philosophy that effective intervention in children’s lives could best be accomplished through family and community involvement, as evidenced by the broad range of services offered to Head Start families, including educational, medical, dental, mental health, nutritional, and social services. In 1992, the Congress added a requirement that Head Start offer family literacy services. Today Head Start dwarfs all other federal early childhood programs in both funding support and the size of the population served. In the year 2000, Head Start served about 846,000 families and about 923,000 children. Although it began as a summer program with a budget of $96.4 million, Head Start funding today totals more than $6 billion. Head Start grantees operate programs in every state, primarily through locally based service providers. Recognizing that the years from conception to age three are critical to human development, the Congress established Early Head Start in 1994, a program that serves expectant mothers, as well as infants and toddlers. Over the course of its 36-year history, Head Start has served over 19 million children. In contrast, Even Start is substantially smaller than Head Start. First funded in 1989 under Title I of the Elementary and Secondary Education Act, Even Start also has a much shorter history of serving needy children and families than its HHS counterpart. The program’s approach is rooted in the philosophy that the educational attainment of parents in particular and the quality of the family’s environment in general are central to a child’s acquisition of literacy skills and success in school. Administered by Education, Even Start’s budget has expanded considerably, from about $15 million at the program’s beginning, to $250 million in the year 2002.
During its 1999–2000 program year, Even Start served about 31,600 families and 41,600 children in programs around the country. In addition, the Congress established separate Head Start and Even Start migrant and Native American programs. These programs are not covered in this report. See figure 1 for a comparison of the numbers of children and families served by both programs. See figure 2 for a comparison of Head Start and Even Start appropriations over the last decade, 1990–2002. Although Head Start is administered by HHS, President Bush, as part of his emphasis on child literacy and school readiness, proposed transferring Head Start from HHS to Education. President Carter advocated a similar transfer in 1978. Opponents of the move argue that the social and human services component of Head Start is just as important as the educational program in achieving school readiness and the overall well-being of the child. They have expressed concern that moving the program to Education would result in a narrower menu of services almost exclusively educational in nature.

The separate legislation governing Head Start and Even Start established programs that overlap somewhat in goals, target population, and services, but also have a number of significant differences. Even Start and Head Start similarly target disadvantaged populations, seeking to improve their educational outcomes. While both programs are required to provide education and literacy services to children and their families, Head Start’s goal is to prepare children to enter school, while Even Start’s goal is to improve family literacy and education. Both programs measure achievement of their goals for children against similar criteria or measures, but only Even Start has developed measures to gauge adults’ educational attainment and literacy.
Although the programs have similar legislative provisions, the federal government administers Head Start and directly funds local Head Start programs, while the states administer Even Start and allocate federal funds to local Even Start programs. The separate legislation establishing Head Start and Even Start created overlapping programs, although there are many legislative differences between the two programs (see table 1). Both programs were created to address a similar problem: poor educational outcomes and economic prospects for low-income people. However, Head Start’s goal is to promote school readiness by enhancing the social and cognitive development of low-income children. Even Start’s goal is to improve literacy and education in the nation’s low-income families. The legislation creating each program specifies the broad target group as low-income people; however, each program’s legislation specifically targets a different group of low-income individuals. Consistent with its school readiness goal, Head Start specifically targets poor preschool-age children and their families. The regulations governing Head Start require that at least 90 percent of the children enrolled in Head Start come from families with incomes at or below the federal poverty guidelines or from families eligible for public assistance. Consistent with its family literacy goal, Even Start is authorized to serve low-literate parents and their young children. To participate in Even Start, the parent or parents must be eligible for participation in adult education and literacy activities under the Adult Education and Family Literacy Act. For example, at least one parent must not be enrolled in school and must lack a high school diploma or its equivalent or lack the basic skills necessary to function in society. The parent must also have a child who is below age 8.
Although Even Start targets low-income families, its legislation does not specifically limit participation to low-income individuals, nor does it define “low-income,” as does Head Start. However, the legislation creating Even Start does require that priority for funding be given to families who are in need of such services as indicated by their poverty and unemployment status. In line with its focus on literacy, Even Start legislation does assign priority for funding to families who are in need of such services as indicated by parent illiteracy, limited English proficiency, and other need-related indicators. Although both programs target young children, there are differences in the ages the two programs are authorized to serve. Head Start is authorized to serve children at any age prior to compulsory school attendance. In 1994, as part of Head Start, the Congress established Early Head Start to ensure that infants and toddlers are served in greater numbers. This program is also authorized to provide services to pregnant women. Even Start is authorized to serve preschool-age children as well, but unlike Head Start, it is also authorized to serve school-age children to age 8. Even Start is not authorized to serve pregnant women who do not have children below the age of 8. Head Start grantees are also required to reserve 10 percent of their enrollment for children with disabilities. Even Start has no such requirement. With respect to services, Head Start historically has been authorized to provide services that specifically support children’s development, such as early childhood education, nutrition, health, and social services. Head Start legislation has long required that local programs provide parent involvement activities that ensure the direct participation of parents in the development, conduct, and overall program direction of local programs.
However, in 1992, the Congress added a requirement that Head Start provide family literacy services, if these services are determined to be necessary. In the 1998 reauthorization of Head Start, the Congress clarified the definition of family literacy, requiring that Head Start family literacy services be of sufficient intensity and duration to make sustainable changes in a family. The legislation also required that family literacy programs integrate early childhood education, parenting education, parent and child interactive literacy activities, and adult literacy services. The same definition of family literacy services is found in Even Start’s legislation. Even Start legislation also requires that it integrate early childhood education, adult literacy or adult basic education, and parenting education into a unified family literacy program. Head Start and Even Start have some similar measures to assess children’s progress but different measures for adult literacy and educational attainment (see table 2). For example, to measure children’s cognitive growth, both programs measure language development. As shown in table 2, Even Start measures adult literacy and educational attainment by measuring gains in math and reading and by counting the number of participants earning a high school diploma or its equivalent. Head Start measures adults’ progress toward their educational, literacy, and employment goals by the number who are employed as Head Start staff, which is not a direct measure of adult literacy or educational attainment. According to HHS performance standards, Head Start is an important place for employment opportunities for parents and a vehicle for providing additional skills for parents who are seeking employment or who are already employed.

Head Start and Even Start are managed and operated in fundamentally different ways (see fig. 3). First, Head Start is administered by the federal government and Even Start is administered by the states.
Unlike some other social programs, Head Start is funded directly by the federal government: HHS awards grants to local Head Start programs. Many organizations that receive Head Start grant funding deliver services to Head Start participants. In some cases, the organization that receives the grant contracts with other organizations to deliver services to Head Start participants. HHS’ 10 regional offices, which are geographically dispersed throughout the nation, are responsible for program oversight and management. Even Start is administered by the states, with the federal government allocating the funds to the states. The states are responsible for oversight and management of local programs and make decisions about which programs to fund. Second, although Head Start and Even Start are both formula programs, the formulas for allocating funds differ. Although the formulas for both programs are multifaceted and complex, Head Start funding is based in part on the number of children in a state under age 5 living in poverty. The Even Start formula is based, in part, on the number of poor school-age children, ages 5 to 17, in a state. Third, Head Start and Even Start legislation have different requirements for the types of local organizations that are eligible to receive funding. For Head Start, local community organizations are authorized to administer Head Start services. Even Start’s legislation gives school districts a central role in delivering services. The law requires local organizations to form partnerships with school districts in order to receive funds. Thus, eligible entities are school districts in partnership with nonprofit community-based organizations, institutions of higher education, or other nonprofit organizations. Finally, Head Start and Even Start have different matching fund requirements and different requirements for the sources of these matching funds. Head Start grantees annually may receive up to 80 percent of total funding from federal Head Start program funds.
The remaining 20 percent must come from nonfederal sources and may include such in-kind contributions as space, staff, supplies, and equipment. In contrast, Even Start grantees receive a maximum of 90 percent of their total funding in the first year from the federal Even Start program, but in subsequent years this share declines. In the ninth and subsequent years of the grant, the family literacy programs are expected to largely operate independently of Even Start funding, receiving a maximum of only 35 percent of total funding from the federal Even Start program (see table 3). However, matching funds, which also include in-kind contributions, may come from other non-Even Start federal sources, such as Adult Education Act funds.

In 1999–2000, both Head Start and Even Start grantees served poor families with young children, but the parents they served had different education and literacy needs, and the extent to which parents received services to meet those needs differed. Even Start parents were much more likely than Head Start parents to lack a high school diploma and speak a language other than English. According to agency data, parents who enrolled their children in Head Start expected primarily to receive education services for their young children, whereas Even Start parents sought education and literacy services for themselves as well. At the sites we visited, both programs provided early childhood development and education services, as well as health and nutrition support to young children, but we found that adults participating in Even Start programs were more likely to need and thus receive a range of adult education and literacy services. According to agency data, both Head Start and Even Start grantees primarily served poor families with young children, although Even Start served infants and toddlers to a larger degree than Head Start. Almost all Head Start children (95 percent) were under age 5, and most were 4 years old.
About one percent of the participants were pregnant women. About two-thirds of Even Start children were under age 5, and the remaining one-third were school-age children, 5 and older (see table 4). In both programs, these young children came from very poor families. Most Head Start and Even Start families reported incomes of less than $15,000. While Even Start participation is not restricted by income, grantees give priority for services to families at or below federal guidelines for poverty, families receiving public assistance, and families with no earned income. Almost one-third of the families served by Head Start and Even Start received government assistance, such as Temporary Assistance to Needy Families, according to program year 1999–2000 data. While both programs primarily served very poor families with young children, the families differed in their parent educational attainment, ethnicity, and primary language. For example, the proportion of Even Start parents without high school diplomas was substantially higher than the proportion of Head Start parents. About 86 percent of Even Start parents reported that they had not completed high school, compared to about 27 percent of Head Start parents. Hispanic children represented about a quarter of the children attending Head Start programs and almost half of the children attending Even Start programs. These differences in ethnicity were accompanied by differences in the primary languages of children participating in each program. Even Start children were much less likely to speak English as their primary language than Head Start children, according to agency data. The vast majority of Head Start children (about three-fourths) spoke English as their primary language, compared to a little over half of Even Start children. For about one-third of Even Start children, Spanish was the primary language, compared to only one-fifth of Head Start children.
In part, the tendency of Even Start children to speak English as a second language may reflect their parents’ immigration from non-English-speaking countries. According to Education’s data, about two-thirds of parents with children in Even Start have lived outside of the United States, about one-fifth have lived in the United States 5 years or less, and about a third of Even Start parents were educated outside the United States. Head Start and Even Start both provided children with similar early learning and other developmental and support services. Head Start served primarily 3- and 4-year-olds, while Even Start served a greater percentage of children below the age of 2. However, the extent to which parents of enrolled children received education and literacy services differed between these two programs. According to Head Start and Even Start program data, both programs provided young children with early childhood education services that included developmentally appropriate learning activities. Both programs offered home-based instruction and center-based, half-day programs several days per week, which often included meals, snacks, and health care support, such as mental health, vision, immunizations, and screenings. There are some differences, however, in services offered to children. For example, as we saw in Niceville, Florida, the Even Start program offered home-based, afterschool reading support and other learning activities for school-aged children. Although there were few differences in services for children, the major difference among these programs was the extent to which adults needed and thus received education and literacy services. Only the Even Start programs we visited considered adult education and literacy services to be among their primary services.
According to Education’s data, Even Start grantees provided such services as basic adult education, adult secondary education services, general equivalency diploma (GED) preparation, and English language instruction. Many Even Start programs provided flexible hours of instruction, such as evening and weekend instruction, to accommodate the scheduling needs of parents. Parents most often participated in GED preparation services and English language instruction. About half of the parents indicated that obtaining their GED was a primary reason for Even Start enrollment, although learning English, improving their chances of getting a job, improving parenting skills, and obtaining early learning experiences for their children were also important, according to Education’s data. This was true of the eight Even Start parents we spoke with during our site visits, who also told us that their primary reason for enrolling in Even Start was to obtain adult education and literacy services. Two of the Even Start programs we visited enrolled large numbers of primarily Spanish-speaking parents, and other sites we visited enrolled many recent immigrants with limited English skills. Many of these Even Start parents received English language instruction. In Frederick, Maryland, for example, the Even Start official said that many parents with limited proficiency in English had enrolled in the program to improve their English language skills. Often, she said, parents participate only long enough to acquire the basic skills needed to find a job. Most of the adults participating in Even Start (almost three-quarters) were unemployed, according to Education’s data, allowing Even Start programs to enroll both the parent and the child in a program that consisted of child and adult education and literacy, parenting education, and interactive literacy activities between the parent and child.
At the Even Start sites we visited, adults often received instruction during the day as their children simultaneously received early childhood services nearby, often in the same building. They also participated in joint learning activities (see fig. 4). For example, at the Frederick, Maryland, Even Start program, parents and children arrived together at the community center, which housed both the child development center and the adult and family literacy center. Parents dropped off their children at the child development center and attended either adult literacy or basic education classes taught by an Even Start instructor. The parents later rejoined their children to participate in joint activities, such as reading, painting, or playing, often sharing lunch. In this way, the Even Start program integrated early childhood education, adult literacy or adult basic education, and parenting education into a unified family literacy program. Not all Even Start programs we visited located children and their parents in a single building; however, they all provided space at some location for joint child and parent activities and required the joint participation of parents and children in the program. In contrast, 73 percent of the parents of children enrolled in Head Start had a high school diploma and thus may not have needed adult education and literacy services. Head Start programs did not require the joint participation of parents and children in the program. At the sites we visited, parents typically left the Head Start center after dropping off their children. For example, one Head Start parent told us that she thought of Head Start as an early learning program for children and had enrolled her child in Head Start to obtain early childhood education. This parent said she had completed high school and did not need adult education or literacy services.
However, Head Start programs often referred parents in need of adult education and literacy services to the local public school district, local community college, or Even Start for help. For example, Head Start officials in Niceville, Florida, told us that they refer adults in need of such services to Even Start. The Albany Park Community Center Head Start in Chicago offered an array of adult learning opportunities. However, unlike other sites we visited that received either a Head Start or an Even Start grant, Albany Park received both Head Start and Even Start grants, using funding from both to provide a unified family literacy program. Because Head Start does not currently collect data on the types of adult education or literacy services it provides, we could not determine the specific types of education and literacy services these parents received.

No recent, definitive, national-level research exists about the effectiveness of Head Start and Even Start for the families and children they serve. However, both programs have effectiveness studies underway using a methodology that many researchers consider to be the most definitive method of determining a program’s effect on its participants. These studies reflect each program’s primary focus and population of interest. For instance, consistent with Head Start’s school readiness goal, its study focuses on children. Consistent with Even Start’s family literacy goal, its study is focusing on children and adults. Although final results of these studies are not yet available, HHS and Education have conducted a number of other studies that provide useful information about the Head Start and Even Start programs. These studies have prompted both legislative and programmatic changes intended to improve program operations.
Although there is little definitive information about the effectiveness or relative effectiveness of Head Start and Even Start, both programs are undergoing rigorous evaluations that will provide more definitive information about their effectiveness. Both programs are currently being evaluated using an “experimental design” in which children are randomly assigned either to a group that will receive program services or to a group that will not. This is an approach many researchers consider the best for assessing program effectiveness when factors other than the program are known to affect outcomes. To illustrate, in the case of a child, many influences affect his or her development. Nutrition, health, family, and community, in conjunction with education and care, play a role in his or her learning. In light of all these influences, it becomes difficult to distinguish between the effects of the program and the other factors that influence a child’s learning. Figure 5 shows how this approach isolates the effect of the program being studied from the effects of other developmental influences on young children. Both HHS and Education are using experimental design impact studies performed by independent research firms to measure the effect of Head Start and Even Start on the populations they serve. The Head Start study focuses on children, while the Even Start study focuses on both children and their parents. Head Start has two studies underway: one for the Head Start program and a separate effort to evaluate Early Head Start. See table 5 for a summary of the objectives for these studies. The Head Start study is a $28.3 million national impact evaluation that follows participants over time. The study has been divided into two phases. The first phase, a pilot study designed to test various procedures and methods, was conducted last year.
The second phase is scheduled to begin in the fall of 2002 and will entail data collection on 5,000 to 6,000 3- and 4-year-olds from 75 programs and communities across the country. The study will track subjects through the spring of their first-grade year, and results are expected in December 2006. Although Head Start is scheduled to be reauthorized in 2003, an HHS official told us that the interim report scheduled for 2003 will likely not contain findings. The Early Head Start evaluation is a 6-year, $21 million study enlisting 3,000 families and their children, a sample drawn from 17 different Early Head Start programs. Under the Early Head Start evaluation, study participants are assessed at 14, 24, and 36 months after birth. The final report is scheduled for completion in June 2002. The preliminary findings were released at the beginning of 2001. According to HHS officials, these early results suggest that participation in Early Head Start has positive effects on both children and their parents. The Even Start study is expected to be a 6-year, $3.6 million study tracking 400 Even Start families from 18 program locations, and it focuses on measuring children’s readiness for school and adult literacy. The final report is scheduled for completion in 2003. The current study is actually the second Even Start impact study conducted using an experimental design. The first evaluation examined Even Start programs operated by five grantees. As we observed in our earlier study, the small number of sites examined by the study and the lack of information on control group experiences did not permit conclusions about program effectiveness. Although experimental-design impact evaluations are considered by many researchers to be the most definitive method of determining the effect of the program on participants, other types of studies have been conducted by HHS and Education that provide a wide variety of data valuable to program managers and policymakers.
Often, to answer varied, complex, and interrelated questions, policymakers may need to use several different designs to assess a single program. Different study designs are used depending on the questions to be answered, the nature of the program being studied, and the type of information needed. For instance, Head Start is collecting outcome data on a nationally representative sample of Head Start children and families as part of its Family and Child Experiences Survey (FACES). FACES collects a range of data that includes the cognitive, social, emotional, and physical development of Head Start children; the well-being and accomplishments of Head Start families; and the quality of Head Start classrooms. Since this study does not employ an experimental design, researchers cannot attribute changes in children’s performance to the Head Start program. A study of Early Head Start, which assessed the degree to which the program is being administered as the Congress intended, has been completed. This study gathered information on the characteristics of participants and the services they received. Information from this study will be integrated with the results of the experimental design study. Since Even Start’s first national evaluation, Education has also made an effort to monitor Even Start’s evolution in relation to its legislative mandate. For example, Even Start’s first study was broad in scope, designed to examine the characteristics of Even Start participants and projects, and the services provided, to assess how closely they resembled what had been envisioned for the program. The study served as a catalyst for changes in the program’s legislation, including a shift in focus to those most in need. As a result of the study, teen parents and previously ineligible family members can now participate.

The Head Start and Even Start programs have similar goals, and grantees in both programs provided similar services to children.
However, the programs differ in the extent to which they served adults. Nevertheless, their common focus on improved educational outcomes for poor children and their families calls for coordination between the two programs. Indeed, federal law requires such coordination. Head Start and Even Start activities are coordinated with each other on many levels, with federal coordinating efforts more often focusing on the early childhood development aspects of the two programs, rather than on broader family literacy activities. While most Head Start and Even Start grantees have reported that they collaborate with one another in some way, at the program sites we visited, we found that differences in participants and service areas may mean that collaboration involves only limited opportunities for program staff to work together. Both Head Start and Even Start programs are required to coordinate with one another and with other organizations to provide child and family support services. As a result, the programs are involved in several efforts to coordinate their activities with one another at the federal, state, and local levels. Even Start’s primary effort to coordinate directly with Head Start at the federal level focused on creating complementary systems for measuring developmental and educational outcomes for young children. Both programs have defined program goals and performance indicators for young children in consultation with each other, and Even Start is also developing a new tool for collecting program data that will allow it to obtain information on early childhood and family outcomes similar to that collected by Head Start through a separate data collection effort. Coordinated data collection is intended to help HHS and Education compare programs and determine their combined contribution to children’s school readiness.
However, officials from both departments said that cooperation in developing outcome measures for other components of family literacy, such as parenting and adult education, has not occurred because Head Start has made only a limited effort to measure its performance in this area. In another federal collaborative effort, Even Start has provided about $250,000 in funding to support Head Start’s family literacy initiative. The funding helps to support an evolving “promising practices” national network of Head Start family literacy programs as well as training on how to build a family literacy program. Lessons learned from model family literacy initiatives and technical assistance are to be shared with Even Start grantees. Other initiatives by Education and HHS support state and local coordination efforts. For example, HHS and Education have both awarded grants to states to create coordinating councils that include state-level administrators of federal and state-funded early childhood and human services agencies. Head Start has funded Head Start Collaboration Offices in each state, while Even Start has funded an Even Start Consortium in 36 states. Membership in each Even Start consortium must include a representative from Head Start. Head Start Collaboration Offices are encouraged to forge links with organizations promoting family literacy, such as Even Start. In addition, Even Start and Head Start have jointly sponsored training for state and regional administrators on topics such as family literacy and interagency coordination. According to an Education contractor that provides the Even Start consortia with technical assistance, some state Even Start administrators have also collaborated with local Head Start officials to identify appropriate state-level performance indicators for children. 
At the local level, about 74 percent of Even Start grantees reported in program year 1999–2000 that they collaborated with Head Start in some way, including receiving cash funding, instructional or administrative support, technical assistance, and space or job training support from Head Start grantees. However, the type of support most often reported by Even Start grantees was technical assistance, especially public relations support in which Head Start helped to disseminate information about the program through the community. About one-third of Even Start grantees reported receiving direct instructional or administrative support, or space, from Head Start grantees. Instead, Even Start grantees more often received such support from the public schools. About one-fourth of Even Start programs had formal partnerships with Head Start. At program sites we visited, we observed that local coordination activities between Head Start and Even Start grantees seemed to be greater where grantees were trying to serve the same group of families living in the same geographic area. Grantees described less interaction between the programs where the families served were different and service areas did not overlap. For example, in the state of Washington, where a Head Start and an Even Start program are formal partners and are both administered by the Renton Public Schools, only a few families are enrolled in both programs. Local officials said this is partly due to the location of the two sites in different neighborhoods several miles apart, differences in the ages of the children served by each program, and differences in the adult education needs of the families. Renton Head Start does not serve infants and toddlers, whereas Even Start does. Working Head Start parents can participate in adult education classes primarily in the evenings, whereas Even Start offers adult education classes during the day only.
Cooperation between the programs has primarily focused on joint participation in training events and sharing information on the few families that are enrolled in both programs. In contrast, in the Albany Park neighborhood of Chicago, the Even Start and Head Start programs are not only administered by the same grantee, but they also are located in the same community center building. Administrators told us that cooperation and collaboration are extensive, with a large proportion of families enrolled in both Head Start and Even Start programs. Albany Park staff said that Even Start and Head Start administrators work together extensively to coordinate the curriculum between the programs and to accommodate the work schedules and learning needs of the many families they serve together. Although Head Start and Even Start both serve poor children, they differ because these children’s parents differ substantially in their educational attainment and literacy. To meet the needs of parents who do not have high school diplomas or who have literacy needs, Even Start, from the beginning, designed its program to include adult education and literacy as core services. It also established a system for measuring the progress of adults in attaining adult education and literacy skills. Although a much larger percentage of parents with children enrolled in Head Start have high school diplomas, Head Start is a much larger program. Thus, there are still thousands of Head Start parents who might need and benefit from education and literacy services. Recognizing that these programs serve a similar population of children, Head Start and Even Start have jointly developed similar outcome measures for children. This common framework allows policymakers and program administrators to assess how well each program contributes to children’s development. Joint development of indicators for adults’ progress has not occurred.
Head Start’s current measure of adult literacy is not a direct measure of adult literacy skills and is not comparable with the indicators used by Even Start. Lacking similar measures for assessing the educational and literacy levels of parents, policymakers lack information on the relative contribution each program is making toward improving the education and literacy of the parents it serves. We recommend that the secretaries of HHS and of Education direct the administrators of Head Start and Even Start to coordinate the development of similar performance goals and indicators for adult education and literacy outcomes and that the effort include the identification of indicators that specifically measure adult education and literacy. In commenting on our report, Education observed that the report presents a comprehensive discussion of the similarities and differences between the Even Start Family Literacy program and the Head Start program. Education generally agreed with our presentation. However, since our recommendation focused on adult literacy indicators, Education thought it would be helpful if we included a discussion of adult education programs and the purpose of the Adult Education and Family Literacy Act. Moreover, Education suggested that we recommend that the Head Start Bureau coordinate with the department’s Division of Adult Education and Literacy, not just Even Start, in its development of adult education-related performance indicators. Education also pointed out that Even Start’s family literacy goal encompasses school readiness for participating children. (See app. I.) Education also gave us technical comments that were incorporated as appropriate. We agree that some additional information on the Adult Education and Family Literacy Act would provide related contextual information and have included a limited discussion of the act in the report.
However, because the Adult Education and Family Literacy Act programs were not part of this review, we have kept our recommendation limited to the Head Start and Even Start programs. This should not be interpreted as precluding the Secretary of Education from facilitating discussions between Head Start and any other office in Education that could be helpful in developing comparable indicators. Finally, although one could broadly interpret Even Start’s family literacy goal as encompassing school readiness, this is not the stated goal of the program. Therefore, we have not added anything to our discussion of the Even Start goal. The Head Start Bureau, Administration for Children and Families, said HHS had no comments on the report. We are sending copies of this report to the secretaries of Health and Human Services and of Education and to appropriate congressional committees. Copies will also be made available to other interested parties upon request. If you have questions regarding this report, please call me at (202) 512-7215 or Eleanor Johnson, assistant director, at (202) 512-7209. Other contributors can be found in appendix II. In addition to those named above, Tiffany Boiman, James Rebbe, Stan Stenersen, and Jill Peterson made key contributions to this report. Bilingual Education: Four Overlapping Programs Could Be Consolidated. GAO-01-657. Washington, D.C.: May 14, 2001. Early Childhood Programs: Characteristics Affect the Availability of School Readiness Information. GAO/HEHS-00-38. Washington, D.C.: February 28, 2000. Early Childhood Programs: The Use of Impact Evaluations to Assess Program Effects. GAO-01-542. Washington, D.C.: April 16, 2001. Early Education and Care: Overlap Indicates Need to Assess Crosscutting Programs. GAO/HEHS-00-78. Washington, D.C.: April 28, 2000. Evaluations of Even Start Family Literacy Program Effectiveness. GAO/HEHS-00-58R. Washington, D.C.: March 8, 2000.
Head Start: Challenges in Monitoring Program Quality and Demonstrating Results. GAO/HEHS-98-186. Washington, D.C.: June 30, 1998. Head Start Programs: Participant Characteristics, Services, and Funding. GAO/HEHS-98-65. Washington, D.C.: March 31, 1998. Head Start: Research Provides Little Information on Impact of Current Program. GAO/HEHS-97-59. Washington, D.C.: April 15, 1997. Title I Preschool Education: More Children Served but Gauging Effect on School Readiness Difficult. GAO/HEHS-00-171. Washington, D.C.: September 20, 2000.

The Head Start and Even Start Family Literacy programs have sought to improve the educational and economic outcomes for millions of disadvantaged children and their families. Because the two programs seek similar outcomes for similar populations, GAO has pointed out that they need to work together to avoid inefficiencies in program administration and service delivery. Questions have also arisen about the wisdom of having similar early childhood programs administered by different departments. Head Start’s goal is to ensure that young children are ready for school, and program eligibility is tied to specific income guidelines. In contrast, Even Start’s goal is to improve family literacy and the educational opportunities of both the parents and their young children. Even Start eligibility is tied to parents’ educational attainment. Despite these differences, both programs are required to provide similar services. Both programs have some similar and some identical performance measures and outcome expectations for children, but not for parents. Head Start and Even Start grantees provided some similar services to young children and families, but how these programs served adults reflects the variations in the needs of the parents.
No recent, definitive information exists on the effectiveness of either program, so it is difficult to determine which program uses the more effective model to improve educational outcomes for disadvantaged children and their parents. At the local level, differences in the needs of participants and the location of neighborhoods served by the two programs may mean some Head Start and Even Start grantees find only limited opportunities to work together. At the national level, the Departments of Health and Human Services and of Education have begun to coordinate their efforts, including the funding of state-level organizations to improve collaboration among groups serving poor children and their families.
TPPs prepare teaching candidates to employ effective teaching techniques and gain real-world experience in the classroom. TPPs take many forms and may be operated by a variety of organizations (see table 1). For example, the structure of TPPs can vary widely, from “traditional” TPPs such as four-year undergraduate programs with student teaching requirements, to “alternative route” TPPs such as those wherein candidates serve as a classroom teacher while concurrently completing their coursework. State oversight responsibilities related to TPPs may be held by one or more state agencies, including the state department of education, the state board of education, or a state independent standards board. States have discretion in how they conduct oversight of TPP quality, by, for example: defining the types of TPPs that may operate in the state, such as undergraduate or post-baccalaureate TPPs, or alternative route TPPs; reviewing and approving individual TPPs to operate and periodically assessing them for renewal; assessing whether any TPPs in the state are low-performing, as required under the Higher Education Act, using criteria of the state’s choosing; and setting licensing requirements that teaching candidates must satisfy, which often include completing an approved TPP and passing licensing tests that assess subject-matter knowledge or other skills. States are also responsible for adopting academic content standards for K-12 students. To address concerns about inadequately prepared students, all states are now using or developing academic standards that are explicitly tied to college and career preparation (referred to in this report as new K-12 standards). In 2010, the Council of Chief State School Officers and the National Governors Association spearheaded the effort to help states develop common college- and career-ready standards for grades K-12 in math and English, which resulted in the Common Core State Standards.
As of the beginning of the 2014-15 academic year, 44 states and the District of Columbia were using the Common Core Standards that were developed and published in 2010, and the remaining states were using or developing their own college- and career-ready standards. Education does not have direct oversight authority over TPPs. The main ways it influences the quality of TPPs are (1) implementing the Higher Education Act Title II reporting requirements and (2) awarding and administering several competitive grants. Title II of the Higher Education Act requires states and institutions of higher education (referred to in this report as “colleges and universities”) that conduct TPPs to annually report specific information. States and most colleges or universities that offer TPPs submit the required data using reporting templates developed by Education (see table 2). In this report we use the term “college or university” to mean “institution of higher education” as defined by the act. This table presents a summary of the templates developed by the Department of Education for reporting purposes and is not intended to provide an exhaustive list of the Higher Education Act Title II reporting requirements for states or institutions of higher education. For the statutory reporting requirements, see 20 U.S.C. §§ 1022d-1022f. Education is responsible for ensuring that states and colleges and universities offering TPPs provide the required information. The agency also compiles and disseminates the information to the public in annual reports, webpages, and data spreadsheets. Education contracts with a private research organization (Westat) to provide states and colleges and universities with technical assistance in collecting the required information and to assist the agency in compiling, analyzing, and publishing the resulting data.
On December 2, 2014, Education published a notice of proposed rulemaking, which, among other things, proposed to modify the Title II reporting requirements. The proposed rule was available for public comment through February 2, 2015. As part of this rulemaking effort, Education has also proposed revisions to the templates for reporting the Title II data. Education administers several competitive grant programs that provide funding for TPP reforms. Of these competitive grants, the largest that is focused specifically on improving TPP quality is the Teacher Quality Partnership Grant program. In September 2014, Education selected 24 partnerships, including TPPs and partnering school districts, to receive a combined $35 million in Teacher Quality Partnership grant funds to improve teacher preparation primarily for science, technology, engineering, and math teachers. The Transition to Teaching grant program also provides grants to recruit and retain teachers in high-need schools and encourage the development and expansion of alternative route TPPs. Aside from these two grant programs, applicants for grants from the Race to the Top Fund, Investing in Innovation Fund, and Supporting Effective Educator Development program may also choose to develop proposals related to teacher preparation programs and activities, among other topics. For example, Investing in Innovation grants funded 25 projects related to TPPs out of the 143 projects the program has funded during fiscal years 2010-2014. All states reported that they review traditional TPPs before approving them to prepare new teachers and may renew approval on a periodic basis. To do this, nearly all states reported that they review TPP program design and data about candidates before approving them to prepare new teachers, and more than half also use one or more types of information to assess graduates’ effectiveness as teachers.
To assess program design, 49 states and the District of Columbia reported in our survey that they assess whether TPPs seeking approval are meeting standards for program quality. For example, these standards may specify that teaching candidates should be trained to identify the appropriate teaching techniques for particular learning needs or achieve a particular threshold of subject-matter knowledge. States also reported that they typically reviewed program design for traditional TPPs by reviewing syllabi or other course material (43 states), and interviewing TPP faculty or staff (41 states). To conduct these reviews, states reported that they may use teams made up of peers from other TPPs, state staff, national accreditation organization staff, or a combination, and make determinations using professional judgment. For example, when conducting these reviews, 43 states reported that they consider information collected by an external TPP accreditation organization—such as the Council for the Accreditation of Educator Preparation (CAEP)—for at least some traditional TPPs. In addition to reviewing program design, nearly all states reported in our survey that they examine data about teaching candidates. Most states reported using data about the proportion of candidates who obtain a teaching license (48 states) and the proportion of candidates who graduate (29 states) as part of their approval process for all traditional TPPs (see fig. 1). These results are based on our survey of all 50 states and the District of Columbia. In some instances, the wording of the original survey question has been modified for brevity and to remove technical terminology. See the related e-supplement, GAO-15-599SP, for the original language. Additionally, for information about the teaching candidate data that states used as part of their approval process for alternative route TPPs, see questions 9a, 9f, 9h, and 9i in the related e-supplement.
“Pre-service assessments” could include evaluations of candidates’ performance as student teachers. More than half of states reported that they also review certain information about graduate effectiveness when assessing TPPs for approval or renewal. States that incorporate this information reported doing so in several ways. Most commonly, 30 states reported using surveys that assessed principals’ and other district personnel’s satisfaction with recent traditional TPP graduates. Fifteen states reported assessing traditional TPPs based on other outcomes data such as the test scores (i.e., K-12 student assessment results) of public school students taught by recent TPP graduates (see fig. 2). For example, one of our case study states uses such data to help TPPs identify potential problem areas in the training they provide to teaching candidates. Officials in this state told us they used these data to help a TPP identify shortcomings in its social science program. As of the 2014-2015 academic year, at least 10 additional states reported that they planned to begin using graduate effectiveness information or expand their current use of such information as part of their approval process, according to our case study state interviews and several survey responses. For example, officials in Tennessee told us that they currently review some effectiveness data in their approval process and plan to begin reviewing recent TPP graduates’ teacher evaluation results in the future. Additionally, officials in Arizona told us they plan to begin reviewing data on graduate effectiveness as part of their approval process. They told us that the impetus behind adding this new data is to better align the state’s TPP quality standards with recommended requirements presented by CAEP, so that TPPs that already have CAEP approval may receive an expedited state approval process. States and TPPs reported challenges collecting information on graduates’ effectiveness. 
Officials from 3 of our 5 case study states and 7 of the 14 TPPs we spoke to said that collecting this type of data is difficult. For example, state oversight offices or TPPs would need to obtain key information about TPP graduates—such as performance evaluations or employer survey responses—from local districts, and several of these officials noted that districts may be difficult to identify or unwilling to provide such information. Officials in one of our case study states noted that it was especially challenging to obtain data on teachers who work in another state or in private schools. As shown in figure 2 above, states more commonly reported using information about the effectiveness of graduates teaching in public schools within their state versus information about those teaching in private schools or in other states. When deciding whether to approve or renew TPPs for operation, 22 states reported using fewer sources of information for alternative route TPPs compared to traditional TPPs. For example, eight states reported that they assessed alternative route TPPs against state-developed standards less frequently than traditional TPPs, and seven states reported using observations of alternative route TPP courses or experiences less frequently (see table 3). The differences in how states approve alternative route TPPs compared to traditional TPPs may be a consequence of several factors, including how states define alternative route TPPs and differences in state requirements for alternative route and traditional TPPs. For example, some alternative route TPPs may not include a student teaching requirement, so information about student teachers’ performance would not be relevant for making approval decisions.
Additionally, in response to one of our survey questions, a representative from one state explained that the requirements for alternative route and traditional TPPs are different because the requirements for traditional TPPs are set by state regulations, while some alternative route TPPs may be approved through other mechanisms. Some states reported not having a process for identifying low-performing TPPs, as required under the Higher Education Act. In order to receive funds under the Higher Education Act, states are required to conduct an assessment to identify low-performing TPPs in the state and provide Education with an annual list of such programs and any programs at risk of being placed on the list. States have flexibility in the criteria they use to identify low-performing TPPs, but are required to describe those criteria to Education as part of their annual Title II reports. This provides an avenue for states to assess TPP quality and make their determinations public. However, officials from seven states reported in our survey that they did not have a process for identifying low-performing TPPs in academic year 2014-2015. In response to our follow-up inquiries, state officials who reported not having a process in our survey told us they believed their other oversight procedures are sufficient to ensure quality without having a process to identify low-performing TPPs or that they were in the middle of changing their state’s process for identifying low-performing TPPs, among other reasons. For example, officials in 2 of the 7 states that reported they do not have a process to identify low-performing programs in our survey told us having a process for identifying low-performing TPPs was not necessary to ensure that all TPPs are performing sufficiently. While it is possible that all TPPs are meeting states’ performance criteria and do not merit a low-performance designation, states are still required to conduct an assessment.
Officials from two other states told us that they are developing or planning to develop a new process for identifying low-performing TPPs, but do not have a process in the interim. Education officials told us that while states can choose to change their process from time to time, they are expected to use the previous process until the new one is implemented. Education does not verify whether states use the process they describe in their Title II reports to identify low-performing programs or ensure that all states have such a process. According to Standards for Internal Control in the Federal Government, an agency’s management should provide reasonable assurance of compliance with applicable laws and regulations. In addition, under the Higher Education Act, Education has responsibility for ensuring the quality of the data submitted in Title II reports. Education officials told us that state officials are expected to certify the accuracy of the Title II data they submit to Education, and Education reviews the state reports for obvious instances of noncompliance. However, agency officials were not aware that two states did not describe a process to identify low-performing programs in their most recent Title II reports. Education officials also said that Education does not verify that states are in fact implementing the procedures they describe in their Title II reports, due to financial constraints. All seven states that reported in our survey that they did not have a process for identifying low-performing TPPs described a process to Education in the Title II report that was submitted in October 2014. Without a monitoring process to verify the accuracy of this information in state reports, Education may miss instances of noncompliance.
If states are failing to comply with the federal requirement to conduct an assessment to identify low-performing TPPs, struggling TPPs may not receive the technical assistance they need, and potential teaching candidates or hiring school districts will have difficulty identifying struggling TPPs. This may affect the quality of training provided to new teachers and result in their being inadequately prepared to educate children. The majority of states (43) reported to us that they had a process for identifying low-performing TPPs or TPPs that were at risk of becoming low-performing. The most common criteria they used to identify low-performing TPPs were failure to meet the state’s TPP or teaching standards (used by 35 states) and denial or conditional approval during the state approval or renewal process (used by 34 states). Fewer states used teacher evaluations (9) or student assessments (8) to identify low-performing TPPs. According to data that states submitted as part of their Title II report, 6 states identified one or more TPPs as low-performing and 13 states identified one or more TPPs as at risk of becoming low-performing in 2013 or 2014. For such TPPs, the Higher Education Act requires that states provide them with technical assistance. Of the 6 states that identified low-performing TPPs in 2013 or 2014, all reported in our survey that they provided technical support and informed the TPPs of their status. Most also publicized this status (4) and increased their monitoring of the TPPs (5). Two of the six states that identified low-performing programs also identified programs at risk of becoming low-performing. U.S. Department of Education, At-Risk and Low-Performing Programs by State for 2013 and 2014, in the Title II Data Tools, accessed April 2015, https://title2.ed.gov/Public/DataTools/Tables.aspx. As states shifted to new K-12 standards, most reported taking steps to help TPPs prepare prospective teachers to teach lessons aligned with the new standards.
To help TPPs understand the standards, 37 states reported in our survey that they provided TPPs with written resources, information sessions, or both. All of our five case study states offered TPPs information about the standards using various approaches. For example, one state convened a half-day workshop that included information about the new standards, related changes to TPP oversight processes, and examples of how TPPs might choose to adapt to the new standards. Officials then posted a recording of the session and additional resources about the standards online. Three states reported inviting TPPs to K-12 conferences that discussed the new standards. Officials in one of those states said that this approach allowed them to foster communication and coordination among K-12 districts and TPPs. Apart from offering information, most states reported that they modified their oversight activities to verify that TPPs were aligning with new K-12 standards. In particular, 34 states reported deliberately modifying their TPP approval process to assess such alignment, by, for example, modifying state standards for TPP quality to align them with new K-12 standards. Twelve other states did not report modifying their approval process for this specific reason, but did report assessing some or all TPPs against standards for TPP quality that may nonetheless provide information about alignment. For example, the Interstate Teacher Assessment and Support Consortium (InTASC) standards are commonly used standards for TPP quality and were designed to align with the Common Core State Standards. According to our survey, two states reported placing a TPP under conditional approval due in part to limited alignment with new K-12 standards during the 2013-2014 academic year, and no states reported denying approval for this reason. Fewer states reported modifying their process for identifying low-performing TPPs to assess alignment with new K-12 standards.
Specifically, in our survey, 27 states reported taking steps to modify the process for identifying low-performing TPPs. For example, California reported modifying its process for approving TPPs to assess alignment with new K-12 standards. It also continued its previous practice of identifying TPPs as low-performing if they receive conditional renewal decisions. In our survey, no states reported identifying a TPP as low-performing during the 2013-2014 academic year because of limited alignment with K-12 standards. States also used modified licensing tests designed to assess individual teaching candidates’ preparation for the new K-12 standards, according to officials from the companies that develop the tests. Two national companies—Educational Testing Service and Pearson—developed tests related to math and English language arts and reported that such tests are used by 43 states. The testing companies both reported modifying those tests to align with new K-12 standards, although we did not independently evaluate the extent of this alignment. Among the 8 states that do not use such tests, 4 reported in our survey that they modified their licensure requirements in other ways in response to the new K-12 standards. For example, all four of these states contract with testing companies to design customized tests and may request revisions to align with the new K-12 standards. Pearson officials described working with one such state to modify its custom tests to align with new K-12 standards by adding more questions that measure teaching candidates’ ability to teach non-fiction texts and address how to help hypothetical K-12 students understand complex subject-matter information.
All 14 TPPs we interviewed made changes that ranged from large-scale reforms to more modest modifications. The changes generally fell within the following three categories: (1) increasing subject-matter knowledge, (2) modifying coursework related to teaching techniques, and (3) using classroom training to provide real world experience. Examples of such changes, and related challenges, are listed below. Increasing subject-matter knowledge: Officials from 11 of 14 TPPs described changing coursework or coursework requirements to ensure that teaching candidates had sufficient subject-matter knowledge to teach to the new K-12 standards. The three TPPs that did not make such changes were graduate-level programs or otherwise required teaching candidates to obtain a bachelor’s degree before attending the TPP, and officials said most or all subject-matter knowledge should be obtained prior to starting the TPP. The TPPs that did make changes sometimes coordinated with other departments, such as math and English. For example, one TPP began offering courses that were co-taught by subject-area and TPP faculty. Officials from a few TPPs stated that some academic departments were more receptive to modifying their curriculum than others, due to department priorities or other factors. For example, officials from one TPP observed that the new K-12 standards for English require the participation of teachers from multiple academic disciplines, but TPP faculty members who were not in the English department were sometimes reluctant to modify their courses accordingly. Modifying coursework related to teaching techniques: Officials from all but one of the 14 TPPs we contacted described modifying coursework related to teaching techniques. For example, officials from one TPP said that candidates should learn teaching techniques that focus on collaboration and officials from another said that it was important for candidates to make connections between different subject areas.
Officials from five TPPs also said the new K-12 standards led them to start or expand courses on teaching techniques that are subject-matter specific. Using classroom training to provide real world experience: Officials from all 14 TPPs we spoke with highlighted the importance of providing candidates with ample opportunities to apply the new K-12 standards in real classrooms and receive feedback or support from mentor teachers or TPP staff. For example, officials from one TPP and a school district said that mentor teachers can be important role models for teaching candidates because they can illustrate how to apply new teaching techniques and adapt to changing expectations. Officials from 9 of the 14 TPPs described assessing candidates’ preparedness for the new K-12 standards when reviewing their performance or soliciting district feedback about teaching candidates’ performance. However, officials from half of the TPPs we interviewed acknowledged the difficulty of training new teachers in real classroom settings. In particular, officials from 6 TPPs said that school districts are training veteran teachers in the new standards at varying rates, and officials from several TPPs observed that teaching candidates may not always be paired with a veteran teacher who knows the standards as well as the teaching candidate. As states continue to implement the new K-12 standards, several TPPs we spoke with said that they planned to make further modifications. For example, one TPP that made a number of modifications to its program recently surveyed faculty and administrators to evaluate its efforts and identify any ongoing needs. In addition, many states are beginning to adopt assessments to measure K-12 students’ performance on the new standards, and officials from five TPPs said it will be important to incorporate information about these assessments into their programs. 
The current Title II data requirements may be of limited use in helping to improve the quality of TPPs, and Education has not taken steps to evaluate whether any of them should be eliminated. Each state collects Title II information from colleges and universities that offer TPPs in the state and submits information about TPPs and some information about state oversight processes to Education annually. For example, they report the number and demographics of teaching candidates enrolled in and completing TPPs. As described below, states, TPPs, and other stakeholders often reported to us that, while they may use some data elements, others are not useful. This difference between the Title II information states and colleges and universities are required to report about TPPs and the information they and other stakeholders ultimately use to make decisions is contrary to leading practices for data-driven management, which state that measures should be selected based on their relevance and ability to inform decisions by key stakeholders. Moreover, state oversight entities are important potential users of the Title II data, but they reported mixed views about whether the data were useful for their oversight, even though it takes a relatively large number of staff hours to prepare the data. In our survey, states most frequently reported spending between 21-100 staff hours completing the annual Title II state-level reports and another 21-100 hours assisting with institution- and program-level Title II reports. (See fig. 3.) After devoting this time to the task, 5 states said the Title II data were “very useful,” 19 said they were “moderately useful,” and 25 states said they were “neither useful nor not useful,” “slightly useful,” or “not useful.” States most frequently reported using the Title II data to inform the approval process or inform state agency discussions about proposed TPP requirements.
Very few states reported using the data to inform state funding decisions or to provide information to school districts that are hiring teachers from TPPs. Nearly every state reported some cases where certain requirements may not be helpful. For example, less than half the states told us that they use the goals and assurances section of the Title II reports, which includes information about institutions’ progress toward their goals for addressing teacher shortage areas. Further, 48 states told us either that they are not using some sections of the Title II reports or that they already collect most of the useful Title II data elements through other mechanisms. These results suggest that, even among states that find the Title II data generally useful, states are frequently required to complete some reporting requirements that they report are not contributing to their oversight activities. Most of the TPPs and K-12 districts and several other stakeholders with whom we spoke questioned the usefulness of some Title II data to themselves and other stakeholders. Officials from 6 of the 14 TPPs with whom we spoke reported spending at least a month completing the annual Title II reports. Yet, 8 of the 14 TPPs told us that very little of the Title II data was useful to them for assessing the performance of their own programs. Officials from several TPPs also expressed confusion about the overarching goal of the Title II reporting requirements. The TPP officials said they have not seen any indication that other stakeholders, such as state and federal regulators or prospective teaching candidates, were using the Title II data to inform decisions. Further, none of the officials in the six K-12 school districts we spoke with said they use the Title II data when comparing TPP performance or recruiting new teachers. Similarly, seven researchers and stakeholder organizations we spoke with questioned the usefulness of some of the Title II data for research purposes. 
Education officials said they have to continue to collect the current data, due to statutory requirements. According to Education officials, the agency can require states to submit new Title II data elements, but they do not have authority to remove existing data elements that are required by statute. Education described some benefits to the existing data, including that it provides a robust picture of the demographics of teaching candidates in each state, as well as the number of individuals who complete a program. However, Education also noted in the preamble to its December 2014 proposed rule that “data that are collected and reported have not led to an identification of significant improvements in teacher preparation program performance in part because the data are not based on meaningful indicators of program effectiveness.” Education officials told us they have not conducted a study to inform Congress or the agency about whether any current reporting requirements are not useful. Therefore, Education has an incomplete picture of the usefulness of various data fields and states and colleges and universities will continue submitting data that are time-consuming to gather and may not be contributing to state oversight, TPP improvement, or the public’s knowledge about TPPs. Education permits states to report some Title II data in different ways to account for their differing approaches to overseeing TPPs and the differing structures of the TPPs themselves. This affects the consistency and clarity of the data that Education ultimately disseminates. 
Examples of key elements that may vary include: Alternative route TPPs: States may use different definitions of alternative route TPPs when reporting information such as the number of alternative route TPPs in their state and the number of teaching candidates who enroll in or complete such TPPs each year. For example, according to Education officials, one of the top teacher-producing states defines some TPPs as traditional TPPs that most states would consider alternative route TPPs. Education officials said that such decisions are within the states’ authority, but have repercussions for the consistency of national Title II data. Teaching candidate enrollment: TPPs define when a teaching candidate is formally enrolled, and these definitions may range from when a teaching candidate first takes a course to after they have completed other requirements such as a certain sequence of courses. Moreover, one large online TPP defines most of its teaching candidates as enrolled in the state where the TPP is headquartered, even though they live throughout the country and most likely plan to teach in other states. Consequently, the Title II report lists that state as one of the nation’s top teacher-producing states, while state officials told us that in fact the state faces teacher shortages. Program completers: Education’s Title II guidance defines a TPP completer as someone who has met all the educational or training requirements in a state-approved course of study for initial teacher certification or licensure. However, this definition allows for states or TPPs to choose whether they require their students to take and pass all state licensing tests before they can complete all of those educational or training requirements. As a result, different definitions of completer can lead to inconsistent data and make it difficult to make comparisons across TPPs or obtain a national picture regarding TPP completers.
These varying ways of calculating and reporting key Title II data have persisted despite Education’s efforts to clarify guidance and improve reporting tools. Such efforts, according to Education and its Title II contractor, have included clarified definitions, new guidance documents, some on-site technical assistance visits, and additional data checks in the online Title II reporting system. Several states and TPPs with whom we spoke praised Education’s efforts to facilitate the Title II submission process, noting particularly that they have received excellent technical assistance when they have questions about the process. However, states and TPPs also noted remaining challenges, such as how to interpret the Title II requirements within the context of each state or TPP’s specific circumstances. Education identified some potential data inconsistencies, such as differences in state definitions of alternate route TPPs, in its most recent Title II annual report, which presented information about the 2009-2010 academic year. However, it has not provided similar information about limitations of data it has disseminated since then. In more recent Title II data in published spreadsheets and on its website, Education does not include clarifications about potentially inconsistent data elements, such as alternative route TPPs or definitions of enrollees and completers. Education officials told us they did not include such explanatory material in these other formats because they considered the explanations in the previous annual report to be sufficient. However, these officials also noted that the agency may consider adding such material in the future. By not providing the explanatory material, the data may be potentially misleading and make it difficult for users to compare across states or programs. 
In addition, by not providing these explanations, Education’s approach is not consistent with federal internal control standards, which require that pertinent information should be identified, captured, and distributed in a form that permits users to perform their duties efficiently. Education has made few efforts to share expertise about TPP quality among its offices. Various offices and programs within Education influence TPP quality, including the Office of Postsecondary Education, which administers Higher Education Act Title II reporting requirements, and several offices that administer competitive grants. However, the agency does not have mechanisms in place to promote regular, sustained information-sharing among these offices. Education officials said the agency and its Title II contractor occasionally create custom Title II data runs for Education program offices, but there is no systematic effort to share Title II data within the agency. Further, Education officials in one office that administers competitive grants related to TPP quality described discussing grant results internally, but did not systematically discuss TPP quality with staff in other offices and programs. Education formerly convened a teacher quality workgroup on a regular basis that included opportunities to share information across the agency regarding issues related to TPP quality. Education officials noted that such workgroups, particularly when operating out of high-level offices in the agency, have been very helpful for systematically sharing information across offices. However, the teacher quality workgroup has been inactive since the office in which it was housed reorganized in the fall of 2014 and the agency has not subsequently resumed these information-sharing efforts. 
This represents a missed opportunity to use relevant information to bolster the effectiveness of several Education programs, such as Race to the Top and Supporting Effective Educator Development, which fund TPP improvements, among other priorities. Federal internal controls standards emphasize that effective information-sharing efforts are those that flow broadly across an agency in a form that is helpful for those who need it to carry out their responsibilities. Without such mechanisms to promote information-sharing, programs and offices within Education may not have access to clear and useful information about TPP quality. Furthermore, Education’s efforts to support or enhance TPP quality reach a limited number of states. For example, according to Education officials, several technical assistance and research entities and the office that administers competitive grants such as the Teacher Quality Partnership Grant Program have recently undertaken research or disseminated good practices about teacher preparation. However, in our survey, only about one third of states reported receiving information from Education about TPP oversight or enhancing TPP quality, and about half of all states said they would like additional support from Education on this topic. This suggests that Education is also missing opportunities to support states that could use relevant research and assistance from Education to enhance TPP quality. Additionally, 15 states reported in our survey that they oversee TPPs through an independent standards board, oversee the licensing of new teachers through such boards, or both. Among the eight such states that responded to our follow-up inquiries, officials from four expressed concern about their access to Education resources for TPP improvements because they are independent from the primary state educational agency that has formalized relationships with Education’s technical assistance providers.
Gaps in the agency’s efforts to disseminate information result from information-sharing being left to individual offices’ initiative rather than an agency-wide mechanism, and Education officials noted that more could be done to share information with states and other stakeholders. Education officials also noted that sharing information with states can be challenging for some competitive grant programs, because program funds are not always available for Education to use for national activities, including providing technical assistance to non-grantees. However, this also underscores the importance of Education systematically leveraging existing resources to disseminate knowledge about enhancing TPP quality. Federal internal controls standards emphasize the importance of agencies ensuring adequate information-sharing with key stakeholders. Without such an approach, Education may be missing opportunities to support state efforts to enhance TPP quality. For example, states may be unaware of information about good practices for TPP quality that could assist them in their oversight. TPPs serve a vital role in preparing new teachers—and thereby, K-12 students—for future success. The recent shift to college- and career-ready K-12 standards further highlights the importance of such programs. Education is responsible for collecting and disseminating Higher Education Act Title II data and administering grant programs that provide an opportunity to contribute to TPPs’ continuous improvement. However, unless Education ensures that all states assess whether TPPs are low-performing as required by the Higher Education Act, low-performing TPPs may not be identified, potentially resulting in new teachers being ill-prepared to teach K-12 students.
Further, if Higher Education Act Title II reports include data elements that are not useful, states and colleges and universities that offer TPPs will expend unnecessary effort on collecting information that is unlikely to improve TPP quality. Among reporting requirements that are useful, if Title II reports do not include important limitations on how the data should be used, policymakers and practitioners could draw incorrect conclusions based on the data. Finally, without increasing information-sharing within the Department and with states, Education may miss opportunities to disseminate information that could enhance TPP quality. We recommend that the Secretary of Education take the following four actions:

1. Develop a risk-based, cost-effective strategy to verify that states are implementing a process for assessing whether any teacher preparation programs are low-performing.

2. Study the usefulness of Title II data elements for policymakers and practitioners, and, if warranted, develop a proposal for Congress to eliminate or revise any statutorily-required elements that are not providing meaningful information.

3. Identify potential limitations in the Title II data and consistently disclose these limitations in the reports, websites, and data tables the agency uses to distribute the results. This could include more detailed information about data elements where definitions vary substantially from state to state or teacher preparation program to teacher preparation program.

4. Develop and implement mechanisms to systematically share information about teacher preparation program quality with relevant Department of Education program offices and states (including state Independent Standards Boards).

We provided a draft of the report to the Department of Education for review and comment. Education’s comments are reproduced in appendix II. Education agreed with our four recommendations.
Regarding our first recommendation, Education noted that its proposed regulations include new requirements for how states report on TPP performance, including whether any TPPs are low-performing. Education anticipates that its final regulations will guide the agency’s future efforts to monitor states’ processes for identifying any low-performing TPPs. While it finalizes its regulations, Education also plans to work closely with select states to help ensure they comply with Title II requirements related to identifying low-performing TPPs. We believe that interim monitoring will be important, particularly if the regulations are not finalized prior to the next Title II reporting cycle. Education also agreed with our other three recommendations, stating that the agency would: examine statutorily-defined Title II reporting requirements and make recommendations to Congress to remove or revise requirements, as warranted; identify potential limitations in the Title II data and disclose such limitations in its reports, websites, and data tables; and, enhance information-sharing about TPP quality within Education and relevant state agencies. We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Education, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. The objectives of this study were to answer the following questions: (1) How do states oversee teacher preparation programs (TPP); (2) What are states and select TPPs doing to prepare new teachers for new K-12 standards; and (3) To what extent does the U.S.
Department of Education’s (Education) data reporting and other information-sharing support and encourage high quality TPPs? We used a variety of methods to examine all three objectives. We reviewed relevant federal laws and regulations; federal internal controls standards; and leading practices for program management. We also reviewed documents from Education, select states, and research organizations. We also conducted a web-based survey of the state entities responsible for overseeing TPPs in all 50 states and the District of Columbia (see below for more information about this survey). In addition, we conducted interviews with a wide range of stakeholders including Education officials and contractors; companies responsible for administering state licensing tests; the national accreditation organization for teacher preparation programs; researchers; as well as organizations representing teachers, teacher preparation programs, officials who head state departments of education, and governors. We also conducted case studies in 5 states. In each of these states we interviewed officials from state oversight entities, officials from two or three TPPs, and officials in at least one K-12 school district (see below for more information about these case study interviews). Finally, we analyzed data collected to fulfill Higher Education Act Title II reporting requirements and Education’s related reports and technical assistance guidance (see below for more information about this analysis). We conducted our work from March 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
To identify how states oversee TPPs, modified their practices in response to new K-12 standards, and used Higher Education Act Title II data and Department of Education information about TPPs, we surveyed state oversight entities from all 50 states and the District of Columbia, achieving a 100 percent response rate. The survey was administered from November 2014 to March 2015. The survey used self-administered, electronic questionnaires that were posted on the Internet. We sent the survey to the official responsible for submitting each state’s Higher Education Act Title II reporting requirements and requested that this person consult with other officials in order to provide an official state response. We reviewed state responses and followed up by e-mail or telephone with select states for additional clarification and obtained corrected information for our final survey analysis. We also published survey responses in an e-publication supplemental to this report, “Teacher Preparation Programs: Survey of State Entities that Oversee Teacher Preparation Programs, an E-Supplement to GAO-15-598” (GAO-15-599SP, July 2015). The quality of survey data can be affected by nonsampling error. Nonsampling error includes variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, and data collection and processing errors. We included steps in developing the survey, and collecting, editing, and analyzing survey data, to minimize such nonsampling error. In developing the Web survey, we pretested draft versions of the instrument with state officials in four states and consulted with officials from the Department of Education to check the clarity of the questions and the flow and layout of the survey. On the basis of the pretests, we made revisions to the survey. Further, using a web-based survey also helped remove error in our data collection effort.
By allowing state officials to enter their responses directly into an electronic instrument, this method automatically created a record for each state official in a data file and eliminated the errors associated with a manual data entry process. In addition, the program used to analyze the survey data was independently verified to ensure the accuracy of this work. To obtain more detailed information about how states oversee teacher preparation; the steps that states and TPPs have taken in response to new K-12 standards; and, state and TPP views about Higher Education Act Title II reporting requirements and Department of Education grants and information-sharing related to TPPs, we conducted case studies of five states: Arizona, California, Maine, Tennessee and Virginia. We selected this non-generalizable sample of states because they represent a variety of sizes (i.e., numbers of TPPs operating in the state) and approaches to overseeing TPPs (including how they use data about graduate effectiveness to evaluate TPP performance) and to reflect different types of college- and career-ready K-12 standards. As part of these case studies, we reviewed state documents and interviewed officials from the state oversight agency, two to three TPPs, and at least one K-12 school district. In total, we interviewed 14 TPPs. We ensured that this non-generalizable sample of TPPs varied in a number of ways including whether they were located within or separate from colleges and universities; were alternative route or traditional; and offered in-person or online coursework. We also ensured that the TPPs represented a range of sizes and urban, suburban, and rural locations. We selected K-12 districts that worked with these TPPs, by providing student teaching experiences, hiring graduates, or both. 
Interviews with state, TPP, and K-12 district officials from Arizona, California and Maine were conducted in person and interviews with officials from Tennessee and Virginia were primarily conducted by telephone. To assess the extent that Education’s data reporting supports and encourages high quality TPPs, we reviewed select state Title II reports; Education’s reporting templates for the state reports and institutional and program reports; Education’s December 2014 proposed rule; technical assistance documents developed by Education and its contractor; and Education’s reports, data spreadsheets and web pages that compile data from Title II state reports. We interviewed Education officials and contractors responsible for collecting, analyzing and reporting such data. We also compared Education’s reports, spreadsheets and web pages—including their definitions and descriptions of the appropriate use of reported data—against federal standards for internal controls and leading practices for data-driven program management. We assessed the reliability of fields in Education’s Title II data spreadsheets regarding the number of TPPs that states reported identifying as low-performing or at risk of being low-performing by: (1) reviewing agency documents, (2) interviewing relevant agency officials, and (3) testing the accuracy of these reporting fields for a non-generalizable sample of states. We determined that Education’s compilation of state data about TPPs that have been identified as low-performing was sufficiently reliable to include in our report. In addition to the contact named above, Scott Spicer (Assistant Director); Barbara Steel-Lowney and Daren Sweeney (analysts-in-charge); Lucas Alvarez and Hedieh Fusfield made key contributions to this report. Also contributing to this report were: Deborah Bland, Kate Blumenreich, Joanna Chan, Sarah Cornetto, Holly Dye, Kirsten Lauber, Ashley McCall, Sheila McCoy, Jennifer McDonald, John Mingus, Mimi Nguyen, and Tom James.
TPPs play a vital role in preparing teachers, including helping them teach to new K-12 college- and career-ready standards recently adopted or under development in all states. Under Title II of the Higher Education Act, states collect information on TPPs and report it to Education, which reports it to the public. Education also administers competitive grant programs related to teacher preparation. In light of new K-12 standards and questions about TPP quality, GAO was asked to review TPP, state, and federal efforts. This report examines: (1) state oversight activities, (2) state and TPP actions related to new K-12 standards, and (3) the extent to which Education shares information about TPP quality. GAO reviewed relevant federal laws and documents, surveyed all state oversight offices (with a 100 percent response rate), and interviewed Education officials and various stakeholders, as well as a non-generalizable sample of officials in five states with varied approaches to oversight and 14 TPPs in those states. State oversight officials reported that they approve teacher preparation programs (TPP) by assessing the quality of program design and analyzing candidate data such as program graduation rates, according to GAO’s 2014-2015 survey of states and the District of Columbia. However, some states reported that they do not assess whether TPPs are low-performing, as required by federal law. To receive funding under the Higher Education Act, states are required to conduct an assessment to identify TPPs that are low-performing. Seven states reported to GAO that they do not have a process to do so. State officials who reported not having a process in GAO’s survey cited several reasons including that they believed other oversight procedures were sufficient to ensure quality. Education officials told GAO they have not verified states’ processes to identify low-performing TPPs.
In accordance with federal internal control standards, Education should provide reasonable assurance of compliance with applicable laws. If states fail to assess whether TPPs are low-performing, potential teaching candidates may have difficulty identifying low-performing TPPs. This could result in teachers who are not fully prepared to educate children. Officials in most surveyed states and all 14 of the TPPs GAO interviewed reported making changes to prepare teaching candidates for new state K-12 standards. Thirty-seven states reported providing TPPs with guidance about the new standards and a similar number of states reported adjusting their process for approving TPPs. Most states also required prospective teachers to pass licensing tests that have been modified in response to the new standards. Officials from all of the 14 TPPs GAO interviewed reported making changes that generally fell within the following three categories: (1) increasing subject-matter knowledge of teachers, (2) modifying coursework related to teaching techniques, and (3) using classroom training to provide real-world experience. Education missed opportunities to share information about TPP quality internally and with state oversight entities. Federal internal control standards highlight the value of effective information-sharing with internal and external stakeholders. However, Education does not have mechanisms in place to promote regular, sustained information-sharing among its various program offices that support TPP quality because the workgroup that used to facilitate such information-sharing was discontinued. Without such a mechanism, Education cannot fully leverage information about TPP quality gathered by its various programs. Furthermore, Education's current efforts to share information about TPP quality with states only reach about a third of states, according to GAO's survey, although about half of all states reported that they wanted more of such information. 
Gaps in the agency's efforts to disseminate information result from information-sharing being left to individual offices' initiative rather than an agency-wide mechanism. Education officials acknowledged that more could be done to share information with states and other stakeholders. Without such efforts, Education may miss opportunities to support state efforts to improve TPP quality. For example, states may be unaware of good practices identified by Education that could assist them in their oversight. Among other things, GAO recommends that Education monitor states to ensure their compliance with requirements to assess whether any TPPs are low-performing and develop mechanisms to share information about TPP quality within the agency and with states. Education agreed with our recommendations. |
Ground ambulance services are provided by a wide range of organizations that differ in organizational structure, staffing types, types of transports offered, and revenue sources. Medicare payments for ambulance services are made up of two components: a service-level payment for the type of transport provided and a mileage payment. Providers may be affiliated with an institution (such as a hospital or a fire department) and share resources and operational costs, or they may be independent and freestanding. In addition, providers may be for-profit, nonprofit, or government-based. Providers may rely heavily on volunteers, use both volunteers and paid staff, or use only paid staff. Providers may specialize in nonemergency transports, or offer both nonemergency and emergency (those responding to a 911 call) transports. Also, some providers offer only basic life support (BLS) services, while others offer advanced life support (ALS) services. ALS services require the skills of a more specialized and highly trained medical technician, such as a paramedic, than are needed to provide BLS services. Revenue sources depend on the resources available in communities and communities' choices about funding ambulance services. They may include community tax support, charitable donations, in-kind contributions, state and federal grants, subscription programs, and payments from Medicare, Medicaid, and private health insurance companies (including patient copayments or coinsurance). The mix and amount of revenues available may vary. Communities differ in the level of tax support for specific services, such as ensuring a minimum level of service in remote areas, the sophistication of transport vehicles, and the training level of the staff. Medicare pays ambulance providers through a national fee schedule. (See fig. 5 in app. I for an overview of the Medicare ambulance payment formula.) Payments have two components:
1. service-level payment: for the type of transport provided, such as an ALS Level 1 transport;
2. mileage payment: determined by the number of miles traveled with a patient during an ambulance transport and the mileage base rate.
Since 2002, CMS has increased the rural mileage rate (which also applies to super-rural transports) by 50 percent for miles 1 through 17. See 67 Fed. Reg. 9100 (Feb. 27, 2002) (adding subpart H to 42 C.F.R. part 414); 42 C.F.R. § 414.610(c)(5)(i) (2011) (this mileage rate increase is not set to expire). Also see fig. 5 in app. I for an overview of the Medicare ambulance payment formula. Temporary add-on payments to the fee schedule were established by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003, temporarily extended by subsequent acts, and most recently extended through the end of 2012 by the Middle Class Tax Relief and Job Creation Act of 2012. Providers paid under a fee schedule generally have an incentive to keep their costs to deliver services at or below the fee schedule rate. Some providers rely heavily on Medicare revenues, and adequate Medicare margins for these providers may help ensure that beneficiaries have access to ambulance services. In our 2007 report, we found that providers with lower transport volumes generally had higher costs per transport than providers with greater transport volumes. Because of high fixed costs for maintaining readiness—the availability of an ambulance and crew for immediate emergency responses—providers with low volumes, which still need to maintain readiness, tended to have higher costs per transport. Other significant factors that affected cost per transport included the level of volunteer staff hours, the percentage of Medicare transports that were BLS, the percentage of Medicare transports that were super-rural, and the level of community tax support. Providers' costs for providing ground ambulance transports were highly variable in 2010, ranging from a low of $224 per transport to a high of $2,204, with a median cost per transport of $429. 
The variability of costs per transport reflected differences in certain provider characteristics, such as volume of transports, intensity of Medicare transports, and level of government subsidies received. Providers reported that personnel costs accounted for the largest percentage of their total costs in 2010 and contributed the most to increases in total costs between 2009 and 2010. The median cost per ground ambulance transport for providers in our sample was $429 in 2010, but providers' costs per transport ranged from a low of $224 to a high of $2,204. Five percent of providers had costs per transport that were less than $253, and 5 percent had costs per transport that were more than $924. Figure 1 shows the distribution of 2010 costs per transport for providers in our sample. Among the population of providers from which our sample was drawn, the estimated median cost per transport ranged from $401 to $475, which represents the 95 percent statistical confidence interval around the median and is the range within which we expect the population median cost per transport to fall in 95 percent of the samples we could have drawn. Super-rural providers had estimated median costs per transport that were significantly higher than those of urban providers (see table 1). The variability associated with our survey data did not allow us to conclude that rural providers' estimated median costs per transport were significantly different from those of super-rural or urban providers. As will be discussed later, when we controlled for other provider characteristics that affected cost per transport using regression analysis, differences in cost per transport by service area were not significant. Provider characteristics other than service area were more important in explaining the variation in cost per transport. 
The median Medicare margin, including add-on payments, was about positive 2 percent in 2010 for the 153 providers in our sample. When we removed the add-on payments, we found that payments decreased for the providers in our sample, resulting in a lower median Medicare margin of negative 1 percent for those providers. See table 2. Ambulance transports for all Medicare fee-for-service beneficiaries in the nation increased by 33 percent from 2004 to 2010. All three service areas—urban, rural, and super-rural—experienced growth. Transports per 1,000 beneficiaries in super-rural areas grew the most, by 41 percent, and transports per 1,000 beneficiaries in rural and urban areas increased by 35 percent and 32 percent, respectively. (See table 4.) The increase in ambulance transports from 2004 to 2010 is attributable primarily to an increase in BLS nonemergency transports, which rose by 59 percent from 2004 to 2010. Super-rural areas experienced the largest increase in BLS nonemergency transports (82 percent). The increase in Medicare beneficiaries' use of ambulance services did not appear to be caused by changes in the demographic characteristics of beneficiaries. For example, factors such as age, race, and sex remained stable from 2004 to 2010 in urban, rural, and super-rural areas. Representatives we spoke with from one ambulance provider organization suggested that some of the increase in ambulance transports was attributable to increased billing for Medicare ambulance services at the local-government level. Some local governments that provided ambulance transports free of charge had been reluctant in the past to bill insurers such as Medicare because patients would then be financially responsible for out-of-pocket insurance costs, such as deductibles and copayments. The increased out-of-pocket costs for patients had the potential to result in less community support of ambulance providers through fewer charitable contributions and fewer volunteers. 
However, these local governments have begun to bill Medicare as well as other insurers because of increased budgetary pressures. Representatives we spoke with also added that the introduction of the national fee schedule in 2002 may have contributed to increased billing because it allowed providers to better anticipate the amount of revenue they could receive from Medicare. The Department of Health and Human Services (HHS) Office of Inspector General (OIG) has explored increases in ambulance utilization and has cited improper payments as one potential cause. For example, HHS OIG found that nonemergency transports, including BLS nonemergency transports, made up the majority of improper payments for ambulance services, particularly transports for dialysis services. HHS OIG also found that Medicare's ambulance transport benefit is highly vulnerable to abuse and that many ambulance transports paid for by Medicare did not meet Medicare program requirements, including transports that were not medically necessary. We provided a draft of this report to HHS and invited representatives of AAA to review the draft. HHS had no general or technical comments on behalf of CMS. The AAA representatives provided oral comments and generally agreed with our findings; however, AAA had some questions regarding our methodology and conclusions, which we clarified in the report where appropriate and discuss below. In addition, AAA provided technical comments, which we incorporated as appropriate. AAA representatives questioned whether the Medicare margin results were comparable to those of the 2007 report and were concerned that readers would conclude that providers' Medicare margins have increased over time. 
We clarified in the report that we do not consider the results reported in 2007 and in the current report to be directly comparable because the samples examined in each report were different and we reported median Medicare margins in the current report whereas in 2007 we reported average Medicare margins. AAA representatives noted that our sample contains providers that have been in business since at least 2003 and that the cost data from this sample may not be representative of all ambulance providers. We agree that the providers in our sample represent mature and well-established organizations—an advantage because this approach avoids start-up organizations with potentially high start-up costs, as described in our scope and methodology. Despite the differences in the samples and the type of measure used for reporting Medicare margins, both of these studies showed wide variation in costs per transport and Medicare margins. AAA representatives had some questions about the results of our regression analysis. For example, the regression results suggest that ambulance providers that receive a greater proportion of government subsidies tend to have higher costs. The representatives theorized that providers with higher costs seek additional government support and did not think this finding was consistent with how their industry operates. As described in the report, the Medicare Payment Advisory Commission found an association between increased resources and increased costs in the hospital industry and theorizes that such hospitals face less pressure to control costs. We found an association in the ambulance industry but determining causality was beyond the scope of our work. 
AAA representatives also questioned the regression analysis results that indicated that providers’ use of volunteer staff did not significantly contribute to differences in providers’ total costs, because our survey data indicated that personnel costs were, on average, 61 percent of providers’ total costs. The results may be a consequence of the relatively small sample size and, in addition, a small proportion of providers in our sample using volunteer staff (21 percent). Finally, the AAA representatives commented that ground ambulance providers’ current Medicare payments are lower than those we calculated for 2010 because of the expiration of a required temporary increase in Medicare payments for certain geographic areas, the implementation of a policy for reporting fractional mileage, and the introduction of a productivity adjustment relative to the annual inflation adjustment of the fee schedule. In addition, AAA noted that the cost of fuel has increased since 2010. We acknowledge that these factors likely lowered Medicare payments and increased costs for some providers after 2010, the most recent year for which data were available when we began our study. We are sending copies of this report to other congressional committees and the Administrator of CMS. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. GAO staff who made major contributions to this report are listed in appendix II. This appendix describes the data and methods we used to respond to our research objectives. We conducted a survey of ambulance providers to collect data on their costs and other characteristics. 
We relied on these survey data for much of our analyses and supplemented our survey results with information from other sources, including Medicare claims data, as appropriate. We also analyzed Medicare claims data to determine payments to ambulance providers as well as to determine the number of Medicare ambulance transports. We tested the internal consistency and reliability of the data from our survey and the Medicare claims data and determined that all data sources were adequate for our purposes. We conducted our work from April 2012 through September 2012 in accordance with generally accepted government auditing standards. To collect data on ground ambulance providers' costs, revenues, transports, and organizational characteristics for calendar year 2010, or for the fiscal year that corresponded to all or the majority of a provider's calendar year 2010 data, we sent a web-based survey to a random, nationally representative sample of 294 eligible ambulance providers. We obtained data from 154 providers for a response rate of 52 percent, after excluding cost outliers and surveys with unreliable data. We determined that our sample was nationally representative of the approximately 2,900 ambulance providers that billed Medicare in 2003 and 2010, were still operational in 2012, and did not share costs with nonambulance services or air ambulance services. However, the small sample size and the variability of reported costs reduced the precision of our estimates. We drew potentially eligible providers for our survey from an existing sample, originally developed for our 2007 report, of 900 non-hospital-based ground ambulance providers that billed Medicare in 2003. 
Through Internet searches and phone contacts to ambulance providers, we excluded any providers that (1) were no longer in business; (2) shared costs with nonambulance services, such as those providers affiliated with a fire department; or (3) we were otherwise not able to contact. As we did for the 2007 report, we excluded ground ambulance providers that also provided air ambulance services. After all exclusions, we had 294 eligible providers for potential survey participation. On the basis of the number of providers that were eligible for our sample and the number of providers that responded to our survey, we calculated sample weights to estimate how many Medicare ambulance providers our sample represented. To develop our survey instrument, we modified the survey instrument used for our 2007 report, which was mailed to ambulance providers, to tailor it to our current objectives and format it for use as a web-based survey. We retained questions about ambulance providers' costs, revenues, and transports, as well as questions to identify organizational characteristics that might affect ambulance providers' costs, such as the use of volunteer staff. We added questions related to changes in total cost (increases or decreases) from 2009 to 2010 and the cost components that most contributed to the changes. We also asked providers for their National Provider Identifier (NPI), which providers use to bill Medicare, and their Provider Transaction Access Number (PTAN). These numbers enabled us to identify and analyze Medicare claims for the providers we surveyed. We needed these current identifiers to link the providers in our sample to Medicare claims data because our sample was based on the sampling frame of our 2007 report, and Medicare has implemented a new identification system since then. We sought feedback on our survey instrument from both internal and external sources. It was reviewed by internal survey experts and pretested on seven ambulance providers. 
We also consulted with the American Ambulance Association (AAA), an industry group that represents ambulance providers. On the basis of the feedback we received, we modified the survey instrument as appropriate. We sent our survey by e-mail to 294 eligible ambulance providers on April 12, 2012. We asked providers to complete the survey within 2 weeks of receipt. We later extended this deadline 2 weeks to give providers more time to complete the survey. Providers were encouraged to contact us by e-mail or a toll-free number so that we could resolve any questions or problems. We sent three reminder e-mails to providers that had not yet completed the survey (6, 14, and 21 days after sending the survey to providers) and made two rounds of reminder telephone calls to encourage participation. AAA and the National Association of Emergency Medical Technicians encouraged providers to participate in the survey. When providers returned surveys that were incomplete, invalid, or resulted in conflicting responses to key items, we conducted follow-up by phone and e-mail. We took steps to ensure that the data reported in the survey were valid and reliable. First, we included in the survey instrument questions intended to validate the reported cost data. For example, we asked providers whether certain cost components (such as personnel costs) were included in the total cost amount submitted, and we asked how confident providers were about the total cost amount submitted. As a result, we excluded from our analyses one provider that was not confident in the total cost amount. Second, we conducted analyses to identify any incomplete data or inconsistencies in responses. If we found such data, we contacted the provider to try to obtain complete or corrected data. We excluded three providers that were not able to provide complete data on total cost or total transports. 
Third, we used a lognormal distribution to exclude outliers with a cost per transport more than three standard deviations from the mean. We excluded three providers with costs per transport that were outliers. All computer programs we used for our analyses were peer reviewed to verify that they were written correctly and executed properly. On the basis of our efforts to validate the data, including computer testing and corrections, we concluded that the data were sufficiently valid and reliable for our purposes. All sample surveys are subject to sampling error—that is, the extent to which the survey results differ from what would have been obtained from the population instead of the sample. The sample is only one of a number of samples that we might have drawn. As a result, we reported the results of our analyses with their 95 percent confidence intervals. The 95 percent confidence interval refers to the range of values within which we would expect the true population value to fall in 95 percent of the samples we could have drawn. We analyzed 2010 Medicare claims data for the survey nonrespondents and compared this information with similar claims data for providers in our sample. Using Medicare claims data for all survey recipients, we were able to test for potential nonresponse bias for the characteristics contained in the claims data. The nonresponse analysis did not find any statistically measurable bias that would affect our analyses of providers’ costs. We used regression analysis to investigate the relationship between providers’ total cost and provider characteristics that may have affected their costs. We opted for a total cost model using a logarithmic functional form because it is well grounded in microeconomic theory. Although we considered using a similar model of the same functional form with cost per transport as the dependent variable, we determined that the parameter estimates of such a model would be similar to the total cost model. 
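The log-scale outlier screen described above (excluding observations whose cost per transport lies more than three standard deviations from the mean of the log distribution) can be sketched in a few lines. This is an illustration only, not GAO's actual program; the function name and data are invented:

```python
import math
from statistics import mean, stdev

def flag_cost_outliers(costs_per_transport):
    """Flag values more than three standard deviations from the mean
    of the log-transformed cost-per-transport distribution."""
    logs = [math.log(c) for c in costs_per_transport]
    mu, sd = mean(logs), stdev(logs)
    return [c for c, lg in zip(costs_per_transport, logs) if abs(lg - mu) > 3 * sd]
```

Applied to a sample in which one provider's cost per transport is far above the rest, only that extreme value falls outside three standard deviations on the log scale.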
Provider characteristics included in our model were: (1) volume of transports, (2) cost of doing business, (3) mix of Medicare transports, (4) intensity of Medicare transports, (5) service area, (6) use of volunteer staff, (7) receipt of government subsidies, and (8) ownership type. We used those results to produce a graph illustrating the relationship between cost per transport and volume of transports. We also used the results of the regression analysis to estimate the effect on providers' cost per transport of reducing the value of each of two variables that were significant in the regression. See table 5 for the characteristics included in the model, how each characteristic was measured, and the data source for each characteristic. Our regression analysis modeled total cost at the provider level as a function of the provider characteristics described above. We used ordinary least squares to model the log of total costs for a provider. The model was specified in log-log form to conform to standard microeconomic theory regarding cost functions. The two continuous independent variables—transport volume and geographic practice cost index (GPCI)—were entered in log form. The remaining variables were not entered in log form because they were either indicator variables (value of 0 or 1) or percentage variables (values ranging from 0 to 1.00). Three of the explanatory variables in the regression were statistically significant at the 1 percent or better level in explaining the variation in providers' total costs: total transports, percentage of revenues from government subsidies, and percentage of Medicare transports that were nonemergency. Table 6 shows the regression results. We used the regression results to predict the log of total cost and then converted it to total cost by taking the antilog. We applied an adjustment to the resulting prediction of total cost to account for the fact that our regression was for log total cost rather than total cost. 
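As an illustration of this fit-then-retransform sequence, the sketch below fits a log-log model with a single regressor (transport volume) rather than the eight characteristics in GAO's model, and uses Duan's smearing estimator as the retransformation adjustment. The report does not specify which adjustment GAO applied, so treat these details, and the function names, as assumptions:

```python
import math
from statistics import mean

def fit_loglog(volumes, total_costs):
    """OLS of log(total cost) on log(transport volume): a one-variable
    simplification of the eight-characteristic model in the text."""
    x = [math.log(v) for v in volumes]
    y = [math.log(c) for c in total_costs]
    xbar, ybar = mean(x), mean(y)
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    intercept = ybar - slope * xbar
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    # Retransformation adjustment (Duan smearing): mean of exponentiated residuals.
    smear = mean(math.exp(r) for r in residuals)
    return intercept, slope, smear

def predicted_cost_per_transport(intercept, slope, smear, volume):
    # Antilog of the predicted log total cost, adjusted, then divided by transports.
    total_cost = math.exp(intercept + slope * math.log(volume)) * smear
    return total_cost / volume
```

With an estimated slope below 1, predicted cost per transport falls as transport volume rises, consistent with the economies of scale the report describes for higher-volume providers.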
We then divided total cost by total transports to derive cost per transport. We used this method to produce predictions of cost per transport for the range of 1 to 20,000 transports shown in figure 2 of the report. We also used the regression results to estimate the effect on cost per transport of a reduced percentage of revenues from government subsidies and a reduced percentage of nonemergency transports. In each case, we held the other variables in the regression model at their regression sample mean and calculated cost per transport for the sample two ways: one with the value of the variable of interest set at its sample average and another with it set at a value 25 percent less. We reported the difference between these two values for each variable. To examine the relationship between Medicare payments and providers’ costs, we used Medicare claims data to calculate Medicare payments in 2010 for the providers in our sample, and we calculated Medicare margins—the percentage difference between providers’ Medicare payments per transport and their costs per transport. To examine ambulance transports per 1,000 Medicare beneficiaries, we used Medicare claims data and Centers for Medicare & Medicaid Services (CMS) 2010 Medicare enrollment data. We found CMS’s claims and enrollment data to be sufficiently reliable for the purposes of this report. We calculated 2010 Medicare payments for the providers in our sample using Medicare carrier claims data. We identified relevant ambulance claims for 153 providers by using the NPIs (which providers use to bill Medicare) and PTANs reported by providers on the survey. We excluded any Medicare claims without either service-level or mileage payments and any claims with service-level payments that were more than three standard deviations from the mean of the log distribution for all such claims. 
We also excluded any claims for transports with multiple patients because the calculations for these payments require additional information not available on Medicare claims. See figure 5 for the payment formulas specified in the Medicare ambulance fee schedule. To calculate service-level payments, we used the type of transport identified on the claim to determine the associated relative value unit, which is a constant multiplier that adjusts the service-level base rate to account for the mix and intensity of the service, and we used the 2010 service-level base rate of $209.65. We used the zip code where the transport originated to determine the adjustment from the geographic practice cost index (GPCI), which is used to account for the different costs of operating ambulance services in different regions of the country. In accordance with CMS’s payment methodology, we adjusted 70 percent of the service-level payment by the GPCI, and we did not adjust the other 30 percent by the GPCI. We also used the zip code where the transport originated to determine the applicable urban, rural, or super-rural add-on payment rate. To calculate mileage payments, we used the number of miles reported on the claim and the 2010 mileage base rate of $6.74. We used the zip code where the transport originated to determine the applicability of the permanent mileage increase for miles 1 through 17 for rural and super-rural transports and to determine the applicable urban, rural, or super-rural add-on payment rate. The total fee schedule payment for each transport is the sum of the service-level and mileage payments. 
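Using the 2010 figures cited above (a service-level base rate of $209.65, a mileage base rate of $6.74, the permanent 50 percent increase for miles 1 through 17 of rural and super-rural transports, and the adjustment of 70 percent of the service-level payment by the GPCI), the payment arithmetic can be sketched as follows. The function names are invented, and treating the add-on rate as a simple multiplier on each component is a simplifying assumption:

```python
SERVICE_BASE_RATE = 209.65   # 2010 service-level base rate (from the text)
MILEAGE_BASE_RATE = 6.74     # 2010 mileage base rate (from the text)

def service_level_payment(rvu, gpci, addon_rate=0.0):
    """Relative value unit times the base rate; 70 percent of the amount
    is adjusted by the GPCI and 30 percent is not."""
    amount = SERVICE_BASE_RATE * rvu * (0.7 * gpci + 0.3)
    return amount * (1 + addon_rate)

def mileage_payment(miles, rural_or_super_rural=False, addon_rate=0.0):
    """Loaded miles times the mileage base rate, with the 50 percent
    increase for miles 1 through 17 of rural/super-rural transports."""
    if rural_or_super_rural:
        amount = (min(miles, 17) * MILEAGE_BASE_RATE * 1.5
                  + max(miles - 17, 0) * MILEAGE_BASE_RATE)
    else:
        amount = miles * MILEAGE_BASE_RATE
    return amount * (1 + addon_rate)

def total_fee_schedule_payment(rvu, gpci, miles,
                               rural_or_super_rural=False, addon_rate=0.0):
    # Total payment is the sum of the service-level and mileage payments.
    return (service_level_payment(rvu, gpci, addon_rate)
            + mileage_payment(miles, rural_or_super_rural, addon_rate))
```

For example, a transport with a relative value unit of 1.0 in an area with a GPCI of 1.0 and no add-on yields a service-level payment of $209.65, and 20 rural loaded miles yield a mileage payment of about $192.09 (17 miles at 150 percent of $6.74 plus 3 miles at $6.74).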
We calculated payments with and without the applicable add-on payment rates, and we assumed that providers charged the maximum allowed amount under the ambulance fee schedule. To ensure that our payment calculations were comparable to actual payments made based on the claims, we compared the payments we calculated with add-ons to the payment amounts on the claims for a random sample of 6,000 urban, rural, and super-rural claims, and we found the difference in the amounts to be less than 1 percent. All payments are expressed in 2010 dollars. For the providers in our sample, we reported the median of providers' Medicare payment per transport by predominant service area (urban, rural, or super-rural) and for all providers. To calculate each provider's Medicare payment per transport, we divided the sum of the provider's Medicare payments by the sum of its Medicare transports. To calculate each provider's Medicare margin, we used the provider's cost per transport, as calculated from the survey responses, and its Medicare payment per transport, described in the previous section. We subtracted the provider's cost per transport from its Medicare payment per transport, and we divided this amount by the provider's Medicare payment per transport. For the providers in our sample, we reported the median Medicare margin and the distribution of providers' Medicare margins by predominant service area (urban, rural, or super-rural) and for all providers. As we did in the 2007 report, we classified providers as super-rural if 60 percent or more of their Medicare transports in 2010 originated in a super-rural zip code. We classified providers as rural if they did not meet the super-rural definition and 60 percent or more of their Medicare transports in 2010 originated in rural or super-rural zip codes. We classified providers as urban if they did not meet the rural or super-rural classifications. 
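The margin formula and service-area classification rules just described reduce to a few lines of arithmetic. The sketch below mirrors the definitions in the text; the function names are invented:

```python
def medicare_payment_per_transport(total_medicare_payments, total_medicare_transports):
    # Sum of a provider's Medicare payments divided by its Medicare transports.
    return total_medicare_payments / total_medicare_transports

def medicare_margin(cost_per_transport, payment_per_transport):
    # (payment per transport - cost per transport) / payment per transport
    return (payment_per_transport - cost_per_transport) / payment_per_transport

def service_area_class(share_super_rural, share_rural):
    """Classify a provider by where its Medicare transports originated:
    super-rural if 60 percent or more began in super-rural zip codes,
    rural if 60 percent or more began in rural or super-rural zip codes,
    and urban otherwise."""
    if share_super_rural >= 0.60:
        return "super-rural"
    if share_super_rural + share_rural >= 0.60:
        return "rural"
    return "urban"
```

For example, a provider paid $438 per Medicare transport with a cost of $429 per transport would have a Medicare margin of roughly positive 2 percent (these figures are illustrative only).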
Since some providers furnish transports in more than one area, there is likely to be some measurement error in identifying the full effect of service area on costs. We excluded claims with service-level payments outside of three standard deviations from the mean of the log distribution for all such claims for each of these years. We counted Medicare beneficiaries as the number of months beneficiaries were enrolled in Medicare Part A or B in 2010 divided by 12. We then divided the number of transports by the number of enrolled Medicare beneficiaries and multiplied the quotient by 1,000. We also examined the change in transports per 1,000 Medicare beneficiaries from 2004 to 2010. Medicare claims data, which are used by the Medicare program as a record of payments made to health care providers, are closely monitored by both CMS and Medicare Administrative Contractors—contractors that process, review, and pay claims for Medicare Part B–covered services, including ambulance services. The data are subject to various internal controls, including checks and edits performed by the contractors before claims are submitted to CMS for payment approval. Although we did not review these internal controls, we assessed the reliability of Medicare claims data by reviewing related CMS documentation, interviewing agency officials about the data, and comparing payments in a sample of claims to expected payments based on Medicare's published ambulance fee schedule. We determined that the Medicare claims data were sufficiently reliable for the purposes of this report. In addition, we assessed the reliability of CMS's enrollment data by reviewing related CMS documentation and comparing the enrollment data to published sources. We determined that Medicare enrollment data were sufficiently reliable for the purposes of this report. In addition to the contact named above, Christine Brudevold, Assistant Director; Ramsey Asaly; Carl S. Barden; Stella Chiang; Carolyn Fitzgerald; Leslie V. 
Gordon; Corissa Kiyan; Rich Lipinski; Elizabeth T. Morrison; Aubrey Naffis; and Eric Wedum made key contributions to this report.

Ambulance Providers: Costs and Expected Medicare Margins Vary Greatly. GAO-07-383. Washington, D.C.: May 23, 2007.

Ambulance Services: Medicare Payments Can Be Better Targeted to Trips in Less Densely Populated Areas. GAO-03-986. Washington, D.C.: September 19, 2003.

Since 2004, Congress has authorized supplemental temporary payments, called "add-on" payments, to augment Medicare fee schedule payments to ambulance providers. The add-on payments increased payments for transports in urban, rural, and super-rural (the least densely populated) areas by $175 million in calendar year 2011, according to the Medicare Payment Advisory Commission. In 2007, GAO reported a decline in transports by beneficiaries in super-rural areas and recommended that the Centers for Medicare & Medicaid Services (CMS) monitor beneficiary use of ambulance transports to ensure access to services, particularly in super-rural areas. The Middle Class Tax Relief and Job Creation Act of 2012 required GAO to update the 2007 report. GAO examined, for 2010 (the most recent year complete data were available when GAO began the study), (1) ground ambulance provider costs for transports, (2) the relationship between Medicare payments and provider costs, and (3) beneficiary use of ground ambulance transports. To do this work, GAO sent a survey to a sample of eligible providers based on the 2007 report sample asking for provider costs and characteristics. The sample is representative of all ground ambulance providers that billed Medicare in 2003 and 2010, were operational in 2012, and did not share costs with nonambulance services or air ambulance services. GAO also performed a regression analysis to examine factors that affect costs, analyzed Medicare claims and enrollment data, and interviewed representatives of ambulance provider organizations.
CMS reviewed a draft of this report and had no comments. Ground ambulance providers' costs per transport for 2010 varied widely. The median cost per transport for the providers in GAO's sample was $429, ranging from $224 to $2,204 per transport. Provider characteristics that affected cost per transport were volume of transports (including both Medicare and non-Medicare transports), intensity of transports (the proportion of Medicare transports that were nonemergency), and the extent to which providers received government subsidies. Higher volume of transports, higher proportions of nonemergency transports, and lower government subsidies were associated with lower costs per transport. Providers reported that personnel cost was the largest cost component in their 2010 total costs and the biggest contributor to increases in their total costs from 2009 to 2010. The median Medicare margin, including add-on payments, was about +2 percent in 2010 (meaning that providers' Medicare payments per transport exceeded their overall costs per transport) for the providers in GAO's sample, but Medicare margins varied widely for those providers. When GAO removed the add-on payments, payments decreased for the providers in the sample, resulting in a lower median Medicare margin of -1 percent. Due to the wide variability of Medicare margins for providers in the sample, GAO cannot determine whether the median provider among the providers in the population that the sample represents had a negative or positive margin. The median Medicare margin with add-on payments ranged from about -2 percent to +9 percent, while the median Medicare margin without add-on payments ranged from about -8 percent to +5 percent. Ground ambulance transports for all Medicare fee-for-service beneficiaries grew 33 percent from 2004 to 2010. Transports by beneficiaries nationwide grew the most in super-rural areas (41 percent) relative to urban and rural areas. 
The increase overall is attributable primarily to an increase of 59 percent over this period in basic life support (BLS) nonemergency transports, which include noninvasive interventions, such as administering oxygen. In comparing this growth by service area, BLS nonemergency transports in super-rural areas grew the most--by 82 percent. Representatives from an ambulance provider organization suggested the increase in transports may stem from increased billing by local governments. Some local governments that used to provide Medicare transports free of charge may bill Medicare now because of increased budgetary pressures. The Department of Health and Human Services Office of Inspector General has cited improper payments--which can be the result of billing mistakes--as one potential cause for increases in Medicare ambulance utilization and has stated that the Medicare ambulance transport benefit is highly vulnerable to abuse, with some payments for transports not meeting program requirements.
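The utilization measure behind these growth figures, described in the methodology above, counts beneficiaries as Part A or B member-months of enrollment divided by 12 and expresses transports per 1,000 beneficiaries. A minimal sketch, with all figures hypothetical rather than taken from the report:

```python
# Hedged sketch of the transports-per-1,000-beneficiaries measure.

def transports_per_1000(transports, member_months):
    beneficiaries = member_months / 12           # beneficiary-years of enrollment
    return transports / beneficiaries * 1000

def percent_change(old, new):
    return (new - old) / old * 100

# 1,800 transports against 600,000 member-months (50,000 beneficiary-years)
# works out to 36 transports per 1,000 beneficiaries.
rate_2004 = transports_per_1000(1_800, 600_000)

# Growth between two invented years; roughly the 41 percent super-rural
# increase cited above if transports rise to 2,540 with flat enrollment.
growth = percent_change(rate_2004, transports_per_1000(2_540, 600_000))
```

Dividing member-months by 12 weights partial-year enrollees fractionally, so a beneficiary enrolled for six months counts as half a beneficiary.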
DOD’s civilian workforce has undergone a sizable reduction but remains critical to DOD’s mission success. Strategic human capital management provides a framework for maximizing the value added by the civilian workforce through aligning its civilian human capital initiatives to support DOD’s overarching mission. Since the end of the cold war, DOD has undergone sizable reductions in its civilian workforce. Between fiscal years 1989 and 2002, DOD’s civilian workforce shrank from 1,075,437 to 670,166—about a 38 percent reduction. DOD accomplished this downsizing without proactively shaping the civilian workforce to have the skills and competencies needed to accomplish future DOD missions. As a result, today’s workforce is older and more experienced, but 58 percent will be eligible for early or regular retirement in the next 3 years. Moreover, the President’s fiscal year 2003 budget request projects that DOD’s civilian workforce will be further reduced by about 55,000 through fiscal year 2007. As shown in figure 1, at the end of fiscal year 2002, the military departments employed 85 percent of DOD’s civilians; 15 percent were employed by the other defense organizations. Furthermore, the 2000 Defense Science Board Task Force report observed that the rapid downsizing during the 1990s led to major changes in the roles of and balance between DOD’s civilian and military personnel and contractor personnel. The roles of the civilian and private-sector workforce are expanding, including participation in combat functions—as a virtual presence on the battlefield—and in support duties on both the domestic and international scenes. These changing roles call for greater attention to shaping an effective civilian workforce to meet future demands within a total force perspective. This perspective includes a clear understanding of the roles and characteristics of DOD’s civilian and military personnel and the most appropriate source of capabilities—military, civilian, or contractor.
The Under Secretary of Defense for Personnel and Readiness is the principal staff assistant and advisor to the Secretary and Deputy Secretary of Defense for total force management as it relates to readiness, personnel requirements and management, and other matters. The Under Secretary’s office develops policies, plans, and programs for recruitment, training, equal opportunity, compensation, recognition, discipline, and separation of all DOD personnel, including active, reserve, and retired military and civilian personnel. This office also analyzes the total force structure as it relates to quantitative and qualitative military and civilian personnel requirements. Within this office is the Office of the Deputy Under Secretary of Defense for Civilian Personnel Policy, which formulates plans, policies, and programs to manage the DOD civilian workforce. Policy leadership and human resource programs and systems are provided through the Civilian Personnel Management Service. Strategic human capital management involves long-term planning that is fact based, focused on program results and mission accomplishment, and incorporates merit principles. Studies by several organizations, including GAO, have shown that highly successful performance organizations in both the public and private sectors employ effective strategic management approaches as a means to prepare their workforce to meet present and future mission requirements as well as achieve organizational success. In our 2001 High-Risk Series and Performance and Accountability Series and again in 2003, we designated strategic human capital as a high-risk area and stated that serious human capital shortfalls are threatening the ability of many federal agencies to economically, efficiently, and effectively perform their missions. 
We noted that federal agencies, including DOD and its components, needed to continue to improve the development of integrated human capital strategies that support the organization’s strategic and programmatic goals. In March 2002, we issued an exposure draft of our model of strategic human capital management to help federal agency leaders effectively lead and manage their people. The model is designed to help agency leaders effectively use their people and determine how well they integrate human capital considerations into daily decision making and planning for the program results they seek to achieve. Similarly, the Office of Management and Budget (OMB) and the Office of Personnel Management (OPM) have developed tools that are being used to assess human capital management efforts. In October 2001, OMB developed standards for success for strategic human capital management—one of five governmentwide reform initiatives in the President’s Management Agenda. In December 2001, OPM released a human capital scorecard to assist agencies in responding to the OMB standards for success; later, in October 2002, OMB and OPM developed—in collaboration with GAO—revised standards for success. To assist agencies in responding to the revised standards, OPM released the Human Capital Assessment and Accountability Framework. In April 2002, the final report of the Commercial Activities Panel, mandated by Congress and chaired by the Comptroller General, sought to elevate attention to human capital considerations in making sourcing decisions. Federal organizations are increasingly concerned with sourcing issues because they are being held accountable for addressing another President’s Management Agenda initiative that calls for determining their core competencies and deciding how to build internal capacity or contract out for services.
Until recently, top-level leadership at the department and component levels has not been extensively involved in strategic planning for civilian personnel; however, such planning is a higher priority for top-level leadership today than it was in the past. With the exception of the Air Force, leadership at the component level has not been proactive, but is becoming more involved in responding to the need for strategic planning, providing guidance, or supporting and working in partnership with civilian human capital professionals. We have previously emphasized that high-performing organizations need senior leaders who are drivers of continuous improvement and also stimulate and support efforts to integrate human capital approaches with organizational goals. There is no substitute for the committed involvement of top leadership. Strategic planning for the Department of Defense civilian workforce is becoming a higher priority among DOD’s senior leadership, as evidenced by direction given in 2001 in the Quadrennial Defense Review (QDR) and the Defense Planning Guidance and by the Under Secretary of Defense for Personnel and Readiness to develop a civilian and military human resources strategic plan. We previously reported that a demonstrated commitment to change by agency leaders is perhaps the most important element of successful management reform and that leaders demonstrate this commitment by developing and directing reform. OMB and OPM have similarly advocated the need for top leadership to fully commit to strategic human capital planning. The Defense Science Board reported in 2000 that senior DOD civilian and military leaders have devoted “far less” attention to civilian personnel challenges than to the challenges of maintaining an effective military force.
In 1992, during the initial stages of downsizing, DOD officials voiced concerns about what they perceived to be a lack of attention to identifying and maintaining a balanced basic level of skills needed to maintain in-house capabilities as part of the defense industrial base. In our 2000 testimony, Strategic Approach Should Guide DOD Civilian Workforce Management, we testified that DOD’s approach to civilian force reductions was less oriented toward shaping the makeup of the workforce than was the approach it used to manage its military downsizing. In its approach to civilian workforce downsizing, the department focused on mitigating adverse effects (such as nonvoluntary reductions-in-force) through retirements, attrition, hiring freezes, and base closures. (See app. II for a time line of key events related to DOD’s civilian workforce downsizing.) DOD initiated a more strategic approach when it published its first strategic plan for civilian personnel (Civilian Human Resources Strategic Plan, 2002-2008) in April 2002. In developing the departmentwide plan, the Office of the Under Secretary of Defense for Personnel and Readiness made efforts to work in conjunction with defense components’ civilian human capital communities by inviting their leaders to contribute to working groups and special meetings and reviewing the services’ civilian human capital strategic plans. However, DOD has yet to provide guidance on how to integrate component-level civilian human capital strategic plans with its departmentwide civilian strategic plan. DOD officials said that full integration would be difficult because of the wide array of human capital services and mission support provided at the component level. However, one of the lessons learned in our previous work on strategic planning in the defense acquisition workforce was the need for leadership to provide guidance for planning efforts. 
Without guidance, defense components may not be able to effectively function together in support of the departmentwide plan. For example, DOD’s goal to provide management systems and departmentwide force planning tools may not be fully or efficiently achieved without a coordinated effort among all defense components. The component-level plans we reviewed included goals, objectives, or initiatives to improve analysis or forecasting of workforce requirements, but they did not indicate coordination with the departmentwide effort or with one another. Civilian human capital planning has emerged as an issue in another DOD-related forum for top leaders. In November 2002, the Human Resources Subcommittee of the Defense Business Practice Implementation Board released its report to DOD’s Senior Executive Council recommending, among other things, the establishment of a “Human Capital Transformation Team” to help implement agreed upon changes to transform human capital management in DOD’s civilian workforce. Leadership participation in strategic planning varies among the defense components we reviewed. High-level leaders in the Air Force, the Marine Corps, the Defense Contract Management Agency (DCMA), and the Defense Finance and Accounting Service (DFAS) have provided the impetus for strategic planning and are partnering with civilian human capital professionals to develop and implement their strategic plans. Such partnership is increasing in the Army and not as evident in the Department of the Navy. Since the mid-1990s, Air Force leadership has been relatively active in strategic planning for civilian human capital. In 1999, high-level Air Force leadership recognized the need for strategic human capital planning to deal with the significant downsizing that had occurred over the last several years. 
For the civilian workforce, this recognition culminated in the publication in 2000 of the Civilian Personnel Management Improvement Strategy White Paper; the Air Force produced an update of this document in 2002. Air Force leadership also has recognized that it must further enhance its efforts with greater attention to integrated, total force planning. Air Force leadership has demonstrated this commitment by incorporating civilian human capital leaders into broader Air Force strategic planning and resource allocation processes. Air Force leaders created a human resources board (the Air Force Personnel Board of Directors) composed of 24 senior civilian and military leaders. The board convenes semi-annually to address military and civilian human capital issues in an integrated, total force context. It is fostering integrated planning with the intent of developing an overarching strategy—a holistic, total force strategy—designed to meet Air Force workforce demands for the present and the future and intended to encompass the needs of active, reserve, civilian, and contractor personnel by 2004. Furthermore, the Air Force began to allocate resources for civilian human capital initiatives in fiscal year 2002 due to the strong support from Air Force leaders. In recent years, strategic human capital planning has generally received increasing top-level leadership support in the Marine Corps, DCMA, DFAS, and the Army. A Marine Corps official told us that the Commandant of the Marine Corps and other top Marine Corps leaders became involved with civilian human capital strategic planning in 2001. The Commandant, in October 2002, endorsed the civilian human capital strategic plan, which outlines the Corps’ vision, intent, core values, expected outcomes, and strategic goals for civilian human capital.
Officials are currently developing an implementation plan, which is expected to contain specific objectives, milestones, points of accountability, resource requirements, and performance measures. DCMA began strategic human capital planning in 2000 in response to guidance from the Office of the Under Secretary of Defense, Acquisition, Technology, and Logistics, and issued its first human capital strategic plan in 2002. DCMA officials told us that their human resources director is a member of DCMA’s broader executive management board and that human capital—civilian and military—is a standing agenda issue at the board’s monthly meetings. DFAS officials told us their director includes human capital professionals in DFAS’s management decision-making processes. Further, human capital is a key element in the DFAS agencywide strategic plan. DFAS initiated its human capital strategic planning efforts in 2002, but it has not yet published its plan. Within the Army, top-level leadership involvement in strategic planning efforts for civilian human capital has been limited but increasing. The bulk of such efforts has instead originated in the Army’s civilian human capital community. The Army’s civilian human capital community recognized the need for strategic civilian human capital planning in the mid-1990s and developed strategic plans. The Army’s civilian human capital community also initiated, in 2000, an assessment of the civilian workforce situation and developed new concepts for human resource systems and workforce planning. Since 2002, Army top-level leadership has become more explicitly involved in its civilian human capital community’s initiatives. For example, in January 2003, the Vice Chief of Staff of the Army formally endorsed the Army’s human capital strategic plan.
Also, in January 2003, Army top leaders endorsed the recommendations of a study to improve the development and training of the Army’s civilian workforce, which followed three companion studies with similar objectives for military personnel. Additionally, as of March 2003, Army top leaders accepted the rationale and validated the requirement for another initiative to centrally manage senior civilian leaders by basing selection and retention decisions on long-term Army needs rather than on the short-term needs of local commanders. The Army plans to establish a management office to begin this effort in fiscal year 2004. Army officials told us that these efforts have not yet been fully funded. Without the commitment and support of Army top leaders, the Army’s civilian human capital community has limited authority to carry out reforms on its own and limited ability to ensure that its reforms are appropriately focused on mission accomplishment. In addition, Army civilian human capital officials’ contributions to broader strategic planning efforts have been increasing. Officials told us that while the Army’s civilian human capital community has a voice in the Army’s resource allocation deliberations, getting civilian personnel issues included in top-level Army planning and budgeting documents is sometimes challenging. Within the past year, however, civilian human resource issues have been included in the Army-wide strategic readiness system (a balanced scorecard) and an Army well-being initiative (balancing the demands and expectations of the Army and its people). Within the Department of the Navy, top-level leadership involvement in strategic planning efforts for civilian human capital has been limited. Department of the Navy leadership invested in studies related to strategic planning for its civilian workforce, but it has been slow to develop a strategic plan for its civilian human capital.
Two documents published in August 2000 and May 2001 reported the results of work sponsored by a personnel task force established by the Secretary of the Navy to examine facets of the Department of the Navy’s human resources management. One, a study conducted and published by the National Academy of Public Administration’s Center for Human Resources Management, focused on Department of the Navy civilian personnel issues; the other reported on the rest of the findings of the task force. Department of the Navy human capital officials told us that they have not implemented the recommendations of those studies because (1) many require new legislation and (2) the studies were future oriented, looking as far ahead as 2020, and it will take time to implement the recommendations. These officials said that although the Department of the Navy had not yet developed a strategic plan for its civilian human capital, the Navy major commands (referred to as claimants) did their own human capital strategic planning as necessary, adding that they believed these efforts were sufficient. More recently, however, these officials told us that they are developing (on their own initiative) a strategic plan for the Department of the Navy’s civilian workforce. In addition, the Navy has very recently undertaken other strategic planning efforts. In July 2002, the Navy established a new organization to develop a consolidated approach to civilian workforce management that centers on 21 core competency functional areas. Navy officials view this recent initiative, which involves senior military and civilian leaders, as the first step in developing a total force concept (civilian, active and reserve military, and contract employees). 
In March 2003, the Department of the Navy established (1) a new position that will provide a liaison for the Navy and Marine Corps strategic planning processes and (2) a Force Management Oversight Council, co-chaired by top Navy and Marine Corps officials, which will develop an overarching framework for Navy and Marine Corps strategic planning. With the looming uncertainty of continued downsizing, anticipated retirements, and increased competitive sourcing of non-core functions, strategic planning for the civilian workforce will grow in importance. If high-level leaders do not provide the committed and inspired attention to address civilian human capital issues (that is, establish it as an organizational priority and empower and partner with their human capital professionals in developing strategic plans for civilian human capital), then future decisions about the civilian workforce may not have a sound basis. For the most part, the strategic plans we reviewed lacked such key elements as mission alignment, results-oriented performance measures, and data-driven workforce planning. Mission alignment is demonstrated by clearly showing how the civilian workforce contributes to accomplishing an organization’s overarching mission. It is also evident in descriptions of how the achievement of human capital initiatives will improve an organization’s performance in meeting its overarching mission, goals, and objectives. Results-oriented performance measures enable an organization to determine the effect of human capital programs and policies on mission accomplishment. Finally, data on the needed knowledge, skills, competencies, size, and deployment of the workforce to pursue an organization’s missions allow it to put the right people, in the right place, at the right time. The interrelationships of these three key elements are shown in figure 2.
Without adequate alignment, performance measures, and workforce data, DOD and its components cannot be certain their human capital efforts are properly focused on mission accomplishment. Previously, we emphasized that high-performing organizations align their human capital initiatives with mission and goal accomplishment. Organizations’ strategic human capital planning must also be results oriented and data driven, including, for example, information on the appropriate number and location of personnel needed and their key competencies and skills. High-performing organizations also stay alert to emerging mission demands and human capital challenges and reevaluate their human capital initiatives through the use of valid, reliable, and current data. The human capital goals and objectives contained in strategic plans for civilian personnel were not, for the most part, explicitly aligned with the overarching missions of the organizations we reviewed. Moreover, none of the plans fully reflected a results-oriented approach to assessing progress toward mission achievement. Human capital strategic plans should be aligned with (i.e., consistent with and supportive of) an organization’s overarching mission. Alignment between “published and approved human capital planning documents” and an organization’s overarching mission is advised in OPM’s Human Capital Assessment and Accountability Framework. With regard to assessing progress, programs can be more effectively measured if their goals and objectives are outcome-oriented (i.e., focused on results or impact) rather than output-oriented (i.e., focused on activities and processes), in keeping with the principles of the Government Performance and Results Act (GPRA). Congress anticipated that GPRA would be institutionalized and practiced throughout the federal government; federal agencies are expected to develop performance plans that are consistent with the act’s approach. 
Based on the above criteria, we analyzed the human capital strategic plans that five of the seven organizations in our review have published for the following: human capital goals and objectives that explicitly describe how the civilian workforce helps achieve the overarching mission, goals, and objectives, and results-oriented measures that track the success of the human capital initiatives in contributing to mission achievement. All of the civilian human capital plans we reviewed referred to their respective organizations’ mission; however, the human capital goals, objectives, and initiatives did not explicitly link or describe how the civilian workforce efforts would contribute to the organizations’ overarching mission achievement, and more importantly how the extent of contribution to mission achievement would be measured. Aspects of DCMA’s plan, however, demonstrate alignment by including a general explanation of the overarching mission inclusive of human capital goals, objectives, and initiatives that further define how its civilian workforce contributes to achieving the overarching mission. For example: DCMA’s overarching mission is to “Provide customer-focused acquisition support and contract management services to ensure warfighter readiness, 24/7, worldwide.” DCMA’s human capital plan demonstrates the alignment of the agency’s workforce by stating that the agency will accomplish its overarching mission by “Partnering, or strategically teaming, with customers to develop better solutions and ensuring warfighter success on all missions” and “Providing expertise and knowledge throughout the acquisition life cycle, from cradle to grave; from factory to foxhole and beyond”. DCMA’s plan contains one human capital goal, among other agencywide goals, directed at aligning workforce efforts with mission accomplishment. The goal is to enable DCMA people to excel by building and maintaining a work environment that (1) attracts, (2) develops, and (3) sustains a quality workforce.
Several objectives and initiatives in DCMA’s plan demonstrate a link to this human capital goal and to the overarching mission. Examples of these initiatives include (1) making DCMA employment attractive, (2) establishing a professional development framework that is integrated and competency-based as well as developing an advanced leadership program, and (3) sustaining a quality workforce by ensuring recognition and awards for high-performing personnel. This alignment of DCMA’s workforce, initiatives, and goals to the overarching mission helps DCMA ensure that its civilian workforce has the necessary expertise and knowledge to provide customer-focused acquisition support and contract management services. The other plans in our review generally did not demonstrate this degree of alignment. For example, in the Army civilian human capital strategic plan, four of the six human resource goals are more narrowly directed toward the role played by the human resource community and only indirectly tie the civilian workforce to the achievement of the Army’s overall mission. However, two goals—“systematic planning that forecasts and achieves the civilian work force necessary to support the Army’s mission” and “diversity through opportunity”—link more explicitly to the Army’s overarching mission. Also, DOD’s departmentwide civilian human capital plan refers to the overarching mission by including broad references to DOD’s overarching strategic plan. However, the plan is silent about what role DOD’s civilian workforce is expected to play in achievement of the mission. The plan recognizes the need for aligning the civilian workforce with the overarching mission by proposing to develop a human resource management accountability system to guarantee the effective use of human resources in achieving DOD’s overarching mission. Moreover, none of the plans in our review contained results-oriented goals and measures.
For example, DOD’s strategic goal to “promote focused, well-funded recruiting to hire the best talent available” is not expressed in measurable terms (i.e., it does not define “focused, well-funded, and best talent available”), and the measures for this goal are process oriented (i.e., developing or publishing a policy or strategy; reviewing programs) rather than results oriented. DOD’s plan, however, indicates that mission achievement measures are being developed. At the component level, the Army, in particular, has developed metrics related to its personnel transaction processes; although these measures are important, they are not focused on measuring outcomes related to mission accomplishment. Army officials recognize the importance of relating outcomes to mission accomplishment and are presently working to develop such measures. Without results-oriented measures, it is difficult for an organization to assess the effectiveness of its human capital initiatives in supporting its overarching mission, goals, and objectives. Officials at DOD and the defense components in our review told us they recognize the importance of alignment and results-oriented measures in strategic human capital planning. In fact, the Air Force has recently undertaken an initiative to develop a planning framework aligning strategy, vision, execution, measurement, and process transformation. Many human capital officials we spoke with noted they have only recently begun to transition from their past role of functional experts—focused primarily on personnel transactions—to partners with top leadership in strategically planning for their civilian workforce. In their new role, they expect to make improvements in strategically managing civilian personnel, including identifying results-oriented performance measures in future iterations of their plans. 
Until such elements are in place, it is difficult to determine if the human capital programs DOD and its components are funding are consistent with overarching missions or if they are effectively leading to mission accomplishment. The civilian human capital strategic plans for DOD and its components include goals focused on improving their human capital initiatives, but only two components include workforce data that supported the need for those particular initiatives. GAO and others have reported that it is important to analyze future workforce needs to (1) assist organizations in tailoring initiatives for recruiting, developing, and retaining personnel to meet their future needs and (2) provide the rationale and justification for obtaining resources and, if necessary, additional authority to carry out those initiatives. We also stated that to build the right workforce to achieve strategic goals, it is essential that organizations determine the critical skills and competencies needed to successfully implement the programs and processes associated with those goals. To do so, three types of data are needed: (1) what is available—both the current workforce characteristics and future availability, (2) what is needed—the critical workforce characteristics needed in the future, and (3) what is the difference between what will be available and what will be needed—the gap. Without this information, DOD cannot structure its future workforce to support the Secretary of Defense’s initiatives or mitigate the risk of shortfalls in critical personnel when pending civilian retirements occur. Of the five organizations in our review that had civilian human capital strategic plans, two—the Air Force and DCMA—included some information about the future workforce and indicated the gaps to be addressed by their civilian human capital initiatives.
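The three-part analysis described above (what is available, what is needed, and the gap) can be expressed as a simple computation. In the sketch below, the skill categories and head counts are hypothetical, chosen only to illustrate the idea; they are not figures from any DOD or component plan.

```python
# Hypothetical gap analysis: skill names and head counts are illustrative only.
available = {"software acquisition": 120, "contract management": 450}  # current workforce
needed = {"software acquisition": 200, "contract management": 430}     # future requirement

# Gap = what is needed minus what will be available; positive values are
# shortfalls to be addressed through recruiting, development, or retention
# initiatives, and negative values indicate a surplus.
gaps = {skill: needed[skill] - available[skill] for skill in needed}
print(gaps)  # {'software acquisition': 80, 'contract management': -20}
```

The point of the sketch is that the third data element (the gap) is derivable only when the first two are collected on a common footing, which is why plans lacking workforce data cannot justify particular initiatives.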
The Air Force’s plan includes a chart that illustrates, in terms of years of federal service, the current workforce compared to a 1989 baseline (prior to the downsizing of its civilian workforce) and a target workforce for fiscal year 2005. This information was generally based on data that were readily available but considered to be a less-than-adequate indicator of experience level, and it is not clear how the target workforce data were derived. According to the Air Force, its analysis illustrated the shortfall in the number of civilians with less than 10 years of service when compared to the Air Force’s long-term requirements. Using this and other analyses, the Air Force initially developed workforce-shaping activities in four areas—accession planning, force development, retention/separation management, and enabling activities—which together included 27 separate initiatives. DCMA’s plan describes the agency’s workforce planning methodology, which focuses on identifying gaps between its current and future workforce. DCMA’s strategic workforce planning team analyzes quantitative data on the current workforce and employs an interview protocol to gather and analyze information from DCMA managers and subject matter experts pertaining to future work and workforce requirements. According to DCMA, this methodology allows it to link the desired distribution of positions, occupational series, and skills to organizational outcomes, processes, and customer requirements and to DOD’s transformation guidance, goals, and initiatives.
Although DCMA has not completely identified or quantified its future workforce requirements, it identified the following: requirements for new technical skills, especially in software acquisition and integration; a need to upgrade general skills and maintain the existing skill base; a need to correct imbalances in geographic locations; a requirement to hire about 990 employees per year through 2009; and a need for additional positions to support anticipated increases in procurement. In contrast to the Air Force and DCMA plans, the DOD, Army, and Marine Corps plans lack information about future workforce needs. For example, DOD’s civilian human capital plan contains data on those civilians eligible for retirement by grade level and by job category. However, the plan does not address key characteristics such as skills and competencies that will be needed in the future workforce to support changes being undertaken by DOD. Without this information and a methodology to analyze and identify the gaps that exist between what will be available and what will be needed, it is not clear that the human capital initiatives in DOD’s plan will result in the desired future workforce. All of the plans we reviewed acknowledge strategic workforce planning shortfalls by setting goals or initiatives to improve in this area. For example, DOD’s plan includes a goal to obtain management systems and tools that support total force planning and informed decision making. DOD has begun adopting the Army’s Civilian Forecasting System and the Workforce Analysis Support System for departmentwide use, which will enable it to project the future workforce by occupational series and grade structure. However, the systems (which are based on a regression analysis of historical data) are not capable of determining the size and skill competencies of the civilian workforce needed in the future.
Also, DOD has not yet determined specifically how this new analytic capability will be integrated into programmatic decision-making processes. DOD officials stated that its first step was to purchase the equipment and software, which was accomplished in 2002. DOD is now analyzing users’ needs. As of December 2002, DOD officials were testing the systems, but they expressed concerns that the Army systems may not serve the needs of a complex and diverse organization such as DOD. The civilian human capital strategic plans we reviewed did not address how the civilian workforce would be integrated with their military counterparts or sourcing initiatives to accomplish DOD’s mission. The 2001 QDR states that future operations will not only be joint but also depend upon the total force—including civilian personnel as well as active duty and reserve personnel. The QDR also emphasizes that DOD will focus its “owned” resources in areas that contribute directly to warfighting and that it would continue to take steps to outsource and shed its non-core responsibilities. The 2000 Defense Science Board Task Force report states that DOD needs to undertake deliberate and integrated force shaping of the civilian and military forces, address human capital challenges from a total force perspective, and base decisions to convert functions from military to civilian or to outsource functions to contractors on an integrated human resource plan. In addition, the National Academy of Public Administration, in its report on the Navy civilian workforce 2020, notes that as more work is privatized and more traditionally military tasks require support of civilian or contractor personnel, a more unified approach to force planning and management will be necessary; serious shortfalls in any one of the force elements will damage mission accomplishment. 
The Academy’s report also states that the trend towards greater reliance on contractors necessitates a critical mass of civilian personnel expertise to protect the government’s interest and ensure effective oversight of contractors’ work. Further, the 2002 Commercial Activities Panel final report indicates that sourcing and human capital policies should be inextricably linked, and it calls for federal sourcing policies to be “consistent with human capital practices designed to attract, motivate, retain, and reward a high performing workforce.” DOD’s overall human capital strategy, however, consists of three separate plans: one for civilians, one for military personnel, and one for quality of life issues for servicemembers and their families. DOD has not integrated the contractor workforce into these plans. Although DOD officials maintain that these plans are intended to complement each other, the plans are not integrated to form a seamless and comprehensive strategy. The civilian plan was prepared separately from the other two military plans with little direct involvement of key stakeholders, such as representatives from military personnel and manpower requirements communities. Although not reflected in its departmentwide civilian human capital strategic plan, DOD acknowledged—in its response to the President’s Management Agenda to accomplish workforce restructuring, reorganizations, delayering, outsourcing, and reengineered and streamlined processes—that these efforts could only be accomplished through coordinating and integrating civilian and military components. The departmentwide civilian plan includes a longer-term objective to assess the need for and the capabilities of automated information management tools to primarily integrate civilian and military personnel and transaction data.
We believe these tools can also provide information for planning and analysis, but they may not provide DOD with the information needed to proactively shape the total DOD workforce in response to current changes (i.e., the Secretary’s transformation of the department, increasing joint operations, and competitive sourcing initiatives) because (1) contractor data are not included and (2) the projected date for accomplishing this objective, September 2008, may be too late to affect near-term decisions. In addition, officials in the Office of the Under Secretary of Defense for Personnel and Readiness recognize that integration of the military and civilian plans is important and are developing an umbrella document that will encompass all three components of the human capital strategy, but they have not established a time frame for completion. Furthermore, DOD’s civilian human capital strategic plan does not address the role of civilian vis-à-vis contractor personnel or how DOD plans to link its human capital initiatives with its sourcing plans, such as efforts to outsource non-core responsibilities. The plan notes that contractors are part of the unique mix of DOD resources, but none of the goals and objectives discusses how DOD will shape its future workforce in a total force (civilian, military, and contractor) context. We believe that effective civilian workforce planning cannot be accomplished in isolation from planning for military personnel or sourcing initiatives. As the Commercial Activities Panel report notes, it is particularly important that sourcing strategies support, not inhibit, the government organization’s efforts to recruit and retain a high-performing in-house workforce. We also noted in our High Risk report that careful and thoughtful workforce planning efforts are critical to making intelligent competitive sourcing decisions.
At the service level, the Air Force’s strategic plans for civilian personnel were not initially developed in a total force context, but the current plans acknowledge the need to integrate strategic planning for civilians with their military counterparts, as well as to take contractors into account. For example, the Air Force has set a goal and taken steps to integrate planning for active, reserve, civilian, and contractor personnel by 2004. Air Force officials expressed concerns about the significant budgetary consequences when planning does not take place in a total force context. For example, when civilian or contractor personnel perform functions previously conducted by military personnel, the defense component involved must obtain additional funds because payment for civilians and contractors cannot come from military personnel funds. The Air Force estimates that these costs could be $10 billion to $15 billion over the next 5 years. Although a proposed time frame is not provided, the Marine Corps’ civilian plan states the need to forecast military and civilian levels and workforce requirements based on strategic mission drivers, stratified workload demand, and business process changes; the requirements for its civilian marines will take into account the appropriate redistribution of work among the military, civilian, and contractor communities. The Army’s civilian human capital plan states that it will have to acquire, train, and retain its total force in an operational environment that will place different demands on human capital management. The Army’s human capital community has an objective to support the Army-wide “Third Wave” initiative, which focuses on privatization of non-core functions to better allocate scarce resources to core functions. (The Department of the Navy does not yet have a civilian human capital strategic plan.)
The defense agencies we reviewed, which have relatively few military personnel compared to the military services, are taking or plan to take an integrated approach to strategic planning for their civilian and military workforces, but they do not indicate how they will integrate these efforts with their sourcing initiatives. DCMA’s human capital strategic plan includes both civilian and military personnel. For example, the plan includes a goal to address the underassignment of military personnel, because their absence further compounds the difficulties caused by the downsizing of civilian positions and the increasing workload. DFAS is planning to include both civilian and military personnel in the human capital strategic plan that it is developing. As at DCMA, military personnel are a small but important part of the overall DFAS workforce, but they are projected to be less available in the future. For example, the Air Force has announced that it is reducing its military personnel presence at DFAS over the next several years. Without integrated planning, goals for shaping and deploying military, civilian, and contractor personnel may not be consistent with and support each other. Consequently, DOD may not have the workforce it needs to accomplish tasks critical to readiness and mission success. DOD has made progress in establishing a foundation for strategically addressing civilian human capital issues by developing its departmentwide civilian human capital strategic plan. However, the alignment of human capital goals with the overarching mission is unclear in DOD’s and the components’ strategic plans for civilian human capital, and results-oriented performance measures linked to mission accomplishment are lacking. Without these key elements, DOD and its components may miss opportunities to more effectively and efficiently increase workforce productivity.
Also, without greater commitment from and the support of top leaders, civilian human capital professionals in DOD and the defense components may design strategic planning efforts that are not appropriately focused on mission accomplishment and that lack adequate support to be carried out. Moreover, DOD top leadership has not provided its components with guidance on how to align component-level strategic plans with the departmentwide plan. Without this alignment, DOD’s and its components’ planning may lack the focus and coordination needed (1) to carry out the Secretary of Defense’s transformation initiatives in an effective manner and (2) to mitigate risks of not having human capital ready to respond to national security events at home and abroad. Although DOD and component officials recognize the critical need for ensuring that the future workforce is efficiently deployed across their organizations and has the right skills and competencies needed to accomplish their missions, their strategic plans lack the information needed to identify gaps in skills and competencies. As a result, DOD and its components may not have a sound basis for funding decisions related to human capital initiatives and may not be able to put the right people in the right place at the right time to achieve the mission. Furthermore, as personnel reductions continue and DOD carries out its transformation initiatives, integrating planning in a total force context—as mentioned in the QDR—becomes imperative to ensure that scarce resources are most effectively used. However, military and civilian human capital strategic plans—both DOD’s and the components’—have yet to be integrated with each other. Furthermore, the civilian plans do not address how human capital policies will complement, not conflict with, the department-level or component-level sourcing plans, such as competitive sourcing efforts.
To improve human capital strategic planning for the DOD civilian workforce, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to undertake the following: Improve future revisions and updates to the DOD departmentwide strategic human capital plan by more explicitly aligning with DOD’s overarching mission, including results-oriented performance measures, and focusing on future workforce needs. To accomplish this, the revisions and updates should be developed in collaboration with top DOD and component officials and civilian and military human capital leaders. Direct the military services and the defense agencies to align their strategic human capital plans with the mission, goals, objectives, and measures included in the departmentwide strategic human capital plan and provide guidance to these components on this alignment. Define the future civilian workforce, identifying the characteristics (i.e., the skills and competencies, number, deployment, etc.) of personnel needed in the context of the total force and determine the workforce gaps that need to be addressed through human capital initiatives. Assign a high priority to and set a target date for developing a departmentwide human capital strategic plan that integrates both military and civilian workforces and takes into account contractor roles and sourcing initiatives. We received comments from the Department of Defense too late to include them in the final report. These comments and our evaluation of them, however, were incorporated into a subsequent report (DOD Personnel: DOD Comments on GAO’s Report on DOD’s Civilian Human Capital Strategic Planning, GAO-03-690R). We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Commandant of the Marine Corps; and the Directors of DCMA and DFAS. We will also make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-5559 if you or your staff have any questions concerning this report. Key contributors are listed in appendix III. As requested by the Ranking Minority Member of the House Committee on Armed Services, Subcommittee on Readiness, we reviewed civilian human capital strategic planning in the Department of Defense (DOD). Specifically, the objectives of this report were to assess (1) the extent to which top-level leadership is involved in strategic planning for civilian personnel and (2) whether strategic plans for civilian personnel are aligned with the overall mission, results oriented, and based on data about the future civilian workforce. We also determined whether the strategic plans for civilian personnel are integrated with plans for military personnel or sourcing initiatives. We focused primarily on civilian human capital strategic planning undertaken since 1988, when DOD began downsizing its civilian workforce. Our analyses were based on the documents that each organization identified as its civilian human capital strategic planning documents. Several documents had been published or updated either just prior to or during the time of our review (May 2002 to March 2003). Also, DOD and component strategic planning for civilian personnel is a continuous process and involves ongoing efforts. We did not review the implementation of the human capital strategic plans because most plans were too recent for this action to be completed. The scope of our review included examining the civilian human capital strategic planning efforts undertaken by DOD, its four military services, and two of its other defense organizations—the Defense Finance and Accounting Service (DFAS) and the Defense Contract Management Agency (DCMA). We selected the military services since they account for about 85 percent of the civilian personnel in DOD. 
To understand how civilian human capital strategic planning is being undertaken by other defense organizations, which account for the other 15 percent of the DOD civilian workforce, we determined the status of the human capital strategic planning efforts of 21 other defense organizations through a telephone survey. We judgmentally selected two defense agencies, DFAS and DCMA, because of their large size and because they perform different functions; therefore, they could offer different perspectives on strategic planning for civilians. DFAS and DCMA account for about 26 percent of the civilian personnel in other defense organizations. DFAS has about 15,274 civilian employees and more than 1,000 military personnel, performs finance and accounting activities, and does not have a civilian human capital strategic plan, although it does have an overall agency strategic plan that includes human capital as a key element. DCMA has about 11,770 civilian employees and about 480 military personnel, performs acquisition functions, and has a civilian human capital strategic plan. To assess the extent to which top-level leadership is involved in strategic planning for civilian personnel, we reviewed the civilian human capital strategic plans for discussions of the methodology used in developing them that indicated leadership involvement. Further, we compared the civilian human capital strategic plans publication dates to key events, such as the issuance of the President’s Management Agenda, which advocates strategic human capital planning. We discussed top leadership involvement in the development of human capital strategic plans with the applicable civilian human capital planning officials. These officials included representatives from the following offices: Department of Defense: Under Secretary of Defense for Personnel and Readiness, including Deputy Under Secretary of Defense for Civilian Personnel Policy and Director, Civilian Personnel Management Service. 
Department of Air Force: Assistant Secretary of the Air Force for Manpower and Reserve Affairs; Assistant Deputy Chief of Staff for Personnel Headquarters; Director of Strategic Plans and Future Systems, and Director, Air Force Personnel Operations Agency, Deputy Chief of Staff for Personnel; and Directorate of Personnel, Air Force Materiel Command. Department of the Army: Deputy Chief of Staff, G-1. Department of the Navy: Deputy Assistant Secretary of the Navy for Civilian Personnel Policy and Equal Employment Opportunity; Deputy Chief of Naval Operations for Manpower and Personnel; and Deputy Commandant of the Marine Corps for Manpower and Reserve Affairs. Defense Contract Management Agency: Executive Director, Human Resources; and Director, Strategic Planning, Programming, and Analysis. Defense Finance and Accounting Service: Human Resources Directorate and Resource Management Directorate. To assess whether strategic plans for civilian personnel are aligned with the overall mission, results oriented, and contain data about the future civilian workforce, we compared each plan with the concepts articulated in our model for strategically managing human capital and similar guidance provided by the Office of Management and Budget and the Office of Personnel Management (which are discussed in greater detail in the Background section of this report). Among the numerous sources we reviewed, we used the criteria described in our reports on Exposure Draft: A Model of Strategic Human Capital Management; Human Capital: A Self-Assessment Checklist for Agency Leaders; High-Risk Series: An Update; and Performance and Accountability Series – Major Management Challenges and Program Risks. Specifically, we looked for (1) the alignment of human capital approaches to meet organizational goals, (2) the presence of results-oriented performance measures, and (3) the references to use of workforce planning data to justify human capital initiatives (i.e., policies and programs).
To ensure consistency with our application of the criteria in other GAO engagements, we also reviewed approximately 100 of our reports that addressed their application within DOD and other federal agencies. Also, to better understand the existing human capital framework and its relationship to the strategic planning efforts, we gathered information about policies, programs, and procedures. Finally, we validated the results of our analyses of the plans with appropriate agency officials. To assess whether the strategic plans for civilian personnel are integrated with plans for military personnel or sourcing initiatives, we analyzed the civilian human capital strategic plans for (1) references to military personnel or a total force perspective and (2) discussions about competitive and strategic sourcing efforts being undertaken in a total force context. We also collaborated with other GAO staff who reviewed (1) DOD’s strategic plans for military personnel and quality of life issues for servicemembers and their families, (2) sourcing initiatives, and (3) DOD’s acquisition workforce. In addition, we discussed integration between civilian and military personnel plans with the applicable civilian human capital planning officials previously mentioned. We conducted our review from May 2002 to March 2003 in accordance with generally accepted government auditing standards. Figure 3 provides a time line of several key events and dates that affected DOD’s civilian workforce between 1988 and 2002. It also shows when DOD and its components published their human capital strategic plans. In addition to the name above, Daniel Chen, Joel Christenson, Barbara Joyce, Janet Keller, Shvetal Khanna, Dan Omahen, Gerald Winterlin, Dale Wineholt, and Susan Woodward made key contributions to this report.

The Department of Defense's (DOD) civilian employees play key roles in such areas as defense policy, intelligence, finance, acquisitions, and weapon systems maintenance.
Although downsized 38 percent between fiscal years 1989 and 2002, this workforce has taken on greater roles as a result of DOD's restructuring and transformation. Responding to congressional concerns about the quality and quantity of, and the strategic planning for, the civilian workforce, GAO determined the following for DOD, the military services, and selected defense agencies: (1) the extent of top-level leadership involvement in civilian strategic planning; (2) whether elements in civilian strategic plans are aligned to the overall mission, focused on results, and based on current and future civilian workforce data; and (3) whether civilian and military personnel strategic plans or sourcing initiatives were integrated. Generally, civilian personnel issues appear to be an emerging priority among top leaders in DOD and the defense components. Although DOD began downsizing its civilian workforce more than a decade ago, it did not take action to strategically address challenges affecting the civilian workforce until it issued its civilian human capital strategic plan in April 2002. Top-level leaders in the Air Force, the Marine Corps, the Defense Contract Management Agency, and the Defense Finance and Accounting Service have initiated planning efforts and are working in partnership with their civilian human capital professionals to develop and implement civilian strategic plans; such leadership, however, was increasing in the Army and not as evident in the Navy. Also, DOD has not provided guidance on how to integrate the components' plans with the department-level plan. High-level leadership is critical to directing reforms and obtaining resources for successful implementation. The human capital strategic plans GAO reviewed for the most part lacked key elements found in fully developed plans. Most of the civilian human capital goals, objectives, and initiatives were not explicitly aligned with the overarching missions of the organizations.
Consequently, DOD and the components cannot be sure that strategic goals are properly focused on mission achievement. Also, none of the plans contained results-oriented performance measures to assess the impact of their civilian human capital initiatives (i.e., programs, policies, and processes). Thus, DOD and the components cannot gauge the extent to which their human capital initiatives contribute to achieving their organizations' mission. Finally, the plans did not contain data on the skills and competencies needed to successfully accomplish future missions; therefore, DOD and the components risk not being able to put the right people in the right place at the right time, which can result in diminished accomplishment of the overall defense mission. Moreover, the civilian strategic plans did not address how the civilian workforce will be integrated with their military counterparts or sourcing initiatives. DOD's three human capital strategic plans--two military and one civilian--were prepared separately and were not integrated to form a seamless and comprehensive strategy and did not address how DOD plans to link its human capital initiatives with its sourcing plans, such as efforts to outsource non-core responsibilities. The components' civilian plans acknowledge a need to integrate planning for civilian and military personnel--taking into consideration contractors--but have not yet done so. Without an integrated strategy, DOD may not effectively and efficiently allocate its scarce resources for optimal readiness.
Using CDBG funds to respond to disasters is not unprecedented; however, the dollar amounts allocated for such purposes in the wake of the terrorist attacks on New York City are the largest ever made through the program. In the months following September 11, 2001, $3.5 billion in emergency supplemental CDBG funding was made available for New York City—more than the total CDBG funds provided nationwide for all major disasters in the last 10 years. Congress appropriated $40 billion to the President for emergency expenses (Emergency Response Fund) to respond to the terrorist attacks of September 11. Emergency response funds available for transfer to the Department of Housing and Urban Development (HUD) could be used for CDBG programs, as authorized by title I of the Housing and Community Development Act of 1974, as amended. Specifically, on November 1, 2001, the Office of Management and Budget designated $700 million for CDBG funding for New York City out of the Emergency Response Fund that Congress had appropriated. On January 10, 2002, Congress appropriated an additional $2 billion for CDBG funding, earmarking at least $500 million to compensate small businesses, nonprofit organizations, and individuals for their economic losses. Finally, on August 2, 2002, Congress appropriated an additional $783 million for CDBG funding. Although the CDBG program’s primary purpose is community development, not disaster assistance, supplemental CDBG appropriations have been made to provide recovery assistance from past natural disasters, usually severe hurricanes, earthquakes, or floods. As in the aftermath of natural disasters, HUD waived many requirements—such as assisting persons of low and moderate income—of the general CDBG program. HUD is one of many federal agencies that offer disaster assistance, and HUD requires that its funds not be used to duplicate benefits provided by other federal agencies, such as SBA. 
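The $3.5 billion total cited above is the sum of the three CDBG actions described in this paragraph. A quick arithmetic check, using the amounts stated in the text (in millions of dollars):

```python
# CDBG funding for New York City, as described above (millions of dollars).
appropriations = {
    "Nov. 1, 2001 designation from the Emergency Response Fund": 700,
    "Jan. 10, 2002 supplemental appropriation": 2000,
    "Aug. 2, 2002 supplemental appropriation": 783,
}

total = sum(appropriations.values())
print(total)  # 3483 -- about $3.5 billion, consistent with the figure above
```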
Empire State is the New York State entity designated by the Governor to administer the first CDBG appropriation of $700 million. Created in 1968, Empire State is a corporate governmental agency of the state of New York and is currently engaged in housing and economic development and special projects throughout the state. To carry out large-scale economic development activities, Empire State has created various consolidated subsidiaries. In November 2001, the Empire State board of directors authorized the creation of the Lower Manhattan Development Corporation (LMDC) to assist in the economic recovery and revitalization of lower Manhattan, with special emphasis on the redevelopment of the areas damaged by the terrorist attacks. LMDC functions as a joint city-state development corporation with a 16-member board of directors that is appointed by the Governor and the Mayor. For the amounts appropriated by Congress in the 2002 Emergency Supplemental and the 2002 Supplemental previously noted, which totaled $2.8 billion, LMDC was designated in the legislation as the entity to develop programs and distribute assistance. In its January 30, 2002, action plan, Empire State estimated that almost 18,000 businesses in New York City, representing approximately 563,000 employees, were disrupted or forced to relocate as a result of the terrorist attacks. Empire State estimated that businesses with 200 employees or fewer accounted for 99 percent of all affected businesses and about 50 percent of all affected employees. As lead agency in administering federal assistance to New York City businesses, Empire State is carrying out the action plan, which HUD approved, for providing $700 million in business assistance, with $506 million allocated for small business programs. Additionally, LMDC has a HUD-approved action plan for spending $306 million, primarily to provide residential retention and attraction grants to individuals. 
LMDC also has made available for public comment a proposed action plan, to be submitted to HUD, that would provide $350 million to Empire State for use in its business assistance programs—$200 million of which would be used for its small business programs. LMDC has issued no formal plans for spending the remaining CDBG funding of approximately $2 billion. In addition to the assistance provided by government and private organizations, qualifying small businesses can receive federal tax benefits that have been made available for those affected by the terrorist attacks. The tax benefits include expanded work opportunity tax credits and special allowances for certain business property. Businesses also may benefit from real estate tax abatement, commercial rent tax exemptions or reductions, and energy discounts. These types of assistance are not discussed in this report. From an allocation of $506 million, Empire State developed various programs to assist small businesses. Empire State’s Business Recovery Grant (BRG) Program provides grants to businesses to compensate them for economic loss, and its Small Firm Attraction and Retention Grant (SFARG) Program provides incentives for businesses to remain in or relocate to lower Manhattan. Empire State is implementing additional programs to provide technical assistance and loans and also expects to reimburse other city and state programs for their expenditures. Additionally, Empire State has made and continues to make many efforts to reach out to affected businesses. Table 1 contains information on the funding provided and disbursed as of September 11, 2002, for each of the Empire State small business assistance programs. In addition to the small business programs, Empire State officials said that retention assistance for larger businesses is particularly important to the future of the lower Manhattan economy. 
To a great extent, larger businesses and their employees provide small businesses in lower Manhattan with a client base. Small businesses in turn provide larger businesses and their employees with services ranging from business consulting, accounting, and office supplies to personal services, such as dry cleaning, dining establishments, and newsstands. Empire State has allocated $5 million for a recovery grant program for larger businesses with more than 500 employees nationwide, but with fewer than 200 employees in lower Manhattan, from which it has disbursed $3.1 million to assist 18 businesses. Empire State also has allocated $170 million for a larger firm business attraction and retention program, and LMDC is seeking approval to provide Empire State with additional funds, of which $150 million would go toward this program, bringing the total program allocation to $320 million. As of September 11, 2002, no disbursement of funds had been made from this program. According to Empire State, it had made offers to 102 businesses, of which 50 had accepted offers totaling $140 million—a process that requires a much longer time frame than does the SFARG Program. The BRG Program for small businesses offers grants to compensate for economic losses and is Empire State’s most far-reaching business assistance program. The first BRGs were provided in mid-February 2002. As of September 11, 2002, 8,783 businesses had received BRGs totaling $254 million. The median grant amount was $9,261. Businesses with fewer than 50 employees accounted for 95 percent of the businesses receiving BRGs and received $200 million, or 79 percent of the total amount of BRGs disbursed. All types of businesses are eligible for BRGs, and assisted businesses can be categorized into various sectors, as shown in figure 1. The largest number of businesses assisted falls into three sectors: professional and technical services; finance, which also includes insurance; and retail trade. 
To be eligible for a BRG, businesses must have had fewer than 500 employees worldwide; have been located on or south of 14th Street in Manhattan on September 11, 2001; and have suffered uncompensated economic losses related to the attacks. The program identifies four geographic areas, or zones, upon which it then bases the number of days of revenue for which it will compensate. In the BRG computation, the number of days of revenue increases the closer the zone is to the World Trade Center site. See appendix II for a map that identifies these geographic areas. Revenue periods range from 3 to 25 days, and maximum grant amounts range from $50,000 to $300,000, not to exceed a business’ economic loss after adjusting for insurance and other compensation. In addition to other eligibility requirements, businesses must still be operating in the city or agree to resume operations in the city within 1 year of the receipt of grant funds as well as agree to retain a substantial portion of their business operations in the city for at least 3 years. The program will accept applications through December 31, 2002. The size of businesses assisted varies as measured by the number of employees and annual revenues. Businesses with fewer than 10 employees accounted for about 75 percent of the businesses assisted (see fig. 2). Recipients of BRG assistance who had revenues of less than $1 million accounted for 5,785 businesses, or about 67 percent of the businesses assisted (see fig. 3). The BRG Program has provided assistance to thousands of businesses; however, it has awarded only about one-half of the number of grants it originally estimated and has not covered a substantial portion of the uncompensated economic losses reported by businesses. Although Empire State estimated that it would make 19,600 grant awards, on the basis of the number of small businesses believed to be located in the eligible area, as of September 11, 2002, it had provided 9,373 grants. 
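The grant computation just described can be sketched as follows. Only the extremes of the ranges appear in this report (3 to 25 days of revenue and caps of $50,000 to $300,000), so the intermediate per-zone values below, along with the use of annual revenue divided by 365 as daily revenue, are illustrative assumptions rather than the program’s actual schedule.

```python
# Hedged sketch of the BRG computation. The report gives only the range
# endpoints; zone 1 is assumed to be closest to the World Trade Center
# site, and the zone 2 and 3 values are illustrative assumptions.
ZONE_TERMS = {
    # zone: (days of revenue compensated, maximum grant amount)
    1: (25, 300_000),
    2: (15, 150_000),  # assumed intermediate values
    3: (7, 100_000),   # assumed intermediate values
    4: (3, 50_000),
}

def brg_amount(annual_revenue, zone, economic_loss, other_compensation):
    """Grant: daily revenue times the zone's day count, capped by the zone
    maximum and by the loss remaining after insurance and other compensation."""
    days, cap = ZONE_TERMS[zone]
    daily_revenue = annual_revenue / 365  # daily revenue approximation (assumption)
    uncompensated_loss = max(economic_loss - other_compensation, 0)
    return round(min(daily_revenue * days, cap, uncompensated_loss), 2)
```

For example, a zone 4 business with $365,000 in annual revenue would be compensated for 3 days of revenue (here, $3,000), provided its uncompensated loss was at least that large.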
Empire State is making additional outreach efforts and hopes to increase the number of businesses assisted. Analysis of the economic losses reported by businesses shows that for the median business receiving a BRG, the grant covered about 17 percent of the losses that were not covered by insurance and other city and state grants. Empire State recently changed the BRG computation, both retroactively and prospectively, to increase the number of days of business revenue considered in determining the grant amount, particularly for those businesses that were in or near the World Trade Center. This change will result in increased payments to some businesses and thereby reduce the amount of their uncompensated economic losses. With new criteria for increased payments and additional applications expected, Empire State estimates that the total allocation for the BRG Program will be $481 million. Empire State also is expected to use CDBG funds to reimburse city and state programs that provided grants to small businesses soon after the September 11 attacks. The city and state programs have disbursed $24 million in assistance; however, as of September 11, 2002, neither had filed for reimbursement. As previously noted, LMDC is currently seeking approval to provide Empire State with additional CDBG funds, of which $150 million would go toward the BRG Program, bringing the total program allocation from $331 million to $481 million. Empire State and LMDC plan to meet the federal legislative requirement that $500 million in CDBG assistance be used to compensate small businesses, nonprofits, and individuals located in lower Manhattan almost exclusively through the BRG Program. The remaining expenditures will come from part of LMDC’s assistance to individuals through its housing assistance program. 
The SFARG Program offers grants to qualifying businesses (i.e., businesses with no more than 200 employees that are located or planning to locate in the general area south of Canal Street) that sign a new lease or renew an existing lease for a minimum of 5 years. For existing businesses to be eligible, their current lease must expire no later than December 31, 2004, except for businesses located in an area designated as the “October 23rd Zone.” The program offers grants on the basis of the number of employees in the business. Grant payments are made in two installments, the first at the time of application approval and the second 18 months after the application date. Total payments are $3,500 per employee, except for businesses that were in the “Restricted Zone” and remained downtown, for which total payments are $5,000 per employee. The program will accept applications through December 31, 2004. The first SFARG assistance was provided on June 13, 2002. As of September 11, 2002, Empire State had disbursed $12 million to 246 businesses in initial installment payments. The median grant amount was $27,500. The SFARG Program initially was limited to businesses with a minimum of 10 and no more than 200 employees. In response to public reaction, the program was amended to expand eligibility to all businesses with no more than 200 employees, with no lower limit. The program also has been criticized for excluding businesses that were located in the eligible area as of September 11, 2001, but that had long-term leases that did not expire by December 31, 2004. Business advocates argue that those businesses also had a demonstrated commitment to the area, which should make them eligible and not place them at a disadvantage relative to new businesses coming to the area. Empire State officials told us that SFARG was designed to provide incentives to businesses at risk of leaving, not to those that already had long-term commitments in the area. 
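The SFARG payment terms can be sketched as follows. The report does not state how the total splits between the two installments, so an equal split is assumed here for illustration.

```python
# Sketch of the SFARG payment terms: $3,500 per employee, or $5,000 per
# employee for businesses that were in the "Restricted Zone" and remained
# downtown. The equal split between the two installments is an assumption;
# the report does not specify how the total divides.
def sfarg_total(employees, restricted_zone_stayed=False):
    """Total grant amount based on the number of employees."""
    if not 0 < employees <= 200:
        raise ValueError("program covers businesses with no more than 200 employees")
    rate = 5_000 if restricted_zone_stayed else 3_500
    return employees * rate

def sfarg_installments(total):
    # First installment at application approval, second 18 months after
    # the application date (equal halves assumed).
    first = total // 2
    return first, total - first
```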
Critics also have said that Empire State took too long to put the SFARG Program in place and that relatively few businesses have received any benefits. LMDC is currently seeking approval to provide Empire State with additional CDBG funds, of which $50 million would go toward the SFARG Program, bringing the total program allocation from $105 million to $155 million. The Business Recovery Loan Program will provide funding to community-based lending organizations, which in turn will provide low-cost working capital loans to businesses that were adversely affected by the terrorist attacks and to businesses that have subsequently located or will locate new operations in lower Manhattan. The program is intended to enhance access to capital for businesses, particularly those that do not meet SBA credit or eligibility criteria for disaster loans. Loans are available to businesses (1) located on or south of 14th Street in Manhattan as of September 11, 2001; (2) located in the five boroughs of New York City, but outside of lower Manhattan, that were adversely affected because at least 10 percent of their revenues were derived from sales or services to other businesses located on or south of 14th Street in Manhattan; or (3) newly located on or south of 14th Street in Manhattan since September 11, 2001. As of September 11, 2002, Empire State had selected 10 organizations to participate in the Business Recovery Loan Program. State officials had not disbursed any funds from the program and were in the process of contracting with the lending organizations. Under the program, lending organizations can make loans up to $250,000 per business. Repayments of principal by the borrowers of eligible loans may be retained by the lending organization as capital for making additional small business loans in the lender’s target area. A business advocacy group has criticized Empire State for taking too long to put the program in place. 
The Technical Assistance Program provides grants to community-based organizations and other service providers to allow them to provide additional assistance to businesses affected by the World Trade Center disaster. The program allocation is $5 million, with a maximum grant of $250,000 per organization. Technical service providers are to assist small businesses with strategic planning; finance, insurance, and legal issues; and basic business management and to help businesses identify and access disaster funds available from CDBG-funded state programs and other city, state, and federal government agencies. The service providers may also assist with marketing, member development, and attraction efforts. To qualify for technical assistance, businesses must have fewer than 200 employees, have been affected by the disaster, and currently be located south of 14th Street in lower Manhattan. As of September 11, 2002, Empire State had selected 23 community-based and other service providers for the program and had provided a total of $224,000 to 4 of the organizations—some of which already offered technical assistance as part of their ongoing assistance programs. Although such organizations already have offered services to some businesses and over a year has elapsed since the attacks, a state official said that there is still a need for additional services and that more and better information currently exists to help make business decisions than in the period immediately after September 11, 2001. Empire State officials also hope that businesses that obtain technical assistance will apply for financial assistance, if they have not done so already. The Empire State action plan allocates $15 million to provide loan loss reserve subsidies to lenders making bridge loans to affected businesses. Empire State is a partner in the World Trade Center Disaster Recovery Bridge Loan Program, a joint city-state program that began in October 2001. 
Through this program, the city and state provide loan loss reserve subsidies to participating lenders, which make bridge loans to businesses awaiting SBA loan approvals. Eligible businesses are New York City-based, commercial, industrial, and retail enterprises and not-for-profits that were affected by September 11 and that are applying for SBA disaster loans. Participating banks and community-based lenders make the bridge loans to provide interim capital to businesses. If the SBA loan is approved, the business pays off the bridge loan with the SBA loan proceeds. If the borrower does not qualify for an SBA loan, the lender may restructure the bridge loan as a term loan. In the original Bridge Loan Program, New York City and State shared equally in providing participating lenders with a 20 percent loan loss reserve subsidy for approved bridge loans. Empire State will use CDBG funds to reimburse the city and state for their loss reserve expenditures at a later date. As of September 11, 2002, participating lenders had disbursed $31.5 million in bridge loans to 950 businesses, and city-state loan loss reserve payments totaled $6.3 million. The Bridge Loan Program is open until January 31, 2003, corresponding to the SBA Disaster Loan Program’s ending date. Empire State and LMDC are not alone in their efforts to provide assistance to small businesses in lower Manhattan. Many other organizations from all levels of government and the private and nonprofit sectors have come forward to offer loans, grants, and technical assistance to small businesses affected by the disaster. Often these organizations were providing assistance within weeks or months of September 11, well before the Empire State programs became available. SBA disaster assistance is the other major source of federal assistance to businesses in New York. SBA began making loans within days after the terrorist attacks and has since made thousands of loans to businesses throughout the region. 
New York City and State offered cash grants to businesses within the first few months after the terrorist attacks as well as bridge loans to businesses through participating lenders. Some banks have also provided additional assistance and short-term loans to affected businesses. Finally, many nonprofit organizations, often funded by donations and charitable groups, have made loans and grants and offered other aid to hundreds of small businesses. Although these programs have not reached as many businesses or provided as much funding as the Empire State programs, they have filled a need by providing early assistance and targeting hard-to-reach groups and businesses. Some of these organizations, as well as business advocacy groups, also have played an important role in facilitating the flow of information among businesses and representing the interests of small businesses recovering from the disaster. In the aftermath of September 11, SBA declaration number 3364, “New York City Explosions and Fires,” entitled business owners, nonprofit organizations, homeowners, and renters in New York City and the surrounding region to apply for SBA physical disaster loans and economic injury disaster loans (EIDLs). Congress made special appropriations of $175 million to SBA for disaster assistance to respond to the terrorist attacks. SBA can use the appropriations to provide approximately $651 million in loans, while allowing $40 million for program administration. The appropriations are being used to cover the “subsidy rate” of the loans, which represents the costs to the government for the loans. From its first loan on September 15, 2001, through September 11, 2002, SBA provided 4,381 loans totaling $346 million within the broadly defined disaster area; of this $346 million, businesses in lower Manhattan received $154 million. SBA’s deadline for filing applications has been extended several times and is now January 31, 2003. 
Physical disaster loans go to eligible business owners (for any size business), nonprofit organizations, homeowners, and renters. Business loan terms are for a maximum of 30 years at a 4 percent interest rate when no credit is available elsewhere. The loans can be used to repair or replace disaster-damaged property, including real estate, machinery and equipment, inventory, and supplies. SBA also makes EIDLs to eligible small businesses and nonprofits. SBA determines what constitutes a “small” business on the basis of the type of business and its revenue or number of employees. EIDLs can be used for working capital, including making payments on short- or long-term notes or accounts payable. The loans carry a 4 percent interest rate but are available only to applicants with no credit available elsewhere. Loan amounts for both physical disaster loans and EIDLs have been raised to $10 million, and nonprofits have been made eligible for this disaster only. Collateral is required for physical disaster loans over $10,000 and for EIDLs over $5,000. SBA also requires that applicants have a reasonable ability to repay the loan and any other obligations from expected earnings. Table 2 shows SBA assistance to businesses in lower Manhattan, as of September 11, 2002. Business advocacy groups have criticized SBA for requiring collateral, particularly personal residences, for business loans and for denying too many loans. According to SBA data, denials and withdrawals have accounted for 54 percent of all business application dispositions. The primary reasons for denial were “no repayment ability” and “unsatisfactory credit.” The primary reasons identified for withdrawals were “no IRS record found” and “failure to furnish additional information.” SBA also has received criticism for not providing loans in a timely manner. According to SBA data, the average elapsed time from the receipt of a completed business loan application to the issuance of a disbursement is 38 days. 
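The stated loan terms (up to 30 years at a 4 percent interest rate) can be illustrated with a standard amortization calculation. The $100,000 loan amount is hypothetical, and monthly amortization is an assumption the report does not specify.

```python
# Illustration of the SBA disaster loan terms described above (up to 30
# years at 4 percent). The loan amount is hypothetical, and standard
# monthly amortization is assumed; the report does not describe how
# repayment schedules are structured.
def monthly_payment(principal, annual_rate, years):
    """Standard fixed monthly payment for a fully amortizing loan."""
    r = annual_rate / 12  # monthly interest rate
    n = years * 12        # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(100_000, 0.04, 30)  # hypothetical $100,000 loan
```

At these terms, a $100,000 loan carries a monthly payment of roughly $477.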
Although additional funding remains available for disaster loans, the number of applications has dwindled in recent months. SBA officials said that some recent applications are from businesses that already have received loans but are seeking additional loans. SBA’s outreach efforts have included opening multiple locations to distribute and explain applications and conducting door-to-door outreach to affected businesses. At one time, SBA worked from 20 different locations throughout Manhattan at which business owners could get SBA disaster applications and information, including 1 location in Chinatown with multilingual personnel. SBA currently makes loan applications and information available at 2 locations and over the telephone and Internet. SBA’s Service Corps of Retired Executives Program also has provided business counseling to affected owners. Funded in part by SBA and the state of New York, the New York Small Business Development Centers (SBDC) have seen increased demand at regional locations in their roles of providing business counseling and management assistance to small businesses since September 11, 2001. SBA has trained SBDC personnel to help business owners complete disaster loan applications; in turn, SBDC personnel have helped more than 500 business owners apply for SBA disaster loans. The SBDC program also has established its own loan fund through private donations and provided $5,000, 3-year loans at a 3 percent interest rate to 170 businesses, for a total disbursement of $850,000. Although the loan program is now closed, having expended all of its funds, SBDC officials are looking to obtain additional funding to reopen the program in the near future. SBDC officials also anticipate obtaining state funding to establish another loan program to provide additional assistance to affected small businesses. Both the city and state of New York established assistance programs within months of the World Trade Center attacks. 
Specifically, the city established the New York City Lower Manhattan Business Retention Grant Program to provide cash grants to nonretail businesses. This program began on November 14, 2001, and provided cash grants totaling $10 million to 1,674 nonretail businesses, including manufacturers and professional service firms. The program stopped accepting applications on March 31, 2002. To qualify, businesses had to be located south of Houston Street and employ 50 or fewer workers; they also had to apply for a loan from SBA or an approved lender. A business could receive up to $2,500 upon completing a loan application and up to a $7,500 cash grant (for a maximum of $10,000) upon approval of the loan, depending on the size of the requested loan. Moreover, businesses that were located in the World Trade Center were eligible for the full $10,000 without having to apply to SBA. The state established the World Trade Center Retail Recovery Grant Program to provide cash grants to retail businesses. This program began on November 5, 2001, and provided 3,048 retail businesses in lower Manhattan with cash grants totaling $13.7 million. Eligible businesses included retail and personal service firms, with fewer than 500 employees, located south of Houston Street. The program offered businesses compensation equal to 3 days of lost revenue, capped at $10,000, and required that businesses continue to operate in New York City. The state closed the program to new applications on December 31, 2001, after which Empire State began offering grants through the CDBG-funded Business Recovery Grant Program. Under the Empire State BRG Program, if a business had previously received a Retail Recovery Grant, the BRG grant amount was reduced by that amount. 
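The two grant computations just described can be sketched as follows. How the city’s payments scaled with the size of the requested loan is not specified in the report, so the full $2,500 and $7,500 amounts are assumed here.

```python
# Sketch of the two early city and state grant programs described above.
# The scaling of city payments by requested loan size is not specified in
# the report, so the maximum amounts are assumed for illustration.
def retail_recovery_grant(annual_revenue):
    """State Retail Recovery Grant: compensation equal to 3 days of lost
    revenue, capped at $10,000 (daily revenue approximated as annual/365)."""
    return min(annual_revenue / 365 * 3, 10_000)

def business_retention_grant(loan_applied, loan_approved, in_wtc=False):
    """City Business Retention Grant: up to $2,500 on completing a loan
    application and up to $7,500 on approval (maximum $10,000); World Trade
    Center tenants received the full $10,000 without applying to SBA."""
    if in_wtc:
        return 10_000
    grant = 2_500 if loan_applied else 0
    if loan_approved:
        grant += 7_500
    return min(grant, 10_000)
```

Under the Empire State BRG Program that followed, a Retail Recovery Grant received through the state program would be deducted from the BRG amount.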
While the city and state grant programs are now closed to new applications, a joint city-state bridge loan program—the World Trade Center Disaster Recovery Bridge Loan Program—that works in cooperation with banks is still available, as previously described in this report. This program has participating banks and community-based lenders provide low-cost bridge loans to small businesses and nonprofits. The city and state each provided banks with 10 percent of the approved loan amount as a loan loss reserve. The first program loans were made on October 5, 2001. Subsequently, Empire State allocated $15 million from its CDBG funds to provide loan loss reserve subsidies and expects to reimburse the city and state for their prior and continuing expenditures. In addition to the grant programs, within days of the September 11 attacks, both the city and state established emergency walk-in centers that assisted small businesses. A toll-free hotline also was established to direct callers to emergency services. Business location services were provided as well as comprehensive on-line and hard-copy directories of emergency and business services available from governmental and nongovernmental sources. Outreach has included radio and print advertisements, direct mail, direct telephone calls, informational workshops, and an “Adopt a Company” Program. In addition to their participation in the Bridge Loan Program, some banks in New York offered additional assistance to small businesses, although there are no comprehensive data on the amount of total assistance they provided. Some banks offered short-term loan programs for businesses affected by the disaster. Loan terms were usually short, extending up to 5 years, with an interest rate at or below prime. However, banks did maintain existing credit standards; consequently, some banks had a high denial rate. For example, one bank denied over 80 percent of the applications that it received. 
After the September 11 terrorist attacks, several nonprofit organizations that traditionally assisted small businesses and had an interest in the business environment of lower Manhattan saw an immediate need that they could fill. Many nonprofits created programs for affected small businesses within weeks of the disaster and raised funds from banks, foundations, and other private contributors. As more disaster-related funding has become available, the nonprofits have been able or are seeking to supplement their original funds to expand or continue programs. The September 11th Fund, an organization dedicated to providing emergency and long-term assistance to the victims of September 11, became a major funding source for the nonprofits. The fund set aside $50 million to help small businesses and provided significant funding to many of the organizations mentioned below. Additionally, some of the nonprofits discussed below and others have participated in the city and state’s Bridge Loan Program, have been selected to receive funding from Empire State to provide technical assistance, and/or have been selected to receive some of the $50 million of loan capital that Empire State will be awarding. Nonprofits have been able to offer different and sometimes more personal services than those provided through the larger federal programs. For instance, Accion New York (Accion), a small business microlender, offers a package of loans, small grants, and personal technical assistance through its newly created “American Dream Fund.” These services include help in completing forms and creating needed financial documents. The New York City Partnership provides businesses with recoverable grants and intensive technical assistance, such as a mentor to help with future business planning. The partnership also created a goods and services clearinghouse for businesses affected by the disaster. 
Another nonprofit, Seedco, offers not only loans and grants, but also wage subsidies to enable small businesses to meet payroll and retain workers who might otherwise be laid off. For each business, Seedco will subsidize 50 percent of the salary of up to 10 employees who make $12 an hour or less. Often, nonprofit programs specifically target types of businesses that are either overlooked by or ineligible for federal programs or other nonprofit assistance. Renaissance Development Corporation (Renaissance), which has been working in Chinatown since 1973, markets its programs to affected businesses, such as the garment industry and limousine drivers. Accion targets businesses that have been turned down for SBA loans; specifically, Accion established a working relationship with SBA in which SBA refers these businesses to Accion. Accion also was located at a business recovery center, where clients had access not only to Accion but also to Empire State and SBA programs. The partnership’s program specifically chose to target retail businesses with 50 or more employees, in part, because Seedco’s program covers those with fewer than 50 employees. Many of the nonprofits have far more flexible lending criteria than either SBA or the banks, thereby allowing them to make loans the others have eschewed. Unlike SBA, Renaissance does not require collateral or tax receipts; instead, it relies on store receipts, site visits, lottery sales, and personal knowledge of a business to determine business viability. Table 3 shows major nongovernmental assistance as of September 11, 2002. The nonprofits noted above and other groups also have played an important role in advocating for the interests of small businesses. For example, newly founded business advocacy groups, such as From the Ground Up and the World Trade Center Tenants Association, have lobbied Empire State, city and federal officials, and others to change programs to benefit small businesses. 
Some of these groups also have helped facilitate the flow of information among businesses and organizations, either formally or informally. The Manhattan Chamber of Commerce, for instance, has held networking events in lower Manhattan to bring various resources to one place. Seedco has published a widely used directory of resources available to help small businesses. We provided HUD, SBA, and Empire State with an opportunity to review this report. They provided comments that were technical in nature, which we have addressed in this report where appropriate. We are sending copies of this report to the Ranking Minority Member of the House Committee on Small Business, the Chairman and Ranking Minority Member of the Senate Committee on Small Business, other appropriate congressional committees, the Secretary of Housing and Urban Development, and the Administrator of the Small Business Administration. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact Nancy Simmons or me at (202) 512-8678. Key contributors to this report were Catherine Hurley, Mark McArdle, Dan Meyer, and Barbara Roesmann. To obtain information on the assistance provided to small businesses from Community Development Block Grant (CDBG) supplemental funding, we interviewed officials from the Department of Housing and Urban Development (HUD), New York State’s Empire State Development Corporation (Empire State), and the Lower Manhattan Development Corporation (LMDC). For our analysis, we obtained detailed program information and data on the various programs that HUD, Empire State, and LMDC have created to assist businesses after September 11, including an Empire State database of grant recipients. This database is the same one used by the HUD Office of Inspector General to monitor expenditures in New York. 
We ascertained how information for this database was collected and maintained to determine its reliability, and we found the information to be reliable for our purposes. To obtain information on other sources of funds available to rebuild and sustain business in lower Manhattan, we interviewed officials from the following: the Small Business Administration (SBA), the New York City Economic Development Corporation, the New York Small Business Development Center (SBDC), FleetBoston and the Bank of New York, and nonprofit organizations that provided financial assistance. We selected the nonprofit organizations by reviewing various media and Internet sources on the rebuilding effort in New York as well as through referrals from other organizations concerned with economic renewal in lower Manhattan. We met with officials from the following nonprofit and other organizations that offer financial assistance toward the rebuilding and economic renewal efforts: Accion New York, Downtown Alliance, New York City Partnership, Renaissance Development Corporation, Seedco, and the September 11th Fund. We also met with business advocacy groups, whose directors are often small business owners, to obtain their views on the assistance that Empire State, SBA, and others provided. These groups included the following: From the Ground Up, Manhattan Chamber of Commerce, Tribeca Organization, Wall Street Rising, and the World Trade Center Tenants Association. We obtained the Empire State disaster recovery database, which captured data on program activity through September 11, 2002. We used these data to calculate descriptive statistics on the numbers of businesses, dollar amounts, and other characteristics of the Business Recovery Grant (BRG) Program, the Small Firm Attraction and Retention Grant (SFARG) Program, and the large business recovery grant program. We used median instead of mean values because the median values were more representative of the “typical” grant. 
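The choice of the median over the mean matters for skewed distributions such as grant amounts, where a few large grants pull the mean well above the amount a typical business received. A small illustration with hypothetical grant amounts:

```python
# Illustration (with hypothetical grant amounts) of why the median is the
# more representative "typical" value for a skewed distribution such as
# grant awards: one large grant inflates the mean but not the median.
from statistics import mean, median

grants = [4_000, 6_000, 8_000, 9_000, 12_000, 300_000]  # hypothetical
typical_mean = mean(grants)      # pulled up by the one large grant
typical_median = median(grants)  # midpoint of the two middle values
```

Here the mean is $56,500 while the median is $8,500, which is far closer to what most of the hypothetical recipients actually received.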
In addition, we analyzed the database to determine other characteristics of BRG recipients, including annual gross revenues, number of employees, type of business on the basis of the North American Industry Classification System code given, and the extent to which BRGs covered businesses’ reported losses. We limited our analysis to disbursed grants. When multiple grants went to the same business as the result of an appeal or from an award for a supplemental grant, we summarized the data by business, not by grant. Since the BRG Program includes nonprofits in addition to small businesses, we included nonprofits in our analysis, although entities that identify themselves as nonprofits accounted for less than 3 percent of the total receiving grants. Other conditions or limitations are described in the explanations of specific analyses that follow. For our analysis of business employee size, we used the total number of employees of the business; when the business had other business affiliations, we used the total number of employees worldwide. The BRG Program uses the total number of employees worldwide to determine if a business qualifies as a small business. In our analysis of revenues of BRG recipients, we used the gross revenue amount reported at the business location. This gross revenue amount is the figure used in computing the grant amount. The database did not have total business gross revenues that included affiliated businesses. We included both businesses that received one grant and businesses that received multiple grants, when the database included the same gross revenue figure for each of the multiple grants. Also, Empire State informed us that the gross revenue entries include projected annual revenues for some new businesses that did not have a year of revenue data, as well as annual expenses, in lieu of revenue, for some businesses that do not generate revenues and for nonprofits. 
For our analysis of type of business, we used the business classification code from the database and grouped the results by the first two letters of the code, which designate the general industry type. Where the groups represented less than 3 percent of all businesses, we grouped them in the “other” category. We made two calculations of the extent to which BRGs compensated for business losses. The business loss data are self-reported and unaudited by Empire State. In the first calculation, we determined to what extent BRGs covered the uncompensated loss incurred by each business. The uncompensated loss was determined by using the business “net loss” database entry, which reflected remaining losses after adjusting for insurance proceeds and the city’s Lower Manhattan Business Retention Grants; we further reduced this amount by the amount of the state Retail Recovery Grant. The BRG amount was then divided by the uncompensated loss figure to obtain the percentage of uncompensated loss covered by BRGs for each business. Where businesses had received multiple grants and the net loss figures were the same for each grant, we totaled the disbursed grant amounts and divided the total by the uncompensated loss amount. In the second calculation, we determined to what extent BRGs covered the total loss incurred by each business. We divided the BRG amount by the total business loss to obtain the percentage of the total loss covered by BRGs for each business. Where businesses had received multiple grants and the total loss figures were the same for each grant, we totaled the disbursed grant amounts and divided it by the total loss amount. 
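The two coverage measures described above amount to simple ratio calculations. The sketch below illustrates them in Python; the function name, parameter names, and the sample figures are hypothetical and are not drawn from the Empire State database:

```python
def coverage_percentages(brg_total, total_loss, insurance, city_grant, retail_grant):
    """Sketch of the two BRG coverage measures described above.

    The database "net loss" entry reflects the total loss after adjusting
    for insurance proceeds and the city's Lower Manhattan Business
    Retention Grant; the uncompensated loss further subtracts the state
    Retail Recovery Grant.
    """
    net_loss = total_loss - insurance - city_grant          # database "net loss" entry
    uncompensated = net_loss - retail_grant                 # reduced by Retail Recovery Grant
    pct_of_uncompensated = 100 * brg_total / uncompensated  # first calculation
    pct_of_total = 100 * brg_total / total_loss             # second calculation
    return pct_of_uncompensated, pct_of_total

# Hypothetical business: $50,000 total loss, $20,000 insurance,
# $5,000 city grant, $5,000 retail grant, $10,000 in disbursed BRGs.
print(coverage_percentages(10_000, 50_000, 20_000, 5_000, 5_000))  # (50.0, 20.0)
```

For a business that received multiple grants with identical loss figures, `brg_total` would be the sum of the disbursed grant amounts, as described above.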
To more accurately characterize the loss and compensation experience of small businesses in lower Manhattan for this report, we considered the entire distribution of the above statistics over all businesses to identify any uneven distribution around the median, or 50th percentile, which was the most common single summary measure we chose to report. We conducted our review between April and September 2002 in Washington, D.C., and New York, New York, in accordance with generally accepted government auditing standards. | The attacks on the World Trade Center had a substantially negative impact on the New York City economy, severely affecting businesses. In the aftermath of the attacks, Congress, among other things, appropriated emergency supplemental funds to several federal agencies to aid and rebuild the affected areas. The Chairman of the House Committee on Small Business asked GAO to describe the assistance provided to small businesses that is funded from emergency supplemental appropriations of federal Community Development Block Grant funds and other sources. To assist in New York City's recovery from the September 11, 2001, terrorist attacks, Congress appropriated $3.5 billion in Community Development Block Grant funding of which Congress earmarked at least $500 million to be used to compensate small businesses, nonprofit organizations, and individuals for their economic losses. One year after the attacks, these funds, administered in part by New York State's Empire State Development Corporation (Empire State), have provided $266 million to about 9,000 small businesses, many with fewer than 10 employees. Such assistance has included grants to compensate businesses for part of their economic losses--for both physical and economic injuries--and payments to attract and retain small businesses in efforts to revitalize the affected areas. 
Hundreds of millions of dollars remain available through these and other programs to assist an estimated 18,000 affected businesses. Empire State has employed mailings, visits, walk-in centers, and mass media to inform businesses of assistance programs. Other efforts by the Small Business Administration, New York City and State, banks, and nonprofit organizations have provided critical assistance to address the immediate and additional unmet needs of small businesses. |
Within USDA, FNS has overall responsibility for overseeing the school-meals programs, which includes promulgating regulations to implement authorizing legislation, setting nationwide eligibility criteria, and issuing guidance. School-meals programs are administered at the state level by a designated state agency that issues policy guidance and other instructions to school districts providing the meals to ensure awareness of federal and state requirements. School districts are responsible for completing application, certification, and verification activities for the school-meals programs, and for providing children with nutritionally balanced meals each school day. The designated state agency conducts periodic reviews of the school districts to determine whether the program requirements are being met. Schools and households that participate in free or reduced-price meal programs may be eligible for additional federal and state benefits. Depending on household income, children may be eligible for free or reduced-price meals. Children from families with incomes at or below 130 percent of the federal poverty level are eligible for free meals; the income threshold for a family of four was $28,665 in the 2010–2011 school year. Those with incomes between 130 percent and 185 percent of the federal poverty level are eligible for reduced-price meals. Income is any money received on a recurring basis—including, but not limited to, gross earnings from work, welfare, child support, alimony, retirement, and disability benefits—unless specifically excluded by statute. 
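The income-based eligibility tiers described above can be illustrated with a short sketch. The figures follow the 2010–2011 numbers cited here (a $28,665 free-meals threshold at 130 percent of the poverty level for a family of four, implying a poverty level of $22,050 for that household size); the function name and default are illustrative assumptions:

```python
# 2010-2011 federal poverty level for a family of four, implied by the
# $28,665 free-meals threshold cited above ($28,665 / 1.30 = $22,050).
POVERTY_LEVEL_FAMILY_OF_4 = 22_050

def meal_eligibility(annual_income, poverty_level=POVERTY_LEVEL_FAMILY_OF_4):
    """Classify by income alone; categorical eligibility (SNAP, TANF,
    FDPIR, or designations such as homeless, runaway, migrant, or foster
    child) is handled separately and is not modeled here."""
    if annual_income <= 1.30 * poverty_level:   # at or below 130 percent
        return "free"
    elif annual_income <= 1.85 * poverty_level: # between 130 and 185 percent
        return "reduced-price"
    return "paid"

print(meal_eligibility(28_665))  # at 130 percent exactly -> "free"
print(meal_eligibility(35_000))  # between 130 and 185 percent -> "reduced-price"
print(meal_eligibility(52_000))  # above 185 percent (~$40,793) -> "paid"
```

A different household size would use a different poverty level, which is why the sketch takes it as a parameter.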
In addition, students who are in households receiving benefits under certain public-assistance programs—specifically, SNAP, Temporary Assistance for Needy Families (TANF), or Food Distribution Program on Indian Reservations (FDPIR)—or meet certain approved designations (such as students who are designated as homeless, runaway, or migrant; or who are foster children) are eligible for free school meals regardless of income. In May 2014, we reported that USDA had taken several steps to implement or enhance controls to identify and prevent ineligible beneficiaries from receiving school-meals benefits. For example: USDA worked with Congress to develop legislation to automatically enroll students who receive SNAP benefits for free school meals; SNAP has a more-detailed certification process than the school-meals program. For our May 2014 report, USDA officials told us that they were emphasizing the use of direct certification, because, in their opinion, it helps prevent certification errors without compromising access. Direct certification reduces the administrative burden on SNAP households, as they do not need to submit a separate school-meals application. It also reduces the number of applications school districts must review. The number of school districts directly certifying SNAP-participant children increased from the 2008 through 2013 school years. For example, during the 2008–2009 school year, 78 percent of school districts directly certified students, and by the 2012–2013 school year, this percentage had grown to 91 percent of school districts, bringing the estimated percentage of SNAP-participant children directly certified for free school meals to 89 percent. USDA was also conducting demonstration projects in selected states and school districts to explore the feasibility of directly certifying children that participate in the Medicaid program. 
USDA requires state agencies that administer school-meals programs to conduct regular, on-site reviews—referred to as “administrative reviews”—to evaluate school districts that participate in the school-meals programs. Starting in the 2013–2014 school year, USDA increased the frequency with which state agencies complete administrative reviews from every 5 years to every 3 years. As part of this process, state agencies are to conduct on-site reviews of school districts to help ensure that applications are complete and that the correct eligibility determinations were made based on applicant information. School districts that have adverse findings in their administrative reviews are to submit a corrective-action plan to the state agency, and the state agency is to follow up to determine whether the issue has been resolved. In February 2012, USDA distributed guidance to state administrators to clarify that school districts have the authority to review approved applications for free or reduced-price meals for school-district employees when known or available information indicates school-district employees may have misrepresented their incomes on their applications. In our May 2014 report, we identified opportunities to strengthen oversight of the school-meals programs while ensuring legitimate access, including clarifying use of for-cause verification, studying the feasibility of electronic data matching to verify income, and verifying a sample of households that are categorically eligible for assistance. As described in USDA’s eligibility manual for school meals, school districts are obligated to verify applications if they deem them to be questionable, which is referred to as for-cause verification. We reported in May 2014 that officials from 11 of the 25 school districts we examined told us that they conduct for-cause verification. 
These officials provided examples of how they would identify suspicious applications, such as when a household submits a modified application—changing income or household members—after being denied, or when different households include identical public-assistance benefit numbers (e.g., if different households provide identical SNAP numbers). However, officials from 9 of the 25 school districts we examined told us that they did not conduct any for-cause verification. For example, one school-district official explained that the school district accepts applications at face value. Additionally, officials from 5 of the 25 school districts told us they only conduct for-cause verification if someone (such as a member of the public or a state agency) informs them of the need to do so on a household. Although not generalizable, responses from these school districts provide insights about whether and under what conditions school districts conduct for-cause verifications. In April 2013, USDA issued a memorandum stating that, effective for the 2013–2014 school year, all school districts must specifically report the total number of applications that were verified for cause. However, the outcomes of those verifications would be grouped with the outcomes of applications that have undergone standard verification. As a result, we reported in May 2014 that USDA would not have information on specific outcomes, which it may need to assess the effectiveness of for-cause verifications and to determine what actions, if any, are needed to improve program integrity. While USDA had issued guidance specific to school-district employees and instructs school districts to verify questionable applications in its school-meals eligibility manual, we found that the guidance did not provide possible indicators or describe scenarios that could assist school districts in identifying questionable applications. 
Hence, in May 2014, we recommended that USDA evaluate the data collected on for-cause verifications for the 2013–2014 school year to determine whether for-cause verification outcomes should be reported separately and, if appropriate, develop and disseminate additional guidance for conducting for-cause verification that includes criteria for identifying possible indicators of questionable or ineligible applications. USDA concurred with this recommendation and in January 2015 told us that FNS would analyze the 2013–2014 school year data to determine whether capturing the results of for-cause verification separately from the results of standard verification would assist the agency’s efforts to improve integrity and oversight. USDA also said that FNS would consider developing and disseminating additional guidance, as we recommended. In addition to for-cause verification, school districts are required to annually verify a sample of household applications approved for free or reduced-price school-meals benefits to determine whether the household has been certified to receive the correct level of benefits—we refer to this process as “standard verification.” Standard verification is generally limited to approved applications considered “error-prone.” Error-prone is statutorily defined as approved applications in which stated income is within $100 of the monthly or $1,200 of the annual applicable income-eligibility guideline. Households with reported incomes that are more than $1,200 above or below the free-meals eligibility threshold and more than $1,200 below the reduced-price threshold would generally not be subject to this verification process. 
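The statutory "error-prone" definition above reduces to a simple band check around the applicable guideline. The sketch below illustrates that definition only; it is not USDA's or any school district's actual implementation, and the guideline figure in the example is a hypothetical annual threshold:

```python
def is_error_prone(stated_income, guideline, period="annual"):
    """Statutory 'error-prone' test described above: stated income within
    $100 of the monthly, or $1,200 of the annual, applicable
    income-eligibility guideline. Applications flagged this way are
    subject to standard verification; others generally are not."""
    band = 100 if period == "monthly" else 1_200
    return abs(stated_income - guideline) <= band

# Hypothetical annual guideline of $40,793:
print(is_error_prone(41_500, 40_793))  # within $1,200 of the guideline -> True
print(is_error_prone(26_000, 40_793))  # roughly $14,800 below -> False
```

As the second example suggests, an application that understates income by well more than $1,200 falls outside the band and escapes standard verification entirely, which is the gap the report describes.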
In a nongeneralizable review of 25 approved civilian federal-employee household applications for our May 2014 report, we found that 9 of 19 households that self-reported household income and size information were not eligible for the free or reduced-price-meal benefits they were receiving because their income exceeded eligibility guidelines. Two of these 9 households stated in their applications annualized incomes that were within $1,200 of the eligibility guidelines and, therefore, could have been selected for standard verification as part of the sample by the district; however, we determined that they were not selected or verified. The remaining 7 of 9 households stated annualized incomes that were more than $1,200 below the eligibility guidelines and thus would not have been subject to standard verification. For example, one household we reviewed submitted a school-meals application for the 2010–2011 school year seeking school-meals benefits for two children. The household stated an annual income of approximately $26,000 per year, and the school district appropriately certified the household to receive reduced-price-meal benefits based on the information on the application. However, we reviewed payroll records and determined that the adult applicant’s income at the time of the application was approximately $52,000—making the household ineligible for benefits. This household also applied for and received reduced-price-meal benefits for the 2011–2012 and 2012–2013 school years by understating its income. Its 2012–2013 annualized income was understated by about $45,000. Because the income stated on the application during these school years was not within $1,200 per year of the income-eligibility requirements, the application was not deemed error-prone and was not subject to standard verification. Had this application been subjected to verification, a valid pay stub would have indicated the household was ineligible. 
One method to identify potentially ineligible applicants and effectively enforce program-eligibility requirements is by independently verifying income information with an external source, such as state payroll data. States or school districts, through data matching, could identify households that have income greater than the eligibility limits and follow up further. Such a risk-based approach would allow school districts to focus on potentially ineligible families while not interrupting program access to other participants. Electronic verification of a sample of applicants (beyond those that are statutorily defined as error-prone) through computer matching by school districts or state agencies with other sources of information—such as state income databases or public-assistance databases—could help effectively identify potentially ineligible applicants. In May 2014, we recommended that USDA develop and assess a pilot program to explore the feasibility of computer matching school-meal participants with other sources of household income, such as state income databases, to identify potentially ineligible households—those with income exceeding program-eligibility thresholds—for verification. We also recommended that, if the pilot program shows promise in identifying ineligible households, the agency should develop a legislative proposal to expand the statutorily defined verification process to include this independent electronic verification for a sample of all school-meals applications. USDA concurred with our recommendations and told us in January 2015 that direct-verification computer matching is technologically feasible with data from means-tested programs, and that data from SNAP and other programs are suitable for school-meals program verification in many states. 
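The kind of computer match discussed above can be illustrated with a minimal sketch. The record layouts, field names, and income threshold below are hypothetical assumptions for illustration, not the structure of any actual state income database:

```python
def flag_for_verification(applications, state_income_records, threshold):
    """Illustrative risk-based computer match: compare each approved
    application against an external income source (e.g., a state income
    database) and flag households whose matched income exceeds the
    eligibility threshold for follow-up verification. Households without
    a match, or with income under the threshold, are left undisturbed."""
    income_by_household = {r["household_id"]: r["annual_income"]
                           for r in state_income_records}
    flagged = []
    for app in applications:
        matched_income = income_by_household.get(app["household_id"])
        if matched_income is not None and matched_income > threshold:
            flagged.append(app["household_id"])
    return flagged

# Hypothetical data: one household over a $40,793 threshold, one under.
apps = [{"household_id": "H1"}, {"household_id": "H2"}]
records = [{"household_id": "H1", "annual_income": 52_000},
           {"household_id": "H2", "annual_income": 26_000}]
print(flag_for_verification(apps, records, 40_793))  # ['H1']
```

Only the flagged household would be asked for documentation, which is what makes the approach risk-based: access for other participants is not interrupted.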
USDA said that FNS would explore the feasibility of using other income-reporting systems for program verification without negatively affecting program access for eligible students or violating statutory requirements. Depending on the results of the pilot program, USDA said that FNS would consider submitting a legislative proposal to expand the statutorily defined verification process, as we recommended. In May 2014, we found that ineligible households may be receiving free school-meals benefits by submitting applications that falsely state that a household member is categorically eligible for the program due to participating in certain public-assistance programs—such as SNAP—or meeting an approved designation—such as foster child or homeless. Of the 25 civilian federal-employee household applications we reviewed, 6 were approved for free school-meals benefits based on categorical eligibility. We found that 2 of the 6 were not eligible for free or reduced-price meals and 1 was not eligible for free meals, although that household may have been eligible for reduced-price meals. For example, one household applied for benefits during the 2010–2011 school year—providing a public-assistance benefit number—and was approved for free-meal benefits. However, when we verified the information with the state, we learned that the number was for medical-assistance benefits—a program that is not included in categorical eligibility for the school-meals programs. On the basis of our review of payroll records, this household’s annualized income of at least $59,000 during 2010 would not have qualified the household for free or reduced-price-meal benefits. This household applied for school-meals benefits during the 2011–2012 and 2012–2013 school years, again indicating the same public-assistance benefit number—and was approved for free-meal benefits. Figure 1 shows the results of our review. 
Because applications that indicate categorical eligibility are generally not subject to standard verification, these ineligible households would likely not be identified unless they were selected for for-cause verification or as part of the administrative review process, even though they contained inaccurate information. These cases underscore the potential benefits that could be realized by verifying beneficiaries with categorical eligibility. In May 2014, we recommended that USDA explore the feasibility of verifying the eligibility of a sample of applications that indicate categorical eligibility for program benefits and are therefore not subject to standard verification. USDA concurred with this recommendation and told us in January 2015 that FNS would explore technological solutions to assess state and local agency capacity to verify eligibility of a sample of applications that indicate categorical eligibility for school-meals-program benefits. In addition, USDA said that FNS would clarify to states and local agencies the procedures for confirming and verifying the application’s status as categorically eligible, including for those who reapply after being denied program benefits as a result of verification. Chairman Rokita, Ranking Member Fudge, and Members of the Subcommittee, this concludes my prepared remarks. I look forward to answering any questions that you may have at this time. For further information on this testimony, please contact Jessica Lucas-Judy at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Gabrielle Fagan, Assistant Director; Marcus Corbin; Ranya Elias; Colin Fallon; Kathryn Larin; Olivia Lopez; Maria McMullen; and Daniel Silva. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In fiscal year 2014, 30.4 million children participated in the National School Lunch Program and 13.6 million children participated in the School Breakfast Program, partly funded by $15.1 billion from USDA. In May 2014, GAO issued a report on (1) steps taken to help identify and prevent ineligible beneficiaries from receiving benefits in school-meal programs and (2) opportunities to strengthen USDA's oversight of the programs. This testimony summarizes GAO's May 2014 report ( GAO-14-262 ) and January 2015 updates from USDA. For the May 2014 report, GAO reviewed federal school-meals program policies, interviewed program officials, and randomly selected a nongeneralizable sample that included 25 approved applications from civilian federal-employee households out of 7.7 million total approved applications in 25 of 1,520 school districts in the Dallas, Texas, and Washington, D.C., regions. GAO performed limited eligibility testing using civilian federal-employee payroll data from 2010 through 2013 due to the unavailability of other data sources containing nonfederal-employee income. GAO also conducted interviews with households. GAO referred potentially ineligible households to the USDA Inspector General. In its 2014 report, GAO recommended that USDA explore (1) using computer matching to identify households with income that exceeds program-eligibility thresholds for verification, and (2) verifying a sample of categorically eligible households. USDA generally agreed with the recommendations and is taking actions to address them. In May 2014, GAO reported that the U.S. 
Department of Agriculture (USDA) had taken several steps to implement or enhance controls to identify and prevent ineligible beneficiaries from receiving school-meals benefits. For example: USDA worked with Congress to develop legislation to automatically enroll students who receive Supplemental Nutrition Assistance Program benefits for free school meals; this program has a more-detailed certification process than the school-meals program. Starting in the 2013–2014 school year, USDA increased the frequency with which state agencies complete administrative reviews of school districts from every 5 years to every 3 years. As part of this process, state agencies review applications to determine whether eligibility determinations were correctly made. In its May 2014 report, GAO identified opportunities to strengthen oversight of the school-meals programs while ensuring legitimate access, such as the following: If feasible, computer matching income data from external sources with participant information could help identify households whose income exceeds eligibility thresholds. As of May 2014, school districts verified a sample of approved applications deemed “error-prone”—statutorily defined as those with reported income within $1,200 of the annual eligibility guidelines—to determine whether the household is receiving the correct level of benefits (referred to as standard verification in this testimony). In a nongeneralizable review of 25 approved applications from civilian federal households, GAO found that 9 of 19 households that self-reported household income and size information were ineligible and only 2 could have been subject to standard verification. Verifying a sample of categorically eligible applications could help identify ineligible households. 
GAO reported that school-meal applicants who indicate categorical eligibility (that is, participating in certain public-assistance programs or meeting an approved designation, such as foster children) were eligible for free meals and were generally not subject to standard verification. In a nongeneralizable review of 25 approved applications, 6 households indicated categorical eligibility, but GAO found 2 were ineligible. |
ACIP, commonly referred to as flight pay, is intended as additional pay to attract and retain officers in a military aviation career. The amount of ACIP varies from $125 a month for an aviator with 2 years or less of aviation service to $650 a month for 6 years to 18 years of service. After 18 years, the amount gradually decreases from $585 a month to $250 a month through year 25. After 25 years, aviators do not receive ACIP unless they are in operational flying positions. ACP, which has existed for all services since 1989, is considered a bonus and is intended to entice aviators to remain in the service during the prime of their flying career. An ACP bonus can be given to aviators below the grade of O-6 who have at least 6 years of aviation service and have completed any active duty service commitment incurred for undergraduate aviator training. However, it cannot be paid beyond 14 years of commissioned service. The services believe that it is during the 9-year to 14-year period of service that aviators are most sought after by the private sector airlines. Therefore, to protect their aviation training investment, all services, except the Army, which is currently not using the ACP program, offer ACP contracts to experienced aviators. In fiscal year 1996, the Army, the Navy, the Marine Corps, and the Air Force designated 11,336 positions as nonflying positions to be filled by aviators. These nonflying positions represent about 25 percent of all authorized aviator positions. As shown in table 1, the total number of nonflying positions has decreased since fiscal year 1994 and is expected to continue to decrease slightly up through fiscal year 2001. Service officials told us that they have been able to reduce the number of nonflying positions primarily through force structure reductions and reorganization of major commands. The services, however, have not developed criteria for determining whether there are nonflying positions that could be filled by nonaviators. 
The officials said that a justification is prepared for each nonflying position explaining why an aviator is needed for the position. These justifications are then approved by higher supervisory levels. The officials believe that this process demonstrates that the position must be filled by an aviator. In our view, the preparation of a written justification for filling a particular position with an aviator does not, in and of itself, demonstrate that the duties of a position could not be performed by a nonaviator. Because the services’ position descriptions for nonflying positions do not show the specific duties of the positions, we could not determine whether all or some part of the duties of the nonflying positions can only be performed by aviators. Consequently, we could not determine whether the number of nonflying positions could be further reduced. In commenting on a draft of this report, an Air Force official said that the Air Force Chief of Staff has directed that all nonflying positions be reviewed and a determination made by July 1997 as to which positions can be filled by nonaviators. All aviators receive ACIP, regardless of whether they are in flying or nonflying positions, if they meet the following criteria. Eight years of operational flying during the first 12 years of aviation service entitles the aviator to receive ACIP for 18 years. Ten years of operational flying during the first 18 years of aviation service entitles the aviator to receive ACIP for 22 years. Twelve years of operational flying during the first 18 years of aviation service entitles the aviator to receive ACIP for 25 years. ACP criteria are more flexible than ACIP in deciding who receives it, the amount paid, and the length of the contract period. According to service officials, ACP is an added form of compensation that is needed to retain aviators during the prime of their flying career when the aviators are most attractive to private sector airlines. 
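The ACIP entitlement "gates" described above (8 years of operational flying in the first 12 years of aviation service, or 10 or 12 years in the first 18) can be sketched as a simple lookup. The function name and the handling of aviators who meet none of the gates are illustrative assumptions, not a statement of the services' actual pay rules:

```python
def acip_entitlement_years(flying_by_year_12, flying_by_year_18):
    """Sketch of the continuous-ACIP gates described above: years of
    operational flying accumulated by the 12th and 18th years of
    aviation service determine how long ACIP continues. The most
    generous gate met is assumed to apply."""
    if flying_by_year_18 >= 12:   # 12 in first 18 -> ACIP through year 25
        return 25
    if flying_by_year_18 >= 10:   # 10 in first 18 -> ACIP through year 22
        return 22
    if flying_by_year_12 >= 8:    # 8 in first 12 -> ACIP through year 18
        return 18
    return None  # no gate met; continued entitlement not modeled here

print(acip_entitlement_years(8, 11))  # meets the 8-in-12 and 10-in-18 gates -> 22
print(acip_entitlement_years(9, 12))  # meets the 12-in-18 gate -> 25
```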
To protect their training investment, all the services believe it is necessary to offer ACP contracts. The Army does not offer ACP contracts because, according to Army officials, it has not had a pilot retention problem. For fiscal years 1994 through April 30, 1996, the Army, the Navy, the Marine Corps, and the Air Force made ACIP and ACP payments to their aviators totaling $909.1 million. Of this total amount, $211 million, or about 23 percent, was paid to aviators in nonflying positions by the Air Force, the Navy, and the Marine Corps. The following table shows ACIP and ACP payments by each service for each of the fiscal years. The services view ACP as a retention incentive for their experienced aviators. However, the way the services implement this incentive varies widely in terms of who receives ACP, the length of time over which it is paid, and how much is paid. To illustrate, The Army does not offer ACP to its aviators because it has not had a pilot retention problem that warrants the use of the ACP program. The Navy offers long-term ACP contracts of up to 5 years and a maximum of $12,000 a year to eligible pilots in aircraft types with a critical pilot shortage. The Marine Corps offered short-term ACP contracts of 1 or 2 years at $6,000 a year through fiscal year 1996. Beginning in fiscal year 1997, the Marine Corps plans to offer long-term ACP contracts of up to 5 years at $12,000 a year to its eligible pilots and navigators in aircraft types that have critical personnel shortages. The Air Force offers long-term ACP contracts of up to 5 years at a maximum of $12,000 a year to all eligible pilots if there is a pilot shortage for any fixed- or rotary-wing aircraft. Table 3 shows the number and dollar amount of ACP contracts awarded by the services for fiscal years 1994 through 1996. As shown above, the Air Force greatly exceeds the other services in the number of ACP contracts awarded as well as the value of the contracts. 
This is because the Air Force does not restrict ACP contracts to pilots of particular aircraft that are experiencing critical pilot shortages. Instead, if there is an overall shortage of fixed-wing or rotary-wing pilots, all eligible pilots of those respective aircraft are offered ACP. According to Air Force officials, the reason for offering ACP contracts to all fixed-wing and/or rotary-wing pilots rather than to pilots of specific aircraft is that they want to treat all their pilots equally and not differentiate among pilots based on the type of aircraft they fly. In their opinion, if they were to offer ACP only to pilots of certain aircraft types, morale could be adversely affected. The point in an aviator’s career at which ACP is offered generally coincides with completion of the aviator’s initial service obligation—generally around 9 years. By this time, the aviator has completed pilot or navigator training, is considered to be an experienced aviator, and, according to service officials, is most sought after by private sector airlines. For this reason, the services believe that awarding an ACP contract is necessary to protect their training investment and retain their qualified aviators. For example, the Air Force estimates that by paying ACP to its pilots, it could retain an additional 662 experienced pilots between fiscal years 1995 and 2001. Whether ACP is an effective or necessary retention tool, however, has been called into question. For example, an April 1996 Aviation Week and Space Technology article pointed out that in the previous 7 months, 32 percent of the 6,000 new pilots hired by private sector airlines were military trained pilots. This contrasts with historical airline hiring patterns, in which 75 percent of the airline pilots were military pilots. The concern about military pilots being hired away by the airlines was also downplayed in a June 1995 Congressional Budget Office (CBO) report.
The report stated that employment in the civilian airline sector is far from certain. Airline mergers, strikes, or failures have made the commercial environment less stable than the military. Consequently, military aviators may be reluctant to leave the military for the less stable employment conditions of the airline industry. CBO concluded that short-term civilian sector demand for military pilots may not seriously affect the services’ ability to retain an adequate number of pilots. The services include nonflying positions in their aviator requirements for determining future aviator training needs. Therefore, aviator training requirements reflect the number of aviators needed to fill both flying and nonflying positions. As shown in table 4, of all the services, the Air Force plans the largest increase in the number of aviators it will train between fiscal years 1997 and 2001—a 60-percent increase. The reason for this large increase in training is that the Air Force believes the number of aviators trained in prior years was insufficient to meet future demands. Because nonflying positions are included in the total aviator requirements, the Navy and the Marine Corps project aviator shortages for fiscal years 1997-2001 and the Air Force projects aviator shortages for fiscal years 1998-2001. As shown in table 5, however, there are more than enough pilots and navigators available to meet all flying position requirements. Therefore, to the extent that the number of nonflying positions filled by aviators could be reduced, the number of aviators that need to be trained, as shown in table 4, could also be reduced. This, in turn, would enable the Navy, the Marine Corps, and the Air Force to reduce their aviator training costs by as much as $5 million for each pilot and $2 million for each navigator that the services would not have to train. The savings to the Army would be less because its aviator training costs are about $366,000 for each pilot.
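The potential savings described above are straightforward arithmetic: avoided training slots multiplied by the per-aviator training cost. A minimal sketch using the cost figures cited in this report; the constant and function names are hypothetical, introduced only for illustration.

```python
# Per-aviator training cost figures cited in the report.
TRAINING_COST = {
    "pilot": 5_000_000,       # Navy, Marine Corps, and Air Force pilot
    "navigator": 2_000_000,   # navigator
    "army_pilot": 366_000,    # Army pilot
}

def estimated_training_savings(reductions):
    """Estimate avoided training costs for a mapping of aviator type to
    the number of training slots that would no longer be needed."""
    return sum(TRAINING_COST[kind] * count for kind, count in reductions.items())
```

For example, under these figures, eliminating 100 pilot and 50 navigator training requirements would avoid an estimated $600 million in training costs.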
We recommend that the Secretary of Defense direct the Secretaries of the Army, the Navy, and the Air Force to develop criteria and review the duties of each nonflying position to identify those that could be filled by nonaviators. This could allow the services to reduce total aviator training requirements. In view of the recent articles and studies that raise questions about the need to provide incentives for aviators to remain in the service, the abundance of aviators as compared with requirements for flying positions, and the value of ACP as a retention tool, we recommend that the Secretary of Defense direct the service secretaries to reevaluate the need for ACP. If the reevaluation points to a need to continue ACP, we recommend that the Secretary of Defense determine whether the services should apply a consistent definition in deciding what groups of aviators can receive ACP. In commenting on a draft of this report, Department of Defense (DOD) officials said that the department partially agreed with the report and the recommendations. However, DOD also said that the report raises a number of concerns. DOD said that it did not agree that only flying positions should be considered in determining total aviator requirements. In its opinion, operational readiness dictates the need for aviator expertise in nonflying positions, and nonflying positions do not appreciably increase aviator training requirements. The report does not say or imply that only flying positions should be considered in determining total aviator requirements. The purpose of comparing the inventory of aviators to flying positions was to illustrate that there are sufficient pilots and navigators to meet all current and projected flying requirements through fiscal year 2001. We agree with DOD that those nonflying positions that require aviator expertise should be filled with aviators. The point, however, is that the services have not determined that all the nonflying positions require aviator expertise.
Furthermore, to the extent that nonflying positions could be filled by nonaviators, the aviator training requirements could be reduced accordingly. DOD also said that the report, in its opinion, does not acknowledge the effectiveness of the processes used for determining aviator training requirements or the use of ACP in improving pilot retention. The issue is not whether ACP has improved retention—obviously it has—but whether ACP is needed in view of the data showing that the civilian airline sector is becoming less dependent on military trained pilots and that military pilots are becoming less likely to leave the service to join the civilian sector. DOD further commented that the articles cited in the report as pointing to a decrease in civilian sector demand for military trained pilots contain information that contradicts this conclusion. DOD believes that the fact that the airlines are currently hiring a smaller percentage of military trained pilots is an indication of a decrease in the pilot inventory and of the effectiveness of ACP as a retention incentive. The sources cited in our report—the Aviation Week and Space Technology article and the June 1995 CBO report—do not contain information that contradicts a decreasing dependence on military trained pilots. The Aviation Week and Space Technology article points out that about 70 percent of the recent pilot hires by the civilian airlines have been pilots with exclusively civilian flying backgrounds. This contrasts with previous hiring practices, in which about 75 percent were military trained pilots. The CBO report also discusses expected long-term hiring practices in the civilian airline sector. The report points out that while the number of new hires is expected to double (from 1,700 annually to 3,500 annually) between 1997 and 2000, the Air Force’s efforts to retain its pilots may not be affected because the industry’s new pilots could be drawn from an existing pool of Federal Aviation Administration-qualified aviators.
Furthermore, the issue is not whether the pilot inventory is decreasing and whether ACP is an effective retention tool. The point of the CBO report was that because of private sector airline mergers, strikes, or failures, the commercial environment is less stable than the military. As a result, there is a ready supply of pilots in the civilian sector, and short-term demand for military pilots may be such that the Air Force’s quest to retain an adequate number of pilots is not seriously affected. In commenting on why the Air Force’s method of offering ACP contracts differs from the Navy’s and the Marine Corps’ methods, DOD stated that while morale and equity are vital to any retention effort, they are not the primary determinants in developing ACP eligibility. We agree, and the report is not meant to imply that morale and equity are the primary determinants for developing ACP eligibility. The report states that the reason cited by Air Force officials for not restricting ACP contracts to just those pilots in aircraft that have personnel shortages, as do the Navy and the Marine Corps, is the morale and equity issue. Another reason cited by Air Force officials was the interchangeability of its pilots. However, the Navy and the Marine Corps also have pilot interchangeability. Therefore, interchangeability is not a unique feature of the Air Force. DOD agreed with the recommendation that the services review the criteria and duties of nonflying aviator positions. However, DOD did not agree that the nonflying positions should be filled with nonaviators or that doing so would appreciably reduce aviator training requirements. DOD also agreed with the recommendation that the services need to continually review and reevaluate the need for ACP, including whether there should be a consistent definition in deciding what groups of aviators can receive ACP.
In DOD’s opinion, however, this review and affirmation of the continued need for ACP is already being done as part of the services’ response to a congressional reporting requirement. We agree that the services report annually on why they believe ACP is an effective retention tool. However, the reports do not address the essence of our recommendation: that the need for ACP—a protection against losing trained pilots to the private sector—should be reevaluated in view of recent studies and reports showing that private sector airlines are becoming less dependent on military trained pilots as a primary source of new hires. The annual reports to Congress also do not address the issue of why the Air Force, unlike the Navy and the Marine Corps, does not restrict ACP to those aviators in aircraft that have aviator personnel shortages. The complete text of DOD’s comments is in appendix II. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Director, Office of Management and Budget; and the Chairmen and the Ranking Minority Members, House Committee on Government Reform and Oversight, Senate Committee on Governmental Affairs, House and Senate Committees on Appropriations, House Committee on National Security, Senate Committee on Armed Services, and House and Senate Committees on the Budget. Please contact me on (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix III. To accomplish our objectives, we reviewed legislation, studies, and regulations and held discussions with service officials responsible for managing aviator requirements. Additionally, we obtained data from each of the services’ manpower databases to determine their flying and nonflying position requirements. Using this information, we developed trend analyses comparing the total number of aviator positions to the number of nonflying positions for fiscal years 1994-2001.
The Army was not able to provide requirements data for fiscal years 1994 and 1995. To determine the benefits paid to aviators serving in nonflying positions, we obtained an automated listing of social security numbers for all aviators, and the services, except for the Army, identified the aviators serving in nonflying positions. The data were submitted to the appropriate Defense Finance and Accounting Service offices for the Army, the Air Force, and the Marine Corps to identify the amounts of aviation career incentive pay (ACIP) and aviation continuation pay (ACP) paid to each aviator. The Navy’s financial data were provided by the Defense Manpower Data Center. To assess whether the services implement ACIP and ACP uniformly, we obtained copies of legislation addressing how ACIP and ACP should be implemented and held discussions with service officials to obtain and compare the methodology each service used to implement ACIP and ACP. To determine how the services compute aviator requirements and the impact their flying and nonflying requirements have on training requirements, we held discussions with service officials to identify the methodology used to compute their aviator and training requirements. We also obtained flying and nonflying position requirements, available inventory, and training requirements from the services’ manpower databases. We then compared the flying and nonflying requirements to the respective services’ available aviator inventory to identify the extent to which the available inventory of aviators could satisfy aviator requirements. We performed our work at the following locations:
Defense Personnel and Readiness Military Personnel Policy Office; Defense Finance and Accounting Service offices in Kansas City, Missouri; Denver, Colorado; and Indianapolis, Indiana; Defense Manpower Data Center, Seaside, California; Air Force Directorate of Operations Training Division, Washington, D.C.; Air Force Personnel Center, Randolph Air Force Base, Texas; Air Force Directorate of Personnel Military Compensation and Legislation Division and Rated Management Division, Washington, D.C.; Air Combat Command, Langley Air Force Base, Virginia; Bureau of Naval Personnel, Office of Aviation Community Management, Navy Total Force Programming, Manpower and Information Resource Management Division, Washington, D.C.; Navy Manpower Analysis Team, Commander in Chief U.S. Atlantic Fleet; Marine Corps Combat Development Command, Force Structure Division; Marine Corps Deputy Chief of Staff for Manpower and Reserve Affairs Department, Washington, D.C.; Army Office of the Deputy Chief of Staff for Plans Force Integration and Analysis, Alexandria, Virginia; Army Office of the Deputy Chief of Staff for Personnel, Washington, D.C.; and Congressional Budget Office, Washington, D.C. We performed our review from March 1996 to December 1996 in accordance with generally accepted government auditing standards. Major contributors to this report were Norman L. Jessup, Jr.; Patricia F. Blowe; and Patricia W. Lentini.
GAO reviewed certain Department of Defense (DOD) nonflying positions, focusing on: (1) the number of aviators (pilots and navigators) that are assigned to nonflying positions in the Army, Navy, Marine Corps, and Air Force; (2) the amount of aviation career incentive pay (ACIP) and aviation continuation pay (ACP) paid to aviators in nonflying positions; (3) whether the services implement ACIP and ACP uniformly; and (4) whether the nonflying positions affect the number of aviators the services plan to train to meet future requirements. GAO found that: (1) for fiscal year (FY) 1996, the Army, Navy, Marine Corps, and Air Force designated 11,336 positions, or about 25 percent of all aviator positions, as nonflying positions to be filled by aviators; (2) since FY 1994, the number of nonflying positions has decreased and this decrease is expected to continue through 2001, when the number of such positions is estimated to be 10,553; (3) for fiscal years 1994 through April 30, 1996, the Army, Navy, Marine Corps, and Air Force paid $739.7 million in ACIP, of which $179.1 million was paid to aviators in nonflying positions; (4) additionally, the Navy, Marine Corps, and Air Force paid $169.4 million in ACP, of which $31.9 million was paid to aviators in nonflying positions; (5) the Army does not pay ACP; (6) ACIP is payable to all aviators who meet certain flying requirements, and all the services implement it in a consistent fashion; (7) with ACP, however, the services have a great deal of latitude in deciding who receives it, the length of time it is
paid and the amount that is paid; (8) in determining their aviator training requirements, the services consider both flying and nonflying positions; (9) including nonflying positions increases the total aviator requirements and results in the services projecting aviator shortages in the upcoming fiscal years; (10) however, GAO's analysis showed that there are more than enough aviators available to satisfy all flying position requirements; (11) to the extent that the number of nonflying positions filled by aviators can be reduced, the number of aviators that need to be trained also could be reduced, saving training costs of about $5 million for each Navy, Marine Corps, and Air Force pilot candidate and about $2 million for each navigator candidate; and (12) the savings to the Army would be about $366,000 for each pilot training requirement eliminated.
Although the exact number and timing of the controllers’ departures are impossible to determine, scenarios we developed indicate that the total attrition of controllers from FAA will grow substantially in the short and long terms. As a result, FAA will likely need to hire thousands of air traffic controllers in the next decade. At the end of fiscal year 2003, FAA had 15,635 controllers, and according to its staffing standard, it is targeting a controller staffing level of 15,136 in fiscal year 2004, 15,300 in fiscal year 2005, and 16,109 in fiscal year 2009. However, so far this year, the agency has lost nearly 400 controllers to retirements and, as of May, had hired only 1 controller. FAA has reported similar projections of a wave of air traffic controller retirements, and in a 2004 report, the Inspector General also reported on the coming wave, citing FAA’s estimate that nearly 7,100 controllers could leave the agency by 2012. Our 2002 report found that FAA estimated it would experience controller retirements at a level three times higher than that experienced over the 5-year period from 1996-2000. On top of the substantial number of retirements, FAA also projected at the time that an additional 2,000 controllers would be needed by 2010 to address forecasted increases in demand for air travel. Our 2002 report analyzed, among other things, the retirement eligibility levels for various portions of the controller workforce and found that the annual number of controllers first becoming eligible for retirement would peak in fiscal year 2007, when about 10 percent of the air traffic controllers will become eligible to retire. (See fig. 1.) In addition, we found that by 2011, about 68 percent of the current controllers would be eligible to retire. We found a similar situation with the retirement eligibility of supervisors.
Because supervisors are important to air traffic control operations and because they tend to be older than others controlling traffic, we examined retirement eligibility and survey results of supervisors at FAA as of June 2001. We found that supervisors will also become eligible to leave FAA in very high numbers over the next decade. Specifically, we found that 1,205, or 65 percent of current supervisors, would become eligible to retire between 2002 and 2011. (See fig. 2.) Moreover, with 28 percent of current supervisors already eligible to retire and another 65 percent reaching eligibility by 2011, a total of about 93 percent of the 1,862 current supervisors will be eligible to retire by the end of fiscal year 2011. As a result, FAA may face substantial turnover in its supervisory ranks over the next decade. This turnover could put a further strain on FAA’s ability to maintain a sufficient certified controller workforce, as experienced controllers will be tapped to fill open supervisory positions, leaving fewer to control air traffic or provide training for new controllers. Because of the crucial role certain facilities play in the national airspace system, we analyzed the impact of retirement eligibility on the 21 major “en route” centers (air route traffic control centers used to manage aircraft beyond a 50-nautical-mile radius from airports), the 10 busiest airport towers, and the 10 busiest TRACON facilities (terminal radar approach control facilities used to track airplanes and manage the arrival and departure of aircraft within a 5-to-50 nautical mile radius of airports). Based on our analysis of FAA’s employee database, we found that the en route centers and the busiest terminal facilities will experience a sizeable increase in the number of controllers reaching retirement eligibility. As figure 3 shows, retirement eligibility in these facilities grows over the next decade.
Based on our analysis for the towers, we found that the Denver tower had the highest proportion of retirement-eligible controllers as of September 30, 2001, with 14 of its 51 controllers (27 percent) eligible to retire. We found that by the end of fiscal year 2006, 45 percent of Denver’s current controllers would be eligible to retire, and by the end of fiscal year 2011, 46 of its 51 controllers (90 percent) will reach retirement eligibility. Our analysis of the 10 busiest TRACON facilities showed that the Dallas/Fort Worth TRACON had the highest level of current controllers eligible to retire at the end of fiscal year 2001, with 36 of its 147 controllers (24 percent) eligible. We found that by the end of fiscal year 2006, the cumulative percentage would grow to 46 percent, and by the end of fiscal year 2011 would reach 87 percent, as 128 of the 147 controllers currently at the facility would reach retirement eligibility. In examining the 21 major en route centers, we found that the Jacksonville center had the highest proportion of retirement-eligible controllers at the end of fiscal year 2001, with 79 of its 376 controllers (21 percent) eligible for retirement. According to our analysis, by the end of fiscal year 2006, at least 29 percent of current controllers would be eligible for retirement at 10 centers—Albuquerque, Atlanta, Boston, Fort Worth, Houston, Jacksonville, Los Angeles, Memphis, Seattle, and Washington, D.C. We are not alone in seeing a bow wave of controller retirements approaching over the next several years. This month, FAA provided us with projections that 329 controllers would retire in fiscal year 2004, that this level would double to over 650 by fiscal year 2007, and that it would nearly double again to 1,170 by fiscal year 2013. These levels are significantly higher than the average of fewer than 200 retirements per year over the past 5 years (1999-2003).
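The facility-level figures cited above are simple cumulative shares of each facility's current controllers who have reached retirement eligibility. A minimal sketch that reproduces the rounded percentages; the helper name is ours, not GAO's.

```python
def eligible_share(eligible, total):
    """Percentage of a facility's current controllers eligible to retire,
    rounded to the nearest whole percent."""
    return round(100 * eligible / total)

# Figures from the report:
#   Denver tower: 14 of 51 eligible at end of FY2001, 46 of 51 by FY2011
#   Dallas/Fort Worth TRACON: 36 of 147 at end of FY2001, 128 of 147 by FY2011
#   Jacksonville center: 79 of 376 at end of FY2001
```

Applied to the Denver tower, for example, 14 of 51 yields the 27 percent cited above, and 46 of 51 yields 90 percent.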
Similarly, the Department of Transportation Inspector General reported this month that increasing numbers of controllers will become eligible to retire through 2012, with a peak of retirement eligibility around fiscal year 2007, and that FAA had estimated that nearly 7,100 controllers could leave FAA by fiscal year 2012. There are several challenges related to hiring and training large numbers of air traffic controllers in the short amount of time available. Although we identified these challenges in 2002 and recommended that FAA create a comprehensive workforce plan to address them, FAA has not yet created such a plan. Moreover, its recent actions suggest that it has not implemented strategies to meet these challenges and put into place a system that will bring air traffic controllers on board in time to deal with the projected retirements of many controllers. However, senior FAA officials told us that the agency’s new Air Traffic Organization is currently preparing a comprehensive business plan, including a comprehensive controller workforce plan, which is due to the Congress in December 2004. A key component of workforce planning is ensuring that appropriately skilled employees are available when and where they are needed to meet an agency’s mission. This means that an agency continually needs trained employees to become available in time to fill newly opened positions. We reported in 2002 that FAA’s practice was generally to hire new employees only when current employees left, which does not adequately account for the time needed to train controllers to fully perform their functions. The amount of time it takes new controllers to gain certification depends on the facility at which they work, but generally, training takes from 2 to 4 years and can take up to 5 years at some of the busiest and most complex facilities.
Moreover, the current training process depends upon substantial one-on-one training, during which an experienced controller works directly with a controller in training, monitoring the trainee’s actions, so there must be an overlap of experienced controllers and newly hired controllers. FAA regional officials, who are responsible for ensuring that FAA’s air traffic facilities are adequately staffed, were particularly concerned about FAA’s general hiring practice. Specifically, the officials were concerned that significant increases in retirements would leave facilities short of qualified controllers while new trainees were hired and trained. Our report also noted that the lack of experienced controllers could have many adverse consequences. For example, several FAA regional officials stated that if a facility becomes seriously short of experienced controllers, the remaining controllers might have to slow down the flow of air traffic through their airspace. If the situation became dire, FAA could require airlines to reduce their schedules, but this would be an unlikely, worst-case scenario, according to some FAA regional officials. Also, because there would be fewer experienced controllers available to work, some FAA facility officials stated that those controllers could see increased workloads and additional, potentially mandatory, overtime. In addition to potentially increasing work-related stress and sick leave usage, mandatory overtime could also cause experienced controllers to retire sooner than they otherwise might. For example, based on our 2002 survey of controllers, we estimated that 33 percent of controllers would accelerate their decision to retire if forced to work additional mandatory overtime. Identifying sources of future potential employees with the requisite skills and aptitude is also important.
Efficiency in hiring will become even more important as FAA faces the wave of controller retirements, for hiring people who do not make it through the training process wastes money and time—and may affect both the cost of the controller workforce and the ability of FAA to fill positions quickly enough to maintain a sufficient controller workforce to meet its mission. FAA has historically hired new controllers from a variety of sources, including graduates of institutions in FAA’s collegiate training initiative program, the Minneapolis Community and Technical College, former FAA controllers who were fired by President Reagan in 1981, and former Department of Defense controllers. FAA can also hire off-the-street candidates to become controllers. Success in hiring candidates who actually become controllers depends in large part on identifying potential candidates who have the appropriate aptitude for controllers’ work. Historically, FAA used its initial entry-level training at its academy to screen out candidates who could not become successful controllers. According to FAA officials, as many as 50 percent of off-the-street applicants have dropped out before finishing the required training program, at a cost of $10 million per year, a rate that highlights the difficulty of hiring enough successful candidates to replace the thousands of controllers expected to retire. FAA has recently begun to test a new screening exam that it hopes will better ensure that potential new hires have the skills and abilities necessary to become successful controllers. It will take a number of years to determine whether the new test has the desired results. Training challenges include the limited capacity at the training center in Oklahoma City and at the air traffic control facilities.
In addition, because of the significant amount of on-the-job training that currently occurs one-on-one, effectively handling a large number of new controllers requires an overlap period during which experienced controllers likely to retire soon and newly hired controllers are both on board. While this will result in a temporary increase in the cost of the air traffic controller workforce, eventually the more senior, higher salaried controllers will retire and be replaced by new controllers at lower salaries, possibly reducing expenses, and the need for overlap between the two groups can be reduced. Our 2002 report recommended that FAA develop a comprehensive workforce plan for controllers to deal with these challenges, but FAA has not finalized a plan, and its recent actions call into question whether it will have adequate strategies to address these challenges. For example, last year, FAA hired 762 controllers, but according to a senior National Air Traffic Controllers Association official, many of these hires took place at the end of the year, and because of limited space in training facilities, many of those hired were unable to begin entry-level training immediately. Moreover, since hiring those controllers at the end of the year to reach a level of 15,635, FAA has lost nearly 400 controllers and has hired only 1 new controller through May of this year. Its fiscal year 2005 budget proposal does not request any funding to hire additional controllers to address the wave of retirements. There are also challenges in the broader context of the air traffic control system that will affect the ability of the air traffic controller workforce to meet future changes in the airline industry and use of airspace. These challenges need to be considered as FAA develops and implements a comprehensive plan for its controller workforce.
Challenges include the need for FAA to (1) overcome the significant and longstanding management problems it has had in acquiring the new systems intended to modernize the air traffic control system and facilitate the safe and efficient movement of air traffic by controllers and (2) adjust to shifts in the use of airspace, including increases in the use of smaller aircraft and changes in air traffic patterns around the country. Controller workforce planning needs to take place in the larger context of FAA’s air traffic control modernization efforts in order to make optimal use of the agency’s investments. However, as our past work has shown, FAA needs to address longstanding problems it has had in deploying new air traffic control systems on schedule, within budget, and with the promised capabilities. These new systems are intended to improve the safety and efficiency of the nation’s air traffic control system, with some offering the potential to improve the productivity of the controller workforce. To maximize the usefulness of new systems to controllers and to help ensure that safety is not eroded by the introduction of new capabilities, sustained controller involvement is needed as new systems are developed, deployed, and refined. When there is an ineffective link between technology and needs, money and time will be wasted, and the effectiveness of the air traffic controller workforce may be reduced. Moreover, these new systems may change the productivity of the controller workforce, an effect that will need to be taken into account as FAA refines its estimates of future controller workforce needs. For example, our past work on the Standard Terminal Automation Replacement System (STARS)—the workstations used by controllers near airports to sequence and control air traffic—highlights the importance of controller involvement in the development, deployment, and refinement of air traffic control systems.
In 1997, when FAA controllers first tested an early version of this commercially available system, they raised some concerns about the way aircraft position and other data were displayed and updated on the controllers' radar screens. For example, the controllers said the system's lack of detail about an aircraft's position and movement could hamper their ability to monitor traffic movement. In addition, controllers noted that many features of the old equipment could be operated with knobs, allowing controllers to focus on the screen. By contrast, STARS was menu-driven and required the controllers to make several keystrokes and use a trackball, diverting their attention from the screen. To address these concerns, among others, FAA decided to develop a more customized system and to deploy it using an incremental approach, thereby enabling controllers to adjust to some changes before introducing others. This incremental approach costs more and is taking longer to implement than the original STARS project. Despite the importance of controller involvement in the development, deployment, and refinement of new air traffic control systems, such activities can be very time-consuming, often take controllers off-line, and place additional pressure on an already constrained workforce. FAA needs to take into account these demands on the controller workforce as part of its comprehensive workforce plan. Changes in patterns of aircraft usage are likely to affect the needs of the air traffic controller workforce. The increased use of regional jets, the possibly expanding use of air taxis, ongoing general aviation aircraft usage, and fractional ownership, where individuals or companies purchase a share in an aircraft for their occasional use, could all increase the number of smaller aircraft in the sky, placing increased demands on the air traffic controller workforce. In addition, possible changes in air traffic patterns around the country may also affect this workforce. 
In 2001, we reported that we had found consensus among the studies we reviewed and the industry experts we interviewed that the growing number of regional jets had contributed to congestion in our national airspace. The industry experts we spoke with repeatedly expressed concern about the impact of adding so many aircraft so quickly to airspace whose capacity is already constrained. Because hundreds of new aircraft had been added to already congested airspace while comparatively few turboprops had been taken out of service, many of the experts believed it was inevitable that congestion and delays would increase. They also noted that with many more regional jets on order, congestion and delays were not likely to diminish in the near future. Earlier this month, the Chairman and Chief Executive Officer of AirTran Airways noted that the air traffic control system may have difficulty absorbing the hundreds of regional jets now on order. In coming years, air taxis may also add to crowding in the skies. FAA officials told us that they have been briefed on proposals for using air taxis to carry about four passengers each in selected metropolitan areas where there is heavy surface traffic congestion. The use of such air taxis could increase the demand on controllers to provide air traffic services in these metropolitan areas, where it is likely that there is already heavy air traffic. Furthermore, it is possible that any increases in general aviation or fractional ownership could also increase the amount of traffic in the skies—traffic that must be effectively directed by air traffic controllers to ensure the safety of the airways. Moreover, because fees collected for the Aviation Trust Fund are based largely on ticket taxes assessed on paying airline passengers, the change in the mix of aircraft could have implications for the Aviation Trust Fund. 
Given the dynamic nature of the airline industry, in which major airlines and low cost airlines may change their flight patterns by adding or removing hubs, the number of flights in any one location may spike or drop abruptly. Recent examples include Independence Air's move to set up operations at Washington Dulles International Airport and reports by industry sources of a US Airways plan to reduce service to Pittsburgh. These types of potential shifts in the location of demand for air traffic services underscore the need for a nimble air traffic control system that can seamlessly continue to provide services as demand shifts. FAA faces a complex task in effectively addressing the bow wave of controller retirements that is heading its way. The number of factors involved, including the need to time hiring so as not to overload training capacities and the need to be responsive to the changing demands of a dynamic industry, highlights the importance of a carefully considered, comprehensive workforce plan. This plan needs to include strategies for addressing the full range of challenges in order to transition seamlessly from the current workforce to a future workforce that is well qualified and well trained and can accommodate changes in the use of our airspace. However, although we recommended to FAA 2 years ago that it develop a comprehensive plan for this purpose, it has not yet finalized a plan. Senior FAA officials told us that the Air Traffic Organization is currently preparing a comprehensive business plan, including a comprehensive controller workforce plan, which is due to the Congress in December 2004. This is an important opportunity to establish strategies to meet the challenges ahead. 
Today these challenges continue to underscore the need for action in developing strategies that take into account (1) the expected timing and location of anticipated retirements, (2) the length of the hiring and training processes, (3) limitations on training capacities, and (4) changes in the airline industry and use of airspace that may affect the air traffic controller workforce in coming years. Without focused and timely action on all of these fronts, the gap created by the expected bow wave of controller retirements could reduce the ability of the air traffic controller workforce to meet its mission just as increased activity in the skies makes its effectiveness more critical than ever to the safety of our airways. This concludes my statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information on this testimony, please contact JayEtta Z. Hecker at (202) 512-2834 or by e-mail at [email protected]. Individuals making key contributions to this testimony include David Lichtenfeld, Beverly Norwood, Raymond Sendejas, Glen Trochelman, and Alwynne Wilbur. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In the summer of 2000, the air traffic control system lacked the capacity to handle demand efficiently, and flight delays produced near-gridlock conditions at several U.S. airports. A combination of factors, including the crises instigated by the events of 9/11, temporarily reduced air traffic, but air traffic is now back to near pre-9/11 levels. 
The ability of the air traffic control system to handle expected traffic in coming years may depend in part on the Federal Aviation Administration's (FAA) effectiveness in planning for a long-expected wave of air traffic controller retirements. GAO's testimony focuses on (1) the magnitude and timing of the pending wave of air traffic controller retirements, (2) the challenges FAA faces in ensuring that well-qualified air traffic controllers are ready to step into the gap created by the expected large number of retirements, and (3) challenges that will affect the ability of the air traffic controller workforce to meet future changes in the airline industry and use of airspace. GAO's statement is based on past reports on the air traffic controller workforce, including GAO's 2002 report that surveyed controllers and analyzed controller workforce data. GAO has updated this work through interviews with and the collection of data from key stakeholders in the aviation community. This work was performed in accordance with generally accepted government auditing standards. FAA faces a bow wave of thousands of air traffic controller retirements over the coming decade. GAO's 2002 report warned that almost half of the controller workforce (about 7,000 controllers) would retire over the next 10 years and about 93 percent of controller supervisors would be eligible to retire by the end of 2011. In addition, GAO's analysis showed that retirements could increase dramatically at the busiest air traffic control facilities. FAA and the Department of Transportation's Inspector General have also reported that a surge in controller retirements is on the way. FAA faces numerous hiring and training challenges in ensuring that well-qualified controllers are ready to fill the gap created by the expected retirements. 
For example, it can take 2-4 years or more to certify new controllers, and FAA's training facility and air traffic control facilities, where years of on-the-job training occur, have limited capacity. While FAA must make hiring decisions from a long-term perspective, it has generally hired replacements only after a current controller leaves. In 2002, GAO recommended that FAA develop a comprehensive workforce plan to deal with these challenges. However, FAA has not finalized a plan, and its recent actions call into question whether it has adequate strategies to address these challenges. For example, since the beginning of this year, FAA lost nearly 400 controllers and has hired only 1 new controller. Its fiscal year 2005 budget proposal does not request any funding to hire additional controllers. Challenges will also affect the ability of the air traffic controller workforce to meet future changes in the airline industry and use of airspace. Challenges include the need for FAA to overcome management problems with acquiring systems to modernize the air traffic control system and to adjust to shifts in the use of airspace, including increases in the use of smaller aircraft and changes in air traffic patterns around the country. |
The military services preposition stocks ashore and afloat to give DOD the ability to respond to multiple scenarios by providing assets to support U.S. forces during the initial phases of an operation until the supply chain has been established (see figure). Each military service maintains its own configurations and types of equipment and stocks to support its own prepositioned stock program. The Army stores sets of combat brigade equipment, supporting supplies, and other stocks at land sites in several countries and aboard ships. The Marine Corps stores equipment and supplies for its forces aboard ships stationed around the world and at land sites in Norway. The Navy's prepositioned stock program provides construction support, equipment for off-loading and transferring cargo from ships to shore, and expeditionary medical facilities to support the Marine Corps. In the Air Force, the prepositioned stock program includes assets such as direct mission support equipment for fighter and strategic aircraft as well as base operating support equipment to provide force, infrastructure, and aircraft support during wartime and contingency operations.

Figure: High Mobility Multipurpose Wheeled Vehicles in a Prepositioned Storage Facility (left) and Mine Resistant Ambush Protected Vehicles Being Loaded for Prepositioning (right)

In June 2008, DOD issued an instruction directing the Under Secretary of Defense for Policy to develop and coordinate guidance that identifies an overall war reserve materiel strategy that includes starter stocks, which DOD defines as war reserve materiel that is prepositioned in or near a theater of operations and is designed to last until resupply at wartime rates is established. 
Also, the instruction states that the Under Secretary of Defense for Policy is responsible for establishing and coordinating force development guidance that identifies an overall strategy to achieve desired capabilities and responsiveness in support of the National Defense Strategy. Further, it states that the Global Prepositioned Materiel Capabilities Working Group, including representatives from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and the Joint Staff, has responsibility for, among other things, addressing joint issues concerning prepositioned stocks. On March 7, 2017, DOD issued its strategic policy for managing its prepositioned stocks in DOD Directive 3110.07, Pre-positioned War Reserve Materiel (PWRM) Strategic Policy, and included information that addresses one of the six reporting elements enumerated in section 321 of the NDAA for fiscal year 2014. The table below presents our assessment of DOD's strategic policy.

Table: GAO's Assessment of DOD's Strategic Policy Compared to the Six Reporting Elements Required by the National Defense Authorization Act for Fiscal Year 2014

Element (1), addressed: Overarching strategic guidance concerning planning and resource priorities that link the Department of Defense's current and future needs for prepositioning stocks, such as desired responsiveness, to evolving national defense objectives. GAO assessment: DOD's strategic policy requires the Under Secretary of Defense for Policy to develop and coordinate planning and resource requirements – such as those found in the Guidance for Employment of the Force and the Defense Planning Guidance – so that war materiel and prepositioned war reserve materiel are appropriately linked to desired capabilities in support of the national defense strategy. In addition, the strategic policy requires the Under Secretary of Defense for Acquisition, Technology, and Logistics and the DOD Component heads to maintain guidance that includes component-specific requirements for planning and resourcing priorities to address current and future requirements for maintaining prepositioned stocks optimally.

Element (2), not addressed: A description of the department's vision for prepositioning programs and the desired end state. GAO assessment: DOD's strategic policy does not include information describing the department's vision for prepositioning programs and the desired end state. Rather, the strategic policy assigns the Under Secretary of Defense for Acquisition, Technology, and Logistics and the DOD Component heads the responsibility for maintaining guidance that includes a description of the component's vision and desired end state.

Element (3), not addressed: Specific interim goals demonstrating how the vision and end state will be achieved. GAO assessment: DOD's strategic policy does not include information on specific interim goals describing how the department's vision and end state will be achieved. Rather, the strategic policy assigns the Under Secretary of Defense for Acquisition, Technology, and Logistics and the DOD Component heads the responsibility for maintaining guidance that includes specific interim goals demonstrating how the component's vision and end state will be achieved.

Element (4), not addressed: A description of the strategic environment, requirements for, and challenges associated with prepositioning. GAO assessment: DOD's strategic policy does not include a description of the strategic environment, requirements for, and challenges associated with prepositioning stocks. Rather, the strategic policy assigns the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Under Secretary of Defense for Policy, and the DOD Component heads the responsibility for providing guidance on the strategic environment, requirements for, and challenges associated with prepositioning stocks.

Element (5), not addressed: Metrics for how the Department will evaluate the extent to which prepositioned assets are achieving defense objectives. GAO assessment: The strategic policy does not include metrics to evaluate whether prepositioned assets are achieving defense objectives. Rather, the strategic policy assigns the Chairman of the Joint Chiefs of Staff the responsibility for developing metrics regarding DOD's prepositioned stock programs.

Element (6), not addressed: A framework for joint departmental oversight that reviews and synchronizes the military services' prepositioned strategies to minimize potentially duplicative efforts and maximize efficiencies in prepositioned stocks across the Department of Defense. GAO assessment: The strategic policy does not include a framework for joint departmental oversight. Rather, the strategic policy assigns the Chairman of the Joint Chiefs of Staff the responsibility for developing such a framework for synchronizing the services' prepositioning stock programs.

As the table shows, DOD addressed the first element in section 321 of the NDAA for fiscal year 2014 by describing strategic planning and resource guidance. However, we assessed the remaining elements as not addressed because DOD did not provide the required information in its strategic policy. Officials from the Office of the Under Secretary of Defense for Policy stated that the strategic policy does not include this information because it is intended to serve as a directive for developing policies and assigning key responsibilities, which can be used as a mechanism for addressing required elements at a later time. 
However, the NDAA for fiscal year 2014 required that these elements be included in the strategic policy. Specifically: Element 2 (Description of the Department's Vision and Desired End State) and Element 3 (Specific Interim Goals): DOD's strategic policy does not include a description of the department's vision and the desired end state for its prepositioning programs, as required by element 2, or specific interim goals for achieving the department's vision and desired end state, as required by element 3. Rather, the strategic policy assigns the Under Secretary of Defense for Acquisition, Technology, and Logistics and the DOD Component heads the responsibility for providing guidance that includes a description of the component's vision and desired end state as well as specific interim goals for achieving the component's vision and desired end state. DOD officials from the offices of the Under Secretary of Defense for Policy and the Joint Staff acknowledged that the strategic policy does not include a department-wide vision, desired end state, and specific interim goals. However, the NDAA for fiscal year 2014 requires a department vision, not a component vision. Moreover, for the past 6 years in our annual reports on duplication, overlap, and fragmentation and in our related reports, we have identified the potential for duplication in the services' prepositioned stock programs due to the absence of a department-wide strategic policy and joint oversight. Such joint oversight is necessary to articulate a single, department-wide vision and interim goals. 
By not including the department's vision or interim goals in its strategic policy, and instead directing the Under Secretary of Defense for Acquisition, Technology, and Logistics and the DOD Component heads to provide guidance on the component's vision, DOD is continuing its fragmented approach to managing its prepositioned stock programs by further emphasizing individual visions rather than a joint, department-wide view. Unless DOD revises its policy or includes in other guidance a department-wide vision, desired end state, and goals for its prepositioned stock programs, DOD risks being unable to recognize potential efficiencies that could be gained by synchronizing the services' prepositioning programs with each other. Element 4 (Description of the Strategic Environment and Challenges): DOD's strategic policy does not include a description of the strategic environment, or the requirements for and challenges associated with prepositioned stocks as required by element 4. Rather, it directs the Under Secretary of Defense for Acquisition, Technology, and Logistics, the Under Secretary of Defense for Policy, and the DOD Component heads to provide guidance on this element. We believe that if the Under Secretaries issue such guidance as directed, DOD and the services will have a shared understanding of the strategic environment, requirements, and challenges of managing their prepositioned stocks, which could promote joint oversight and efficiencies across the department. Element 5 (Metrics): DOD's strategic policy does not include metrics for evaluating the extent to which DOD prepositioned stocks are achieving defense objectives as required by element 5. Rather, the policy assigns the Chairman of the Joint Chiefs of Staff the responsibility for developing metrics. We believe that if the Chairman develops such metrics as directed, DOD and the services will have common criteria with which to measure their programs. 
However, DOD will not be able to create informed metrics before first addressing other elements, such as developing a department-wide vision and goals and articulating the strategic environment. Element 6 (Framework for Joint Departmental Oversight): DOD's strategic policy does not include a framework for joint departmental oversight that reviews and synchronizes the military services' prepositioned strategies to minimize potentially duplicative efforts and maximize efficiencies in prepositioned stocks across DOD, as required by element 6. Rather, the policy assigns the Chairman of the Joint Chiefs of Staff the responsibility for establishing the framework. We have reported for years that DOD lacks joint oversight of its prepositioned programs. We believe that if the Chairman of the Joint Chiefs of Staff develops such a framework, as directed by the policy, DOD and the services will be better able to integrate and align their prepositioning programs at a department-wide level in order to achieve efficiencies and avoid duplication. Further, DOD has not yet issued an implementation plan for managing its prepositioned stock programs, which was also required by section 321 of the NDAA for fiscal year 2014. The NDAA required DOD to complete the plan by April 24, 2014. In May 2017, DOD officials stated that they were reviewing information on the military services' prepositioning strategies for consolidation into a department-wide implementation plan. They anticipated that a plan would be finalized by September 30, 2017. It will be important for DOD to address the elements that were omitted from its strategic policy as it creates the implementation plan to ensure that the plan is linked to a complete strategy on prepositioned stocks for the department. Prepositioned stocks play a pivotal role during the initial phases of an operation. 
We have reported for the past 6 years on the importance of DOD having a department-wide strategic policy and joint oversight of the services’ prepositioned stock programs, and Congress has required DOD to take action in this area. While it is encouraging that DOD has recently issued a strategic policy, the policy does not address most of the required elements enumerated in section 321 of the NDAA for fiscal year 2014. In the cases of a description of the strategic environment and challenges, metrics, and a framework for joint departmental oversight, DOD’s policy appropriately assigns responsibility for the development of such information, and therefore we are not making recommendations related to those elements because the department has already directed their implementation. However, for the description of the department’s vision and desired end state and specific interim goals, DOD’s strategic policy does not include the required information and instead directs the development of the component’s vision, end state, and goals, which reinforces a fragmented and potentially duplicative approach to managing prepositioned stocks across the services. Without either revising its strategic policy or including in other guidance the department’s vision, end state, and goals for its prepositioned stock programs, DOD will continue to be ill positioned to recognize potential duplication, achieve efficiencies, and fully synchronize the services’ prepositioned stock programs across the department. 
To improve DOD’s management of its prepositioned stocks and reduce potential duplication among the services’ programs, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics, in coordination with the Chairman of the Joint Chiefs of Staff, to revise DOD’s strategic policy or include in other department-wide guidance: a description of the department’s vision and the desired end state for its prepositioned stock programs, and specific interim goals for achieving that vision and desired end state. We provided a draft of this report to DOD for review and comment. DOD provided written comments on the draft, which are reprinted in appendix II. DOD concurred with our recommendations and noted it is taking steps to implement them. We also received technical comments from DOD, which we incorporated throughout our report as appropriate. We are providing copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Acquisition, Technology, and Logistics, and the Chairman of the Joint Chiefs of Staff. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Section 321 of the National Defense Authorization Act (NDAA) for fiscal year 2014 required that the Department of Defense (DOD) establish an implementation plan for its programs of prepositioned stocks. The implementation plan for the prepositioning strategic policy shall include the following elements: A. Detailed guidance for how the Department of Defense will achieve the vision, end state, and goals outlined in the strategic policy. B. 
A comprehensive list of the Department’s prepositioned stock programs. C. A detailed description of how the plan will be implemented. D. A schedule with milestones for the implementation of the plan. E. An assignment of roles and responsibilities for the implementation of the plan. F. A description of the resources required to implement the plan. G. A description of how the plan will be reviewed and assessed to monitor progress. In addition to the contact named above, individuals who made key contributions to this report include Alissa H. Czyz, Assistant Director; Vincent M. Buquicchio, Tracy W. Burney, Lionel C. Cooper, Richard Powelson, Courtney R. Bond, and Michael D. Silver. | DOD positions billions of dollars worth of assets–including combat vehicles, rations, medical supplies, and repair parts—at strategic locations around the world to use during early phases of operations. Each of the military services maintains its own prepositioned stock program. For the past 6 years, GAO has reported on the risk of duplication and inefficiencies in the services' programs due to the absence of a department-wide strategic policy and joint oversight. Section 321 of the NDAA for fiscal year 2014 required DOD to maintain a strategic policy and develop an implementation plan to manage its prepositioned stocks. The NDAA for fiscal year 2014 also included a provision for GAO to review DOD's strategic policy and implementation plan. This report assessed the extent to which DOD's strategic policy addresses mandated reporting elements and describes the status of DOD's implementation plan. To conduct this work, GAO analyzed DOD's strategic policy against the elements required in the NDAA and discussed the status of the implementation plan with DOD officials. The Department of Defense's (DOD) strategic policy on its prepositioned stock programs, issued in March 2017, addressed one of the six mandated reporting elements (see table). 
Specifically, DOD's policy describes strategic planning and resource guidance (element 1), as required. GAO assessed the remaining five reporting elements as not addressed because DOD did not provide the required information in its policy. For three of the five elements that were not addressed—a description of the strategic environment and challenges (element 4), metrics (element 5), and a framework for joint oversight (element 6)—DOD's policy assigns responsibility for the development of such information, and therefore GAO is not making recommendations related to those elements because DOD has already directed their implementation. However, for two of the five elements that were not addressed—a description of the department's vision and desired end state (element 2) and specific interim goals (element 3)—DOD's strategic policy does not include the required information and instead directs the development of the components' (rather than the department's) vision, end state, and goals. DOD officials stated that the strategic policy does not include required information such as a department-wide vision, end state, and interim goals because it is intended to serve as a directive for assigning responsibilities. Without revising its strategic policy or including required information in other department-wide guidance, DOD will not be positioned to fully synchronize the services' prepositioned stock programs to avoid unnecessary duplication and achieve efficiencies. DOD has not yet issued an implementation plan for managing its prepositioned stock programs, which the National Defense Authorization Act (NDAA) required by April 24, 2014. DOD officials anticipated that a plan would be finalized by September 30, 2017. It will be important for DOD to address the elements that were omitted from its strategic policy as it creates the implementation plan to ensure that the plan is linked to a complete strategy on prepositioned stock programs for the department. 
GAO recommends that DOD revise its prepositioned stocks strategic policy or include in other department-wide guidance (1) a description of the department's vision and the desired end state, and (2) specific interim goals for achieving this vision and end state. DOD concurred with the recommendations, noting that it is taking steps to implement them. |
In response to a provision of the Choice Act, on October 1, 2014, VA transferred funds and the responsibility for managing and overseeing the processing of claims for VA care in the community from its Veterans Integrated Service Networks (VISN) and VA medical centers to VHA's Chief Business Office for Purchased Care. Previously, VISNs and medical facilities were responsible for managing both their own budgets for VA care in the community and the staff who processed these claims. After this transition, VHA's Chief Business Office for Purchased Care became responsible for overseeing VA's budget for care in the community programs and more than 2,000 staff working at 95 claims processing locations nationwide. The Choice Act also expressed the sense of Congress that VA shall comply with the Prompt Payment Act's implementing regulations (or any corresponding similar regulation or ruling) when paying for health care pursuant to contracts entered into with community providers. Generally, these regulations require executive branch agencies to add interest penalties to payments made to vendors after the contractually established payment date, or 30 days after the date the agencies receive a proper invoice, if the contract specifies no due date. VHA has numerous programs through which it purchases VA care in the community services. As described in a recent independent assessment of VHA's health care system, which was mandated by the Choice Act, these programs offer different types of services, have varying eligibility criteria for veterans and community providers, and establish different rules governing payment rates. In addition, for all types of VA care in the community services except individually authorized outpatient care, community providers must include medical documentation with the claims they submit to VHA or its third party administrators (TPAs). (See appendix I for a side-by-side comparison of various features of these VA care in the community programs.) 
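The prompt-payment timing rule described above reduces to a simple date computation. The following is a minimal illustrative sketch of that rule; the function names are hypothetical and this is not VA's actual claims-processing logic:

```python
from datetime import date, timedelta
from typing import Optional

def payment_due_date(invoice_received: date,
                     contract_due: Optional[date] = None) -> date:
    """Due date under the rule described above: the contractually
    established payment date or, if the contract specifies no due date,
    30 days after the agency receives a proper invoice."""
    if contract_due is not None:
        return contract_due
    return invoice_received + timedelta(days=30)

def interest_penalty_applies(paid_on: date,
                             invoice_received: date,
                             contract_due: Optional[date] = None) -> bool:
    """An interest penalty is added when payment occurs after the due date."""
    return paid_on > payment_due_date(invoice_received, contract_due)

# A proper invoice received June 1 with no contract due date is due July 1,
# so a July 15 payment would carry an interest penalty.
print(payment_due_date(date(2015, 6, 1)))                             # 2015-07-01
print(interest_penalty_applies(date(2015, 7, 15), date(2015, 6, 1)))  # True
```

In practice the regulations include additional conditions (for example, what counts as a proper invoice), so this sketch captures only the basic timing logic.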
In what follows, we describe the primary ways VHA purchases care in the community services, the applicable payment rates, and the extent to which VHA requires community providers to submit medical documentation as a condition of claims payment. Individually authorized care. The primary means by which VHA has traditionally purchased care from community providers is through individual authorizations. When a veteran cannot access a particular specialty care service from a VA medical facility—either because the service is not offered or the veteran would have to travel a long distance to obtain it from a VA medical facility—the veteran's VA clinician may request an individual authorization for the veteran to obtain the service from a community provider. If this request is approved and the veteran is able to find a community provider who is willing to accept VA payment, VA will pay the provider on a fee-for-service basis. Generally, VA pays Medicare's applicable rates for these services, unless the community provider has an existing contract and negotiated rates with a VA medical facility. For individually authorized inpatient care, VHA requires community providers to submit discharge summaries, at a minimum, as a condition of payment. For individually authorized outpatient care, the authorization itself states whether the community provider must submit any medical documentation as a condition of payment. Emergency care. When care in the community is not preauthorized, VA may reimburse community providers for two different types of emergency care: (1) emergency care for a condition related to a veteran's service-connected disability and (2) emergency care for a condition not related to a veteran's service-connected disability. The latter care is commonly referred to as Millennium Act emergency care.
For service-connected emergency care, VA generally pays applicable Medicare rates, unless the community provider has an existing contract and negotiated rates with a VA medical facility. For Millennium Act emergency care, VA generally pays the lesser of the amount for which the veteran is personally liable (if a third party such as motor vehicle insurance or workers’ compensation insurance first paid for some portion of the care) or 70 percent of applicable Medicare rates. For claims for both types of emergency care, community providers are required to submit accompanying medical documentation, so that clinicians at VHA’s claims processing locations can determine whether or not the condition treated is related to the veteran’s service-connected disability and whether it meets the prudent layperson standard of an emergency. (See appendix II for a more detailed description of the criteria that must be met before VHA will pay claims for these two types of emergency care.) Patient-Centered Community Care (PC3). In September 2013, VA awarded contracts to two TPAs to develop regional networks of community providers of specialty care, mental health care, limited emergency care, and maternity and limited newborn care when such care is not feasibly available from a VA medical facility. VA and the TPAs began implementing the PC3 program in October 2013, and it was fully implemented nationwide as of April 2014. In August 2014, VA expanded the PC3 program to allow community providers of primary care to join the networks. PC3 is a program VA created under existing statutory authorities, not a program specifically enacted by law. To be eligible to obtain care from PC3 providers, veterans must meet the same criteria that are required for individually authorized VA care in the community services. When they join the PC3 networks, community providers agree to be reimbursed at rates they negotiate with the TPAs, which are reportedly a percentage of applicable Medicare rates. 
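The payment rules described above reduce to simple arithmetic. The following is a minimal sketch; the function names are our own, and the PC3 negotiated fraction is purely illustrative, since the actual percentages are negotiated between providers and the TPAs:

```python
from typing import Optional

def service_connected_payment(medicare_rate: float,
                              contract_rate: Optional[float] = None) -> float:
    """Service-connected emergency care: the applicable Medicare rate,
    unless an existing VA contract sets a negotiated rate."""
    return contract_rate if contract_rate is not None else medicare_rate

def millennium_act_payment(medicare_rate: float,
                           personal_liability: float) -> float:
    """Millennium Act emergency care: the lesser of the veteran's
    personal liability (after any third-party payer) or 70 percent
    of the applicable Medicare rate."""
    return round(min(personal_liability, 0.70 * medicare_rate), 2)

def pc3_payment(medicare_rate: float, negotiated_fraction: float) -> float:
    """PC3 network care: a negotiated percentage of the Medicare
    rate; the fraction passed in here is an assumption."""
    return round(negotiated_fraction * medicare_rate, 2)

print(millennium_act_payment(1000.0, 500.0))   # 500.0 -- liability is the lesser amount
print(millennium_act_payment(1000.0, 900.0))   # 700.0 -- capped at 70% of the Medicare rate
```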
As a condition of their contracts with VA, the two TPAs are required to collect medical documentation from the community providers and return it to VA in a timely manner. Upon receipt, staff at VA facilities are responsible for scanning the associated medical documentation and entering it into the veteran's VA electronic health record so that it is available for VA clinicians to view. Veterans Choice Program. The Choice Act provides, among other things, temporary authority and funding for veterans to obtain health care services from community providers to address long wait times, lengthy travel distances, or other challenges accessing care at a VA medical facility. Under this authority, VHA introduced the Veterans Choice Program in November 2014. As stated in VA's December 2015 guidance, the program currently allows eligible veterans to obtain health care services from community providers if the veteran meets any of the following criteria: the next available medical appointment with a VA provider is more than 30 days from the veteran's preferred date or the date the veteran's physician determines he or she should be seen; the veteran lives more than 40 miles driving distance from the nearest VA facility with a full-time primary care physician; the veteran needs to travel by air, boat, or ferry to the VA facility that is closest to his or her home; the veteran faces an unusual or excessive burden in traveling to a VA facility based on geographic challenges, environmental factors, or a medical condition; the veteran's specific health care needs, including the nature and frequency of care needed, warrant participation in the program; or the veteran lives in a state or territory without a full-service VA medical facility. To administer the Veterans Choice Program, VHA modified its contracts with the two TPAs it selected to administer the PC3 program.
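Because a veteran qualifies if any one criterion is met, the eligibility test above is a simple disjunction. In this sketch, the field and function names are our own illustration, not fields from any VHA system:

```python
from dataclasses import dataclass

@dataclass
class ChoiceFactors:
    """Illustrative eligibility inputs; names are our own."""
    days_until_appointment: int = 0           # days past the preferred/clinically indicated date
    miles_to_nearest_va_facility: float = 0.0 # driving distance to nearest VA facility
                                              # with a full-time primary care physician
    requires_air_boat_or_ferry: bool = False
    excessive_travel_burden: bool = False     # geographic, environmental, or medical
    clinical_need_warrants_program: bool = False
    state_lacks_full_service_facility: bool = False

def choice_program_eligible(f: ChoiceFactors) -> bool:
    """Eligible if ANY criterion from VA's December 2015 guidance
    (as summarized above) is met."""
    return any([
        f.days_until_appointment > 30,
        f.miles_to_nearest_va_facility > 40,
        f.requires_air_boat_or_ferry,
        f.excessive_travel_burden,
        f.clinical_need_warrants_program,
        f.state_lacks_full_service_facility,
    ])

print(choice_program_eligible(ChoiceFactors(days_until_appointment=45)))  # True
print(choice_program_eligible(ChoiceFactors(days_until_appointment=10)))  # False
```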
These contractors are responsible for enrolling community providers in their networks or establishing Choice Provider Agreements with the providers. Veterans Choice Program providers are generally paid Medicare rates. Community providers who are not part of the PC3 or Veterans Choice Program networks submit claims for preauthorized and emergency care to one of VHA’s 95 claims processing locations. For PC3 and Veterans Choice Program care, community providers submit their claims to the TPAs, and the TPAs process the claims and pay the community providers. Subsequently, the TPAs submit claims to one of VHA’s claims processing locations—either the one that authorized the care, in the case of PC3 claims, or the one that VHA has designated to receive Veterans Choice Program claims. VHA staff at these locations process these claims using the same systems used to process other claims for VA care in the community programs, and VA reimburses the TPAs for the care. To process claims for VA care in the community programs, staff at VHA’s claims processing locations use the Fee Basis Claims System (FBCS). FBCS does not automatically apply relevant criteria and determine whether claims are eligible for payment. Rather, staff at VHA’s claims processing locations must make determinations about which payment authority applies to each claim and which claims meet applicable administrative and clinical criteria for payment. (See table 2 for a description of these steps.) In addition to processing claims for VA care in the community programs, staff at VHA’s claims processing locations are also responsible for responding to telephone inquiries from community providers who call to check the status of their claims or inquire about claims that have been rejected. For an illustration of the steps VHA staff must take to process claims from community providers and the TPAs, including which steps require manual intervention from staff, see appendix III. 
VHA, CMS, and DHA all have requirements for claims processing timeliness. See table 3. VHA, Medicare, and TRICARE follow similar steps to process claims for care they purchase on behalf of their beneficiaries. For example—even though paper claims account for a relatively small proportion of the overall number of claims submitted by Medicare and TRICARE providers—Medicare's and TRICARE's claims processors must scan incoming paper claims and verify that information from the claims was captured accurately when the claims were scanned, just as staff at VHA's claims processing locations must do. In addition, like VHA, Medicare's and TRICARE's claims processors send notifications to providers after claims have been processed, to inform them of whether payments were approved or denied for each service listed on the claim. Even though these three agencies follow similar steps to process claims, the volume of claims that the agencies process varies widely, and the actual systems they use to carry out these steps differ markedly in several key respects. (For a summary of selected similarities and differences between VHA's, Medicare's, and TRICARE's systems for processing health care claims, see appendix IV.) Based on our review of applicable documentation and interviews with officials from CMS, DHA, and two contractors that process Medicare and TRICARE claims, we identified the following key differences between VHA's claims processing system and those of Medicare and TRICARE. Use of contractors. Unlike VHA, which employs its own staff to process claims for VA care in the community services, both Medicare and TRICARE use contractors to process claims for care purchased from community providers. CMS uses contractors called Medicare Administrative Contractors (MAC) to process claims for health care items and services.
For TRICARE, DHA contracts with three managed care support contractors (MCSC), which are responsible for establishing regional networks of civilian providers, managing referrals, and providing customer service, among other things. To pay claims submitted by TRICARE’s network providers, the three MCSCs have each subcontracted with a single claims processing contractor. Number of claims processing locations. Contractors responsible for processing Medicare and TRICARE claims operate in fewer locations than do staff at VHA’s claims processing locations. Most Medicare Part A and Part B claims are processed by one of 12 jurisdiction-based MACs or 4 MACs that specialize in processing durable medical equipment claims, and all TRICARE claims are processed by a single contractor. In contrast, VHA employed claims processing staff in 95 different locations as of November 2015, and community providers in a given state may submit claims to multiple VHA claims processing locations depending on the type of VA care in the community and where they render services. As we have reported previously, CMS established its current regional model for MACs in 2006 to improve services to beneficiaries and providers and achieve operational efficiencies and cost savings by better balancing claims processing workloads among fewer contractors than it had used in the past. Prior to that time, there were 51 contractors responsible for processing Medicare claims. Rate of electronic claim submission and the capacity to accept medical documentation electronically. While VHA’s, Medicare’s, and TRICARE’s claims processing systems can all accept claims submitted by providers electronically, the rate of electronic submission is much higher in Medicare and TRICARE. According to CMS and DHA officials, the vast majority of Medicare and TRICARE claims are submitted electronically. 
The officials said that providers submit about 99 percent of Medicare Part A claims, 98 percent of Medicare Part B claims, and between 91 and 95 percent of TRICARE claims electronically. In contrast, according to VHA officials, about 40 percent of claims from providers participating in VA care in the community are submitted electronically. In addition, Medicare and TRICARE contractors’ systems can accept medical documentation electronically, unlike VHA’s claims processing system. VHA’s inability to accept medical documentation electronically discourages community providers from submitting claims electronically because VHA cannot process many types of VA care in the community claims until medical documentation is received. Given the high rates of electronic submission of claims and medical documentation among Medicare and TRICARE providers, the Medicare and TRICARE contractors do not need to devote as many staff resources to scanning paper claims and medical documentation and verifying that information was captured accurately as do VHA’s claims processing locations. Prior authorization. Unless services delivered by community providers meet the coverage criteria for one of VHA’s two emergency care programs, all VA care in the community services must be authorized in advance of when veterans access the care in order for claims to be paid. Medicare, on the other hand, generally does not require prior authorization for the services it covers, and TRICARE generally only requires prior authorization for specialty care services. Automatic claim adjudication. Compared to VHA’s system, the claims processing systems used in Medicare and TRICARE are more automated. While staff at VHA’s claims processing locations must manually apply administrative and clinical criteria to every claim to determine whether the claims should be paid, officials from the Medicare and TRICARE contractors we interviewed described their organizations’ high degrees of automatic claim adjudication. 
Medicare officials estimated that the MACs process about 95 percent of claims with no manual intervention, while officials from the contractor responsible for processing TRICARE claims estimated that their organization has automated about 75 percent of the claims adjudication process. Medical documentation as a condition of payment. While VHA requires providers to submit medical documentation for most types of claims for VA care in the community services, Medicare and TRICARE do not. According to CMS and DHA officials, Medicare and TRICARE providers are only required to submit medical documentation for a small percentage of claims, such as those flagged during a prepayment review for an examination of medical necessity. Web-based provider self-service portals. Unlike VHA, Medicare and TRICARE contractors both offer Web-based provider self-service portals. Officials from both of the contractors we interviewed told us these portals have decreased providers' reliance on telephone-based customer service. With these portals, providers are able to access information about the status of their claims 24 hours a day, 7 days a week. In contrast, VHA's claims processing locations only offer telephone-based provider customer service. Dedicated customer service staff. Unlike VHA, Medicare and TRICARE have dedicated customer service staff. The two Medicare and TRICARE contractors each maintain units with dedicated customer service staff, while staff in other units focus on claims processing. Officials from the two contractors said that within their customer service units, certain individuals are designated to handle calls from providers with more specialized, complex inquiries, while others focus on calls from providers who are inquiring about more routine issues.
In contrast, at VHA's claims processing locations, staff who process claims are also responsible for delivering telephone-based provider customer service. In fiscal year 2015, VHA's processing of claims for VA care in the community services was significantly less timely than Medicare's and TRICARE's claims processing. VHA officials told us that the agency's fiscal year 2015 data show that VHA processed about 66 percent of claims within the agency's required timeframe of 30 days or less. In contrast, CMS and DHA data show that in fiscal year 2015, Medicare's and TRICARE's claims processing contractors processed about 99 percent of claims within 30 or fewer days of receipt. However, the difference between VHA's claims processing timeliness and that of Medicare and TRICARE is likely greater than what VHA's available data indicate. Specifically, VHA's data likely overstate the agency's claims processing timeliness because they do not account for delays in scanning paper claims, and VHA officials told us that paper claims account for approximately 60 percent of claims for VA care in the community services. VHA's policy states that determinations of claims processing timeliness should be based upon the date the claim is received, but VHA's systems can only calculate timeliness on the basis of the date the claim is entered into FBCS. When community providers submit paper claims, VHA policy requires claims processing staff to manually date-stamp them and scan the paper claims into FBCS on the date of receipt. Because FBCS cannot electronically read the dates that are manually stamped on paper claims, the scan date becomes the date used to calculate claims processing timeliness. To the extent that paper claims are not scanned into FBCS upon receipt, this elapsed time is not reflected in VHA's timeliness calculations.
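The effect of measuring from the scan date rather than the receipt date can be illustrated with hypothetical paper claims; the dates below are invented for illustration and are not drawn from our sample:

```python
from datetime import date

# Hypothetical paper claims: (date received, date scanned into FBCS,
# date processing completed). All dates are invented for illustration.
claims = [
    (date(2015, 6, 1), date(2015, 6, 15), date(2015, 7, 10)),  # scanned 2 weeks late
    (date(2015, 6, 1), date(2015, 6, 1),  date(2015, 6, 20)),  # scanned on receipt
    (date(2015, 6, 5), date(2015, 6, 25), date(2015, 7, 20)),  # scanned 3 weeks late
]

def pct_within_30_days(claims, start_index):
    """Share of claims completed within 30 days, measured from the
    chosen start date: index 0 = receipt (what VHA policy requires),
    index 1 = scan into FBCS (all the system can actually measure)."""
    on_time = sum(1 for c in claims if (c[2] - c[start_index]).days <= 30)
    return round(100 * on_time / len(claims))

print(pct_within_30_days(claims, 1))  # 100 -- scan-based measure looks fully timely
print(pct_within_30_days(claims, 0))  # 33  -- receipt-based measure tells a different story
```

In this invented example, every claim appears timely when the clock starts at the scan date, but only one of three was actually processed within 30 days of receipt.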
Our review raises questions about whether staff at VHA’s claims processing locations are following the agency’s policy for scanning paper claims into FBCS upon receipt. We do not know the extent of delays in scanning paper claims at all of VHA’s claims processing locations. However, our analysis of the non-generalizable sample of 156 claims for VA care in the community services from the four VHA claims processing locations we visited suggests that it may have taken about 2 weeks, on average, for staff to scan the paper claims in our sample into FBCS. This estimate is based on the number of days that elapsed between the dates that community providers created 86 of the 94 paper claims in our sample and the dates the claims were scanned into FBCS. Based on this analysis, we found that the number of days between the creation date and the scanned date for the paper claims in our sample ranged from 2 days to 90 days. Our observations at one claims processing location we visited were consistent with our analysis of the sampled claims. For example, we observed about a dozen bins of paper claims and medical documentation waiting to be scanned, and some of these bins were labeled with dates indicating they were received by the claims processing location about a month before our visit. Additionally, this claims processing location was the only one of the four claims processing locations we visited that manually date-stamped all of its paper claims upon receipt. Staff at another claims processing location told us that they only date-stamp paper claims for emergency care upon receipt because these claims are only eligible for payment if they have been received within a certain amount of time after the date of service. However, the staff said they do not date-stamp non-emergency care claims because to do so would be too time-consuming. Staff at the other two claims processing locations told us that they did not date-stamp any claims. 
These findings from the four claims processing locations we visited for this review are consistent with the claims processing deficiencies we identified in our 2014 report on the implementation of the Millennium Act emergency care benefit. Specifically, we found that the VHA claims processing locations we reviewed for the 2014 report were rarely date-stamping incoming paper claims and were not promptly scanning a significant percentage of the paper claims we reviewed into FBCS. In our report, we recommended that VHA implement measures to ensure that all incoming claims are date-stamped and scanned into FBCS on the date of receipt, and VA agreed with our recommendations. Soon after we issued our 2014 report, VHA reiterated its date-stamping and scanning policies on national calls with managers responsible for claims processing, posted articles in its biweekly bulletin for managers and staff, and conducted online training for staff that communicated the importance of date-stamping and promptly scanning claims. However, the observations from our most recent review of a new sample of claims at four other claims processing locations suggest that VHA had not monitored the operational effectiveness of its corrective actions to address our recommendation. VHA officials said that when they became aware of our more recent findings, they began requiring managers at their claims processing locations to periodically certify in writing that all incoming paper claims have been date-stamped and scanned on the day of receipt. Prior to October 2015, VHA did not pay interest penalties on most late payments to community providers, while Medicare and TRICARE have done so. Specifically, until October 2015, VHA paid no interest on claims it paid late for community care delivered by non-contract providers through individual authorizations.
According to VHA officials, the agency had not paid interest penalties on these individually authorized services because VA did not interpret the Prompt Payment Act as applying to these payments. However, in October 2015, VA's Office of General Counsel issued a new legal opinion specifying that the Prompt Payment Act does apply to claims for VA care in the community services that were (1) individually authorized in advance, or (2) delivered by community providers who have contracted with the TPAs to participate in Veterans Choice Program networks. From October 2, 2015, through November 21, 2015, VHA paid approximately $409,000 in interest penalties on claims for this care, according to VHA officials. To facilitate interest penalty payments on claims for individually authorized VA care in the community services, VHA established a process to automatically pay the penalties when these claims are paid more than 30 days after receipt. However, as we noted earlier in this report, paper claims that officially meet VHA's timeliness standard could have been in VHA's possession weeks before being scanned into FBCS, so VHA may not be paying interest on all claims that are paid more than 30 days after the claims were actually received. This issue will likely persist until VHA ensures that all incoming paper claims are date-stamped and scanned into FBCS on the date of receipt, as we recommended in 2014. While VHA has not historically paid interest penalties on claims that are paid late, Medicare and TRICARE officials said their agencies have for many years considered the care provided under their programs to be subject to the Prompt Payment Act.
In fiscal year 2014, CMS reported it paid about $3.3 million in interest penalties to Medicare providers (with overall payments for fee-for-service Medicare services totaling $357.3 billion), and DHA reported it paid about $386,000 in interest penalties to TRICARE providers (with overall payments for TRICARE services totaling about $10.5 billion). For both Medicare and TRICARE, the sum of interest penalties—relative to overall expenditures for services—was relatively low in fiscal year 2014 because these programs generally paid providers in a timely manner. See table 4. During the course of our work, VHA officials and staff at three of the four claims processing locations we visited told us that the limitations of the existing information technology systems VHA uses for claims processing—and related workload challenges—have delayed processing and payment of claims for VA care in the community services. These identified limitations are described in more detail below. VHA cannot accept medical documentation electronically. While VHA has the capacity to accept claims from community providers and the TPAs electronically, it does not have the capacity to accept medical documentation electronically from the providers and TPAs. As a result, this documentation must be scanned into FBCS, which delays claims processing, according to VHA staff. Although VHA policy requires VHA staff to promptly scan paper claims into FBCS when received, delays can occur because staff do not have time to scan the high volume of claims and medical documentation received each day, and the capacity of scanning equipment is limited. For example, VHA staff at one claims processing location we visited told us that on Mondays (their heaviest day for mail since they do not receive mail on weekends), they do not scan any incoming claims with accompanying medical documentation. 
Instead, they generally scan only claims that do not have accompanying medical documentation on Mondays and then scan claims with accompanying medical documentation into FBCS on Tuesdays and Wednesdays. In some cases, the medical documentation community providers must submit can be extensive, which may further delay its entry into FBCS. Officials from one community health care system told us that the medical documentation they submit with claims can be between 25 and 75 pages for each patient. With most types of claims requiring medical documentation, staff at VHA's claims processing locations may need to scan a significant number of pages of incoming medical documentation each day. Authorizations for VA care in the community services are not always readily available in FBCS. Staff at three of the four VHA claims processing locations we visited told us that processing and payment can also be delayed when authorizations for VA care in the community services are unavailable in FBCS. Before a veteran obtains services from a community provider, staff at a VA medical facility must indicate in the veteran's VA electronic health record (a system separate from FBCS) that the services have been authorized, and then these staff must manually create an authorization in FBCS. However, VHA officials and staff told us that these authorizations are sometimes unavailable in FBCS at the time claims are processed, which delays processing and payment. The authorizations are unavailable because either (1) they have been electronically suspended in FBCS, and as a result staff at the VA medical facility that authorized the care must release them before any associated claims can be paid, or (2) the estimated date of service on the authorization does not match the date that services were actually rendered, and new authorizations must be entered by staff at the authorizing VA medical facility before the claims can be paid.
In our non-generalizable sample of 156 claims, 25 claims were delayed in being processed because an authorization was not initially available in FBCS, resulting in an average delay of approximately 42 days in claims processing. Additionally, 8 of the 12 community providers we interviewed said they were aware that some of their payments had been delayed because authorizations were not available in FBCS when their claims arrived at the VHA claims processing location. FBCS cannot automatically adjudicate claims. FBCS cannot automatically adjudicate claims, and as a result, VHA staff must do so manually, which VHA staff told us can slow claims processing, make errors more likely, and delay claims payment. After information from claims and supporting medical documentation has been scanned and entered into FBCS, the system cannot fully adjudicate the claims without manual intervention. For example, FBCS lacks the capability to electronically apply relevant administrative and clinical criteria for Millennium Act emergency care claims, such as automatically determining whether a veteran is enrolled in the VHA health care system and whether they had received services from a VA clinician in the 24 months prior to accessing the emergency care. Instead, staff processing these claims perform searches within FBCS and manually select rejection reasons for any claims that do not meet VHA’s administrative or clinical criteria for payment. Among the 156 claims we reviewed at four claims processing locations, it took an average of 47 days for claims processing staff to determine that the claims met the administrative and clinical criteria for payment. In addition, even after claims are approved for payment, they require additional manual intervention before the community providers can be paid. 
For example, in cases where FBCS cannot automatically determine correct payment rates for VA care in the community services, VHA staff manually calculate VHA’s payment rates and enter this information into FBCS. Staff we interviewed also told us that it usually takes about 2 days for claims to return from VA’s program integrity tool, which is a system outside FBCS where claims are routed for prepayment review of potential improper payments. If corrections must be made after the claims return from this prepayment review, payments can be delayed further. Weaknesses in FBCS and VHA’s financial management systems have also delayed claims payments. According to staff at three of the four claims processing locations we visited, payments on some VA care in the community claims are delayed because FBCS and VHA’s financial management systems do not permit officials to efficiently monitor the availability of funds for VA care in the community services. To centralize its oversight of VA care in the community, the Choice Act directed VA to transfer the authority for processing payments for VA care in the community from its VISNs and VA medical centers to VHA’s Chief Business Office for Purchased Care, a change VA implemented in October 2014. However, according to VHA officials from that office, monitoring the use of funds—at a national level—has remained largely a manual process due to limitations of FBCS and the use of separate systems to track obligations and expenditures. According to VHA officials, VHA uses historical data from FBCS to estimate obligations in VHA’s financial management systems on a monthly basis, and these estimates have been unreasonably low for some services, given the unexpected increase in utilization of VA care in the community services over the course of fiscal year 2015. 
In addition, these officials said that FBCS does not fully interface with VHA’s financial management systems used to track the availability of funds, which results in staff having to manually record the obligations for outpatient VA care in the community services in these systems on a monthly basis. Together, these two issues have impeded the ability of VHA to ensure that funds are available to pay claims for VA care in the community as they are approved, according to VHA officials responsible for monitoring the use of funds. We found that payments for 5 of the 156 claims we reviewed from four claims processing locations were delayed because of these issues, resulting in payment delays that ranged from 1 to 215 days. Inadequate equipment delays scanning of both paper claims and medical documentation. VHA officials also told us that inadequate scanning equipment delayed claims processing and adversely affected VHA’s claims payment timeliness. At the time of our review, staff responsible for scanning paper claims and medical documentation at one of the four claims processing locations we visited told us that they did not have adequate scanning equipment. At this location, the scanners that staff showed us were small enough to be placed on desktops, while the trays for feeding documents into the scanners could only handle a limited number of pages at one time. With an estimated 60 percent of claims and 100 percent of medical documentation requiring scanning, these staff said that they struggled to keep up with the volume of paper coming in to their claims processing location. Staffing shortages adversely affect claims processing timeliness. In addition to the technological issues described above, VHA officials and staff also told us that staffing shortages have adversely affected VHA’s claims processing timeliness. 
According to VHA officials, the overall number of authorized positions for claims processing staff did not change after the October 2014 organizational realignment that transferred claims processing management and oversight responsibilities to the Chief Business Office for Purchased Care. However, VHA officials said that VHA’s claims processing workload increased considerably over the course of fiscal years 2014 and 2015. (See figure 1 for an illustration of the increase in VHA’s claims processing workload from fiscal year 2012 through fiscal year 2015.) According to VHA officials and staff, the increase in workload led to poor staff morale, attrition, and staff shortages—all of which delayed processing and impeded VHA’s claims processing timeliness. VHA officials told us that in early fiscal year 2015, there were about 300 vacancies among the estimated 2,000 authorized positions for claims processing staff. The 12 community providers and 12 state hospital association respondents who participated in our review told us about various issues they had experienced with VHA’s claims processing system. These issues are described in more detail below. Administrative burden of submitting claims and medical documentation to VHA. Almost all of the community providers we interviewed (11 out of 12) and all of the state hospital association respondents that participated in our review described the administrative burden of submitting claims and medical documentation to their respective VHA claims processing locations. For example, one community provider told us that VHA claims accounted for only about 5 percent of its business, yet it employed one full-time staff member who was dedicated to submitting claims to VHA and following up on unpaid ones. This same provider employed a second full-time staff member to handle Medicaid claims, but these accounted for about 85 percent of the provider’s business.
According to many of the community providers that participated in our review, obtaining payment from VHA often requires repeated submission of claims and medical documentation. Officials from one community provider we interviewed said that at one point, they had been hand delivering paper medical documentation with paper copies of the related claims to their VHA claims processing location, but VHA staff at this location still routinely rejected their claims for a lack of medical documentation. Similarly, six state hospital association respondents also reported that their members’ claims were often rejected, even though they always sent medical documentation to their VHA claims processing location by certified mail. Some of the community health care system and hospital officials who participated in our review explained that they often must submit medical documentation to their VHA claims processing location twice—once for the claim related to hospital services and again for claims related to physician services. Lack of notification about claims decisions. Community providers who participated in our review also explained that they rarely received written notifications from VHA about claims decisions. To inform community providers and the TPAs about whether their claims have been approved or rejected, staff at VHA’s claims processing locations print notices, known as preliminary fee remittance advice reports, and mail them to the providers and TPAs. However, community providers who participated in our study stated that they rarely received these paper reports in the mail, and even though they received VA payments electronically, it was not clear without the remittance advice reports which claims the payments applied to or whether VHA denied payment for certain line items on some claims. 
Unlike Medicare and TRICARE, VHA has no online portal where community providers can electronically check the status of their claims to find out if the claims are awaiting processing or if VHA needs additional information to process them. Several of the community providers who participated in our study told us that they would appreciate VHA establishing such a portal. Issues with telephone-based provider customer service. Most of the community providers and almost all of the state hospital associations that participated in our review (9 out of 12 providers and 11 out of 12 associations) experienced issues with the telephone-based provider customer service at VHA’s claims processing locations. For example, officials from three of the community providers we interviewed reported that they routinely wait on hold for an hour or more while trying to follow up on unpaid claims. Officials from a community health care system that operates 46 hospitals and submits claims to 5 different VHA claims processing locations said that 3 of these locations will not accept any phone calls and instead require providers to fax any questions about claim status. According to officials from another community health care system, their VHA claims processing location has limited them to inquiring about only three claims per VHA staff member per day. The officials explained that if they call twice on the same day and reach the same individual who has already checked the status of three claims, that person will refuse to check the status of additional claims; however, if they connect with a different VHA staff member, they may be able to inquire about additional claims.
Elimination of certain medical documentation requirements. On March 1, 2016, VA announced that it had modified its contracts with the TPAs so that community providers participating in the Choice Program will no longer be required to submit medical documentation before their VA care in the community claims can be paid. VA expects this will expedite the processing of claims from Choice Program providers. VHA’s data indicate that the number of VA care in the community authorizations routed to the Choice Program first exceeded the number of authorizations for other types of VA care in the community in November 2015, and in January 2016 (the most recent month for which data were available) about 56 percent of VA care in the community authorizations were routed to the Choice Program. VHA has not eliminated the medical documentation requirement for all other types of VA care in the community, requiring community providers to submit medical documentation before VHA will pay claims for (1) individually authorized inpatient VA care in the community, (2) PC3 care, (3) Millennium Act emergency care, and (4) service-connected emergency care. As discussed earlier in this report, VHA’s inability to electronically accept medical documentation from most community providers and the administrative burden of scanning a high volume of paper medical documentation have caused delays in VHA’s processing of claims for VA care in the community. Staffing increases. VHA officials said that they have recently filled the approximately 300 staff vacancies that resulted from attrition shortly after the October 2014 realignment of claims processing under VHA’s Chief Business Office for Purchased Care. The officials also told us that they have supplemented the existing workforce at VHA’s claims processing locations by hiring temporary staff and contractors to help address VHA’s backlog of claims awaiting processing. 
In addition, for 2 months in fiscal year 2015, VHA required its claims processing staff to work mandatory overtime, and according to VHA officials, staff are still working overtime on a voluntary basis. At some locations, VHA added second shifts for claims processing staff. As a result, VHA officials told us that VHA was able to decrease its backlog of unprocessed claims for VA care in the community from an all-time high of 736,000 claims in August 2015 to about 453,000 claims as of October 29, 2015. Deployment of nationwide productivity standards. On October 1, 2015, VHA introduced new performance plans with nationwide productivity standards for its claims processing staff, and officials estimated that these standards would lead staff to process more claims each day, resulting in a 6.53 percent increase in claims processing productivity over the course of fiscal year 2016. Improved access to data needed to monitor claims processing performance. VHA has implemented a new, real-time data tracking system to monitor claims processing productivity and other aspects of performance at its claims processing locations. This tool, which VHA officials refer to as the “command center,” permits VHA officials and managers at VHA’s claims processing locations to view claims data related to the timeliness of payments and other metrics at the national, claims processing location, and the individual staff level. Previously, many data were self-reported by the claims processing locations. The VHA officials we interviewed said that they monitor these data daily. New scanning equipment. VHA recently purchased new scanning equipment for 73 of its 95 claims processing locations, including the claims processing location we visited with the small, desktop scanners. The agency awarded a contract in November 2015, and officials said that VHA had installed this new equipment at almost all sites as of January 15, 2016. 
They expected that installation would be completed at the few remaining sites by the end of January 2016. Improvement of cost estimation tools. In January 2016, VHA deployed an FBCS enhancement that is intended to improve VHA’s ability to estimate obligations for VA care in the community within FBCS. VHA officials said this should help them better estimate costs to help ensure that adequate funds are available to pay claims for VA care in the community services at the time the claims are processed. However, staff at VA medical facilities still must manually enter estimated obligations into VHA’s systems for tracking the availability of funds on a monthly basis, because this information cannot be automatically transferred from FBCS. VHA officials we interviewed in the course of our work acknowledged that the recent steps they have taken to improve claims processing timeliness—such as hiring temporary staff and contractors and mandating that claims processing staff work overtime—are not sustainable in the long term. The officials said that if the agency is to dramatically improve its claims processing timeliness, comprehensive and technologically advanced solutions must be developed and implemented, such as modernizing and upgrading VHA’s existing claims processing system or contracting out the claims processing function. On October 30, 2015, VHA reported to Congress that it has plans to address these issues as part of a broader effort to consolidate VA care in the community programs. However, the agency estimates that it will take at least 2 years to implement solutions that will fully address all of the challenges now faced by its claims processing staff and by providers of VA care in the community services. According to VHA officials, the success of this long-term modernization plan will also hinge on significant investments in the development and deployment of new technology. 
In its October 2015 plan, VHA stated that it expects it will significantly increase its reliance on community providers to deliver care to veterans in the coming years. In addition, VHA plans to adopt many features or capabilities for its claims processing system that are similar to Medicare’s and TRICARE’s claims processing systems, including (1) greater automatic adjudication of claims, (2) automating the entry of authorizations, (3) establishing a mechanism by which community providers can electronically submit medical records, (4) creating a Web-based portal for community providers to check the status of their claims, and (5) establishing a nationwide provider customer service system with dedicated staff so that other staff can focus on claims processing. According to this plan, in fiscal year 2016 VHA will examine potential strategies for developing these capabilities—including the possibilities of contracting for (1) the development of the claims processing system only or (2) all claims processing services, so that contractors, rather than VHA staff, would be responsible for processing claims (similar to Medicare and TRICARE). Based on statements made by community providers that participated in our review, it is critical for VA to succeed in achieving its goal of deploying a modernized claims processing system. Without (1) significantly improving the timeliness of its payments and (2) addressing community providers’ concerns about the administrative burden of obtaining VHA payments and the agency’s lack of responsiveness when they inquire about unpaid claims, VHA risks losing the cooperation of these providers as it attempts to transition to a future care delivery model that would heavily rely on them to deliver care to veterans. Since the release of its October 2015 plan for consolidating VA care in the community programs, VHA has done some of the preliminary work needed to modernize its claims processing system.
After issuing a request for interested parties to share information, VHA held an industry day in December 2015, where about 80 participants discussed with VHA the extent to which contractors could help support core functions— including claims processing—for the consolidated VA care in the community program VHA plans to establish. VHA officials said they used information gathered from this industry day to inform the development of a draft performance work statement and a draft operations manual for a consolidated VA care in the community program. VA publicly posted these documents in February 2016 and accepted written comments, questions, and other feedback from industry for about two weeks. VA plans to use these responses to help inform any future requests for proposals related to the consolidation of VA care in the community programs and the improvement of claims processing timeliness. VA’s plan for consolidating its care in the community programs outlines its approach to addressing deficiencies in VHA’s claims processing system. VA’s consolidation plan represents a major undertaking that depends, in part, on obtaining congressional approval for legislative changes and budget requests and revamping VA’s information technology systems. Leading practices call for careful planning and for developing an implementation strategy to help ensure that needed changes are made in a timely and cost-effective manner. When facing major challenges similar to the ones VHA faces to modernize its claims processing system, leading practices call for results-oriented organizations to focus on developing robust, comprehensive plans that (1) define the goals the organization is seeking to accomplish, (2) identify specific activities to obtain desired results, and (3) provide tools to help ensure accountability and mitigate risks. In prior work, we have determined that sound plans include the following components (among others): Goals, objectives, activities, and performance measures. 
This component addresses what the plan is trying to achieve and how it will achieve those results, as well as the priorities, milestones, and performance measures to monitor and gauge results. Resources, investments, and risks. This component addresses what the plan will cost, the sources and types of resources and investments needed, and where resources and investments should be targeted while assessing and managing risks. To date, VHA has not communicated to Congress or other external stakeholders a plan for modernizing its claims processing system that clearly addresses the components of a sound plan identified above. In particular, VHA has not communicated (1) a detailed schedule for developing and implementing each aspect of its new claims processing system; (2) the estimated costs for developing and implementing each aspect of the system; and (3) performance goals, measures, and interim milestones that VHA will use to evaluate progress, hold staff accountable for achieving desired results, and report to stakeholders the agency’s progress in modernizing its claims processing system. The communication of such a plan is also consistent with federal internal control standards for information and communication, which call for agencies to internally and externally communicate the necessary quality information to achieve the entity’s objectives. That VHA has not yet communicated a detailed plan but has stated that it expects to deploy a modernized claims processing system as early as fiscal year 2018 is cause for concern, especially given VA’s past failed attempts to modernize key information technology systems. 
Our prior work has shown that VHA’s past attempts to achieve goals of a similar magnitude—such as modernizing its systems for (1) scheduling outpatient appointments in VA medical facilities, (2) financial management, and (3) inventory and asset management—have been derailed by weaknesses in project management, a lack of effective oversight, and the failure of pilot systems to support agency operations. For example, we found: VA undertook an initiative in 2000 to replace the outpatient scheduling system but terminated the project after spending $127 million over 9 years. VA has been trying for many years to modernize or replace its financial management and inventory and asset management systems but has faced hurdles in carrying out these plans. In 2010, VA canceled a broad information technology improvement effort that would have improved both of these systems and at the time was estimated to cost between $300 million and $400 million. By September 2, 2009 (just before the project’s cancellation) VA had already spent almost $91 million of the $300 million to $400 million that was originally estimated. A previous initiative to modernize these systems was underway between 1998 and 2004, but after reportedly having spent more than $249 million on development of the replacement system, VA discontinued the project because the pilot system failed to support VHA’s operations. According to VHA officials, instead of investing in administrative systems such as the claims processing system, outpatient scheduling system, financial management systems, or the inventory and asset management system, VA has prioritized investments in information technology enhancements that more directly relate to patient care. As such, VHA officials said they have had little success in gaining approval and funding for information technology improvements for these administrative systems. 
VHA’s average claims processing timeliness in fiscal year 2015 was significantly lower than Medicare’s and TRICARE’s timeliness and far below its own standard of paying 90 percent of claims within 30 days. If this situation persists, it could have very real consequences for veterans’ access to care. Presently, VHA (through its TPAs) is attempting to improve veterans’ access to care by establishing a robust network of community health providers under its VA care in the community programs. However, absent a responsive provider customer service component and timely payment of claims, many community providers may opt not to participate in VA’s network, thereby narrowing the choices veterans have for seeking care from community providers. In turn, this could lead to longer wait times for veterans to receive care and a greater reliance on VA medical centers, some of which have already experienced long wait times for veterans seeking care. Moreover, millions of dollars in interest penalties resulting from the late payment of claims by VHA could dilute the funding available for the direct delivery of care to veterans. To its credit, VHA has implemented several short-term initiatives intended to address ongoing challenges and improve its timeliness in paying community providers. These initiatives include increasing the number of staff processing claims, purchasing new scanning equipment, holding claims processing staff accountable through new productivity standards, and developing a tracking system to monitor claims processing performance. By VHA’s own admission, however, these short-term initiatives will not resolve all challenges that have long impeded its claims processing timeliness, and many of these initiatives are not sustainable over the longer term. VHA plans to address the remaining challenges through its longer-term effort to implement a consolidated VA care in the community program in fiscal year 2018 or later.
VHA’s sweeping changes are likely to be costly, and achieving the goals of this initiative will require careful planning, effective project management, and communication with multiple stakeholders. As we have reported in prior work, VHA’s plans to achieve goals of a similar magnitude—such as the modernization of its systems for outpatient appointment scheduling, financial management, and inventory and asset management—have been derailed by weaknesses in project management and a lack of effective oversight. Therefore, if VHA’s current initiative is to be successful, it is essential that VHA develop a sound implementation plan and an effective project management strategy as it proceeds. Otherwise, the agency risks spending valuable resources on new systems and processes that may not significantly improve VHA’s claims processing timeliness. As part of its implementation plan, it is critical that VHA identify implementation steps and develop the ability to measure and externally communicate its progress to the Congress and other stakeholders. It is also important that VHA be held accountable for achieving major components of the initiative and adhering to its timeline, as stated in its 2015 plan for consolidating VA care in the community programs.
To help provide reasonable assurance that VHA achieves its long-term goal of modernizing its claims processing system, the Secretary of Veterans Affairs should direct the Under Secretary for Health to ensure that the agency develops a sound written plan that includes the following elements: a detailed schedule for when VHA intends to complete development and implementation of each major aspect of its new claims processing system; the estimated costs for implementing each major aspect of the system; and the performance goals, measures, and interim milestones that VHA will use to evaluate progress, hold staff accountable for achieving desired results, and report to stakeholders the agency’s progress in modernizing its claims processing system. We provided a draft of this report to VA, HHS, and DOD for comment. VA provided written comments on the draft report, and we have reprinted these comments in Appendix V. In its comments, VA concurred with our recommendation and said that VHA plans to address it when the agency develops an implementation strategy for the future consolidation of its VA care in the community programs. VA also provided technical comments, which we have incorporated as appropriate. HHS had no general comments on the draft report but provided technical comments, which we have addressed as appropriate. DOD had no general or technical comments on the draft report. We are sending copies of this report to the Secretary of Veterans Affairs, the Secretary of Health and Human Services, the Secretary of Defense, appropriate congressional committees, and other interested parties. This report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact Randall B. Williamson at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix VI.

Appendix I: Characteristics of Selected Care in the Community Programs of the Department of Veterans Affairs (VA)

Individually authorized care
VHA was first authorized to grant veterans individual authorizations to receive care in the community in 1945, and the current statutory authority was codified in 1986. This is the primary means by which VHA has traditionally purchased care from community providers.

Patient-Centered Community Care (PC3)
VA created PC3 in 2013 under existing statutory authority, and fully implemented the program in April 2014. Under PC3, two third party administrators (TPAs) developed regional networks of community providers to deliver care to veterans.

Millennium Act emergency care
VHA may also pay for emergency care for a condition not related to a veteran’s service-connected disability. This program was established by the Veterans Millennium Health Care and Benefits Act in 1999 and is commonly referred to as Millennium Act emergency care.

Veterans Choice Program
The Veterans Choice Program was created by the Veterans Access, Choice, and Accountability Act of 2014. VHA introduced the program in November 2014 and expanded it in April 2015 and December 2015. To administer this program, VHA modified its contracts with the two TPAs it selected to administer the PC3 program. These contractors are responsible for enrolling community providers in their networks or establishing Choice Provider Agreements with the providers.

Individually authorized care
A veteran may be individually authorized to receive care when they cannot access a particular specialty care service from a VA medical center (because the service is not offered), they would have to wait too long for an appointment, or they would have to travel a long distance to a VA medical center.

Patient-Centered Community Care (PC3)
The criteria for a veteran to be eligible to access the PC3 program are the same as those for individually authorized care.
Veterans Choice Program
A veteran is eligible for this program when they: would have to travel by air, boat, or ferry to the VA medical center closest to their home; or face unusual or excessive burden (such as geographic challenges) in traveling to a VA medical center; or have specific health care needs that warrant participation (including the nature and frequency of care); or live in a state or territory without a full-service VA medical center.

Emergency care
A veteran may access emergency care when a prudent layperson would have regarded the condition as an emergency, and would have deemed it unreasonable for the veteran to access the care at a VA or other federal facility. In addition to meeting the above criteria, a veteran may access Millennium Act emergency care (for a condition not related to a service-connected disability) if services were rendered before they were stable for transfer to a VA or other federal facility, and when the veteran: was enrolled in the VA health care system, accessed care from a VA clinician in the 24 months preceding the emergency care, is financially liable to the community provider, has no entitlement under another health plan contract (such as Medicare), and has no recourse against a third party that would wholly extinguish liability to the community provider.

Individually authorized care
Providers have 6 years after the date of service to submit a claim to VHA. For emergency care related to a veteran’s service-connected disability, providers must submit claims within 2 years of the date of service. For care unrelated to a veteran’s service-connected disability, providers must submit claims within 90 days of the date of service.

Patient-Centered Community Care (PC3)
Providers must submit claims electronically within 180 business days of the end of the episode of care.

Veterans Choice Program
Providers must submit claims electronically within 180 business days of the end of the episode of care. Rates are negotiated between community providers and VA’s TPAs. These are reportedly a negotiated percentage of local Medicare rates.
Individually authorized care
For individually authorized outpatient care: the authorization will indicate whether documentation is required. For individually authorized inpatient care: at a minimum, providers must submit the discharge summary to VA.

Emergency care
Yes. Community providers must submit medical documentation so that VA clinicians can determine whether the care was related to the veteran’s service-connected disability and whether the condition for which the veteran sought treatment meets the prudent layperson standard of an emergency.

Patient-Centered Community Care (PC3)
Yes. Under their contracts with VA, the TPAs must collect medical documentation from community providers and return it to VA in a timely manner.

Veterans Choice Program
No.

The prudent layperson standard of an emergency would be met if there was an emergency medical condition manifesting itself by acute symptoms of sufficient severity (including severe pain) that a prudent layperson who possesses an average knowledge of health and medicine could reasonably expect the absence of immediate medical attention to result in placing the health of the individual in serious jeopardy, serious impairment to bodily functions, or serious dysfunction of any bodily organ or part. See 38 C.F.R. § 17.1002(b). The prudent layperson standard emphasizes the patient’s presenting symptoms, rather than the final diagnosis, when determining whether to pay emergency medical claims.

Service-connected emergency care
Claim must be filed within 2 years of the date services were rendered.

Nonservice-connected emergency care (Millennium Act care)
Claim must be filed within 90 days of the date services were rendered.
Condition meets the prudent layperson standard of an emergency
A VA or other federal medical facility was not feasibly available to provide the needed care, and an attempt to use either would not have been considered reasonable by a prudent layperson
The services were rendered before the veteran was stable enough for transfer to a VA or other federal medical facility and before the VA or other federal medical facility agreed to accept the transfer
Veteran was enrolled in the VA health care system
Veteran had received care from a VA clinician in the 24 months preceding the emergency care episode
Veteran is financially liable to the community provider of the emergency care
Veteran has no entitlement under another health plan contract (such as Medicare or a private health insurance plan)
Veteran has no recourse against a third party that would wholly extinguish his or her liability to the community provider (e.g., motor vehicle insurance or workers’ compensation)

Appendix III: Veterans Health Administration’s (VHA) Steps for Processing Claims for the Department of Veterans Affairs’ (VA) Care in the Community Services as of March 2016

VHA has numerous programs through which it purchases VA care in the community services, and these programs have varying rules governing payment rates and requirements for claims processing. The primary means by which VHA has traditionally purchased care from community providers is through individual authorizations. When a veteran cannot access a particular specialty care service from a VA medical facility, the veteran’s VA clinician may request an individual authorization for the veteran to obtain the service from a community provider.
In addition, when care in the community is not preauthorized, VA may purchase two different types of emergency care from community providers: (1) emergency care for a condition that was related to a veteran’s service-connected disability and (2) emergency care for a condition not related to a veteran’s service-connected disability. The latter care is commonly referred to as Millennium Act emergency care. See Veterans Millennium Health Care and Benefits Act, Pub. L. No. 106-117, 113 Stat. 1545 (1999) (codified, as amended, at 38 U.S.C. § 1725) for emergency care not related to a service-connected disability. See 38 U.S.C. § 1728 for emergency care related to a service-connected disability. Medical documentation may not be in the Fee Basis Claims System (FBCS) because either (1) the community provider has not yet submitted the documentation or (2) staff at the VHA claims processing location have not yet scanned it into FBCS. According to VHA officials, if a claim has been submitted electronically and a community provider does not submit medical documentation within 45 days of the claim being suspended, FBCS will automatically reject the claim. In these cases, the community provider must resubmit both the claim and medical documentation. Examples of relevant administrative and clinical criteria include whether the claim met VA’s timely filing requirement, whether the veteran has other insurance or legal recourse against a third party, and whether services were rendered beyond the point at which the veteran was stable enough to be transferred to a VA or other federal facility. VHA staff use FBCS to generate notifications for the community provider and veteran about whether the claim was paid or rejected. These notifications are called preliminary fee remittance advice reports and include a listing of claim dates and services, the reasons why payments for any services were rejected, and the payment amounts for approved services.
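The 45-day suspension rule VHA officials described is essentially a small piece of date arithmetic. The sketch below illustrates it (the function and status names are ours, not FBCS's; this is an illustration of the described rule, not actual FBCS logic):

```python
from datetime import date, timedelta

# Hypothetical sketch of the 45-day rule VHA officials described: an
# electronically submitted claim suspended for missing medical
# documentation is automatically rejected if the documentation does not
# arrive within 45 days of the suspension. Names are illustrative.

DOC_DEADLINE = timedelta(days=45)

def claim_status(suspended_on, doc_received_on, today):
    """Return the claim's status under the 45-day documentation rule."""
    if doc_received_on is not None and doc_received_on - suspended_on <= DOC_DEADLINE:
        return "ready for processing"
    if today - suspended_on > DOC_DEADLINE:
        # Per the rule described, the provider must now resubmit
        # both the claim and the medical documentation.
        return "rejected"
    return "suspended"

# Example: claim suspended July 1; no documentation by August 20 (day 50).
print(claim_status(date(2015, 7, 1), None, date(2015, 8, 20)))  # prints "rejected"
```

A claim checked on day 19 of the suspension would still report "suspended", giving the provider time to submit documentation before the automatic rejection takes effect.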
TRICARE claims are processed by one contractor subcontracted by the three managed care support contractors (MCSCs). According to officials from this contractor, it employs about 650 staff in 3 locations who are responsible for processing claims from the 3 MCSCs.

VHA: Yes. A VHA directive states that 90 percent of all claims must be processed within 30 days of receipt. VHA monitors the following key performance metrics: the percentage of all claims that have been processed—either paid, rejected, or denied—in 30 days or less; the percentage of claims awaiting processing that were received less than 30 days ago; the percentage of claims for individually authorized VA care in the community that were processed in 30 days or less; and the percentage of claims for other-than-individually authorized care that were processed in 45 days or less.

Medicare: Yes. By law, 95 percent of clean claims must be processed (either paid or denied) within 30 days of receipt. In accordance with statute, the Centers for Medicare & Medicaid Services’ (CMS) manual for processing Medicare claims states that the remaining claims must be processed within 45 days of receipt. CMS monitors two key performance metrics: (a) the percentage of clean claims processed within 30 days of receipt, and (b) the percentage of other-than-clean claims processed within 45 days of receipt. According to CMS officials, the Medicare administrative contractors (MACs) submit monthly reports to CMS, which include data on claims processing timeliness. These reports are generated by a module within CMS’s claims processing software and can be independently verified by CMS.

TRICARE: Yes. The Defense Health Agency (DHA) monitors two key performance metrics, including the percentage of claims that initially lacked sufficient information to be processed that were processed within 90 days of receipt. According to DHA officials, MCSCs submit monthly reports to DHA that include data on the subcontractor’s timeliness of claims processing. These data can be independently verified by DHA.
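The 30-, 45-, and 90-day measures described above all share the same arithmetic: the share of processed claims whose receipt-to-completion interval falls within a threshold. A minimal sketch, assuming a hypothetical representation of claims as (received, processed) date pairs rather than any agency's actual system:

```python
from datetime import date

def pct_processed_within(claims, days):
    """Percentage of processed claims completed within `days` of receipt.

    `claims` is a list of (received, processed) date pairs; unprocessed
    claims (processed is None) are excluded from the denominator.
    """
    done = [(received, processed) for received, processed in claims
            if processed is not None]
    if not done:
        return 0.0
    timely = sum(1 for received, processed in done
                 if (processed - received).days <= days)
    return 100.0 * timely / len(done)

# Three hypothetical claims: two meet a 30-day standard, one does not.
claims = [
    (date(2015, 10, 1), date(2015, 10, 20)),
    (date(2015, 10, 1), date(2015, 11, 25)),
    (date(2015, 10, 5), date(2015, 11, 2)),
]
print(round(pct_processed_within(claims, 30), 1))  # 66.7
```

Note that measures of this kind understate delays if the receipt date itself is recorded late, which is the scanning problem GAO identified in VHA's data.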
VHA: According to Chief Business Office for Purchased Care officials, they have a real-time data tracking system that allows them to monitor claims processing productivity and other aspects of claims processing performance at a national level, for individual claims processing locations, and for individual claims processing staff members. Officials from VHA’s Chief Business Office for Purchased Care told GAO that they introduced nationwide staff productivity standards on October 1, 2015.

Medicare: CMS officials said that they previously included financial incentives in the MACs’ contracts to encourage the MACs to meet requirements for claims processing timeliness. After all MACs demonstrated that they were easily meeting these requirements, these financial incentives were removed from the MACs’ contracts and replaced by financial incentives to meet other requirements. Officials from one MAC GAO visited reported that they have productivity standards in place for their claims processing staff.

TRICARE: DHA officials said that the MCSCs are penalized $1 for every claim that the subcontractor does not process in a timely manner. The subcontractor that processes TRICARE claims reported that it has productivity standards in place for its claims processing staff.

VHA: Yes, and less than half of claims are submitted electronically. Officials from VHA’s Chief Business Office for Purchased Care reported that community providers submit about 40 percent of claims electronically.

Medicare: Yes, and nearly all claims are submitted electronically. CMS officials said that as of fiscal year 2014, more than 99 percent of institutional providers and more than 98 percent of practitioners and suppliers submitted claims electronically.

TRICARE: Yes, and nearly all claims are submitted electronically. DHA officials estimate that between 91 and 95 percent of claims are submitted electronically.

VHA: Yes; according to VA policy, providers are required to submit medical documentation for some specific types of claims.
Individually authorized outpatient care: the authorization will indicate whether documentation is required. Individually authorized inpatient care: at a minimum, providers must submit the discharge summary to VA. Emergency care: medical documentation must be submitted so that the claim can be clinically reviewed to determine whether it meets eligibility criteria for payment. Patient-Centered Community Care (PC3): as a condition of their contracts, VA’s third party administrators are required to submit medical documentation for all claims.

Medicare: No. According to CMS officials, providers only submit medical documentation when requested to do so by a MAC, which would only request the documents if claims were flagged for prepayment review (e.g., a clinical review to determine the medical necessity of the services).

TRICARE: No. According to DHA officials, providers are not required to submit medical documentation in order for claims to be processed, unless a claim is flagged for a prepayment review (such as a claim for an experimental treatment).

Whether medical documentation can be received electronically:

VHA: No. According to VA officials, the agency does not have the capacity to receive medical documentation electronically from community providers.

Medicare: Yes. According to CMS officials, when medical documentation is requested, the MACs can receive the information electronically via a Web-based portal.

TRICARE: Yes. According to DHA officials, when medical documentation is requested, the TRICARE contractor can receive the information electronically via a Web-based portal.

Veterans Health Administration, Timeliness Standards for Processing Non-VA Provider Claims, VHA Directive 2010-005 (Washington, D.C.: Jan. 27, 2010). TRICARE MCSCs are subject to claims processing timeliness requirements outlined in law and in the TRICARE Operations Manual.
The requirements listed in the Operations Manual are more stringent than in the law, which states that 95 percent of clean claims must be processed within 30 days of submission to the claims processor and that all clean claims must be processed within 100 days of submission to the claims processor. 10 U.S.C. § 1095c(a). In addition to the contact named above, Marcia A. Mann, Assistant Director; Elizabeth Conklin; Christine Davis; Krister Friday; Jacquelyn Hamilton; Alexis C. MacDonald; and Vikki Porter were major contributors to this report.

Veterans’ Health Care: Preliminary Observations on VHA’s Claims Processing Delays and Efforts to Improve the Timeliness of Payments to Community Providers. GAO-16-380T. Washington, D.C.: Feb. 11, 2016.

VA’s Health Care Budget: Preliminary Observations on Efforts to Improve Tracking of Obligations and Projected Utilization. GAO-16-374T. Washington, D.C.: Feb. 10, 2016.

VA Health Care: Actions Needed to Improve Monitoring and Oversight of Non-VA and Contract Care. GAO-15-654T. Washington, D.C.: June 1, 2015.

VA Health Care: Further Action Needed to Address Weaknesses in Management and Oversight of Non-VA Medical Care. GAO-14-696T. Washington, D.C.: June 18, 2014.

VA Health Care: Actions Needed to Improve Administration and Oversight of Veterans’ Millennium Act Emergency Care Benefit. GAO-14-175. Washington, D.C.: March 6, 2014.

VA Health Care: Management and Oversight of Fee Basis Care Need Improvement. GAO-13-441. Washington, D.C.: May 31, 2013.

Due to recent increases in utilization of VA care in the community, VHA has had difficulty processing claims in a timely manner. Congress included a provision in law for GAO to review VHA’s payment timeliness and to compare it to that of Medicare and TRICARE.
This report examines, among other objectives, (1) VHA's, Medicare's, and TRICARE's claims processing timeliness; (2) factors that have impeded VHA's claims processing timeliness and community providers' experiences; and (3) VHA's recent actions and plans to improve its claims processing timeliness. GAO obtained fiscal year 2015 data on VHA's, Medicare's, and TRICARE's claims processing timeliness. GAO also visited 4 of 95 VHA claims processing locations (selected based on variation in geographic location, performance, and workload); reviewed VHA documents and 156 claims from the 4 locations; and interviewed officials from VHA, Medicare, TRICARE, and selected community providers and state hospital associations. Results from GAO's analysis cannot be generalized to all VHA claims processing locations or community providers. To help ensure that veterans are provided timely and accessible health care services, the Veterans Health Administration (VHA) of the Department of Veterans Affairs (VA) has purchased care from non-VA community providers through its care in the community programs since as early as 1945. VHA's agency-wide data show that in fiscal year 2015, it processed about 66 percent of claims within the agency's required time frame of 30 days or less, whereas data from Medicare and TRICARE (the Department of Defense's health care system) show that their contractors processed about 99 percent of claims within 30 days or less. However, VHA's data likely overstate its performance because they do not account for delays in scanning paper claims, which officials say account for approximately 60 percent of claims. GAO's analysis of 156 claims from four VHA claims processing locations indicated that it took an average of 2 weeks for VHA staff to scan paper claims into VHA's claims processing system, and GAO observed multiple bins of paper claims that had been awaiting scanning at one site for over a month. 
In a 2014 report, GAO recommended that VHA take action to ensure that all of its claims processing locations comply with its policy of scanning claims into VHA's claims processing system upon receipt. While VHA agreed with this recommendation and attempted to reiterate the policy through various means, GAO's more recent findings suggest that VHA did not monitor the operational effectiveness of these corrective actions. VHA officials said that they have since begun requiring managers at their claims processing locations to periodically certify in writing that all incoming paper claims have been date-stamped and scanned on the day of receipt. VHA officials and claims processing staff from the four locations GAO visited indicated that technology limitations, manual processes, and staffing shortages have delayed VHA's claims processing. For example, VHA's claims processing system lacks the capacity to automatically adjudicate claims. VHA staff instead must rely on manual processes, which they say delay payments to community providers. In addition, community providers and state hospital association respondents who participated in GAO's review said they had experienced various issues with VHA's claims processing system. For example, almost all providers described the administrative burden of submitting claims and related medical documentation to VHA and a lack of responsiveness from VHA's claims processing locations when the providers contacted them to follow up on claims. While VHA has recently implemented interim measures to address challenges that have delayed claims processing—such as eliminating certain medical documentation requirements and filling staff vacancies—the agency does not expect to deploy solutions to address all challenges until fiscal year 2018 or later. 
VHA is currently examining options for modernizing its claims processing system but has not yet communicated to Congress or other external stakeholders a sound plan that clearly addresses the components identified in past GAO work (such as a detailed schedule, estimated costs, and measures of progress). This is concerning, given VA's past failed attempts to modernize key information technology systems. While the agency expects to significantly increase its reliance on community providers to deliver care to veterans in the future, it risks losing their cooperation if it does not improve its payment timeliness. GAO recommends that VA develop a written plan for modernizing its claims processing system that includes a detailed schedule, costs, and performance measures. VA concurred with this recommendation and plans to address it through the planned consolidation of its VA care in the community programs.
The Farmers Home Administration (FmHA), a lending agency within the U.S. Department of Agriculture (USDA), provides assistance to financially troubled farmers through direct government-funded loans and guarantees on loans made by other agricultural lenders. Until the early 1970s, FmHA provided direct loans only. The Rural Development Act of 1972 (P.L. 92-419, Aug. 30, 1972) provided FmHA with discretionary authority to guarantee farm loans made by other agricultural lenders, such as commercial banks and the Farm Credit System. In guaranteeing a farm loan, FmHA agrees, in the event that a borrower defaults, to reimburse a commercial lender for up to 90 percent of lost principal plus accrued interest and liquidation costs. American farmers have a hierarchy of credit available. Farmers who need to borrow funds to finance their operations or purchase farm property have three basic sources of credit. First, farmers in the best financial position can obtain credit from lenders such as commercial banks, the Farm Credit System, life insurance companies, or individuals. Second, if farmers’ security or ability to meet repayment terms is somewhat marginal, they can obtain credit from commercial lenders through FmHA’s guaranteed farm loan program. Third, if farmers are unable to obtain financing elsewhere, they can obtain a direct loan from FmHA. Table 1.1 shows that FmHA was responsible for about 12.5 percent of the total farm debt on December 31, 1992—guaranteed loans (3.4 percent) plus direct loans (9.1 percent). Data from December 31, 1992, were the latest available. FmHA’s mission is to be a temporary lender of last resort. For farmers who are unable to obtain credit elsewhere, FmHA can provide financing through either a direct or a guaranteed loan. To be eligible for a direct loan, a borrower must be unable to obtain commercial credit at reasonable rates and terms. 
To obtain a guaranteed loan, a lender must certify that it is unwilling to make the loan without a government-backed guarantee. Direct loans are made at lower interest rates and for longer repayment periods than guaranteed loans. When direct loan borrowers demonstrate financial progress, they are to graduate to commercial credit. If properly implemented, this process enforces FmHA’s mission to supply temporary credit and makes direct loan funds available for other high-risk farmers needing financial assistance. Although FmHA has traditionally provided more direct loans than guaranteed loans, it began to use more guaranteed loans in the mid-1980s. The Congress has since supported this changed emphasis with increased authorizations for guaranteed loans. FmHA provides loan services through a highly decentralized organization consisting of a national program office in Washington, D.C.; a finance office in St. Louis, Missouri; and a field office structure comprising 47 state offices, about 250 district offices, and about 1,700 county offices located throughout the nation. FmHA’s county supervisors, who manage the county offices, have extensive responsibility and authority for administering the agency’s farm loan programs, including approving and servicing loans. FmHA’s district directors are to provide guidance and supervision to county supervisors within designated geographic areas in the making and servicing of farm loans, and state directors are to administer and oversee operations within one or more states. Also, district and state directors have approval authority for certain loans. During 1993, the Secretary of Agriculture proposed to the Congress a plan to restructure USDA. In early October 1994, the Congress approved a restructuring plan for USDA. This action could change the way that farm loans are administered by the Department. FmHA provides direct and guaranteed loans for both farm operating and farm ownership purposes. 
Farm operating loans are authorized for purposes such as buying equipment items, livestock, and poultry; paying annual operating and/or family living expenses; and refinancing debts. Direct operating loans may not exceed $200,000, including any outstanding principal on other direct farm operating loans. Guaranteed operating loans may not exceed $400,000 in total outstanding loan principal. When a farm operating loan is made, collateral must be provided as security. Farm ownership loans, whether direct or guaranteed, are authorized for purposes such as buying real estate, refinancing existing debt, and making improvements to the farm. Direct and guaranteed farm ownership loans may not exceed $200,000 and $300,000, respectively, including any outstanding principal on other farm ownership loans, soil and water loans, and recreation loans. When a farm ownership loan is made, real estate or a combination of real estate and chattel property must be provided as security. Terms for repaying FmHA’s loans vary according to the loan’s type, the loan’s purpose, and the nature of the security. The payment period for farm operating loans may range from 1 to 7 years, while the payment period for farm ownership loans can be as long as 40 years. FmHA also makes other types of direct farm loans not evaluated in this report, such as emergency disaster loans that are made to farmers whose operations have been substantially damaged by adverse weather or by other natural disasters. These loans are intended to assist farmers in covering actual losses incurred so that they can return to normal farming operations. In the 1980s, FmHA began using more guaranteed loans and fewer direct loans in order to encourage farm lending by commercial lenders, reduce budget outlays on direct loans, and devote more effort to servicing its growing number of direct loans and increasingly delinquent direct accounts. Under the Food Security Act of 1985 (P.L. 99-198, Dec. 
23, 1985)—referred to as the 1985 Farm Bill—and again in the Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508, Nov. 5, 1990), the Congress supported this shift in emphasis by decreasing authorizations for direct loans and increasing authorizations for guaranteed loans. In each year since fiscal year 1987, FmHA’s new guaranteed loans have exceeded new direct loans. (See fig. 1.1.) However, the Agricultural Credit Improvement Act of 1992 (P.L. 102-554, Oct. 28, 1992) could change some of this emphasis back to direct loans. Under the act, FmHA must transfer 75 percent of its unobligated guaranteed operating loan authority at the end of the third quarter of a fiscal year to a new agency program that uses direct ownership loans to fund beginning farmers. In fiscal year 1993, FmHA transferred about $650 million under this authority but obligated very little of these transferred funds. During fiscal year 1992, FmHA guaranteed almost $1.6 billion on slightly less than 14,000 farm ownership and operating loans. On the basis of our random sample of these loans, we estimate that 91 percent of the loans went to borrowers who already had farm loans (whether commercial or FmHA credit) when they obtained an FmHA guaranteed loan and that 9 percent went to first-time farm loan borrowers. In addition, as shown in table 1.2, about 68 percent of the loans went to borrowers who had more than 10 years’ farm experience, 64 percent went to feed grain producers, 69 percent went to borrowers who had sales of between $100,000 and $500,000 annually, and the loans were made to borrowers whose farms averaged over 800 acres. Furthermore, we estimate that about 54 percent of the loan funds were used for paying operating expenses, various purchases, or other expenses. Another 6 percent was used for farm real estate purchases, as shown in table 1.3, and the remaining 40 percent was used for refinancing existing debt. Also, commercial banks provided the majority of the loans. 
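The statutory ceilings described earlier in this chapter apply to total outstanding principal in a loan category, not just to the new loan being made. A minimal sketch of such a limit check follows; the helper and table layout are hypothetical, but the dollar caps are the ones stated in the text:

```python
# Ceilings on total outstanding principal, per the program rules
# described in this chapter (table structure is for illustration only).
LIMITS = {
    ("direct", "operating"): 200_000,
    ("guaranteed", "operating"): 400_000,
    ("direct", "ownership"): 200_000,
    ("guaranteed", "ownership"): 300_000,
}

def within_limit(kind, purpose, new_loan, outstanding_principal):
    """True if the new loan plus existing outstanding principal in the
    same category stays at or under the statutory cap."""
    return new_loan + outstanding_principal <= LIMITS[(kind, purpose)]

# A $150,000 direct operating loan to a borrower already carrying
# $75,000 in direct operating debt exceeds the $200,000 cap.
print(within_limit("direct", "operating", 150_000, 75_000))      # False
print(within_limit("guaranteed", "operating", 150_000, 75_000))  # True
```

Counting existing principal against the cap is what keeps a borrower from stacking multiple loans of the same type past the statutory ceiling.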
Our work was part of a special GAO governmentwide audit effort to help ensure that areas potentially vulnerable to fraud, waste, mismanagement, and abuse are identified and that appropriate corrective actions are taken. Concerned about FmHA’s high losses in its direct loan program and the potential for similar losses in its guaranteed loan program, we reviewed the guaranteed loan program to determine (1) the extent of losses under the guaranteed farm loan program compared with those under the direct loan program, (2) the extent to which the guaranteed farm loan program has graduated FmHA’s direct loan borrowers to commercial credit, and (3) ways to make the guaranteed farm loan program more of a source for funding direct loan borrowers. In addressing these objectives, we conducted work at 6 FmHA state offices, 12 FmHA county offices, FmHA’s St. Louis Finance Office, and FmHA headquarters. Figure 1.2 shows the location of the state and county offices that we reviewed. Additionally, we reviewed and analyzed our reports issued since the 1985 Farm Bill was passed, reports issued by USDA’s Office of Inspector General since fiscal year 1988, the results of FmHA’s internal control reviews, and the annual reports from the Secretary of Agriculture to the President required by the Federal Managers’ Financial Integrity Act of 1982 (P.L. 97-255, Sept. 8, 1982). Appendix I provides more detail on our scope and methodology. To obtain information on the characteristics of FmHA’s guaranteed loan borrowers and the planned use of loan funds, we sent two questionnaires—one on farm ownership loans and another on farm operating loans—to county office officials requesting information about a randomly selected sample of loans that were made to borrowers who obtained loans in fiscal year 1992. Appendix II discusses our survey methodology and contains our estimates and sampling errors. Appendixes III and IV contain copies of the questionnaires used. 
Additionally, to evaluate the quality of the guaranteed loan portfolio, we sent another questionnaire to county office officials requesting information about the payment record of a randomly selected sample of borrowers who had outstanding loans as of June 30, 1993. Appendix V discusses our survey methodology for this aspect of our work and contains our estimates and sampling errors, and appendix VI contains a copy of the questionnaire. To determine whether the guaranteed loan program is a viable funding source for more of FmHA’s direct loan borrowers, we conducted a structured interview with 53 commercial lenders in eight states—34 of these lenders had outstanding guaranteed loans, and 19 did not. Also, we interviewed representatives of the American Bankers Association and the Independent Bankers Association of America. We started our work in February 1993 and used September 30, 1993, as a cut-off date for most of the financial information about FmHA’s farm loan portfolio. This date allowed us to have relatively recent and comparable data on the financial status of FmHA’s direct and guaranteed farm loan portfolios. In addition, we conducted detailed field work through October 1993, updating selected information through July 1994. We performed our work in accordance with generally accepted government auditing standards. FmHA’s written comments on the results of our work appear in appendix VIII. FmHA’s guaranteed loan program has been more successful than the direct loan program from a financial standpoint. From 1976 through 1993, FmHA guaranteed $12 billion of lenders’ loans and made $55.6 billion in direct loans. Overall losses—actual losses through 1993 plus estimates of future losses—on FmHA’s guaranteed loans are expected to be about 9 percent compared with direct loan losses of about 40 percent. 
A key reason for the differences in losses is that guaranteed loan borrowers are lower credit risks than direct loan borrowers are; that is, to obtain a direct loan, a borrower must show that a commercial lender would not make the loan at reasonable interest rates. Another contributing factor is that a greater proportion of the direct loans was made just prior to the farm financial crisis in the mid-1980s, when farm lenders experienced higher-than-normal loss rates. Although more successful than the direct loan program, the guaranteed loan program is experiencing programmatic problems that contribute to increased financial risk to the government. Specifically, FmHA allows guaranteed or direct loan borrowers who have defaulted on previous loans to obtain new guaranteed loans. Also, FmHA’s internal control reviews have reported that field office officials have not always followed the agency’s standards for servicing guaranteed loans. Borrowers who receive FmHA’s guaranteed loans are more creditworthy than FmHA’s direct loan borrowers. As a result, FmHA has experienced and estimates it will experience lower losses from guaranteed loans. Also, as of September 30, 1993, about 5 percent of the outstanding guaranteed loan debt was held by delinquent borrowers compared with about 38 percent that was held by direct loan borrowers. FmHA’s actual and estimated losses from guaranteed loans are substantially less than those from its direct loans. From 1976 through 1993, FmHA guaranteed about $12 billion in lenders’ loans—almost 135,000 farm loans to approximately 86,000 borrowers—and expects to incur losses of about $1.1 billion, or 9.2 percent. These losses are much lower than those expected for the direct loan program, which total about $22.3 billion on $56 billion of loans for the same period, or about 40 percent. (See table 2.1.) Guaranteed loan losses would be expected to be less because guaranteed loan borrowers are less of a credit risk than direct loan borrowers are. 
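The loss rates in table 2.1 are computed by dividing losses already incurred plus estimated future losses on outstanding loans by total loans made over the life of each program. A minimal sketch of that arithmetic, using the rounded dollar figures from this chapter (in billions):

```python
def overall_loss_rate(expected_losses, loans_made):
    """Actual losses to date plus estimated future losses,
    expressed as a percentage of total loans made."""
    return 100.0 * expected_losses / loans_made

# Guaranteed program, 1976-1993: ~$12 billion guaranteed, ~$1.1 billion
# in actual plus estimated losses. Direct program: ~$56 billion in
# loans, ~$22.3 billion in actual plus estimated losses.
guaranteed = overall_loss_rate(1.1, 12.0)
direct = overall_loss_rate(22.3, 56.0)
print(round(guaranteed, 1), round(direct))  # 9.2 40
```

FmHA's alternative calculation, discussed later in this chapter, counts only losses already incurred and therefore yields a lower rate (4 percent); including estimated losses on outstanding loans gives the 9.2 percent figure.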
Another contributing factor to the lower guaranteed loan losses is that a greater proportion of the direct loans was made in the late 1970s and early 1980s, just prior to the start of a period when farm lenders, overall, experienced higher-than-normal losses. Prior to 1987, the majority of FmHA’s farm loans were direct loans. However, beginning with 1987 and through 1993, the majority of FmHA’s farm loans were guaranteed loans. Consistent with FmHA’s estimate of future losses, two other measures of future performance each indicate that the outstanding guaranteed loans are less vulnerable to future losses than direct loans. These indicators consist of our assessment of the outstanding guaranteed loans and recent delinquencies. According to our estimates, 13.4 percent of the 1993 guaranteed loan portfolio is at risk: 7.5 percent is held by delinquent borrowers, and 5.9 percent is held by borrowers whose loans have been rescheduled to keep their accounts current. (See table 2.2.) In comparison, as shown in our prior report, we estimated that 70 percent of the direct loans that were outstanding in 1990 were similarly at risk. Another indicator of the extent that guaranteed loans are less risky than direct loans is the difference in delinquencies. FmHA reports show that as of September 30, 1993, delinquent borrowers held 4.8 percent of the outstanding guaranteed loan debt compared with 37.6 percent of the direct loan debt. Despite the fact that the guaranteed farm loan program is in better financial condition than the direct loan program, FmHA has hundreds of millions of dollars in guaranteed loans that are at risk, in part, because some of its policies and practices do not protect the government’s interest. Specifically, FmHA does not prohibit borrowers with poor repayment histories from obtaining new loans. 
Furthermore, FmHA’s field office officials have not always properly implemented loan-servicing standards, which are designed to protect the federal government’s financial interest. FmHA’s loan-making policies do not prohibit borrowers who defaulted on a guaranteed or direct loan from obtaining new guaranteed loans. As we reported in February 1994, 408 borrowers who received new guaranteed loans totaling almost $60 million during fiscal years 1991-93 had cost FmHA $67 million in losses on their previous loans. (See table 2.3.) Although the loans are relatively new—from 1 to 3-years old—16 borrowers, or about 4 percent of the 408, were delinquent on their new loans as of September 30, 1993. For example, one borrower received a guaranteed loan for $80,000 in 1991 after receiving about $317,000 in direct loan debt relief in 1989; by 1993, this borrower was delinquent on the guaranteed loan. Similarly, FmHA guaranteed a $400,000 loan in 1991 for a borrower who had defaulted on an earlier guaranteed loan, thereby causing FmHA to pay a loss claim of $254,000; by 1993, this borrower was delinquent on the new guaranteed loan. In our April 1992 report, we recommended that to strengthen FmHA’s loan-making standards, the Congress amend the Con Act to prohibit loan guarantees for borrowers (1) whose defaulting on previous guaranteed loans caused FmHA to pay commercial lenders’ loan loss claims and (2) whose defaulting on previous direct loans resulted in debt being written off or written down. The Congress has not implemented these recommendations. In recent years, FmHA’s field offices have improved their compliance with the agency’s standards for making guaranteed loans but, through fiscal year 1993, had not improved in complying with the standards for servicing such loans. FmHA requires its field offices to follow specific credit standards in approving guaranteed loans. 
These standards include determining an applicant’s eligibility and repayment ability and the adequacy of collateral. FmHA also requires its field offices to follow specific loan-servicing standards in overseeing the lender’s servicing of loans. This servicing includes (1) inspecting collateral to ensure that the borrower possesses and is maintaining security property, (2) providing the same servicing for FmHA guaranteed loans as for other loans, and (3) ensuring that loan funds are used properly. To evaluate the extent that FmHA’s field offices comply with the agency’s policies, procedures, and standards, FmHA established the Coordinated Assessment Review (CAR) as a part of its internal control review. The CAR consists of examining a random sample of direct and guaranteed loans each year in selected states. Generally, loans made in about 15 states are sampled and reviewed each year so that each state is reviewed every 3 years. FmHA’s target for an acceptable compliance rate is 85 percent—or no more than a 15-percent noncompliance rate. According to the CARs, FmHA’s field offices improved their oversight of lenders’ guaranteed loan-making process. Since our April 1992 report, recent CARs have shown that the field offices had less than a 15-percent noncompliance rate for all standards that put the government at risk. Conversely, through fiscal year 1993, the CARs showed that FmHA’s field offices had not improved their oversight of lenders’ servicing. In our April 1992 report, for example, we reported that in 25 percent of the cases reviewed, field office officials had not, as required, effectively monitored lenders’ compliance with standards for inspecting collateral and for ensuring the proper use of loan funds. The CARs for fiscal year 1993 showed that the field offices continue to have a high rate of noncompliance in several areas. Of the 15 loan-servicing standards, the field offices exceeded a noncompliance rate of 15 percent for 12 of the standards. 
For example, the following three cases relate to potential loss claims and demonstrate the noncompliance areas found: There was a 36.8-percent noncompliance rate for the standard that FmHA officials concur with the lender that a delinquency was beyond a borrower’s control before allowing the lender to reschedule or reamortize a loan. The failure to follow this standard can lead to the increased risk of paying higher loss claims because of accrued interest and deteriorated collateral. There was a 36.2-percent noncompliance rate for the standard that FmHA officials review lenders’ loan files within 90 days of closing a loan. Not following this standard can lead to the increased risk of paying higher loss claims because of errors in the value of collateral and the position of the lien. There was a 21-percent noncompliance rate for the standard that FmHA officials approve cash flow values prior to advances for the 2nd and 3rd years on line-of-credit operating loans. Such deficiencies can lead to the increased risk of paying higher loss claims because of credit advances to borrowers (1) whose operations had changed to the point where the advances were not in accordance with the terms of the loan or (2) whose financial conditions had deteriorated to the point where repayment would be questionable. In our April 1992 report, we recommended that FmHA develop and implement a system that ensures that its field office officials adhere to its standards for making and servicing guaranteed loans. In response, FmHA informed us about various actions it had developed for ensuring compliance, such as monitoring through its internal reviews and using the results of reviews to evaluate lending officials’ performance. However, as discussed above, while FmHA’s compliance with loan-making standards has improved, compliance with loan-servicing standards, through fiscal year 1993, had not. 
Although the guaranteed loan program has incurred much smaller losses than the direct loan program, some of FmHA’s lending policies and practices continue to place the government at a higher-than-necessary financial risk. These risks exist because (1) certain loan-making policies allow FmHA to guarantee loans whose potential for loss is high and (2) FmHA’s field office officials have not always followed the agency’s credit standards for servicing guaranteed loans. This risk could be reduced if, for example, the Congress implemented recommendations that we made in our April 1992 report. In commenting on a draft of this report (see app. VIII), FmHA agreed that a borrower’s past record of debt repayment often reflects a willingness to repay debt. However, FmHA stated that our statistics do not support the position that losses caused by events beyond the control of borrowers should prevent them from receiving additional credit. FmHA also stated that there is no correlation between past failures and a probability of future losses. We share FmHA’s concerns and recognize that there are cases in which borrowers may default for reasons that are beyond their control. Nonetheless, we are concerned that past failures are a strong indicator of not only borrowers’ willingness to repay debt but also the priority they place on repayment—i.e., the forgiveness of debt followed by the making of additional loans sends a signal (1) that could encourage borrowers to default and (2) that default will have little, if any, impact on their ability to obtain additional loans. As part of a renewed emphasis on monitoring lenders, FmHA cited several actions that it has initiated and planned. FmHA added that its emphasis has resulted in improved monitoring, as evidenced by a significant improvement in the rate of compliance with the three key standards for servicing guaranteed loans discussed in this chapter.
We are encouraged by the results of FmHA’s fiscal year 1994 CAR reviews, which were not complete at the time of our review, and hope that the pattern of compliance with the servicing standards continues to follow the path of compliance with the agency’s loan-making standards. Furthermore, FmHA stated that its loss rate on guaranteed loans, which it calculated by comparing the total amount of losses incurred with the total amount of loans made over the life of the program, is 4 percent. We disagree with FmHA’s methodology for making this calculation because it fails to take into account the losses estimated on outstanding loans. A more accurate presentation is to compare total loans made with the total of losses already incurred and those estimated to occur on loans that are outstanding. As shown in table 2.1, this results in a 9.2-percent loss rate. Few direct loan borrowers have moved to guaranteed loans as a step toward graduating to commercial—nongovernment supported—credit. A contributing factor has been the lack of an FmHA policy that would encourage the use of the guaranteed loan program as an interim step in graduating direct loan borrowers to commercial credit. At the direction of the Congress, FmHA initiated action in late 1993 to include moving to guaranteed loans as an interim step in the graduation process. Furthermore, FmHA field office staff often fail to follow through on the required processes for identifying direct loan borrowers with the potential for graduation and graduating those who have shown sufficient financial progress to qualify for commercial credit. As a result, some borrowers may remain in the direct loan program longer than justified, taking advantage of the agency’s subsidized interest rates and long repayment terms. Although FmHA officials and commercial lenders believe that few direct loan borrowers can meet the requirements for a guaranteed loan, FmHA does not know how many can qualify. 
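The two loss-rate methodologies at issue earlier in this chapter can be sketched as follows. The dollar figures are hypothetical placeholders, chosen only so that the two methods reproduce the 4-percent and 9.2-percent rates discussed; they are not the program totals behind table 2.1.

```python
# Sketch of the two loss-rate methodologies. FmHA divides losses already
# incurred by total loans made; the report argues that estimated losses
# on loans still outstanding must also be included in the numerator.

def fmha_loss_rate(losses_incurred, total_loans_made):
    """FmHA's method: incurred losses only, as a percentage of loans made."""
    return 100.0 * losses_incurred / total_loans_made

def gao_loss_rate(losses_incurred, estimated_losses_outstanding,
                  total_loans_made):
    """Report's method: incurred plus estimated losses on outstanding loans."""
    return 100.0 * (losses_incurred + estimated_losses_outstanding) \
        / total_loans_made

# Hypothetical totals (in millions) chosen to reproduce the cited rates;
# including estimated future losses always yields the higher rate.
made, incurred, estimated = 10_000.0, 400.0, 520.0
print(fmha_loss_rate(incurred, made))             # 4.0
print(gao_loss_rate(incurred, estimated, made))   # 9.2
```

The second function differs only by adding estimated losses on outstanding loans to the numerator, which is why it can never produce a lower rate than the first.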
A logical step in graduating borrowers from direct loans to commercial credit would be to promptly replace their direct loans with guaranteed loans when they qualify. However, most direct loan borrowers are not getting guaranteed loans. Furthermore, FmHA has not had a policy to use the guaranteed loan program as a means of encouraging direct loan borrowers to graduate to commercial credit. According to FmHA’s data on borrowers who have outstanding loans and who receive new loan obligations, most direct loan borrowers do not obtain guaranteed loans. As table 3.1 shows, only about 7,300 FmHA direct loan borrowers, or 4 percent of the total number during fiscal years 1991-93, obtained guaranteed loans. These borrowers held direct loans for varying lengths of time—some for more than 20 years. FmHA has not historically used the guaranteed loan program as a stepping stone in helping direct loan borrowers progress to commercial credit. According to its own policies, FmHA, as a temporary source of credit, should graduate a borrower from direct loans to commercial credit at the earliest possible time. Because the requirements for qualifying for commercial credit without a government guarantee are more stringent than those for qualifying with a guarantee, moving from a direct loan to a guaranteed loan is a logical progression for borrowers whose financial condition has improved but not sufficiently to qualify for commercial credit. FmHA has not used the guaranteed program in this way because its criteria for graduation from direct loans to commercial credit have not included any interim steps. FmHA considers a direct loan borrower to graduate from government support when that borrower (1) pays in full, before the expiration of the loan, all farm program loans or all of one type of farm program loan by refinancing with other credit sources and (2) continues farming.
FmHA does not consider graduation to cover a borrower who pays off the debt under normal terms, and the agency specifically excludes borrowers who move from direct to guaranteed loans. However, FmHA recently initiated action to include moving to guaranteed loans as an interim step in the graduation process. In December 1993, 14 months after enactment of the Agricultural Credit Improvement Act of 1992, which required such action, FmHA published a proposed regulation in the Federal Register to incorporate the use of guaranteed loans as an interim step in graduating direct loan borrowers to commercial credit without a guarantee. In late October 1994, FmHA officials told us that the agency anticipates publishing the revised regulations to graduate direct loan borrowers to guaranteed loans in November 1994. FmHA requires that its field offices annually review direct loan borrowers for graduation to commercial credit. However, the field office lending officials often do not adhere to the process. As a result, some borrowers with graduation potential are not identified as likely candidates, and other borrowers who are identified are not required to graduate. In addition, FmHA’s classification of borrowers according to their repayment ability is not reliable. Thus, FmHA does not know how many direct loan borrowers qualify for guaranteed loans. Nonetheless, FmHA officials and commercial lenders believe that few FmHA direct loan borrowers can meet the requirements for a guaranteed loan. FmHA’s primary tool for identifying and graduating qualified direct loan borrowers is its annual graduation review process. This process is intended to target borrowers who have displayed sufficient financial progress to graduate from the direct loan program to commercial credit. Annually, FmHA’s St. Louis Finance Office provides each county office with a list of borrowers who have had outstanding loans for 3 years or more.
County office officials initially review the list and remove borrowers who are clearly unable to graduate, using available knowledge of local lenders’ criteria or other information that the officials may have on borrowers’ financial status. County office officials may also add to the list borrowers whose financial condition has substantially improved since obtaining their loans. Borrowers who are not initially removed or who are added to the list are considered potential candidates for graduation. County office officials are to thoroughly evaluate these borrowers’ financial position by considering their financial strengths, income capabilities, and other characteristics that relate to meeting local lenders’ criteria. FmHA requires that borrowers identified as candidates for commercial credit through this process be requested to graduate or to provide information documenting why they cannot. However, FmHA’s field office officials do not always conduct the reviews to identify which borrowers are potential candidates for graduation. Almost 200, or about 17 percent, of the approximately 1,160 borrowers whom FmHA should have reviewed for graduation potential during fiscal years 1991 and 1992 at the 12 county offices we visited were not reviewed. County office supervisors said they did not review the borrowers because they believed other pressing work was more important, such as servicing delinquent borrowers. In addition, another 310 borrowers, or about 27 percent, at these 12 offices were removed from consideration without any reasons for their removal annotated in the county offices’ records. County office supervisors could not explain why the borrowers were removed from consideration. Of 115 direct loan borrowers identified for graduation to commercial credit at these county offices, the FmHA supervisors did not take the additional steps required to graduate 54 borrowers or to conclude that they could not graduate.
For 32 borrowers, the county office supervisors said they did not try to graduate them because they believed the borrowers could not meet local lenders’ credit standards. For the remaining 22 borrowers, the county office loan files showed that the borrowers had not responded to the county offices’ request to graduate and that the county office supervisors had not taken any further action. If a borrower fails to respond, the county office supervisor may consider the borrower to be in default as provided for in the loan agreement. A county office supervisor may then initiate action to accelerate repayment of the loan or legal action to foreclose on the loan. In taking such actions, the county office supervisor must obtain the concurrence of the FmHA district and state office officials and, if legal action is involved, USDA’s Office of General Counsel. However, some county office supervisors said they did not pursue more forceful action with borrowers who did not provide the requested financial information because they did not believe that higher-level officials would support their efforts. Our review indicates that some of the borrowers who did not graduate to commercial credit had financial circumstances showing that they could have moved from the direct loan program had the county office supervisors followed through as required. For example, a borrower obtained a $28,000, 40-year soil and water loan in 1987 and had paid off only $1,300 by June 1993. According to June 1989 financial information in his FmHA loan file, the borrower had a net worth of over $400,000 and liabilities of about $116,000. The borrower did not comply with the county office’s request for financial information during the 1991 graduation review. In early 1993, the borrower was again asked to provide updated financial information, but no response had been received as of August 1993.
The county office’s supervisor acknowledged that FmHA should have taken further action to force this borrower to graduate. Other examples are described in appendix VII. Another tool, which the Congress has directed FmHA to use in identifying direct loan borrowers for graduation, is FmHA’s loan classification system. The loan classification system is designed to record FmHA’s current judgment of all borrowers’ ability to repay their loans. The objectives of the system are to assess the overall quality of FmHA’s loan portfolio, estimate loan losses to the government, assess the need for any special loan servicing, and improve the management of the loan program. Classifications are to be assigned when loans are made and updated whenever a borrower’s financial condition changes significantly. As shown in table 3.2, borrowers are classified on a 1-to-5 scale, with the highest-quality loans described as “commercial” (category 1) and the lowest-quality loans described as a “loss” (category 5). However, in many cases, FmHA’s county offices did not assign a correct classification, and in other cases they did not keep the classifications current, as required. As of September 30, 1993, FmHA’s records showed that about 27,000, or about 20 percent, of FmHA’s approximately 140,000 direct loan borrowers were classified in the two highest loan categories, indicating that they should be candidates for graduation. Of these, 4,856 were classified as commercial, and 22,331 were classified as standard. In reviewing 171 borrowers who were classified as commercial-quality borrowers at the 12 county offices we reviewed, county office officials told us that 112 borrowers, or about 66 percent, were improperly classified because they had insufficient income or inadequate loan security to meet minimum commercial credit standards. County office supervisors explained that many borrowers were simply categorized incorrectly when originally classified.
They stated that when the system was implemented in 1988, they had only a limited time to classify all borrowers. In their haste to meet the deadline to classify each borrower, county office supervisors relied on personal knowledge in lieu of supporting financial documents. Moreover, they said that in some cases, they did not update the borrowers’ classifications because they do not view the information as useful to them. In accordance with congressional requirements, FmHA is developing a plan to improve its graduation process. In December 1993, 8 months after the date established in the Agricultural Credit Improvement Act of 1992 for implementing such action, FmHA published a proposal in the Federal Register to improve the graduation process and plans to implement it in November 1994. The principal change in the proposed regulations strengthens the process by identifying potential graduation candidates on the basis of their financial condition as recorded in FmHA’s loan classification system. Specifically, borrowers classified in the top two categories—i.e., commercial and standard quality—are to be reviewed each year for graduation. However, FmHA’s proposed plan does not contain any new initiative to ensure that FmHA staff accurately assign and update the loan classifications of their borrowers—an overriding weakness in the existing program. FmHA’s headquarters and field office officials believe that few direct loan borrowers can meet the credit standards required by commercial lenders to qualify for guaranteed loans. For example, all 6 FmHA state officials and 9 of the 12 county office supervisors we interviewed said that many direct loan borrowers will never be able to qualify for guaranteed loans unless there is a major turnaround in their production and finances, which they believed would not occur. 
These officials’ beliefs are based upon perceptions that some direct loan borrowers either (1) do not have sufficient farm management skills or financial education or (2) have farm operations or financial needs that are too small to be of interest to commercial lenders. Furthermore, some lenders in the eight states we reviewed also believed that FmHA’s guaranteed loan program—as currently designed and operated—is not a viable funding source for some direct loan borrowers. These lenders stated that most direct loan borrowers simply cannot qualify for guaranteed loans. On the other hand, they also told us that the program is viable for those individuals who have made progress in overcoming the financial difficulties that led to their becoming direct loan borrowers. Most direct loan borrowers do not receive guaranteed loans even though obtaining such loans would seem to be a natural progression in improving their creditworthiness and ultimately qualifying them for commercial credit without a guarantee. While FmHA officials and lenders contend that few direct loan borrowers can qualify for a guaranteed loan, FmHA cannot verify this because its county offices have often failed to identify and graduate direct loan borrowers who qualify for commercial credit. As a result, some borrowers remain in the direct loan program and receive government assistance from the program longer than justified. Congress’s required changes to the graduation process, directed in 1992 legislation, have the potential to bring improvement when FmHA implements them, as planned for November 1994. Requiring that FmHA’s guaranteed loan program be routinely used as an interim step for direct loan borrowers in their progression to commercial credit without a guarantee and using the loan classification system as the basis for identifying candidates for graduation can bring improvement.
However, given the past failure of FmHA field offices to comply with existing graduation and loan classification requirements, FmHA needs to address county supervisors’ views that graduation is not a high priority and their skepticism about whether superiors will support them in graduating borrowers. To ensure that FmHA effectively implements the congressionally directed plan for using guaranteed loans as an interim step in moving direct loan borrowers to commercial credit without a guarantee, we recommend that the Secretary of Agriculture direct the FmHA Administrator to develop and implement a plan to ensure that county office supervisors (1) assign accurate loan classifications to all new direct loan borrowers, (2) promptly update loan classifications as borrowers’ financial conditions change, and (3) adequately evaluate each direct loan borrower listed annually for graduation potential to identify and graduate those borrowers who qualify for guaranteed loans or commercial credit. In commenting on a draft of this report (see app. VIII), FmHA agreed that it has not emphasized the graduation of direct loan borrowers to commercial credit through the use of the guaranteed loan program. FmHA stated that it will soon implement various changes to its loan programs, some of which are designed to assist borrowers in graduating from direct loans. For example, FmHA plans to issue regulations requiring that borrowers’ loan classifications be updated at least every 2 years and that borrowers who are classified as commercial or standard grade be referred to commercial lenders every 2 years. However, FmHA did not provide specifics on how it plans to ensure that county office officials perform the required reviews of borrowers’ loan classifications and graduation potential or graduate those borrowers who qualify. In the past, county office officials have not fully complied with FmHA’s requirements in these areas. “These are FmHA’s highest quality Farmer Program accounts.
The financial condition of the borrowers is strong enough to enable them to absorb the normal adversities of agricultural production and marketing. There is ample security for all loans, there is sufficient cash flow to meet the expenses of the agricultural enterprise and the financial needs of the family, and to service debts. The account is of such quality that commercial lenders would view the loans as a profitable investment.” (Underscoring added.) Therefore, our point remains unchanged—i.e., according to the county supervisors we spoke with, many borrowers (66 percent of the 171 direct loan borrowers reviewed who were classified as commercial) were misclassified using FmHA’s own definition and were not candidates for graduation because of problems with cash flow, high debt, or a marginal repayment record. Commercial lenders and FmHA officials believe that to get lenders to take on a greater portion of FmHA’s approximately 140,000 direct loan borrowers as their own clients, changes would be required in (1) direct loan provisions to more effectively encourage borrowers to move from such loans and (2) the guaranteed loan program to make it more attractive to lenders. Moving borrowers from direct loans would reduce FmHA’s outstanding direct loan debt and the government’s risk exposure that exists with such loans, allow the agency’s field staff to more effectively administer the direct loan program, and reinforce the agency’s role as a temporary credit source. However, even if the suggested changes are made, many borrowers would still not be able to obtain guaranteed loans because they could not meet commercial lenders’ credit standards. Commercial lenders and FmHA field office lending officials that we interviewed suggested changes to FmHA’s direct loan program that they believe would cause existing direct loan borrowers to seek commercial credit with a guarantee as soon as they qualify. 
These suggestions involve gradually increasing the interest rate charged on direct loans until it equals the rate charged on commercial loans, making direct ownership loans for the purchase of farm land for 10 to 15 years with a balloon payment at the end of the term instead of payments over 40 years, and writing off the amount of the outstanding direct loan debt that exceeds the market value of the security property for the loan (collateral). Regarding gradually increasing the interest rate charged on direct loans, some commercial lenders we interviewed and a 1991 American Bankers Association (ABA) Task Force report said that interest rates on direct loans, which are lower than commercial rates, should be periodically increased. Specifically, eight commercial lenders suggested that the interest rate that FmHA charges should be increased over time so that the rates eventually match commercial rates. Such increases could cause borrowers to start looking elsewhere for financing as the advantage of below-market rates is eliminated. According to 64 percent of the 53 lenders that we interviewed, borrowers do not have an incentive to move from direct to guaranteed loans because of the low interest rates on direct loans. The ABA Task Force recommended that all of FmHA’s direct loans have a graduated interest rate clause so that borrowers understand that interest rates will change on specific dates. According to the ABA, because there is no interest rate adjustment mechanism in place for FmHA’s direct loans, borrowers are encouraged to remain in the program, particularly when the rates remain low in relation to commercial rates. For example, while interest rates on guaranteed operating loans made in 1992 averaged 9.8 percent, FmHA’s direct loans were often made at 7 percent. 
On the other hand, implementing a proposal that routinely causes interest rates to increase without considering the borrowers’ financial condition could adversely affect some borrowers’ abilities to repay their loans on schedule and thus result in defaults. With respect to the suggestion for shortening FmHA’s farm ownership loan terms, which typically run 40 years, some of the commercial lenders we interviewed and the ABA Task Force agreed with the need for shorter terms. The ABA emphasized that having a maturity date preceding the amortization date of the loan would enforce FmHA’s purpose of being a temporary lender. Likewise, 10 of the lenders we interviewed told us that longer repayment terms act as a disincentive to get borrowers to move from the direct loan program. As an alternative, one commercial lender suggested that in lieu of making loans with a 40-year repayment, FmHA should make shorter-term loans—e.g., loans with a 15-year maximum term—and require a balloon payment at the end of the term. On the other hand, implementing a proposal that shortens a loan’s maturity date and increases payments could result in repayment difficulties for those borrowers who acquire additional farm real estate to expand their operations or who make capital improvements to their existing operations. Concerning the third suggestion—that FmHA reduce direct loan debt to the value of the loan security—many lenders believe that some FmHA borrowers have outstanding direct loan debt that exceeds the value of their security. Some of the commercial lenders we interviewed told us that they would not make a loan to repay a borrower’s outstanding direct loan debt if the loan could not be adequately secured by collateral property. Specifically, most of the 53 lenders we interviewed said that FmHA would need to reduce the debt to at least the value of the security if the borrowers could not pay the debt down to that value.
Implementing such a suggestion could provide lenders with a greater incentive to provide credit to direct loan borrowers. On the other hand, implementing a proposal that causes FmHA to reduce outstanding debt would result in the agency’s incurring losses on loans to borrowers who have remained current on their agreed-upon loan payments. Many of the commercial lenders that we interviewed told us about problems they have had in participating in the guaranteed farm loan program and suggested changes. For example, many of the lenders stated that FmHA’s paperwork requirements are excessive. Some also said that FmHA has been slow in processing their guaranteed loan applications. Even though the lenders have had problems, many of them are still interested in participating in the guaranteed loan program. They, as well as the FmHA field office officials we interviewed, provided us with suggested changes that they said could increase the willingness of lenders to take on more direct loan borrowers as clients. These suggestions cover both administrative and programmatic aspects of making guaranteed farm loans. Of the 34 lenders with guaranteed farm loans that we interviewed, 28 told us that FmHA’s paperwork requirements are excessive. Seventy-five percent of these 28 lenders said it was the most significant problem they have had in participating in the guaranteed program. Another problem area frequently cited by the 34 lenders was that FmHA’s field offices have been slow in processing applications. Table 4.1 shows the major problems that lenders identified. Generally, lenders with guaranteed loans told us that FmHA’s paperwork requirements increase a bank’s workload and the time spent in processing a loan application. This occurs because FmHA requires more information in a guaranteed loan application than a bank requires in an application for a loan not involving a guarantee. 
The commercial lending officials we interviewed suggested changes to administrative aspects of the guaranteed loan program as a means of increasing their participation. These suggestions include reducing the paperwork required for guaranteed loans, eliminating the requirement that lenders submit financial and production history data on existing direct loan borrowers who seek guaranteed loans to repay outstanding direct loan debt, and allowing lenders to certify borrowers’ eligibility to participate in the guaranteed loan program. As discussed earlier, problems with FmHA’s paperwork requirements were cited by lenders as a significant issue affecting their participation in the guaranteed loan program. Among other things, they told us that because FmHA’s paperwork requirements differ from those normally used in the banking industry, they had to prepare two sets of loan application documents—one for reviews by their internal credit committee and a second containing the same information but in a different format on FmHA’s forms. Also, according to ABA officials, lenders have to submit paperwork in the application package that does not directly relate to the loan, such as a certification that loan funds will not be used for lobbying activities. According to the lenders, requirements such as these add to their cost of doing business and make them reluctant to participate—particularly in regard to funding low-value loans because of their low profit potential. In response to previous reports that have criticized FmHA’s paperwork requirements and as required by the Agricultural Credit Improvement Act of 1992, on June 24, 1993, FmHA published interim regulations in the Federal Register revising FmHA’s loan application paperwork requirements for loans of $50,000 or less and for lenders who participate in FmHA’s certified lender program.
While these revisions should result in a lessening of the paperwork required for some lenders, many of the lenders that we interviewed did not know that FmHA was attempting to streamline the loan application process. On the related suggestion that FmHA should stop requiring lenders to obtain and submit financial and production history data for borrowers when applying for guaranteed loans to repay existing direct loan debts, ABA officials and some of the lenders we interviewed questioned the need to submit such data, which the county offices should already have. Also, the lenders said that while FmHA requires 5 years of historical data, some lenders usually consider only the past 3 years in deciding on an application. ABA officials further recommended that if a commercial lender was willing to repay a borrower’s outstanding direct loan debt with a guaranteed loan, then FmHA should simply “pass through” the person’s outstanding debt to the bank without the need to submit any new or additional paperwork. In such cases, the ABA officials said that there is no need for an entire application package as with a new applicant/borrower. Fifty-eight percent of the 53 lenders we interviewed told us that eliminating this requirement could result in an increase in the use of the guaranteed farm loan program to repay applicants’ outstanding direct loan debts. The third change suggested by lenders was that they, rather than FmHA’s county committees, should be allowed to certify applicants’ eligibility to receive guaranteed loans. Specifically, county committees, which consist of two members elected by local farmers and one designated by FmHA, decide on the eligibility of applicants to participate in FmHA’s farm loan programs. 
Among other things, two of the lenders who had guaranteed loans said they have encountered personal bias by some county committee members against their loan applicants, and six others said that county committees have been slow in making decisions on guaranteed loan applications. One lender illustrated the situation as follows: The bank makes lending decisions on a daily basis, but it is delayed in making guaranteed loans if the applications do not arrive in time for a county committee meeting or if the committee requests additional information. Nineteen of the 53 lenders we interviewed stated that participation in the guaranteed loan program could increase if lenders were permitted to certify applicants’ eligibility. The commercial lenders and the FmHA field office officials we interviewed also cited various changes to the program that could result in increased use of guaranteed loans. These include increasing the guarantee percentage above 90 percent when the loan is being used to refinance outstanding direct loan debt, removing the guaranteed loan fee for borrowers whose direct loan debts are being refinanced with guaranteed loans, and increasing the authority for making subsidized loans under the Interest Assistance Program. The first proposal applies to increasing the guarantee percentage above 90 percent when the loan is being used to refinance outstanding direct loan debt owed to FmHA. The Con Act currently limits the guarantee to 90 percent for all loan-use purposes. Forty of the 53 lenders we interviewed said that such a change would increase their use of the guaranteed loan program to repay an applicant’s outstanding direct loan debt. Twenty-seven of these lenders suggested a 100-percent guarantee, and 12 others suggested a 95-percent guarantee (one did not suggest a specific percentage above 90 percent). Furthermore, one lender said that FmHA’s guarantee percentage should be reduced over time, after a borrower demonstrates a record of loan repayment. 
FmHA has a 100-percent exposure with direct loans. If a 95-percent guarantee was provided on a loan for repaying outstanding direct loan debt, then the government’s risk exposure would be reduced by 5 percent. If a 100-percent guarantee was provided on a loan for that purpose, then FmHA’s only additional risk would be any accrued interest and liquidation costs beyond what those costs would be under the direct loan. Another proposed change was for FmHA to remove the guaranteed loan fee for borrowers whose direct loan debts were being refinanced with guaranteed loans. FmHA charges lenders a 1-percent loan origination fee for the federal guarantee, which lenders usually pass on to borrowers. For example, if a loan is for $200,000 and the guarantee is for 90 percent, then the guaranteed amount is $180,000, and the fee is $1,800. Removing this fee when any part of a guaranteed loan is being used to repay direct loan debt could be an added inducement for borrowers to seek guaranteed loans. Five lenders suggested removing the fee on loans involving the repayment of direct loan debt in order to make the guaranteed program more viable. For example, one lender said that the fee adds to a borrower’s cost, and another said that borrowers can use the added cost as an excuse for not seeking to move from their direct loans. Although FmHA requires county supervisors to waive this fee when more than half of the guaranteed loan funds are being used for refinancing direct loan debt, two county office supervisors we interviewed said they do not waive the fee on any guaranteed loan. The third change that some commercial lenders and county supervisors suggested was that FmHA’s authority for making subsidized loans under the Interest Assistance Program should be increased. Under this interest subsidy program, a lender is reimbursed by FmHA for charging a borrower an interest rate that is less than the lender’s regular rate.
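The Interest Assistance Program reimbursement described above can be sketched as a simple rate buydown. This is illustrative only: the function name, rate figures, and one-year framing are assumptions, not FmHA's actual reimbursement formula.

```python
def interest_assistance(principal, lender_rate, reduced_rate):
    """Annual amount FmHA would reimburse a lender that charges a borrower
    less than its regular rate. A simplified one-year sketch; actual program
    terms (rate caps, subsidy duration) are not detailed in the report."""
    return principal * (lender_rate - reduced_rate)

# A 2-percentage-point buydown on a $100,000 loan would cost FmHA about
# $2,000 per year (illustrative figures, not from the report).
subsidy = interest_assistance(100_000, 0.09, 0.07)
```

The subsidy scales linearly with both the loan balance and the size of the rate reduction, which is why expanding this authority to larger farm ownership loans would raise the program's cost.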
Some lenders and county supervisors told us that this program has helped some direct loan borrowers obtain guaranteed farm operating loans. However, the agency has not been authorized to use the program for farm ownership loans. Four commercial lenders and five state and county office officials said they believe that, if FmHA’s interest assistance authority were expanded, some direct loan borrowers could move their outstanding farm ownership debt to guaranteed loans. In order for borrowers to obtain commercial loans, they must be able to meet the credit standards of the lenders who make the loans. Because direct loan borrowers may not be able to fully meet standards in areas such as cash flow, security, and equity, lenders may need to lower their standards. However, even if the lenders relaxed their standards, there are, in the opinion of some lenders and banking industry representatives we interviewed, direct loan borrowers who could not qualify for commercial credit even with guarantees. For example, the 30,806 borrowers who were delinquent on $5.2 billion in direct loans, as of September 30, 1993, would not be candidates for commercial credit. Ten lenders with guaranteed loans told us that the guaranteed program cannot replace the direct loan program for some borrowers. Four lenders without guaranteed loans said that some direct loan borrowers simply are unable to qualify for commercial credit. Furthermore, some lenders, notably those without guaranteed loans, said that (1) they are not looking for risky customers, which FmHA’s direct loan borrowers are by definition, or for clients who cannot meet their minimum credit standards and (2) they will not make a loan that is not financially sound.
Three lenders, who did not have outstanding FmHA guaranteed loans, specifically said that they perceived borrowers who needed a guaranteed loan to be financially weak and that they would not lower their lending standards in order to fund an applicant with a guaranteed loan. Likewise, officials from ABA and from the Independent Bankers Association of America said that even if changes are made to FmHA’s farm loan programs, the guaranteed program would not be a viable funding source for some direct loan borrowers. For example, ABA officials said that commercial banks would be unwilling to fund some direct loan borrowers because their financial histories reflect a long-term pattern of failing to meet their debt obligations. To stimulate the movement of borrowers from direct loans, lenders have made a variety of suggestions. If some or all of the proposals are implemented, some existing FmHA direct loan borrowers would likely move to guaranteed loans, which could lessen the agency’s risk exposure, reinforce its role as a temporary source of credit, and reduce its workload. The exact number, while unknown, probably would not be a high percentage of FmHA’s approximately 140,000 direct loan borrowers because many have marginal production and financial histories. Nonetheless, moving any portion of the outstanding direct loan borrowers to the commercial sector is desirable if the government’s risk exposure can be adequately protected. Therefore, deciding whether suggested changes should be made ultimately requires balancing FmHA’s risk exposure against the concessions that would have to be made to lenders. Implementing some of the suggestions in this chapter may not have much impact on FmHA’s risk exposure. For example, there would be no cost impact if FmHA stopped requiring lenders to submit financial and production data for guaranteed loans to refinance existing direct loan borrowers’ debt owed to FmHA. 
Also, since FmHA has a 100-percent risk exposure with direct loans, allowing a greater-than-90-percent guarantee for loans to repay outstanding direct loan debt may actually lessen FmHA’s risk if the rate was, for example, 95 percent, and may add only slightly to its risk if the rate was 100 percent. To ensure that lenders had some stake in the loan, a guarantee of something less than 100 percent would be needed. However, some proposals, such as reducing outstanding direct loan debt to the value of the loan security, would result in immediate losses to FmHA (i.e., forgiveness of some portion of existing debts), although some of the losses may ultimately occur anyway. We realize that commercial lenders’ support for many of these suggestions is influenced largely by their desire to expand their clientele and generate profit. Even so, we believe that the overall implications of the suggestions presented in this chapter are worthy of further discussion and consideration. For example, if FmHA offered to write off the part of borrowers’ direct loan debts that exceeded the market value of their loan security property, what impact would that have on the agency’s overall losses and on borrowers’ receiving commercial credit, with or without a guarantee? Likewise, if the guarantee percentage for loans to pay off existing direct loans was increased above 90 percent, what impact would the lender’s lessened exposure have on its incentive to properly service the loan, and what would be the implications for other government guaranteed loan programs? In commenting on a draft of this report (see app. VIII), FmHA agreed that additional changes can be made to the guaranteed loan program to assist in moving direct loan borrowers to commercial credit.
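The risk-exposure comparison above reduces to simple arithmetic. The sketch below is illustrative only: it considers principal at risk and ignores accrued interest and liquidation costs, which the report notes would add slightly to FmHA's exposure at a 100-percent guarantee.

```python
def government_exposure(principal, guarantee_pct=1.0):
    """Principal the government stands behind. A direct loan is the economic
    equivalent of a 100-percent guarantee; lowering the guarantee percentage
    shifts the remainder of the principal risk to the commercial lender."""
    return principal * guarantee_pct

direct = government_exposure(100_000)              # full principal at risk
guaranteed = government_exposure(100_000, 0.95)    # lender bears 5 percent
reduction = direct - guaranteed
```

On these illustrative figures, refinancing a direct loan with a 95-percent guaranteed loan moves 5 percent of the principal risk to the lender, which is the report's point that a high guarantee on refinanced direct debt can still lessen, not raise, FmHA's exposure.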
FmHA cited various actions it has initiated or plans to take to make the guaranteed loan program more attractive to commercial lenders, such as reducing the paperwork required for a guaranteed loan of less than $50,000 and having county office officials assist lenders in completing a guaranteed loan application. FmHA also said that it will consider the other suggestions in this chapter and that it shares our concern about making the guaranteed program vulnerable to the large losses that have been experienced by the direct loan program.

GAO reviewed the Farmers Home Administration's (FmHA) guaranteed farm loan program, focusing on: (1) the extent of losses under the guaranteed loan program compared with those under the FmHA direct loan program; (2) the extent to which the guaranteed loan program has graduated FmHA direct loan borrowers to commercial credit; and (3) ways to make the guaranteed farm loan program more of a source for funding direct loan borrowers. GAO found that: (1) the guaranteed farm loan program has substantially lower delinquency and default rates than the direct loan program because its borrowers present fewer financial risks; (2) FmHA increases the government's risk exposure by permitting borrowers who have defaulted on past loans to obtain new guaranteed loans and by failing to follow FmHA loan servicing standards; (3) FmHA has not effectively used the guaranteed loan program to graduate direct loan borrowers to commercial credit; (4) only 4 percent of direct loan borrowers obtained guaranteed loans in fiscal years 1991 through 1993, partially because FmHA did not fully implement its procedures for identifying and graduating qualified direct loan borrowers; (5) Congress has required FmHA to propose regulations to improve borrowers' transition to commercial credit; (6) FmHA and commercial lenders believe that many direct loan borrowers will never qualify for guaranteed loans; and (7) commercial lenders believe that FmHA should provide incentives for borrowers to seek commercial credit and make the guaranteed loan program more attractive to commercial lenders.
The five major Army inventory control points manage secondary items and repair parts valued at $17 billion. These items are used to support Army track and wheeled vehicles, aircraft, missiles, and communication and electronic systems. The process for identifying the items and the quantity to stock begins with developing the budget request—the key to effective inventory management. If too few or the wrong items are available to support the forces, then readiness suffers and the forces may not be able to perform their assigned military missions. On the other hand, if too many items are acquired, then limited resources are wasted and unnecessary costs are incurred to manage and maintain the items. The Army uses different processes for determining its spare and repair parts budget requests and for determining which parts to buy or repair. The process for determining spare and repair parts budget requests is based on data from the budget stratification reports, which show the dollar value of requirements and inventory available to meet the requirements. When an item’s available inventory is not sufficient to meet the requirements, it is considered to be in a deficit position. The aggregate value of items in a deficit position then becomes the Army’s basis for determining its spare and repair parts needs. As these needs are formulated into a budget request, the end result (budget request) is normally less than the aggregate value of items in a deficit position. This makes it even more important that the true needs be based on accurate data. Otherwise, funds may be allocated to procuring spare and repair parts that should be spent on other priority needs. Using accurate data in the requirements determination process avoids such misallocation of funds. We have previously issued reports pointing out data inaccuracy problems in the Army’s requirements determination process and the effect of these inaccuracies on inventory decisions. See appendix IV. 
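The budget stratification test described above (requirements versus the inventory available to meet them) can be expressed as a minimal sketch. The function and field names are illustrative assumptions, not the Army's actual data elements or systems.

```python
def deficit_value(requirements, on_hand, due_in):
    """Dollar value by which available inventory (on hand plus due in)
    falls short of an item's requirements; zero when assets suffice.
    A simplified sketch of the deficit-position test in the budget
    stratification process."""
    shortfall = requirements - (on_hand + due_in)
    return max(shortfall, 0)

# An item whose requirements exceed available assets is reported deficit.
shortfall = deficit_value(1_000_000, 400_000, 300_000)  # 300,000 shortfall
```

Because the test is a straight comparison, either overstating the requirements input or understating the inventory inputs pushes an item into an apparent deficit, which is exactly the error pattern the review found.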
The process for determining which items to buy or repair is based on information in the item’s supply control study, which is automatically prepared when an item reaches the point where insufficient assets are available or due in to meet requirements. When a study is prepared, the item manager validates the requirements and asset information in the study. Based on the results of the validated data, the item manager will decide whether to buy, repair, or not buy the quantity recommended by the study. We reviewed 258 items from a universe of 8,526 items that were in a deficit inventory position as of September 30, 1994. The selected items represented 3 percent of the items in a deficit position but accounted for $519 million, or 69 percent, of the $750 million deficit inventory value. We found that 94 of the 258 items, with a reported deficit inventory value of $211 million, had data errors that affected the items’ requirements or the inventory available to satisfy the requirements. Table 1 shows the results of our review for the Army’s inventory control points. Overstated requirements and understated inventory levels were the major reasons items were erroneously reported in a deficit position. In addition, some items were incorrectly included in the process for determining funding requirements. If the items’ inventory position had been correctly reported, the true deficit value for the 94 items would have been about $23 million rather than $211 million. Table 2 shows the major reasons why items were incorrectly classified as deficit. When insufficient inventory is on hand and due in to meet an item’s requirements, the budget stratification process will report the item as being deficit. If the item’s deficit position is caused by overstated requirements, this means that resources could be wasted buying unneeded items. As shown in table 2, overstated requirements caused 53 items to be erroneously reported as being in a deficit position.
The overstated requirements resulted from inaccurate demand data, inaccurate leadtime data, and lower-than-expected requirements. Table 3 shows the number of instances where these reasons caused the items’ requirements to be overstated. The following examples illustrate the types of inaccurate data that caused overstated requirements:

The item manager for an aircraft floor item used on the CH-47 Chinook helicopter said that the database still included demands from Operations Desert Shield and Desert Storm. Including these demands in the requirements determination caused the budget stratification process to erroneously classify the item as having a deficit inventory position of about $500,000. If the outdated demands had been purged from the system, the item would not have been in a deficit position.

According to the item manager for the front lens assembly item used on the AN/PVS-7B Night Vision Goggles, the item requirements shown in the budget stratification report did not materialize. She said that the report showed the item as having a deficit inventory position of $2.4 million. However, when it came time to procure the item, the project leader reduced the planned procurement quantity because the field units indicated they did not like the item. The item’s actual deficit position should have been only $18,000.

According to the item manager, an angle drive unit used on the M2/M3, M2A1/M3A1 Bradley Fighting Vehicle system had an inflated safety level requirement in the budget stratification report. The report showed a safety level of 6,887 units instead of the correct safety level of 355. As a result, a deficit inventory position of $6.6 million was reported.

When a prime stock number has authorized substitute items, the requirements and inventory for the prime and substitute items are supposed to be added and shown as one requirement and one inventory level under the prime number. This did not happen.
The requirements for both types of items were shown as one requirement, but the inventory was not. As a result, the inventory to meet the overall requirement was understated, and the item was placed in a deficit position. For example, according to the item manager for a night window assembly used on the TOW subsystem for the M2/M3 Bradley Fighting Vehicle, the budget stratification report showed a deficit supply position of $800,000 for the item. This occurred because inventory belonging to a substitute item was not counted toward the prime item’s requirements. The item manager said the true deficit for the assembly was $65,000. There were also requirements problems for items being repaired at maintenance facilities. The requirements system did not accurately track stock in transit between overhaul facilities and the depots. According to item managers at several inventory control points, they had problems either tracking the physical movement of inventory between the depots and repair facilities or ensuring that all records were processed so the database accurately accounted for all applicable assets. These problems could cause items to be erroneously reported as being in a deficit position. Table 4 shows how often these reasons resulted in understated inventory levels. Our review of selected items identified nine items that should have been excluded from the budget stratification process. By including these items, the budget stratification process identified funding needs for the items when, in fact, the funds to procure the items were being provided by another service, a foreign country under a foreign military sales agreement, or another appropriation. Table 5 shows the number of items that were incorrectly included in the budget stratification process.
The following examples illustrate the effect of including “excluded” items in the budget stratification process:

According to the item manager for a fire control electronic unit used on the M1A2 main battle tank, the Army issued a contract in August 1993 to procure items to meet the Army’s requirements as well as foreign military sales. Because the Army is reimbursed for foreign military sale items, these items should have been excluded from the budget stratification process. However, the items were included in the stratification process and were reported as having a deficit inventory position of $2.3 million.

The inventory control point procured a gas-particulate filter unit used in producing modular collective protective equipment. According to the item manager, procurement appropriation funds, provided by the program manager’s office, were used to buy the items. Because the stratification process is only supposed to deal with items procured by the Defense Business Operating Fund, the item should not have been included in the stratification process, and a deficit inventory position of about $800,000 should not have been reported.

According to the item manager, the Air Force manages and makes all procurements for a panel clock item. The Army’s budget stratification report showed this item had a deficit inventory position of $700,000. However, because the Air Force managed this item, the panel clock should not have been coded as an Army secondary item for inclusion in the budget stratification report.

The item manager for an electronic component item said that the item should have been coded as an inventory control point asset rather than a project manager’s office asset. Because project manager items are not available for general issue, these items were not counted against the item’s requirements in the budget stratification report. If these items had been properly coded, the item would not have been reported as having a $700,000 deficit inventory position.
According to the item manager, an electronic component item should have been coded as a major end item rather than a secondary item and not included in the budget stratification process. The item was reported as having a deficit inventory position of $500,000. The Army is aware of many of the processing, policy, and data problems affecting the accuracy of the requirements data. Furthermore, the Army has identified 32 change requests to correct problems with the requirements determination and supply management system. According to Army officials, the cost to implement the 32 change requests would be about $660,000, and in their opinion, the benefits would greatly outweigh the added costs. The officials said these changes would correct many of the problems, including some of the ones we identified during our review. Nevertheless, not all of the requests have been approved for funding because the Department of Defense is developing a standard requirements system as part of its Corporate Information Management initiative and does not want to spend resources to upgrade existing systems. As a result, it has limited the changes that the services can make to their existing systems. Army officials said that the standard system is not expected to be implemented for at least 4 years. Furthermore, major parts of the existing system will probably be integrated into the standard system. Therefore, unless the data problems are corrected, they will be integrated into the standard system and the Army will still not have reliable data. Army officials also cited examples where processing change requests are needed to correct other data problems in the requirements determination system. For example, the depots do not always confirm material release orders/disposal release orders received from the inventory control points. As a result, the inventory control points do not know if the depots actually received the orders. 
They identified numerous instances where the depots put the release orders in suspense because of higher priority workloads. This resulted in the release orders not being processed in a timely manner, processed out of sequence, or lost and not processed at all. Because the inventory control points could not adequately track the release orders, they could have reissued the release orders. The reissuance could have caused duplicate issues or disposals, imbalances in the records between the inventory control points and the depots, and poor supply response to the requesting Army units. A system change request was initiated in November 1994 to address this problem, but the request has not yet received funding approval. Although Army officials could not provide a cost estimate to implement the change request, it could save about $1 million in reduced workload for the inventory control points and depots. According to Army officials, one programming application in the requirements determination system uses reverse logic to calculate the supply positions of serviceable and unserviceable assets. It compares the supply position of all serviceable assets to the funded approved acquisition objective (current operating and war reserve requirements). However, for the same item, the program compares the supply position of all unserviceable assets to the total of the current operating and war reserve requirements, the economic retention quantity, and contingency quantity. The effect of this is that serviceable inventory can be sent to disposal while unserviceable inventory is being returned to the depots. According to Aviation and Troop Command records, the Command disposed of $43.5 million of serviceable assets at the same time that $8.5 million of unserviceable assets, of the same kind, were returned to the depots between March and September 1994. By September 1995, the Command had disposed of $62 million of serviceable assets. 
Command officials said that a system change request was initiated in November 1994 to correct the programming logic problem. However, the request did not receive funding approval because it violated Department of Army policy, even though the estimated cost to implement the change request would be less than $20,000. Although this change will not reduce the reported deficit quantities, it will allow the commands to keep more serviceable items in lieu of unserviceables, and it will reduce overhaul costs. Furthermore, according to Command records, this policy is causing the disposal of high-dollar, force modernization items that could result in re-procurement and adversely affect stock availability to field units. We recommend that the Secretary of Defense direct the Secretary of the Army to proceed with the pending system change requests to correct the data problems. Doing so could correct many of the problems identified in our report. Furthermore, the corrective actions would improve the overall reliability and usability of information for determining spare and repair parts requirements. The Department of Defense agreed with the report findings and partially agreed with the recommendation. It said that instead of the Secretary of Defense directing the Army to proceed with the system change request, the Army will be requested to present a request for funding for the system changes to the Corporate Configuration Control Board at the Joint Logistics Systems Center. The Board, as part of the Corporate Information Management initiative, was established to consider and resolve funding matters related to changes to existing systems. In our opinion, the action proposed by the Department of Defense achieves the intent of our recommendation, which was for the Army to seek funds to correct the data problems in its requirements determination system. Defense’s comments are presented in their entirety in appendix II. 
We are sending copies of this report to the Secretary of the Army; the Director, Office of Management and Budget; and the Chairmen, House Committee on Government Reform and Oversight, Senate Committee on Governmental Affairs, the House and Senate Committees on Appropriations, House Committee on National Security, and Senate Committee on Armed Services. Please contact me on (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix III. We held discussions with responsible officials and reviewed Army regulations to determine the process used by the Army to identify its spare and repair parts needs for its budget development process. We focused on the process used to identify items in a deficit position. As part of these discussions, we also studied the budget stratification process, which is the major database input used in the budget development process. To identify the items in a deficit position, we obtained the September 30, 1994, budget stratification data tapes for the five Army inventory control points: Army Munitions and Chemical Command, Aviation and Troop Command, Communications-Electronics Command, Missile Command, and Tank-Automotive Command. From the total universe of 8,526 secondary items with a deficit inventory position valued at $750 million, we selected all items that had a deficit position of $500,000 or more. This resulted in a sample of 258 items with a total inventory deficit position of $519 million, or 69 percent of the total deficit. For each of the 258 selected items, we obtained information from the responsible item manager to determine whether the item was actually in a deficit position as of September 30, 1994. For those items that the budget stratification process had erroneously placed in a deficit position, we determined the reason for its misclassification. 
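The sample-selection rule described above (every item with a deficit position of $500,000 or more) can be sketched as a simple filter. The item records below are illustrative, not the actual stratification data; the real universe was 8,526 items totaling $750 million in reported deficits.

```python
def select_sample(items, threshold=500_000):
    """Select every item whose reported deficit value meets or exceeds
    the threshold, mirroring the review's sample design."""
    return [item for item in items if item["deficit"] >= threshold]

items = [
    {"nsn": "A", "deficit": 2_300_000},  # selected
    {"nsn": "B", "deficit": 120_000},    # below threshold, excluded
    {"nsn": "C", "deficit": 500_000},    # selected (at the threshold)
]
sample = select_sample(items)
```

A threshold-based design like this concentrates the review on the highest-dollar items, which is why 3 percent of the items could cover 69 percent of the total deficit value.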
We obtained this information by reviewing item manager files and discussing the items with responsible item management personnel. We categorized the reasons for the erroneous classifications to determine frequency distribution for each type of reason. We then determined through discussions with item management officials and review of system change requests what actions were taken or planned to correct the identified problems. We performed our review from October 1994 to July 1995 in accordance with generally accepted government auditing standards.

Army Inventory: Growth in Inventories That Exceed Requirements (GAO/NSIAD-90-68, Mar. 22, 1990).
Defense Inventory: Shortcomings in Requirements Determination Processes (GAO/NSIAD-91-176, May 10, 1991).
Army Inventory: Need to Improve Process for Establishing Economic Retention Requirements (GAO/NSIAD-92-84, Feb. 27, 1992).
Army Inventory: More Effective Review of Proposed Inventory Buys Could Reduce Unneeded Procurement (GAO/NSIAD-94-130, June 2, 1994).
Defense Inventory: Shortages Are Recurring, But Not a Problem (GAO/NSIAD-95-137, Aug. 7, 1995).
GAO reviewed the: (1) accuracy of the databases used to determine Army spare and repair parts requirements and inventory levels for Defense Business Operations Fund budget requests; and (2) actions taken to correct data problems that could affect the reliability of these budget estimates. GAO found that: (1) the Army's 1994 budget report contained numerous inventory data inaccuracies which led to erroneous reports of deficit inventory positions for several items; (2) overstated requirements and understated inventory levels were the major cause of most of the false deficit position reports; (3) the actual deficit position value for 94 items was about ten-fold less than what was reported; (4) some items should have been excluded from the budget stratification process; (5) although the Army is aware of many requirements data problems and has identified several change requests to correct these problems, the Army has not been able to correct these problems because the Department of Defense (DOD) is developing a standard requirements determination system for all the services and has limited how much the services can spend to change their existing systems; (6) the new DOD standard system will not be implemented for 4 years and most of its existing data will be integrated into that system; and (7) the Army cannot ensure that its budget requests represent its actual funding needs for spare and repair parts, that the new system will receive accurate data when it is implemented, or that expensive usable items will not be discarded and reprocured.
As provided by the Panama Canal Treaty of 1977, the Panama Canal Commission will terminate on December 31, 1999, when the Republic of Panama will assume full responsibility for the management, operation, and maintenance of the Panama Canal. The Treaty provides that the Canal be turned over in operating condition and free of liens and debts, except as the two parties may otherwise agree. As discussed in note 9 to the financial statements, as of September 30, 1995, the Commission forecasts that the present $90.7 million in unfunded liabilities should be covered by tolls over the remaining life of the Treaty. We did not examine the Commission’s forecast and express no opinion on it. The ability to cover these future costs, including administrative costs, is dependent upon (1) obtaining the budgeted levels of Canal operations and (2) future economic events. The Commission operates as a rate-regulated utility. In fiscal year 1995 approximately 74 percent of its operating revenues were obtained from tolls and the remaining 26 percent, from nontoll revenues, such as navigation services and electric power sales. Early retirement, compensation benefits for work injuries, and post-retirement medical care costs are being funded from Canal revenues on an accelerated basis in order to be fully funded by 1999. During the period of our audit, the President of the United States served as the rate regulator for tolls, which are established at a level to recover the costs of operating and maintaining the Canal. The following is taken from management’s analysis of the Commission’s financial statements. The analysis generally explains the changes in major financial statement line items from fiscal years 1994 to 1995. Our opinions on these financial statements do not extend to the analysis presented below, and, accordingly, we express no opinion on this analysis. 
While we do not express an opinion on the analysis, we found no material inconsistencies with the financial statements taken as a whole. The Commission’s operations ended fiscal year 1995 at breakeven, compared to the net operating revenue of $1.7 million for fiscal year 1994. The net operating revenue for 1994 was applied to the $0.6 million outstanding balance of unrecovered costs from fiscal year 1992 operations, and the remaining $1.1 million was paid to the Republic of Panama on March 9, 1995. From fiscal years 1991 through 1995, toll and nontoll revenues increased an average of approximately 3.8 percent annually. Fiscal year 1995 total operating revenues increased to $586 million, up 6.9 percent from fiscal year 1994, due mainly to an increase in Canal traffic, principally from larger vessels. Nontoll revenues, which consist primarily of navigation services and electric power sales, increased to $164 million during fiscal year 1995, up 12.6 percent from fiscal year 1994. The deduction from tolls revenue for working capital was increased from $5 million in fiscal year 1994 to $10 million in fiscal year 1995 in order to substantially complete the financing of the Commission’s storehouse and fuel inventories. The deduction from tolls revenue for contributions for capital expenditures increased from $11.5 million in fiscal year 1994 to $30.3 million in fiscal year 1995. The increase was attributable to the funding required for the increase in the Commission’s capital program in 1995 for the acquisition of the crane TITAN and to provide funding for anticipated additional capital expenditures for replacements and additions to the tug fleet, acceleration of the Gaillard Cut widening project, and the purchase of additional towing locomotives. From fiscal years 1991 through 1995, total operating expenses increased an average of approximately 4.0 percent annually. Fiscal year 1995 total operating expenses increased to $586 million, up 7.3 percent over fiscal year 1994.
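As a rough consistency check, the fiscal year 1994 amounts implied by the year-over-year growth rates quoted above can be backed out arithmetically. This is an illustrative sketch, not part of the audit; the results are approximate because the reported figures are themselves rounded.

```python
# Back out approximate FY1994 figures from reported FY1995 amounts and
# percent growth rates. Dollar values are in millions.

def prior_year(current, pct_growth):
    """Prior-year value implied by a current value and its percent growth."""
    return current / (1 + pct_growth / 100)

rev_1994 = prior_year(586, 6.9)        # total operating revenues, ~$548 million
nontoll_1994 = prior_year(164, 12.6)   # nontoll revenues, ~$146 million
exp_1994 = prior_year(586, 7.3)        # total operating expenses, ~$546 million
```

The ~$2 million gap between the implied FY1994 revenues and expenses is consistent with the $1.7 million net operating revenue the Commission reported for that year, given rounding in the published figures.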
The following were some of the highlights:

- Tonnage payments to the Republic of Panama increased $9.8 million, or 13.9 percent, in fiscal year 1995. The additional net tonnage transiting the Canal produced $7.6 million of the increase, and a rate change from 36 cents to 37 cents per ton accounted for $2.2 million of the change.
- Navigation service and control costs increased $13.6 million, or 14.8 percent, due mainly to the cost of additional resources required to service the record traffic levels experienced in fiscal year 1995.
- Locks operation costs increased $10.4 million, or 18.2 percent, reflecting the cost of additional crews required for the increased level of traffic, additional locks maintenance and repair projects, and increased costs for locks overhaul projects.
- Depreciation expense increased $5.7 million, or 22.1 percent, in fiscal year 1995, principally as the result of (1) an adjustment to the service life for certain assets, (2) the change in the capitalization limit from $1,500 to $5,000 for minor items acquired in fiscal year 1995, and (3) the depreciation for new additions to plant during the fiscal year. Partially offsetting these increases was a credit adjustment resulting from the amortization of capital contributions of assets acquired prior to fiscal year 1992.
- Interest expense on the interest-bearing investment of the United States decreased $3.2 million, or 42.2 percent, in fiscal year 1995 because of the larger average cash balances maintained by the Commission in its U.S. Treasury revolving fund account and lower interest rates.
- Other operating expenses increased $5.8 million, or 18.8 percent, primarily because of an increase in the provision for marine accident claims during the year related to accidents that occurred during fiscal year 1995.

By the end of fiscal year 1995, total assets of the Commission increased by 3.3 percent to $851 million, and total liabilities and reserves decreased by 1.6 percent to $263 million.
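The tonnage-payment figures above can be checked with a small sketch. The decomposition convention used here (rate effect measured at fiscal year 1995 tonnage, volume effect measured at the old rate) is an assumption for illustration; the report does not state how the $7.6 million and $2.2 million portions were attributed.

```python
# Decompose the FY1995 tonnage-payment increase into a volume effect and a
# rate effect, and back out the tonnage each figure implies.
# ASSUMPTION: rate effect = (new rate - old rate) * FY1995 tonnage;
#             volume effect = old rate * (FY1995 tonnage - FY1994 tonnage).

OLD_RATE = 0.36        # dollars per net ton, FY1994
NEW_RATE = 0.37        # dollars per net ton, FY1995
RATE_EFFECT = 2.2e6    # reported portion of the increase from the rate change
VOLUME_EFFECT = 7.6e6  # reported portion from additional net tonnage

total_increase = RATE_EFFECT + VOLUME_EFFECT              # $9.8 million, as reported

# Tonnage implied by each effect under the stated convention:
t95 = RATE_EFFECT / (NEW_RATE - OLD_RATE)                 # FY1995 tonnage, ~220 million tons
extra_tons = VOLUME_EFFECT / OLD_RATE                     # FY1995-over-FY1994 gain, ~21 million tons
```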
Capital increased by 5.6 percent to $588 million. The most significant changes in individual account balances by the end of fiscal year 1995 were the following:

- Property, plant, and equipment (excluding depreciation and valuation allowances) increased by a net $38 million to $1,141 million. This increase was due primarily to net capital expenditures of $38.7 million and the acquisition of several plant items from other U.S. government agencies. Major capital additions to plant from capital expenditures included $9.4 million for the Canal widening/straightening program; $8.0 million for the replacement and improvement of facilities and buildings; $6.6 million for the replacement and addition of floating equipment; $5.2 million for the replacement and addition of miscellaneous equipment; $4.0 million for improvements to electric power, communication, and water systems; $2.6 million for the replacement of motor vehicles; and $1.6 million for the replacement of launches and launch engines.
- Current assets increased by a net $54 million to $262 million due principally to an increase in cash. Cash increased by $46.8 million as a result of the net cash provided by operating activities exceeding the net cash used in investing activities.
- Deferred charges decreased by a net $26 million to $87 million. This was due principally to the amortization of deferred charges for early retirement, compensation benefits for work injuries, and post-retirement medical care costs.
- Liabilities and reserves decreased by a net $4.2 million to $263 million. The major reason for the net decrease was a decrease of $26.9 million for certain employee benefits, offset in part by increases in the liabilities for severance pay, accounts payable, employees’ leave, marine accident claims, and in the reserve for lock overhauls.
Capital increased by a net $31 million to $588 million, principally because of a $21.4 million net increase in capital contributions for capital expenditures and a $10.0 million increase in contributions for working capital. The Panama Canal Act of 1979 requires us to include in our annual audit report to the Congress a statement listing (1) all direct and indirect costs incurred by the United States in implementing the 1977 Treaty, net of any savings, and (2) the cost of any property transferred to the Republic of Panama. The act also provides that direct appropriated costs of U.S. government agencies should not exceed $666 million, adjusted for inflation over the life of the Treaty. As of September 30, 1995, the inflation-adjusted target was $1,367 million. U.S. Government agencies that provided services to the former Panama Canal Company and Canal Zone Government provided the direct and indirect cost information including the cost of property transferred to the Republic of Panama as required under the 1977 Treaty. This information is presented in unaudited supplementary schedules to the Commission’s financial statements, and, accordingly, we express no opinion on these schedules. From fiscal years 1980 to 1995, the net reported costs to the U.S. Government under the Treaty amounted to $791 million, which is less than the act’s inflation-adjusted target. As required by the Panama Canal Act of 1979, we are sending copies of this report to the President of the United States and the Secretary of the Treasury. We are also sending copies to the Director of the Office of Management and Budget; the Secretaries of State, Defense, and the Army; the Chairman of the Board of Directors of the Panama Canal Commission; and the Administrator of the Panama Canal Commission. Comptroller General of the United States United States General Accounting Office Washington, D.C. 
20548

Comptroller General of the United States

To the Board of Directors
Panama Canal Commission

Our audits of the Panama Canal Commission found the following:

- The fiscal years 1995 and 1994 financial statements are reliable in all material respects.
- Although certain internal controls to help assure compliance with a statutory spending limitation should be improved, management fairly stated that internal controls in place on September 30, 1995, were effective in safeguarding assets from material loss, assuring material compliance with laws governing the use of budget authority and with other relevant laws and regulations, and assuring that there were no material misstatements in the financial statements.
- There was reportable noncompliance with laws and regulations we tested for the fiscal year ended September 30, 1995.

We discussed a draft of this report with the Commission’s Chief Financial Officer, who agreed with our findings and conclusions. Described below are significant matters considered in performing our audit and forming our conclusions. The Commission’s management identified and reported an instance of reportable noncompliance with laws and regulations and certain related controls. The amount of the violation was not material to the financial statements. In January 1996, the Commission reported to us a violation of the Antideficiency Act for the fiscal year 1995 Panama Canal Revolving Fund. The Commission exceeded its $50,030,000 congressional spending limitation for administrative expenses, as set forth in the fiscal year 1995 appropriation (Public Law 103-331), by $160,225. Management determined that the violation resulted from a weakness in the Commission’s system of internal controls related to the review of the classification of obligations for consultant services. Management has taken steps to improve the specific control weaknesses related to this incident. As required in 31 U.S.C.
Section 1351, the Commission has reported all relevant facts of this Antideficiency Act violation and a statement of actions taken to the President, the Office of Management and Budget (OMB), the Speaker of the House of Representatives, and the President of the Senate. As discussed in note 9 to the financial statements, the Panama Canal Treaty requires that the Commission transfer the Canal to the Republic of Panama on December 31, 1999, free of liens and debts, except as the two parties may otherwise agree. To comply with this provision, the Commission is required to identify and fully fund its liabilities by that date. Note 9 indicates that, as of September 30, 1995, the Commission had total liabilities and reserves of $262.7 million and total resources of $172.0 million. The Commission forecasted that the net unfunded $90.7 million in liabilities should be collected from future toll revenues over the remaining life of the Treaty. We did not examine the Commission’s forecast and, accordingly, express no opinion on the forecast. The following sections provide our opinions on the Commission’s financial statements and assertion on internal controls, and our report on the Commission’s compliance with laws and regulations we tested. This section also discusses the information presented in the unaudited supplemental schedules and the scope of our audit. The financial statements including the accompanying notes present fairly, in all material respects, in conformity with generally accepted accounting principles, the Commission’s assets, liabilities, and capital; operating revenue and expenses; changes in capital; and cash flows. 
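As a quick arithmetic check, the $90.7 million net unfunded amount in note 9 follows directly from the two totals quoted above. This is an illustrative calculation only; dollar values are in millions.

```python
# Net unfunded liability implied by the note 9 figures (dollars in millions).
TOTAL_LIABILITIES_AND_RESERVES = 262.7
TOTAL_RESOURCES = 172.0

net_unfunded = TOTAL_LIABILITIES_AND_RESERVES - TOTAL_RESOURCES  # ~$90.7 million
```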
We evaluated management’s assertion about the effectiveness of its internal controls designed to safeguard assets against loss from unauthorized acquisition, use, or disposition; assure the execution of transactions in accordance with laws governing the use of budget authority and with other laws and regulations that have a direct and material effect on the financial statements or that are listed in OMB audit guidance and could have a material effect on the financial statements; and properly record, process, and summarize transactions to permit the preparation of reliable financial statements and to maintain accountability for assets. Management of the Commission fairly stated that those controls in effect on September 30, 1995, provided reasonable assurance that losses, noncompliance, or misstatements material to the financial statements would be prevented or detected on a timely basis. Management of the Commission also fairly stated the need to improve certain internal controls for review of the classification of obligations for consultant services, as described above. These weaknesses in internal controls, although not considered to be material to the financial statements, represent deficiencies in the design or operations of internal controls which could adversely affect the entity’s ability to meet the internal control objectives to assure the execution of transactions in accordance with laws and regulations or meet OMB criteria for reporting matters under the Federal Managers’ Financial Integrity Act (FMFIA) of 1982. Management made this assertion based upon criteria established under FMFIA and OMB Circular A-123, Internal Control Systems. Except as noted above, our tests for compliance with selected provisions of certain laws and regulations disclosed no other instances of noncompliance that would be reportable under generally accepted government auditing standards. 
However, the objective of our audit was not to provide an opinion on overall compliance with laws and regulations. Accordingly, we do not express such an opinion. The Treaty-related cost schedules are presented as required by the Panama Canal Act of 1979, and the schedule of property, plant, and equipment is presented for purposes of additional analysis. This information has not been subjected to the auditing procedures applied in the audit of the financial statements, and, accordingly, we express no opinion on these schedules. While we do not express an opinion on the detailed schedule of property, plant, and equipment, we found no material inconsistencies with the financial statements taken as a whole. Management is responsible for preparing the annual financial statements in conformity with generally accepted accounting principles; establishing, maintaining, and assessing the internal control structure to provide reasonable assurance that the broad control objectives of FMFIA are met; and complying with applicable laws and regulations. We are responsible for obtaining reasonable assurance about whether (1) the financial statements are reliable (free of material misstatement and presented fairly, in all material respects, in conformity with generally accepted accounting principles) and (2) management’s assertion about the effectiveness of internal controls is fairly stated, in all material respects, based upon criteria established under FMFIA and OMB Circular A-123, Internal Control Systems. We are also responsible for testing compliance with selected provisions of certain laws and regulations and for performing limited procedures with respect to unaudited supplementary information appearing in this report.
In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the financial statements; assessed the accounting principles used and significant estimates made by management; evaluated the overall presentation of the financial statements; obtained an understanding of the internal control structure related to safeguarding of assets, compliance with laws and regulations, and financial reporting; tested relevant internal controls over safeguarding, compliance, and financial reporting and evaluated management’s assertion about the effectiveness of internal controls; tested compliance with selected provisions of the following laws and regulations: Panama Canal Act of 1979, Antideficiency Act, Prompt Payment Act, Civil Service Reform Act of 1978, as amended, Fair Labor Standards Act, and Accounting and Auditing Act of 1950; considered compliance with the process required by FMFIA for evaluating and reporting on internal control and accounting systems; prepared Treaty-related cost schedules using unaudited information obtained from other federal agencies; and compared the unaudited detailed schedule of property, plant, and equipment for consistency with the information presented in the financial statements. We did not evaluate all internal controls relevant to operating objectives as broadly defined by FMFIA, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to those controls necessary to achieve the objectives outlined in our opinion on management’s assertion about the effectiveness of internal controls. Because of inherent limitations in any internal control structure, losses, noncompliance, or misstatements may nevertheless occur and not be detected.
We also caution that projecting our evaluation to future periods is subject to the risk that controls may become inadequate because of changes in conditions or that the degree of compliance with controls may deteriorate. We did our work in accordance with generally accepted government auditing standards.

Pursuant to a legislative requirement, GAO audited the Panama Canal Commission's financial statements for the fiscal years ended September 30, 1995 and 1994, focusing on: (1) the statements' reliability; (2) internal controls; and (3) compliance with selected applicable laws and regulations.
GAO found that: (1) the financial statements presented fairly, in all material respects, the Commission's financial position as of September 30, 1995 and 1994, and the results of its operations, changes in capital, and cash flows for the years then ended, in conformity with generally accepted accounting principles; (2) although improvements are needed, the Commission's internal controls in effect as of September 30, 1995, reasonably ensured that losses, noncompliance, or material misstatements would be prevented or detected; (3) the Commission disclosed a nonmaterial violation of the Antideficiency Act and has implemented internal controls that should prevent any future violations; (4) there were no other reportable instances of noncompliance with applicable laws and regulations; (5) the Commission has improved its general controls over its computerized information systems to correct previously identified weaknesses; and (6) the Commission expects to have its liabilities fully funded by the time the Canal is transferred to Panama.
Under DERP, DOD is required to conduct environmental restoration activities at sites located on former and active defense properties that were contaminated while under its jurisdiction. Program goals include the identification, investigation, research and development, and cleanup of contamination from hazardous substances, pollutants, and contaminants; the correction of other environmental damage (such as detection and disposal of unexploded ordnance) that creates an imminent and substantial endangerment to public health or welfare or the environment; and the demolition and removal of unsafe buildings and structures. Types of environmental contaminants found at military installations include solvents and corrosives; fuels; paint strippers and thinners; metals, such as lead, cadmium, and chromium; and unique military substances, such as nerve agents and unexploded ordnance. DOD has undergone five BRAC rounds, with the most recent occurring in 2005. Under the first four rounds, in 1988, 1991, 1993, and 1995, DOD closed 97 major bases, had 55 major base realignments, and addressed hundreds of minor closures and realignments. DOD reported that the first four BRAC rounds reduced the size of its domestic infrastructure by about 20 percent and generated about $6.6 billion in net annual recurring savings beginning in fiscal year 2001. As a result of the 2005 BRAC decisions, DOD was slated to close an additional 25 major bases, complete 32 major realignments, and complete 755 minor base closures and realignments. When the BRAC decisions were made final in November 2005, the BRAC Commission had projected that the implementation of these decisions would generate over $4 billion in annual recurring net savings beginning in 2011. In accordance with BRAC statutory authority, DOD must complete closure and realignment actions by September 15, 2011—6 years following the date the President transmitted his report on the BRAC recommendations to Congress. 
Environmental cleanup and property transfer actions associated with BRAC sites, however, have no deadline for completion and can extend beyond the 6-year time limit. As we have reported in the past, addressing the cleanup of contaminated properties has been a key factor related to delays in transferring unneeded BRAC property to other parties for reuse. DOD officials have told us that they expect environmental cleanup to be less of an impediment for the 2005 BRAC sites since the department now has a more mature cleanup program in place to address environmental contamination on its bases. In assessing potential contamination and determining the degree of cleanup required (on both active and closed bases), DOD must comply with cleanup standards and processes under all applicable environmental laws, regulations, and executive orders. The Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA) authorizes the President to conduct or cause to be conducted cleanup actions at sites where there is a release or threatened release of hazardous substances, pollutants, or contaminants that may present a threat to public health and the environment. The Superfund Amendments and Reauthorization Act of 1986 (SARA), which amended CERCLA, clarified that federal agencies with such sites shall be subject to and comply with CERCLA in the same manner as a private party, and DOD was subsequently delegated response authority for its properties. To respond to potentially contaminated sites on both active and closed bases, DOD generally uses the CERCLA process, which includes the following phases and activities, among others: preliminary assessment, site investigation, remedial investigation and feasibility study, remedial design and remedial action, and long-term monitoring. SARA also required the Secretary of Defense to carry out the Defense Environmental Restoration Program (DERP).
Following SARA’s enactment, DOD established DERP, which consists of two key subprograms focused on environmental contamination: (1) the Installation Restoration Program (IRP), which addresses the cleanup of hazardous substances where they were released into the environment prior to October 17, 1986; and (2) the Military Munitions Response Program (MMRP), which addresses the cleanup of munitions, including unexploded ordnance and the contaminants and metals related to munitions, where they were released into the environment prior to September 30, 2002. While DOD is authorized to conduct cleanups of hazardous substances released after 1986 and munitions released after 2002, these activities are not eligible for DERP funds but are instead considered “compliance” cleanups and are typically funded by base operations and maintenance accounts. Once a property is identified for transfer by a BRAC round, DOD’s cleanups are funded by the applicable BRAC account. While SARA had originally required the government to warrant that all necessary cleanup actions had been taken before transferring property to nonfederal ownership, the act was amended in 1996 to allow expedited transfers of contaminated property. Now such property, under some circumstances, can be transferred to nonfederal users before all remedial action has been taken. However, certain conditions must exist before DOD can exercise this early transfer authority; for example, the property must be suitable for the intended reuse and the governor of the state must concur with the transfer. Finally, DOD remains responsible for completing all necessary response action, after which it must warrant that such work has been completed. DOD uses the same method to propose funding for cleanup at active and BRAC sites and FUDS; and cleanup funding is based on DERP goals and is generally proportional to the number of sites in each of these categories. 
Specifically, officials in the Military Departments, Defense Agencies, and FUDS program who are responsible for environmental restoration at the sites under their jurisdiction formulate cleanup budget proposals based on instructions in DOD’s financial management regulation and DERP environmental restoration performance goals. DOD’s DERP goals include reducing risk to human health and the environment, preparing BRAC properties to be environmentally suitable for transfer, having final remedies in place and completing response actions, and fulfilling other established milestones to demonstrate progress toward meeting program performance goals. DERP goals also include target dates representing when the current inventory of active and BRAC sites and FUDS is expected to complete the preliminary assessment and site inspection phases, or achieve the remedy in place or response complete (RIP/RC) milestone. In addition, Congress has required the Secretary of Defense to establish specific performance goals for MMRP sites. Table 1 provides a summary of these goals for the IRP and MMRP. As the table indicates, BRAC sites have no established goals for preliminary assessments or site inspections. For sites included under the first four BRAC rounds, the goal is to reach the RIP/RC milestone at IRP sites by 2015 and at MMRP sites by 2009. For sites included under the 2005 BRAC round, the goal is to reach the RIP/RC milestone at IRP sites by 2014 and at MMRP sites by 2017. DOD’s military components plan cleanup actions that are required to meet these goals at the installation or site level. DOD requires the components to assess their inventory of BRAC and other sites by relative risk to help make informed decisions about which sites to clean up first.
Using these relative risk categories, as well as other factors such as stakeholder interest and mission needs, the components set more specific cleanup targets each fiscal year to demonstrate progress and prepare a budget to achieve those goals and targets. The proposed budgets and obligations among site categories are also influenced by the need to fund long-term management activities. While DOD uses the number of sites achieving RIP/RC status as a primary performance metric, sites that have reached this goal may still require long-term management and, therefore, additional funding for a number of years. Table 2 shows the completion status for active and BRAC sites and FUDS, as of the end of fiscal year 2008. Table 3 shows the completion status of BRAC sites and those that require long-term management under the IRP, MMRP, and the Building Demolition/Debris Removal Program by military component, for fiscal years 2004 through 2008. DOD data show that, in applying the broad restoration goals, performance goals, and targets, cleanup funding is generally proportional to the number of sites in the active, BRAC, and FUDS site categories. Table 4 shows the total DERP inventory of sites, obligations, and proportions at the end of fiscal year 2008. As the table indicates, the total number of BRAC sites requiring cleanup is about 17 percent of the total number of defense sites, while the $440.2 million obligated to address BRAC sites in fiscal year 2008 is equivalent to about 25 percent of the total funds obligated for cleaning up all defense waste sites. Since DERP was established, approximately $18.4 billion has been obligated for environmental cleanup at individual sites on active military bases, $7.7 billion for cleanup at sites located on installations designated for closure under BRAC, and about $3.7 billion to clean up FUDS sites.
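The proportionality point above can be illustrated with a short sketch using the figures quoted in this section. The implied fiscal year 2008 total obligation is an inference from the reported 25 percent share, not a number stated in the report.

```python
# Check how the BRAC share of cleanup funding compares with the BRAC share
# of sites, using the FY2008 and cumulative figures from the testimony.

BRAC_OBLIGATED_FY2008 = 440.2e6   # dollars obligated for BRAC sites in FY2008
BRAC_FUNDING_SHARE = 0.25         # reported share of total FY2008 obligations
BRAC_SITE_SHARE = 0.17            # BRAC sites as a share of all defense sites

# Total FY2008 obligations implied by the 25 percent share (~$1.76 billion):
implied_total_fy2008 = BRAC_OBLIGATED_FY2008 / BRAC_FUNDING_SHARE

# Cumulative obligations since DERP was established (dollars):
cumulative = {"active": 18.4e9, "BRAC": 7.7e9, "FUDS": 3.7e9}
cumulative_total = sum(cumulative.values())                    # ~$29.8 billion
brac_cumulative_share = cumulative["BRAC"] / cumulative_total  # ~26 percent
```

On a cumulative basis the BRAC share of obligations (~26 percent) is close to its FY2008 share (25 percent), both somewhat above the 17 percent site share, consistent with the report's "generally proportional" characterization.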
During fiscal years 2004 through 2008, about $4.8 billion was spent on cleaning up sites on active bases, $1.8 billion for BRAC sites, and $1.1 billion for FUDS sites. Table 5 provides DOD’s funding obligations for cleanup at BRAC sites by military component and program category for fiscal years 2004 through 2008. Table 6 shows DOD’s estimated cost to complete environmental cleanup for sites located at active installations, BRAC installations, and FUDS under the IRP, MMRP, and the Building Demolition and Debris Removal Program for fiscal years 2004 through 2008. Finally, table 7 shows the total inventory of BRAC sites and the number ranked as high risk in the IRP and MMRP, by military component, for fiscal years 2004 through 2008. Our past work has also identified a number of challenges to DOD’s efforts in undertaking environmental cleanup activities at defense sites, including BRAC sites. For example, we have reported the following: DOD’s preliminary cost estimates for environmental cleanup at specific sites may not reflect the full cost of cleanup. That is, costs are generally expected to increase as more information becomes known about the extent of the cleanup needed at a site to make it safe enough to be reused by others. We reported in 2007 that our experience with prior BRAC rounds had shown that cost estimates tend to increase significantly once more detailed studies and investigations are completed. Environmental cleanup issues are unique to each site. However, we have reported that three key factors can lead to delays in the cleanup and transfer of sites. 
These factors are (1) technological constraints that limit DOD’s ability to accurately identify, detect, and clean up unexploded ordnance from a particular site, (2) prolonged negotiations between environmental regulators and DOD about the extent to which DOD’s actions are in compliance with environmental regulations and laws, and (3) the discovery of previously undetected environmental contamination that can result in the need for further cleanup, cost increases, and delays in property transfer. In conclusion, Mr. Chairman, while the data indicate that DOD has made progress in cleaning up its contaminated sites, they also show that a significant amount of work remains to be done. Given the large number of sites that DOD must clean up, we recognize that it faces a significant challenge. Addressing this challenge, however, is critical because environmental cleanup has historically been a key impediment to the expeditious transfer of unneeded property to other federal and nonfederal parties who can put the property to new uses. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Anu Mittal at (202) 512-3841 or [email protected] or John B. Stephenson at (202) 512-3841 or [email protected]. Contributors to this testimony include Elizabeth Beardsley, Antoinette Capaccio, Vincent Price, and John Smith. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Under the Defense Environmental Restoration Program (DERP), the Department of Defense (DOD) is responsible for cleaning up about 5,400 sites on military bases that have been closed under the Base Realignment and Closure (BRAC) process, as well as 21,500 sites on active bases and over 4,700 formerly used defense sites (FUDS), properties that DOD owned or controlled and transferred to other parties prior to October 1986. The cleanup of contaminants, such as hazardous chemicals or unexploded ordnance, at BRAC bases has been an impediment to the timely transfer of these properties to parties who can put them to new uses. The goals of DERP include (1) reducing risk to human health and the environment, (2) preparing BRAC properties to be environmentally suitable for transfer, (3) having final remedies in place and completing response actions, and (4) fulfilling other established milestones to demonstrate progress toward meeting program performance goals. This testimony is based on prior work and discusses information on (1) how DOD allocates cleanup funding at all sites with defense waste and (2) BRAC cleanup status. It also summarizes other key issues that GAO has identified in the past that can impact DOD's environmental cleanup efforts. DOD uses the same method to propose funding for cleanup at FUDS, active sites, and BRAC sites; cleanup funding is based on DERP goals and is generally proportional to the number of sites in each of these categories. Officials in the Military Departments, Defense Agencies, and FUDS program, who are responsible for executing the environmental restoration activities at their respective sites, formulate cleanup budget proposals using the instructions in DOD's financial management regulation and DERP environmental restoration performance goals. DERP's goals include target dates for reaching the remedy-in-place or response complete (RIP/RC) milestone.
For example, for sites included under the first four BRAC rounds, the goal is to reach the RIP/RC milestone at sites with hazardous substances released before October 1986 by 2015 and for sites in the 2005 BRAC round by 2014. DOD's military components plan cleanup actions that are required to meet DERP goals at the installation or site level. DOD requires the components to assess their inventory of BRAC and other sites by relative risk to help make informed decisions about which sites to clean up first. Using these relative risk categories, as well as other factors, the components set more specific restoration targets each fiscal year to demonstrate progress and prepare a budget to achieve those goals and targets. DOD data show that, in applying the goals and targets, cleanup funding has generally been proportional to the number of sites in the FUDS, active, and BRAC site categories. For example, the total number of BRAC sites requiring cleanup is about 17 percent of the total number of defense sites requiring cleanup, while the $440.2 million obligated to address BRAC sites in fiscal year 2008 is equivalent to about 25 percent of the total funds obligated for this purpose for all defense waste sites. GAO's past work has also shown that DOD's preliminary cost estimates for cleanup generally tend to rise significantly as more information becomes known about the level of contamination at a specific site. In addition, three factors can lead to delays in cleanup. They are (1) technological constraints that limit DOD's ability to detect and clean up certain kinds of hazards, (2) prolonged negotiations with environmental regulators on the extent to which DOD's actions are in compliance with regulations and laws, and (3) the discovery of previously unknown hazards that can require additional cleanup, increase costs, and delay transfer of the property.
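The proportionality comparison above can be checked with simple arithmetic from the site counts cited earlier in this statement (about 5,400 BRAC sites, 21,500 active-base sites, and over 4,700 FUDS). This is an illustrative rough check only; DOD's exact inventory totals may differ slightly:

```python
# Rough check of the proportionality claim, using the approximate site
# counts cited in this statement (exact inventory totals may differ).
brac_sites = 5_400
active_sites = 21_500
fuds_sites = 4_700

total_sites = brac_sites + active_sites + fuds_sites
brac_share_of_sites = brac_sites / total_sites   # about 0.17, i.e., roughly 17 percent

# FY 2008 obligations for BRAC cleanup were $440.2 million, stated to be
# about 25 percent of all cleanup obligations, implying a rough FY 2008 total:
brac_obligations_fy08 = 440.2                    # millions of dollars
implied_total_fy08 = brac_obligations_fy08 / 0.25

print(round(brac_share_of_sites * 100, 1))       # 17.1
print(round(implied_total_fy08, 1))              # 1760.8 (about $1.76 billion)
```

So BRAC's 17 percent share of sites drew a somewhat larger share (25 percent) of cleanup obligations, consistent with funding being "generally," not strictly, proportional to site counts.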
In April 1991, and in concert with United Nations Security Council Resolution 688, military units from a coalition of the United States and 12 other countries began providing direct emergency care and assistance to Kurds and other ethnic groups in northern Iraq following a revolt against the Iraqi government. This emergency relief effort was named Operation Provide Comfort. Coalition forces secured an area of northern Iraq that excluded Iraqi aircraft above the 36th parallel—the tactical area of responsibility (TAOR), or no-fly zone—and prepared transit camps within Iraq for the return of the people who had fled from the advancing Iraqi army. To provide a secure environment for the returnees, the coalition established a security zone within the TAOR into which Iraqi forces could not enter. Coalition air forces from France, Turkey, the United Kingdom, and the United States were assembled to conduct frequent air operations in the TAOR. A Military Coordination Center was established in Zakhu, Iraq, located inside the security zone, to provide a direct communications link with the Iraqi military, humanitarian relief agencies, and Kurdish leaders. Figure 1.1 illustrates the location of pertinent points in and around the TAOR. The U.S. Commander in Chief, Europe, delegated operational control of assigned U.S. Army and Air Force units to the Combined Task Force Commander located at Incirlik Air Base, Turkey. The Combined Task Force Commander also had tactical control of participating Turkish, French, and British forces; but operational control of those forces was retained by their parent commands. On July 20, 1991, the Combined Task Force Commander issued an operations plan governing the conduct of Operation Provide Comfort. The plan delineated the command relationships and organizational responsibilities within Combined Task Force Operation Provide Comfort. The Combined Task Force was headed by U.S.
and Turkish co-commanders and included a Combined Task Force staff; a Combined Forces Air Component (CFAC); and the Army component, including the Military Coordination Center. CFAC coordinated air operations for Operation Provide Comfort. It had operational control of air assets—such as Airborne Warning and Control System aircraft (AWACS) and F-15 and F-16 fighters—and tactical control of Army helicopters. The Military Coordination Center at Zakhu was supported by a Black Hawk helicopter detachment at Diyarbakir, Turkey. (See fig. 1.1.) Air operations were conducted by planes and personnel assigned to Operation Provide Comfort on a temporary duty basis. Fighter aircraft performed the bulk of the Operation Provide Comfort flying mission. A typical “mission package” contained as many as 30 to 40 fighter aircraft and a variety of aircraft with specific mission capabilities. The fighters flew two- and four-ship formations and provided the following capabilities: visual and sensor reconnaissance of military targets, defensive counter air operations, suppression of enemy air defenses, and on-call precision-guided munitions delivery. At the beginning of each mission, no other aircraft was supposed to enter the TAOR until fighters with airborne intercept radars had searched, or “sanitized,” the area. During the daily operations, the AWACS was responsible for (1) controlling aircraft enroute to and from the TAOR; (2) coordinating air refueling; (3) providing airborne threat warning and control for Operation Provide Comfort aircraft operating in the TAOR; and (4) providing surveillance, detection, and identification of all unknown aircraft. The AWACS took off about 2 hours before the rest of the fixed-wing package and eventually entered an orbit in Turkish air space slightly north of the TAOR. (See fig. 1.1.) The AWACS mission crew was headed by a Mission Crew Commander who had overall responsibility for the AWACS mission. 
The Mission Crew Commander directly supervised an Air Surveillance Officer; Senior Director; and various communications, radar, and data processing technicians. The Air Surveillance Officer supervised air surveillance technicians who were responsible for identifying and monitoring non-Operation Provide Comfort aircraft. The Senior Director supervised and directed the activity of the controllers. The Enroute Controller was responsible for Operation Provide Comfort aircraft going to and from the TAOR. The Tanker Controller was responsible for coordinating the refueling of Operation Provide Comfort aircraft. The TAOR Controller was responsible for Operation Provide Comfort aircraft in the TAOR. In addition, a Turkish controller was present on each AWACS mission flight. Military Coordination Center Black Hawk helicopters stationed at Diyarbakir provided air transportation for the Military Coordination Center liaison team and conducted resupply missions at Zakhu, as required. The Black Hawks also used Zakhu as a staging point for flying missions farther into the TAOR to visit Kurdish villages, monitor conditions in the security zone, and conduct search and rescue missions. On April 14, 1994, two U.S. Army Black Hawk helicopters and their crews assigned to the Military Coordination Center were transporting U.S., United Kingdom, French, and Turkish military officers; Kurdish representatives; and a U.S. political advisor in the TAOR. The Black Hawks had departed Zakhu enroute to Irbil, Iraq. (See fig. 1.1.) At the same time, two F-15s were sanitizing the area that the Black Hawks were in; and the AWACS was over Turkey providing airborne threat warning and control. The AWACS was aware that the Army Black Hawk helicopters had departed Zakhu and were proceeding east into the TAOR. However, the F-15 pilots were unaware that Black Hawk helicopters were already in the area and were not advised of the presence of friendly aircraft.
The fighters twice informed the AWACS that they had unknown radar contacts in the TAOR, and the AWACS had access to electronic information regarding the presence of friendly aircraft in the vicinity of the F-15s’ reported radar contacts. Throughout the incident, the helicopters were unable to hear the radio transmissions between the F-15 pilots and the AWACS because they were on a different radio frequency. According to the Aircraft Accident Investigation Board President’s opinion, when the F-15 pilots were unable to get positive/consistent electronic responses, they performed a visual intercept with each making a single identification pass over the Black Hawks to identify the “unknown” aircraft. However, the Board President concluded that the identification passes were carried out at speeds, altitudes, and distances at which it was unlikely that the pilots would have been able to detect the Black Hawks’ markings. The pilots said that they did not recognize the differences between the U.S. Black Hawk helicopters with wing-mounted fuel tanks and Hind helicopters with wing-mounted weapons. The Board President determined that the pilot in the lead F-15 aircraft had misidentified the U.S. Black Hawks as Iraqi Hind helicopters and that the wingman, when asked by the lead pilot, had failed to state that he had been unable to make a positive identification. The flight lead fired a single missile and shot down the trailing Black Hawk helicopter. At the lead pilot’s direction, the F-15 wingman fired a single missile and shot down the lead helicopter. All 26 individuals aboard the two helicopters were killed in the fratricide. When the Combined Task Force Commander learned that the Black Hawks had been shot down on April 14, 1994, he appointed the former CFAC Commander to conduct a Safety Board Investigation. The appointee assembled a staff and began to collect relevant information.
Later on April 14, the Secretary of Defense ordered an Aircraft Accident Investigation, which provides more disclosure to the public than does a safety investigation. As a result, the safety investigation was discontinued; and an Aircraft Accident Investigation Board was convened under Air Force Regulation 110-14, since replaced by Air Force Instruction 51-503. The investigation’s main objectives were (1) to gather and preserve evidence for further investigations and inquiries by conducting a thorough investigation and preparing an accident report and (2) to determine if possible, through the Board President’s opinion, the accident’s main causes. The House Committee on National Security held a hearing in August 1995 related to the April 14, 1994, incident. Subsequently, the Chairman and Ranking Minority Member of the Committee’s Subcommittee on Military Personnel and Representative Mac Collins asked us not to reinvestigate the shootdown but to determine whether (1) the Board investigation of the Black Hawk shootdown had met its objectives and goals, (2) subsequent Uniform Code of Military Justice (UCMJ) investigations had followed established guidelines, and (3) any Department of Defense and/or Air Force officials had improperly or unlawfully influenced these investigations. The Subcommittee also requested that we consider concerns raised by victims’ families and others. We did not evaluate the appropriateness of resultant disciplinary or corrective actions. We interviewed family members of the U.S. victims and others with concerns about how the military had handled the incident. In general, they had questions about the process and results of the Aircraft Accident Investigation Board and the UCMJ investigations. We examined thousands of documents, including over 2,000 classified documents; interviewed over 160 individuals; and visited localities in the United States, Europe, and the Middle East.
To assess compliance with regulations, we reviewed the Aircraft Accident Investigation Board’s report, its exhibits, and applicable regulations. We interviewed Board members, both legal and technical advisors to the Board, accident recovery team members, and many of those interviewed by the Board concerning their role in the incident. In particular, we examined how the Board investigated the incident, conducted its analyses, and produced its report and the Board President’s opinion. We also reviewed information developed by the Inquiry and Investigating Officers under UCMJ and the court-martial trial transcripts and exhibits. To address questions raised by victims’ family members and others regarding the actions of the two F-15 pilots, the AWACS crew, and command officials, we reviewed the then existing rules of engagement, operations plan, pertinent orders (Airspace Control Order, Air Tasking Order, and Aircrew Read Files), and guidance. We interviewed F-15 pilots and Air Force officials to determine their understanding of the existing orders and guidance for fighters in Operation Provide Comfort. We questioned these individuals regarding the two F-15 pilots’ actions during, and statements concerning, the April 14, 1994, incident. We also interviewed the F-15 pilots involved in the shootdown. Further, we interviewed senior board members of the Flying Evaluation Boards that were convened. We reviewed the two F-15 pilots’ flight and military records and interviewed former instructors and fellow pilots to gain additional insight into their qualifications and abilities. Regarding the Identification Friend or Foe (IFF) system and its operation on April 14, 1994, we reviewed the (1) Airspace Control Order and the Air Tasking Order authority by which the IFF system was to be employed; (2) statements of the two F-15 pilots regarding their operation and responses received from the systems on April 14, 1994; and (3) subsequent Air Force task force studies. 
We also interviewed other pilots, IFF technicians, and other technical experts to better understand the limitations and performance of the incident F-15 IFF systems. With regard to AWACS operations, we reviewed the applicable procedures for AWACS operations on the date of the incident. We interviewed crew members aboard the AWACS during the shootdown to determine their knowledge of the events and their understanding of the roles and responsibilities of the crew members. We also interviewed other AWACS crew members who had served in Operation Provide Comfort to determine the general understanding among AWACS crew members of their roles and responsibilities; the commanders and other command personnel of the 552d Air Control Wing, which operated the AWACS in Operation Provide Comfort; and Air Combat Command personnel responsible for identifying and instituting changes in AWACS operations following the investigations. To gain an understanding of command and control issues at the Combined Task Force, we interviewed personnel who were stationed at Operation Provide Comfort both before and after the incident. With respect to Black Hawk operations, we examined the procedures used by Military Coordination Center personnel in scheduling helicopter activities and the Center’s integration with the other Operation Provide Comfort mission components. In addition, we interviewed individuals responsible for developing the procedures for Black Hawk flights; those who prepared the helicopters for their mission on April 14, 1994; and responsible Combined Task Force officials. We also reviewed documents concerning the Combined Task Force and the Black Hawk helicopters at the Center for Army Lessons Learned at Fort Leavenworth, Kansas. Further, we interviewed Combined Task Force and European Command officials and reviewed directives and files at Incirlik, Turkey, and Stuttgart, Germany. 
Regarding improper or unlawful command influence during the Board’s investigation process, we interviewed the Board President and Deputy, members, and advisors. We also contacted the Commander, U.S. Air Forces in Europe, who had convened the Aircraft Accident Board Investigation. In addition, we reviewed records that cautioned against unlawful influence. Regarding improper or unlawful command influence during the UCMJ investigations process, we reviewed the record of decisions made by the Inquiry Officers and Investigating Officers in the UCMJ investigations to determine whether they were in compliance with provisions in the UCMJ and the Manual for Courts-Martial. These records of decisions included the suspected violations, the facts considered, and the analyses used to arrive at the conclusions and recommendations reached. The Department of Defense would not allow us to interview the Convening Authorities or the Inquiry and Investigating Officers. The Department of Defense provided written comments on a draft of this report. Those comments concurred with our primary conclusions and agreed that the few differences between our report and the Board report would not have affected the Board President’s conclusions. The following chapters discuss the Aircraft Accident Investigation Board report and subsequent findings, subsequent investigations under the UCMJ, the results of Flying Evaluation Boards, and corrective actions taken. The congressional requesters asked us, among other points, to (1) determine if the Aircraft Accident Investigation Board’s investigation had met its objectives, (2) determine whether improper or unlawful command influence had occurred during the investigation, and (3) consider concerns raised by victims’ family members and others.
We found that the Board, in a limited time frame, conducted an extensive investigation that fulfilled the requirements of Air Force Regulation 110-14 to obtain and preserve evidence and, with a few exceptions, to report the factual circumstances relating to the accident. Also consistent with the regulation, the Board President stated his opinion of the accident’s causes. In addition, our interviews of the Board President and other Board members, as well as technical and legal advisors, disclosed no evidence of improper or unlawful command influence during the Board process. During our review of the Board’s investigation/report and subsequent Department of Defense reviews, plus our interviews of Operation Provide Comfort officials and participants, we noted that the Board report and/or opinion (1) did not discuss the incident F-15 pilots’ responsibility, under the Airspace Control Order, to report to the Airborne Command Element aboard the AWACS about the unidentified helicopters; (2) cited a CFAC Commander statement that inaccurately portrayed the Airborne Command Element as not having authority to stop the incident; and (3) erroneously concluded that the Black Hawks’ use of an incorrect electronic code prevented the F-15 pilots from receiving electronic responses from the helicopters. Last, victims’ family members and others at the August 1995 congressional hearing raised concerns that included possible discipline problems in the F-15 community in Operation Provide Comfort at the time of the shootdown and the incident F-15 pilots’ perceived urgency to engage during the shootdown. While Air Force Regulation 110-14 did not require the Board to examine these issues, it did not preclude the examination; and we determined that the issues were pertinent to our review. 
Indeed, discipline problems did exist in the F-15 community in Operation Provide Comfort, and some Operation Provide Comfort officials questioned the incident F-15 pilots’ haste to engage the unknown helicopters. At the direction of the Secretary of Defense on the day of the shootdown, the U.S. Commander in Chief, Europe, ordered the Commander, U.S. Air Forces in Europe, to conduct an Aircraft Accident Investigation. The Aircraft Accident Investigation Board was properly convened under Air Force Regulation 110-14. The Board’s investigation met its goal to obtain and preserve documentary, testimonial, and physical evidence for possible claims, litigation, and disciplinary and administrative needs. On May 27, 1994, also in accordance with Air Force Regulation 110-14, the Aircraft Accident Investigation Board issued a 60-page summary report including the Board President’s opinion that, with three exceptions, provided a summary of the most important facts and circumstances of the incident. Those deficiencies—involving the incident F-15 pilots, the incident Airborne Command Element, and the Black Hawks’ use of an incorrect electronic identification code—are discussed later in this chapter. The Board President’s opinion, which was also required by the regulation and was included in the summary, identified the accident’s causes as a chain of events that began with the lack of a clear understanding among the Operation Provide Comfort organizations about their respective responsibilities and culminated with the F-15 lead pilot’s misidentification of the Black Hawks as Iraqi Hinds and the F-15 wingman’s failure to notify the lead pilot that he had not positively identified the helicopters. (The Board President’s opinion appears as app. I.) The Board report was transmitted through the Chairman of the Joint Chiefs of Staff to the Secretary of Defense. On April 15, 1994, the Commander, U.S. 
Air Forces in Europe, appointed the Commander, 3d Air Force (a major general), as President of the Aircraft Accident Investigation Board. The other Board members included a Deputy Board President (an Army colonel), a Chief Investigator who was an F-15 pilot, an AWACS expert, a Black Hawk pilot, a Black Hawk maintenance officer, a flight surgeon, a Board recorder, and a public affairs officer. The Board also included 13 technical advisors and 4 legal advisors (3 Air Force and 1 Army). The Board conducted its investigation from April 15 to May 27, 1994, when it issued its report, including 25 volumes of evidence containing testimony from 137 witnesses. The Board reviewed directives on command and control; rules of engagement, pertinent orders (Airspace Control Order, Air Tasking Order, and Aircrew Read Files), aircrew preparation, and scheduling; aircraft maintenance documentation on the involved aircraft; aircrew qualification and training records and materials; physical and medical examinations; data on the sequence of events for each of the aircraft, such as flight plans, communications tapes, and briefing and preflight preparations; search and rescue activities; and integration of Army and Air Force operations. The Board also reviewed classified documents, video tapes, and magnetic tapes relating to the accident. To assess the possible malfunction of the Air-to-Air Interrogation (AAI) and Identification Friend or Foe (IFF) system components, the Board commissioned testing of the incident fighters’ AAI systems and the helicopters’ transponders. It also commissioned a filmed re-creation of the incident with an F-15 fighter approaching a Black Hawk helicopter at various elevations, distances, and approaches.
Section 104 (a)(2) of the Rules for Court-Martial (RCM), Manual for Courts-Martial, defines unlawful command influence as an attempt to coerce or, by any unauthorized means, influence the action of a court-martial or any other military tribunal or any member thereof, in reaching the findings or sentence in any case or the action of any convening, approving, or reviewing authority with respect to such authority’s judicial acts. We found no evidence of improper or unlawful command influence exerted during the Aircraft Accident Investigation Board process. The Board members and technical and legal advisors we interviewed stated that they had had free rein to examine all facets during the investigation. According to the Commander, U.S. Air Forces in Europe, when the U.S. Commander in Chief, Europe, tasked him to convene the Aircraft Accident Investigation Board, he was told to uncover the facts and get all the details. Also according to the Commander, when he assigned the Board President, he told the president to leave no rock unturned and bring up every fact during the investigation. According to the Board President, his directions to the Board members were to let the “chips fall” where they may and to hold back nothing. He stated that there was “absolutely no command influence” and that the Board was extremely careful to avoid even the appearance of any influence. Although the Aircraft Accident Investigation Board reported that incident participants, including the Black Hawk pilots, lacked knowledge of command and control guidance, such as portions of the Airspace Control Order, it did not discuss the F-15 pilots’ responsibility under the Airspace Control Order to report to the Airborne Command Element when encountering an unknown aircraft during Operation Provide Comfort missions. 
Further, although it had evidence to the contrary, the Board, through its report, cited a CFAC Commander’s inaccurate testimony that the Airborne Command Element had no decision-making authority regarding aircraft encounters in the TAOR. According to the Board’s Senior Legal Advisor, the Board did not report the F-15 pilots’ nonadherence to that aspect of the order because the Airborne Command Element was aware of the intercept; thus the Board did not consider the pilots’ nonadherence to be a significant cause of the shootdown. The Senior Legal Advisor stated that, in the Board’s opinion, Operation Provide Comfort management had allowed operations to degrade to such a point that the Board report, partly for this reason, focused on command and control problems. Combined Task Force flying operations were conducted according to an Airspace Control Order published by the CFAC Director of Operations (CFAC/DO). The two-volume Airspace Control Order, which provided the rules and procedures governing all Operation Provide Comfort aircrews, was required reading for those aircrews. Volume II augmented volume I by providing detailed and specific guidance and procedures for conducting Operation Provide Comfort air operations. The CFAC/DO directed the Operation Provide Comfort flight operations through a ground-based Mission Director at Incirlik Air Base and an Airborne Command Element aboard the AWACS. The Airborne Command Element, according to the Board report, was to act as the “’eyes and ears’ of the CFAC/DO” aboard the AWACS. In support of this, as excerpted from the Airspace Control Order, “[the Airborne Command Element] will contact [the Mission Director] who will then pass the information to the CFAC/DO” concerning any unusual circumstances, such as an unidentified aircraft in the TAOR.
The Board report and Board President’s opinion did not address that the pilots were required to report to the Airborne Command Element in an “unusual circumstance” as specified by the following excerpt from volume II of the Airspace Control Order. (For the sake of clarity, we have used titles in place of code names.) “Aircrews experiencing any unusual circumstances/occurrences while flying [Operation Provide Comfort] missions will report the incident to [the Airborne Command Element] or [the AWACS crew] if [the Airborne Command Element] is unavailable.” The list of six such unusual circumstances/occurrences contained in the Airspace Control Order included “[a]ny intercept run on an ’Unidentified aircraft.’” According to Operation Provide Comfort officials, the Airspace Control Order was specifically designed to slow down a potential engagement to allow CFAC time to check things out. In response to questions we raised, the Board’s Senior Legal Advisor said that the Board had reviewed that provision and evidence showing that the F-15 pilots had read both volumes of the Airspace Control Order containing the requirement to contact the Airborne Command Element for guidance. He added that the contact-requirement issue was not significant to the Board because the Airborne Command Element was aware of the discussion between the F-15 pilots and the TAOR controller about the intercept. He also said that the Board concluded that the F-15 pilots had reason to believe that the Airborne Command Element was monitoring the conversation and that the Airborne Command Element was, in fact, aware of the intercept and did not intervene. Further, the Operation Provide Comfort management, in the Board’s opinion, had allowed operations to degrade to such a degree that it “may not have been common practice” at the time for F-15 pilots to contact the Airborne Command Element.
He said that partly because of this degradation, the Board’s focus turned to the command and control failures that had created an environment that allowed the incident to occur. However, this duty to contact the Airborne Command Element for directions concerning unusual circumstances had been reemphasized by an oral directive issued because of an incident about a week before the shootdown. In that incident, F-15 pilots had initially ignored an Airborne Command Element’s directions to “knock off,” or stop, an engagement with a hostile fighter aircraft they thought was in the no-fly zone. The Airborne Command Element overheard the pilots preparing to engage the aircraft and contacted them, telling them to stop the engagement because he had determined that the hostile aircraft was outside the no-fly zone and that he was also leery of a “bait and trap” situation. After several unsuccessful attempts to call off the engagement during which the F-15 pilots did not respond to him, he ordered the pilots to return either to their assigned patrol point or to base. The F-15s returned to their assigned patrol point. The CFAC/DO issued the resultant oral directive to the F-15 detachment representative at the next Detachment Commander meeting following the incident. At the meeting, the CFAC/DO listened to the complaints of the F-15 representative and then told him that the word of the Airborne Command Element was final. He also told the F-15 representative that the Airspace Control Order was very clear and must be followed. While the Board did an extensive investigation, it was unaware of this oral directive. The Aircraft Accident Investigation Board report cited as fact the former CFAC Commander’s testimony that the Airborne Command Element “had no decision-making authority.” The Board justified citing the statement as fact in its report because it was made by the Commander, from whom the Board believed all authority for CFAC operations stemmed. 
The Board did not include in its report testimony from the CFAC Commander at the time of the shootdown, the CFAC/DO, a Mission Director, and others with more knowledge of actual Operation Provide Comfort air operations that contradicted the former CFAC Commander’s reported testimony concerning Airborne Command Element authority. The CFAC/DO told the Board and us that he had delegated time-sensitive decision-making authority to his Mission Directors and Airborne Command Elements. He testified to the Board that he had given the authority to the Airborne Command Element to terminate the mission package “and bring the entire operation back.” He further told us that the week before the shootdown he had supported the Airborne Command Element’s decision to knock off the F-15 pilots’ intercept and had commended the Airborne Command Element on his actions. The Combined Task Force Commander also supported the Airborne Command Element’s decision. The Board President’s opinion erroneously concluded that the Black Hawks’ use of a wrong code prevented the F-15s from receiving a response in one of the electronic identification modes. We agree with an Air Force analysis, using information that was also available to the Board, that determined that the F-15 pilots should have received a response despite the wrong code. The analysis based its finding on the manner in which the pilots testified that they had interrogated the helicopters. During their sanitization sweep, the F-15 pilots, using radar, located unknown, slow-moving contacts in the TAOR that were subsequently identified as helicopters. In an attempt to determine whether the helicopters were friendly, the F-15 pilots interrogated the aircraft with their AAI/IFF systems. An F-15’s AAI/IFF system can interrogate using four identification signals, or modes: I, II, III, and IV. In the TAOR, the transponders on Black Hawk helicopters transmit Modes I, II, and IV.
However, two Mode I codes were designated for use in Operation Provide Comfort at the time of the incident: one inside, the other outside the TAOR. As stated in the Board report, the Black Hawk pilots were using the Mode I code for outside the TAOR, and the F-15 pilots’ systems were set to the Mode I code for inside the TAOR. The Board report and its President’s opinion noted that the Black Hawks’ use of the wrong Mode I code had resulted in the F-15 pilots’ failure to receive a Mode I response. The Aircraft Accident Investigation Board took testimony from the pilots who had flown the same F-15s on flights immediately before and after the shootdown, in addition to testimony from the incident lead pilot and wingman, to determine whether they had experienced any problems with the IFF systems. All said that they had had no problems and had successfully interrogated other aircraft using Modes I and IV. The Board also had operational tests performed on the F-15s’ AAI/IFF components a few days after the incident. The tests revealed no problem that would have prevented the lead aircraft from interrogating and displaying Modes I, II, and IV. The wingman’s AAI system was found to be capable of interrogating Modes I, II, and IV and of displaying Mode I and II signals. However, it could not display Mode IV signals generated by the test set. After the operational testing, the Board removed the AAI components from the F-15s and sent them to two Air Force laboratories for teardown analysis. The laboratory tests were performed without recalibrating the components, and the reports showed no problems that would have affected the performance of the equipment. Because of weapons impact, the resulting crash, and/or the subsequent fire, the transponder on one helicopter was completely destroyed. The transponder in the other helicopter was partly destroyed and was sent to a Department of Defense laboratory. 
The report of the teardown analysis of this transponder concluded that it had been on at the time of the incident but that the testing could not determine conclusively whether the system had been fully operational at the time. The Board President’s opinion concluded that the Black Hawks had been using the wrong Mode I code inside the TAOR after they departed Zakhu for Irbil, Iraq, and that the incorrect code was responsible for the F-15 pilots’ failure to receive a Mode I response when they interrogated the helicopters. However, the Air Force special task force’s subsequent review of the IFF component revealed that, based on the descriptions of the system settings that the pilots testified they had used on several interrogation attempts, the F-15s should have received and displayed any Mode I or II response, regardless of code. Thus, the helicopters’ use of the wrong Mode I code should not have prevented the F-15s from receiving a response. In reaching his conclusion, the Board President relied on the evidence collected by the Board, which included the pilots’ testimony as well as other information about the IFF system settings and how the system should perform. In its report, the Board cited three of four interrogation attempts about which the lead pilot had testified on April 23, 1994. One of the three was performed in a way that should have displayed any Mode I or II response, as later noted by the Air Force special task force. The task force also found that the additional interrogation attempt described on April 23 was identical to the one that should have displayed any Mode I or II response. The additional interrogation, not reported by the Board, took place during the period in which the AWACS was receiving friendly Mode I and II returns from the helicopters at an increasingly frequent rate and when the lead pilot was closer to the helicopters than during his initial interrogation attempt at the same settings. 
The Board President recalled discussions about the F-15 IFF-system settings and said the Board report had included the interrogation attempts about which the Board was certain. He told us that because of the difference between the lead pilot’s incident-day statement and his testimony, it was difficult to determine the number of times that the lead pilot had interrogated the helicopters. Victims’ family members and others raised concerns about the lack of discussion in the Board report concerning the discipline of F-15 pilots in general in Operation Provide Comfort and the F-15 pilots’ perceived urgency to engage during the shootdown. Although Air Force Regulation 110-14, under which the Board’s investigation was conducted, did not require the Board to examine such environment issues, neither did the regulation rule out an examination. However, the two issues were relevant to our review. According to Operation Provide Comfort officials, the pilots’ failure on April 14, 1994, to contact the Airborne Command Element was a product of a lack of F-15 mission discipline, as demonstrated by the incident a week before the shootdown when F-15 pilots initially ignored Airborne Command Element instructions to “knock off” an engagement with an Iraqi aircraft. According to the Combined Task Force Commander, the pilots’ failure was also related to a rivalry-induced urgency to engage “hostile” aircraft. The Mission Director during the shootdown and the Airborne Command Element involved in the knock-off incident told us that they had had problems with mission discipline issues involving F-15 pilots assigned to Operation Provide Comfort during the time period leading up to the shootdown. 
The Airborne Command Element stated that on the evening of the knock-off incident, several F-15 pilots, including the pilots whom he had ordered to cease their proposed engagement, approached him and questioned whether he was a “combat player” and whether Airborne Command Elements were perhaps too conservative. According to CFAC officials, the F-15 pilot community was “very upset” about the intervention of the Airborne Command Element during the knock-off incident and felt he had interfered with the carrying out of the F-15 pilots’ duties. The Airborne Command Element from the knock-off incident also told us that so many flight discipline incidents had occurred that CFAC held a group safety meeting in late February or early March 1994 to discuss the need for more discipline. The flight discipline issues included midair close calls, unsafe incidents when refueling, and unsafe takeoffs. The Combined Task Force Commander said that he had recognized a potential supervisory problem with the F-15 Detachment because no F-15 pilots were on the Combined Task Force staff. He had made several unsuccessful requests to the Commander, 17th Air Force, to have an experienced F-15 pilot—on flying status—assigned to the Combined Task Force staff. According to the Combined Task Force Commander, the 17th Air Force Commander told him that the available number of F-15 slots was limited and one could not be spared for Operation Provide Comfort. We noted, however, that as part of the corrective actions taken following the shootdown, an F-15 pilot was assigned to the Combined Task Force staff. Further, the shootdown occurred, according to the CFAC/DO’s statement to us, because of a lack of training and aircrew discipline in following established guidelines on the part of the two F-15 pilots involved in the incident. 
He stated that "[t]he pilots made a terrible mistake" and that with greater discipline—coupled with the multiple safeguards designed to prevent such an incident—this fratricide might have been avoided. The Combined Task Force Commander and other Operation Provide Comfort officials acknowledged that a rivalry existed between the F-15 and F-16 communities, including those in Operation Provide Comfort detachments. Operation Provide Comfort officials told us that while such rivalry was normally perceived as healthy and leading to positive professional competition, at the time of the shootdown the rivalry had become more pronounced and intense. The Combined Task Force Commander attributed this atmosphere to the F-16 community's having executed the only fighter shootdown in Operation Provide Comfort and all shootdowns in Bosnia. In the opinion of the Combined Task Force Commander, the shootdown pilots' haste was due in part to the planned entry of two F-16s into the TAOR 10 to 15 minutes after the F-15s. He said that if the F-15 pilots had involved the chain of command, the pace would have slowed down, ruining the pilots' chances for a shootdown. Further, CFAC officials stated that the Airspace Control Order was specifically designed to slow down a potential engagement to allow CFAC time to check things out. They said that the presence of the helicopters, which were flying southeast away from the security zone, posed no threat to the mission and there was no need for haste. For example, the Mission Director stated that, given the speed of the helicopters, the fighters had time to return to Turkish airspace, refuel, and still return and engage the helicopters before they could have crossed south of the 36th parallel. According to the F-15 Squadron Operations Officer at the time of the shootdown and the Board's Senior Legal Advisor, the tactical environment did not warrant a rush to judgment. 
The Operations Officer added that the F-15 pilots had acted too hastily and should have asked more questions. The Senior Legal Advisor said that, in his opinion, the pilots had an unnecessarily aggressive attitude toward the intercept and shootdown. The lead incident pilot told us that he was concerned about going low to check out the unknown aircraft. His primary concerns at the time were (1) being fired on from the ground, (2) flying into the ground, and (3) a possible air threat. Because of these concerns, he remained high for as long as possible and dropped down briefly for a visual identification that lasted, according to the lead pilot, “between 3 and 4 seconds.” He told us that he saw no Iraqi flag on the helicopters and that the helicopters were not acting in a hostile manner. He assumed they were Iraqi Hinds because they were in the middle of Iraq, although he acknowledged that they could have been Syrian or Iranian Hinds. The incident wingman told us that his visual identification was not as close to the helicopters as was the lead pilot’s. His visual identification lasted “between 2 and 3 seconds.” He said, in hindsight, “We should have taken another pass; but at the time, I was comfortable with the decision.” The Board report and Board President’s opinion would have presented a more complete record of the incident’s events had they discussed the incident F-15 pilots’ requirement to report to the Airborne Command Element, accurately assessed the Airborne Command Element’s authority, not concluded that the Black Hawks’ use of an incorrect code had prevented Mode I electronic responses from the helicopters, and addressed F-15 pilot discipline issues. This more complete information, in turn, may have raised additional questions about the actions and inaction of the F-15 pilots and the Airborne Command Element and, therefore, could have influenced subsequent disciplinary or corrective actions. 
However, if the information had been included, it would not have affected the Board President's conclusion: that a chain of events, whose final actions were the lead pilot's incorrect identification and the wingman's failure to clarify his lack of identification, caused the fratricide. Further, it is difficult to predict whether the incident's outcome would have differed had the F-15 pilots contacted the Airborne Command Element directly. The congressional requesters also asked us to (1) determine whether military justice investigations, conducted after the Aircraft Accident Investigation Board completed its work, had complied with provisions in the Uniform Code of Military Justice (UCMJ); (2) determine if improper or unlawful command influence had been exerted during the UCMJ process; and (3) answer general questions raised by family members and others regarding actions taken following the investigations. First, we found that the subsequent UCMJ investigations complied with provisions in the UCMJ and the Manual for Courts-Martial. Preliminary inquiries, under the Rules for Courts-Martial (RCM), were conducted into the actions of 14 officers. The Air Force used two separate investigative paths, one for seven AWACS-related officers and the other for the two F-15 pilots and five Operation Provide Comfort officials. The former were investigated by a command separate from the one to which they were assigned. This command developed evidence beyond the material contained in the Board's report. As a result of the preliminary inquiry, charges were preferred against four AWACS crew members and the Airborne Command Element. An Investigating Officer investigated these charges under Article 32, UCMJ; one officer was determined to be blameless; and the Commander, 963d Air Control Squadron, retired as a Lieutenant Colonel although he had been selected for promotion. 
After the Article 32 investigation, one officer—the Senior Director of the AWACS crew—was tried by general court-martial and acquitted, and one officer received nonjudicial punishment in the form of a letter of reprimand. The remaining three officers received administrative letters of reprimand. On a separate path, the actions of the two F-15 pilots and five Operation Provide Comfort officials were reviewed under RCM in a preliminary inquiry conducted by the pilots’ Wing Commander. The Wing Commander relied on the Board report and filed dereliction-of-duty and negligent homicide charges against the F-15 wingman that were the focus of an Article 32 investigative hearing. Subsequently, charges against this pilot were dropped; however, he later received a letter of reprimand. Administrative action was taken against four other officers: the lead pilot received a letter of reprimand, two other officers received letters of admonition, and one received a letter of counseling. No action was taken against the remaining two officers. The Air Force also convened Flying Evaluation Boards for the two F-15 pilots involved in the shootdown. In addition, 16 months after the incident and 6 days after the House Committee on National Security hearing, the Chief of Staff of the Air Force found that a number of performance evaluations of personnel involved in the incident (1) were inconsistent with administrative actions taken by higher-level commanders and (2) failed to reflect that some officers had not met Air Force standards. Accordingly, the Chief of Staff prepared negative letters of evaluation regarding seven officers involved in the shootdown and implemented additional actions against five of them. Second, based on our review of the summary reports of investigation and statements made by cognizant officials, we found no evidence of improper or unlawful command influence in the investigative or judicial process. 
However, we were unable to complete our investigation and determine whether the consideration and disposition of suspected offenses under the UCMJ were the result of improper or unlawful command influence. Department of Defense officials would not allow us to interview the key officials—Convening Authorities, Inquiry Officers, and Investigating Officers—involved in the UCMJ investigations. On July 12, 1994, the Secretary of Defense approved the Aircraft Accident Investigation Board report. The Secretary of the Air Force thereafter forwarded the report to the Commander, Air Combat Command, and the Commander, U.S. Air Forces in Europe, as well as to the Commander, U.S. Army in Europe, for appropriate action under the UCMJ and any administrative actions. Thus, the Air Force UCMJ investigations followed two separate paths—through Air Combat Command (AWACS-related personnel) and U.S. Air Forces in Europe (Combined Task Force Operation Provide Comfort personnel and F-15 pilots). The AWACS mission crew and 963d Squadron Commander involved in the shootdown were assigned to the 552d Air Control Wing, which was under the jurisdiction of the 12th Air Force. However, the Staff Judge Advocate to the 12th Air Force had served as Legal Advisor to the Aircraft Accident Investigation Board. As a result, the Air Force considered him disqualified from conducting an RCM investigation or serving as staff judge advocate to the Convening Authority during the disciplinary review. Therefore, the Commander, Air Combat Command, designated the Commander of the 8th Air Force as the court-martial Convening Authority. The Commander, 8th Air Force, appointed an Inquiry Officer to conduct an RCM 303 inquiry regarding the actions of seven officers under Air Combat Command. The seven officers were the CFAC Mission Director, Airborne Command Element, 963d Squadron Commander, AWACS Mission Crew Commander, AWACS Senior Director, AWACS Enroute Controller, and AWACS TAOR Controller. 
On July 18, 1994, the Convening Authority appointed an Inquiry Officer to conduct the RCM 303 investigation. The Commander, U.S. Air Forces in Europe, designated the Commander of the 17th Air Force as the court-martial Convening Authority. On July 22, 1994, the Convening Authority appointed an Inquiry Officer to conduct a preliminary inquiry under RCM 303 into the roles of the following seven officers in the shootdown: Combined Task Force Commander, CFAC Commander, CFAC/DO, Combined Task Force Director of Plans and Policy, Combined Task Force Intelligence Officer, and the two F-15 pilots. The F-15 pilots were assigned to the 53d Fighter Squadron at Spangdahlem Air Base, Germany. The Inquiry Officer was the Commander of the 52d Fighter Wing at Spangdahlem Air Base, to whom the F-15 pilots' squadron reported. The Commander, U.S. Army in Europe, directed the Judge Advocate, U.S. Army in Europe, to determine whether administrative or disciplinary action was warranted against any Army personnel for their role in the incident. The actions of one person—the Combined Task Force Chief of Staff (an Army colonel)—were considered for possible review. On July 18, 1994, the Convening Authority appointed legal, F-15, and AWACS advisors to assist the Inquiry Officer. The investigation was conducted from July 18 to August 18, 1994. The inquiry team obtained testimony from AWACS personnel, flew in an AWACS, observed simulated Operation Provide Comfort missions, and interviewed senior directors and controllers not on the incident flight. The Inquiry Officer prepared a 77-page report, largely consisting of an analysis of the charges against the officers, with 2 volumes of supporting material. The report also reflected the Inquiry Officer's logic for selecting the appropriate articles of the UCMJ that might be applicable to the actions of the AWACS-related personnel, including manslaughter, negligent homicide, and dereliction of duty. 
The Inquiry Officer said that voluntary or involuntary manslaughter charges would be inappropriate against the AWACS-related officers for their involvement in the shootdown. The Inquiry Officer concluded that negligent homicide charges could be made against some of them for their involvement in this matter; but he recommended against this course of action, because “the occurrence of an independent, unforeseeable, intervening act, namely the incorrect identification of the helicopters by the F-15 pilots . . .” would not support a conviction for negligent homicide. On August 30 and 31, 1994, the Inquiry Officer preferred dereliction-of-duty charges against the following AWACS-related officers: the Airborne Command Element, the Mission Crew Commander, the Senior Director, the Enroute Controller, and the TAOR Controller. No charges were preferred against the 963d Airborne Air Control Squadron Commander or the Mission Director. The Inquiry Officer concluded that no adverse action should be taken against the Mission Director because he had not failed to take any required actions. On September 7, 1994, the Convening Authority appointed an Article 32, UCMJ, Investigating Officer, who was assigned to the U.S. Air Force Trial Judiciary, to examine the charges against the five charged officers, in accordance with RCM 405. The Convening Authority directed the Investigating Officer to inquire into the truth of the matters set forth in the charges, secure information to determine what their disposition should be, and issue a report and advisory recommendations. The Investigating Officer held a joint Article 32 investigative hearing involving all five officers from October 11 to October 26, 1994. Forty-eight witnesses testified at the hearing; and the government and defense attorneys entered 271 exhibits, including 54 classified exhibits, into the hearing record. 
The Investigating Officer issued his report on November 12, 1994, and recommended that the dereliction-of-duty charge against the Senior Director be referred to a general court-martial. He also recommended that the Enroute Controller receive nonjudicial punishment under Article 15, UCMJ, and that the charges against the remaining three officers be dismissed. In his appointment letter, the Commander, 17th Air Force, directed the RCM 303 Inquiry Officer to (1) determine if any of the seven officers (Combined Task Force Commander and staff and two F-15 pilots) had committed acts related to the shootdown that amounted to offenses punishable under the UCMJ, (2) recommend disposition of any offense and whether administrative actions were warranted, and (3) file charges if warranted. He also appointed two legal advisors and a technical advisor to assist the Inquiry Officer. The Inquiry Team reviewed the Aircraft Accident Investigation Board report and supporting documentation. It neither obtained oral testimony nor collected any additional evidence; instead, it relied on witness interviews conducted by the Board. On August 29, 1994, the Inquiry Officer issued a 66-page report on his investigation. The report identified the following as “possible” offenses: dereliction of duty by all seven officers, involuntary manslaughter by the F-15 pilots, and negligent homicide by all the officers except the Intelligence Officer. After concluding that three officers had committed violations under the UCMJ, the Inquiry Officer preferred dereliction-of-duty charges against two Operation Provide Comfort senior officers and dereliction-of-duty and negligent homicide charges against one F-15 pilot, the wingman. On September 8, 1994, the Commander, U.S. Air Forces in Europe, appointed an Article 32 Investigating Officer, who was assigned to the U.S. Air Force Trial Judiciary, European Circuit, to investigate the charges against the F-15 wingman. 
In accordance with RCM 405, Manual for Courts-Martial, the Commander directed the Investigating Officer to inquire into the truth of the matters set forth in the charges by the Inquiry Officer, secure information to determine what disposition should be made of the charges, and issue a report with advisory recommendations. The Investigating Officer held an Article 32 hearing on November 7-9, 1994. The government attorneys called one witness—the F-15 flight leader—and entered 18 exhibits into the hearing record. The exhibits included (1) the transcript of the F-15 wingman’s taped account of the shootdown made in the cockpit approximately 45 minutes after the shootdown, (2) the wingman’s testimony before the Aircraft Accident Investigation Board, and (3) the flight leader’s testimony during the investigation of the aircraft accident and the AWACS Article 32 hearing. The defense attorneys called no witnesses and entered 116 exhibits into the hearing record, including a prepared statement read by the wingman during the hearing and a detailed, 102-page factual and legal presentation of his theory of the case. The Investigating Officer issued his report on November 12, 1994, and recommended dismissal of the charges against the wingman. His analysis focused on whether the lead pilot had called the AWACS announcing the engagement before or after the wingman responded to the lead pilot’s directive to confirm whether the helicopters were Iraqi Hinds. He concluded that if the call was made before the wingman’s response, the lead pilot had relieved the wingman of the duty to independently identify the helicopters. Based on his review of the pilots’ testimony and the wingman’s experience, he concluded that it was more likely that the lead pilot’s engagement announcement had preceded the wingman’s alleged “nonresponsive” confirmation. On September 30, 1994, the Judge Advocate, U.S. Army in Europe, advised the Commander, U.S. 
Army in Europe, that consideration was warranted concerning whether the Combined Task Force Chief of Staff was responsible for the breakdown in staff communication that had been cited in the Board report. After reviewing the relevant Board testimony and other evidence, however, he recommended that no adverse action be taken against the officer because he (1) had focused his attention according to the Combined Task Force Commander's direction, (2) had neither specific direction nor specific reason to inquire into the transmission of information between his Director of Operations for Plans and Policy and the CFAC, (3) had been the most recent arrival and the only senior Army member of a predominantly Air Force staff and was therefore generally unfamiliar with air operations, and (4) had relied on experienced colonels under whom deficiencies had occurred. The Flying Evaluation Boards, convened as a result of the shootdown, made findings concerning the proficiency, professionalism, care, and judgment of the two pilots, and made recommendations concerning their suitability for future aviation responsibilities. Upon review of the Boards' findings and recommendations, the Commander, 17th Air Force, determined that both pilots should be reassigned to noncombat aircraft. He further recommended that the F-15 lead pilot, Captain Eric A. Wickson, should be assigned next as an instructor pilot in basic flight training. The Commander, U.S. Air Forces in Europe, concurred with this determination and also concluded that the F-15 wingman, Lieutenant Colonel Randy W. May, should be reassigned to a nonflying aviator staff position. On the basis of his review of administrative actions taken by higher-level authorities regarding Air Force personnel involved in the shootdown, the Air Force Chief of Staff determined that the personnel records of some involved personnel did not reflect their failure to meet Air Force standards. 
Accordingly, for seven of those involved in the incident, he wrote letters of evaluation that addressed how each of the officers had failed to meet these standards and took additional action against five officers. On January 20 and 25, 1995, the Commander, 17th Air Force, appointed separate Flying Evaluation Boards for Captain Wickson and Lieutenant Colonel May. Each board consisted of a senior board member and two board members, all of whom were pilots; a legal advisor; a recorder; and a reporter. The Commander, 17th Air Force, directed the two senior board members to make special findings on whether the pilots had shown lack of judgment in performing their duties on April 14, 1994, and whether they were unsuited for duty in a combat aircraft role. The Commander, 17th Air Force, also directed the boards to make recommendations on whether the pilots had potential to continue flying. Captain Wickson’s Flying Evaluation Board was held on February 6, 1995; and Lieutenant Colonel May’s, on February 9-10, 1995. The pilots were the only witnesses in their Flying Evaluation Board hearings. The government and defense attorneys submitted eight volumes of evidence in the Wickson hearing and seven volumes of evidence in the May hearing, including the Aircraft Accident Investigation Board summary of facts and executive summary; the Operation Plan for Operation Provide Comfort; the Aircrew Read File; each pilot’s testimonies before the Aircraft Accident Investigation Board and Article 32 hearings; the transcript of Lieutenant Colonel May’s aircraft videotape of the incident; a Kurdish citizen’s videotape of the incident; and each pilot’s medical and training records, ratings, and awards. On April 5, 1995, the Commander, U.S. Air Forces in Europe, concurred with the boards’ recommendations that Lieutenant Colonel May and Captain Wickson remain qualified for aviation service. 
He also directed that Lieutenant Colonel May be reassigned to a staff position not involving flying duties and that Captain Wickson be reassigned to flying duties (1) as an instructor in basic flying training or (2) in other noncombat aircraft. On July 25, 1995, the Secretary of the Air Force requested that the Air Force Chief of Staff review the administrative actions taken in regard to the Air Force personnel involved in the shootdown. On August 9, 1995, the Air Force Chief of Staff advised the Secretary of the Air Force of the actions he had taken. The Chief of Staff said that the military justice process had worked as it was supposed to after the incident and that he was comfortable with the military justice actions taken. He concluded that a proper balance between command involvement and individual rights had been maintained throughout the military justice process. Further, the administrative actions taken by commanders were within an appropriate range of options available to them. However, he said that a number of performance evaluations of involved personnel were inadequate because they were inconsistent with administrative actions taken by higher-level commanders and failed to reflect that the ratees had not met Air Force standards. Accordingly, pursuant to authority granted him by the Secretary of the Air Force, he prepared the following letters of evaluation regarding seven of the Air Force personnel involved in the shootdown and implemented additional actions against five.

Combined Task Force Commander, Brigadier General Jeffrey S. Pilkington. A letter of evaluation addressed his failure to meet Air Force standards and became a permanent part of his record.

CFAC Commander, Brigadier General Curtis H. Emery. A letter of evaluation was placed in his permanent record to reflect his failure to meet Air Force standards.

F-15 Wingman, Lieutenant Colonel Randy W. May. A letter of evaluation was placed in his officer selection record to reflect his failure to meet Air Force standards. He was disqualified from aviation service for 3 years.

F-15 Lead Pilot, Captain Eric A. Wickson. A letter of evaluation was placed in his officer selection record to reflect his failure to meet Air Force standards. He was disqualified from aviation service for 3 years based on his demonstrated lack of judgment associated with flight activities.

AWACS Senior Director, Captain Jim Wang. A letter of evaluation detailing his failures to meet Air Force standards was included in his officer selection record, and he was disqualified from assignment to duties involving control of aircraft in air operations for at least 3 years.

AWACS Enroute Controller, Captain Joseph M. Halcli. A letter of evaluation reflecting his failure to meet Air Force standards was placed in his officer selection record, and he was disqualified from assignment to duties involving control of aircraft in air operations for at least 3 years.

AWACS TAOR Controller, First Lieutenant Ricky L. Wilson. A letter of evaluation reflecting his failure to meet Air Force standards was placed in his officer selection record. It recommended that he not be assigned to duties involving control of aircraft in air operations for at least 3 years.

Our review of the summary reports of investigation during the UCMJ process and statements by officials knowledgeable of that process revealed no evidence of command influence. However, we were unable to confirm that the consideration and disposition of suspected offenses under UCMJ had not been subject to unlawful command influence because we were denied our request to interview applicable UCMJ Convening Authorities, Inquiry Officers, and Investigating Officers. The Investigating Officer in the AWACS Article 32 hearing stated that he had not been subject to command influence during the proceedings. 
The counsel for the Senior Director, Captain Wang, had filed a motion to dismiss the charges against the Senior Director based on an allegation of unlawful command influence by the Secretary of Defense on the Secretary of the Air Force. In response to that motion, six officials provided either a Stipulation of Expected Testimony, a memorandum, or an affidavit stating that they had neither been the subject of improper command influence nor taken action to improperly influence military justice officials. These officials were the Secretary of the Air Force; Air Force Chief of Staff; Commander, Air Combat Command; Deputy Staff Judge Advocate, Headquarters Air Combat Command (Legal Advisor to the RCM 303); the RCM 303 Inquiry Officer; and the General Court-Martial Convening Authority, the Commander, 8th Air Force. The military judge denied the motion, ruling that the defense had failed to meet its burden of establishing at least the appearance of unlawful command influence. Further, to address the question of command influence in the case of the Senior Director, Captain Wang’s military attorney told us that he interviewed the Secretary of the Air Force about whether she or the Secretary of Defense had intervened in the court-martial. The attorney was satisfied that neither of them had exercised command influence during the UCMJ process. However, our request to the Air Force and the Department of Defense to interview military officials involved in the Black Hawk UCMJ proceedings was denied. These officials included the Convening Authorities, RCM 303 Inquiry Officers, and Article 32 Investigating Officers for investigations by both the Air Combat Command and the U.S. Air Forces in Europe. The Department of Defense voiced the belief that “any Congressional intrusion into the deliberative process . . .
endangers the actual and perceived independence of the military justice system.” We assured the Air Force that we would ask those officials only about the presence of unlawful command influence and would not intrude into the deliberative processes they had used in the proceedings, but we were denied access to those decision-makers who might have knowledge of possible influence. Consequently, we were unable to confirm whether the consideration and disposition of suspected offenses under the UCMJ were the result of improper or unlawful command influence. In response to concerns voiced by victims’ family members and others, we also looked at the corrective and other actions taken after the shootdown. Military officials took immediate actions to help ensure that the Black Hawk accident was not repeated. Further, after the issuance of the Aircraft Accident Investigation Board report, the European Command; the Chairman, Joint Chiefs of Staff; the Air Combat Command; and the Air Force instituted a large number of corrective actions. These actions included modification of the Rules of Engagement; inclusion of Black Hawk flight times on the Air Tasking Order; reviews of command structure and operations, plus operating doctrines and procedures; revision of AWACS training programs and certification procedures; and modifications of visual and electronic identification training. In transmitting the Board report to the Secretary of Defense, the Chairman of the Joint Chiefs of Staff made the following observation: “For over 1,000 days, the pilots and crews assigned to Operation Provide Comfort flew mission after mission, totalling over 50,000 hours of flight operations, without a single major accident. Then, in one terrible moment on the 14th of April, a series of avoidable errors led to the tragic deaths of 26 men and women of the American Armed Forces, United States Foreign Service, and the Armed Forces of our coalition allies.
In place were not just one, but a series of safeguards—some human, some procedural, some technical—that were supposed to ensure an accident of this nature could never happen. Yet, quite clearly, these safeguards failed.” According to an Air Combat Command official who was familiar with the Board’s report and who participated in the Command’s UCMJ investigations, over 130 separate mistakes were involved in the shootdown. A discussion follows of some corrective actions spawned by the shootdown and the Aircraft Accident Investigation report. Beginning April 15, 1994, the European Command and Combined Task Force Commanders instituted immediate corrective actions designed to prevent a recurrence of the shootdown. The actions included, among others, modification of the Rules of Engagement, to restrict procedures for engaging Iraqi helicopters; inclusion of Black Hawk flight times on the Air Tasking Order; requirement for verbal confirmation of a positive IFF Mode IV check on all Operation Provide Comfort aircraft prior to their entry into the TAOR; reorganization of the Combined Task Force to designate one U.S. Air Force Colonel exclusively as the Commander, CFAC; further definition of AWACS responsibilities for coordination of air operations; placement of radios on Black Hawk flights to enable communication with fighter aircraft; and painting of white recognition stripes on the Black Hawk rotor blades to enhance their identification from the air. In response to a directive from the Deputy U.S. Commander in Chief, Europe, an Air Force/Army team assessed Operation Provide Comfort’s mission, organization, and operations. The assessment was conducted from May 31 to June 8, 1994, and placed particular emphasis on the adequacy of European Command guidance and oversight; the Combined Task Force command structure and organization, manning, and support; and operating doctrine and procedures. 
The assessment team flew missions with F-15, Black Hawk, and AWACS units; interviewed key personnel and random unit personnel; and reviewed organizational plans, procedures, and directives. The team issued a 59-page classified report that contained over 40 recommendations for operations improvements. During October 14-22, 1995, a second team conducted another operational assessment of Operation Provide Comfort and made 166 additional recommendations in a classified report. A number of recommendations made by both teams have been implemented. On July 7, 1994, the Chairman of the Joint Chiefs of Staff, with the approval of the Secretary of Defense, directed that (1) all Commanders in Chief review their Joint Task Force operations to ensure that they were conducted in accordance with published joint doctrine; (2) the Commanders in Chief establish a program of regular oversight of all their Joint Task Force operations; and (3) his staff review the curricula of all appropriate professional military education institutions to ensure proper emphasis on Joint Task Force organization, procedures, and operations. The Chairman also recommended that the Secretary of Defense direct the Air Force Chief of Staff to review the adequacy of AWACS training programs and certification procedures, develop a retraining program based on the lessons learned from the shootdown, and ensure that all mission aircrews underwent this training. The Chairman further convened a conference of the Joint Chiefs and all Commanders in Chief on September 15, 1994, to discuss actions being taken to prevent a recurrence of the shootdown. On October 6, 1994, the Chairman advised the Secretary of Defense that all Commanders in Chief had completed reviews of their joint operations, aggressively implemented changes where required, and established programs to ensure regular oversight of those operations. 
Further, the Joint Staff found shortcomings in how Joint Task Force operations had been addressed in professional military education systems. According to the Chairman, each of the shortcomings was being addressed and corrections implemented. At the direction of the Secretary of Defense and the Chairman of the Joint Chiefs of Staff, the Secretary of the Air Force tasked the Air Combat Command to investigate the specific operational issues identified in the Aircraft Accident Investigation Board report. The Air Combat Command assembled a “Tiger Team” consisting primarily of Air Combat Command headquarters staff augmented with representatives from the 8th Air Force, Air Force Weapons Center, Air National Guard, and the 552d Air Control Wing. The team divided into three groups: AWACS/Airborne command and control, visual and electronic identification, and ground command and control. The three groups used the Aircraft Accident Investigation Board report as a frame of reference and identified 90 issues, which they studied in depth. The Air Combat Command Tiger Team issued its report on September 14, 1994, making about 140 recommendations, most of which had been completed or were underway when the report was issued. The report also proposed six recommendations for consideration by the Air Staff or the Joint Chiefs of Staff. Concurrent with the Air Combat Command tasking, the Secretary of the Air Force appointed an Air Force special task force to assist all Air Force commands in identifying potential problem areas and implementing appropriate corrections. The task force effort, which included the Air Combat Command Tiger Team work, involved over 120 people and over 30,000 hours in 6 major Air Force commands and Air Force Headquarters. The task force’s primary emphasis was to determine if the shootdown was an isolated incident or indicative of a bigger problem. It issued its report to the Secretary of Defense on September 30, 1994. 
The report concluded that the incident was not indicative of a larger Air Force problem and that the following two breakdowns in individual crew performance had contributed to the incident: (1) the AWACS failed to build and provide an accurate air picture and (2) the F-15 pilots misidentified the target. The report also recommended a one-time retraining and recertification program for all AWACS aircrews and a plan to reduce the temporary duty of AWACS crews to 120 days per year. The report concluded that the Air Force had corrected, or was in the process of correcting, training programs to address the shortcomings noted. On July 27, 1995, the Commander, Air Combat Command, informed the Air Force Chief of Staff that the Air Combat Command had completed a majority of the Tiger Team recommendations and that efforts were on target in achieving the desired results. He said that all AWACS crews had been recertified by October 13, 1994, and that the certification process was being applied to all AWACS crews deploying to any location. He further stated that AWACS temporary duty rates had been decreased from 166 to 135 days per year from January 1995 to July 1995. He also said that Air Combat Command planned to increase the number of AWACS crews. However, he noted that the Air Combat Command was continuing to work on the following three areas: computer-based training devices, visual identification, and electronic identification. For example, he stated that the Air Combat Command had updated visual identification training material, provided computer hardware for the Air Force-improved computer-based training developed by an Air Force contractor, and distributed the material to all Air Combat Command fighter units. The Commander, Air Combat Command, noted that the new product was an improvement over previous training materials (35mm slides and video) but that it did not fully meet the Command’s needs.
He said that the Air Combat Command, in conjunction with the Air Education and Training Command, was pursuing an enhanced visual training program that would expand capabilities and allow aircrews to view three-dimensional or animated images against a variety of backgrounds from multiple aspects in all configurations and camouflage paint schemes. This new program was distributed to all Air Combat Command units in January 1996. | Pursuant to a congressional request, GAO reviewed military investigations made subsequent to the April 14, 1994, shootdown by U.S. Air Force F-15 fighters of two Army Black Hawk helicopters over Iraq in which 26 individuals died. GAO noted that: (1) the Aircraft Accident Investigation Board conducted an extensive investigation that complied with evidentiary requirements and guidelines in collecting and preserving evidence and produced a report that, with a few exceptions, provided an overview of the factual circumstances relating to the accident; (2) the report focused on command and control problems, including individuals' lack of knowledge of specific procedures, but: (a) did not discuss the F-15 pilots' responsibility to report to the Airborne Command Element when encountering an unknown aircraft; and (b) cited a statement that inaccurately portrayed the Airborne Command Element as not having authority to stop the incident, even though evidence that the Airborne Command Element had the authority was available to the Board; (3) the Board President erroneously concluded that the Black Hawks' use of an incorrect electronic identification code resulted in the F-15 pilots not receiving an electronic response; (4) family members and others raised concerns about a perceived general lack of discipline in the F-15 pilot community in Operation Provide Comfort and a perceived urgency by the pilots to engage during the shootdown, but the Board's report and opinion did not discuss these issues; (5) Operation Provide Comfort officials stated that the 
pilots' failure to contact the Airborne Command Element was the result of a lack of F-15 mission discipline at the time of the incident; (6) the officials stated that, in their view, there was no reason for the F-15 pilots' urgency to engage; (7) these issues are not inconsistent with the Board President's conclusion regarding the chain of events, but including them in the Board's report may have raised additional questions about the pilots' actions and the Airborne Command Element that could have been useful in subsequent proceedings; (8) during its review of the Aircraft Accident Investigation Board process, GAO found no evidence of improper or unlawful command influence; (9) the Uniform Code of Military Justice (UCMJ) investigations complied with provisions of the UCMJ and the Manual for Courts-Martial; (10) GAO found no evidence of improper or unlawful command influence in the UCMJ investigations, but was unable to confirm whether the consideration and disposition of suspected offenses under the UCMJ were the result of improper or unlawful command influence; and (11) the Air Force Chief of Staff took additional personnel actions after finding that a number of individuals' performance evaluations had not reflected their failure to meet Air Force standards. |
In 1944, President Franklin D. Roosevelt made a commitment that no servicemen blinded in combat in World War II would be returned to their homes without adequate training to meet the problems imposed by their blindness, according to VA. From 1944 to 1947, the Army and Navy provided this rehabilitation training. In 1947, responsibility for this training was transferred to VA, and in 1948, VA opened its first BRC to provide comprehensive inpatient care to legally blind veterans. In 1956, blind rehabilitation services were expanded to include veterans whose legal blindness was not service-connected. Because of this expansion, the demographics of VA’s blind veteran population shifted toward predominantly older veterans whose legal blindness was caused by age-related eye diseases. Expanded eligibility also caused an increase in demand for services. VA responded to this demand by opening 9 additional BRCs in the United States and Puerto Rico for a total of 10 facilities with 241 authorized beds. (See table 1.) As of May 5, 2004, VA reported that there were 2,127 legally blind veterans waiting for admission to BRCs. In fiscal year 2003, VA estimated that about 157,000 veterans were legally blind, with more than 60 percent age 75 or older. About 44,000 legally blind veterans were enrolled in VA health care. VA estimated that through 2022, the number of legally blind veterans would remain stable. (See fig. 1.) The National Institutes of Health (NIH) considers the increase in age-related eye diseases to be an emerging major public health problem. According to NIH, the four leading diseases that cause age-related legal blindness are cataract, glaucoma, macular degeneration, and diabetic retinopathy, each affecting vision differently. (See fig. 2 for illustrations of how each disease affects vision.) Cataract is a clouding of the eye’s normally clear lens. Most cataracts appear with advancing age, and by age 80, more than half of all Americans develop them.
Glaucoma causes gradual damage to the optic nerve—the nerve to the eye—that results in decreasing peripheral vision. It is estimated that as many as 4 million Americans have glaucoma. Macular degeneration results in the loss of central visual clarity and contrast sensitivity. It is the most common cause of legal blindness in older Americans and rarely affects those under the age of 60. Diabetic retinopathy is a common complication of diabetes impairing vision over time. It results in the loss of visual clarity, peripheral vision, and color and contrast sensitivity. It also increases the eye’s sensitivity to glare. Nearly half of all diabetics will develop some degree of diabetic retinopathy, and the risk increases with veterans’ age and the length of time they have had diabetes. To assist legally blind veterans, VA established Visual Impairment Services Team (VIST) coordinators who act as case managers and are responsible for coordinating all medical services for these veterans, including obtaining medical examinations and arranging for blind rehabilitation services. There are about 170 VIST coordinators, who are located at VA medical centers that have at least 100 enrolled legally blind veterans. VIST coordinators are also responsible for certain administrative services such as reviewing the veteran’s compensation and pension benefits. Almost all of VA’s blind rehabilitation services for veterans are provided through comprehensive inpatient care at BRCs, where veterans are trained to use their remaining vision and other senses, as well as adaptive devices such as canes, to help compensate for impaired vision. VA offers both basic and computer training. (See table 2 for examples of the types of skills taught during basic and computer training.) In fiscal years 2002 and 2003, VA spent over $56 million each year for inpatient training at BRCs. 
During this same time period, VA spent less than $5 million each year to provide outpatient rehabilitation training for legally blind veterans. VA offers three types of blind rehabilitation outpatient services to legally blind veterans, but these services are available in few VA locations. The three types of services include Visual Impairment Services Outpatient Rehabilitation (VISOR), Visual Impairment Center to Optimize Remaining Sight (VICTORS), and Blind Rehabilitation Outpatient Specialists (BROS). The services range from short-term outpatient programs provided in VA facilities to home-based services. Figure 3 identifies the locations throughout the United States and Puerto Rico where these services are offered. VISOR is a 10-day outpatient program located at the VA medical center in Lebanon, Pennsylvania, that offers training in the use of low vision equipment, basic orientation and mobility, and living skills. Serving veterans in the surrounding 13-county area, it is primarily for veterans who can independently perform activities of daily living and who require only limited training in visual skills and orientation and mobility, such as traveling within and outside their homes. According to a VISOR official, the program is meant to provide training to veterans while they wait for admission to a BRC or to veterans who do not want to attend a BRC. Veterans who participate in this program are housed in hoptel beds within the medical facility. In fiscal year 2003, 54 veterans attended the VISOR program; about 20 to 30 percent of these veterans were legally blind. According to a VISOR official, there is no waiting list for this program and the local medical center provides the necessary funding for it. VICTORS is a 3- to 7-day outpatient program for veterans in good health whose vision loss affects their ability to perform activities of daily living, such as personal grooming and reading mail. 
The program provides the veterans with a specialized low vision eye examination, prescriptions for and training in the use of low vision equipment, and counseling. There are three VICTORS programs located in VA medical centers in Kansas City, Missouri; Chicago, Illinois; and Northport, New York. Veterans are housed in hoptel beds within the medical facility or in nearby hotels. In fiscal year 2003, VICTORS served over 900 veterans; about 25 to 30 percent of these veterans were legally blind. According to VICTORS officials, the wait time for admission to VICTORS varied from about 55 to about 170 days. The medical center where the program is located funds the services. BROS are blind rehabilitation outpatient instructors who provide a variety of short-term services to veterans in their homes and at VA facilities. BROS train veterans prior to and following their participation in BRC programs, as well as veterans who cannot or do not choose to attend a BRC. BROS training addresses veterans’ immediate needs, especially those involving safety issues such as reading prescriptions or simple cooking. There are 23 BROS throughout VA’s health care system, with 7 located in the VA network that covers Florida and Puerto Rico. In fiscal year 2003, BROS trained about 2,700 veterans, almost all of whom were legally blind. Wait time for BROS services varied from about 14 to 28 days according to the BROS we interviewed. BROS are funded by the medical centers where they are located. VA officials who provide services to legally blind veterans told us that some veterans could benefit from increased access to outpatient blind rehabilitation services. We obtained this information by asking VA to review all of the veterans who, as of March 31, 2004, were on the waiting lists for admission to the five BRCs we visited and to determine whether outpatient services could meet their needs. 
VA officials reported that 315 out of 1,501 of these veterans, or 21 percent, could potentially be better served through access to outpatient blind rehabilitation services, if such services were available. The types of veterans VA believes could potentially benefit from outpatient services include those who are very elderly or lack the physical stamina to participate in a comprehensive 28- to 42-day BRC program and those who have medical needs that cannot be provided by the BRC. For example, some BRCs are unable to accept patients requiring kidney dialysis. In addition, some veterans do not want to leave their families for long periods of time and some legally blind veterans are primary caretakers for their spouses and are unable to leave their homes. VA officials also told us that veterans in good health who can independently perform activities of daily living and require only limited or specialized training could also be served effectively on an outpatient basis. A VA study concluded that there is a need for increased outpatient services for legally blind veterans. In 1999, VA convened a Blind Rehabilitation Gold Ribbon Panel to study concerns about the growing number of legally blind veterans. The panel examined how VA historically provided blind rehabilitation services and recommended that VA transition from its primarily inpatient model of care to one that included both inpatient and outpatient services. In 2000, VA established the VIAB to implement the panel’s recommendations. The VIAB drafted guidance for a uniform standard of care policy for visually impaired veterans throughout VA’s health care system. This guidance outlined a continuum of care to provide a range of services from basic low vision to comprehensive inpatient rehabilitation training, including use of more outpatient services from both VA and non-VA sources. In January 2004, a final draft of the uniform standard of care policy was forwarded to VA’s Health Systems Committee for approval. 
The committee believed additional information was needed for its approval and requested additional analysis that compared currently available blind rehabilitation services with anticipated needs. VA plans to complete this analysis in the first quarter of fiscal year 2005 and then resubmit the uniform standard of care policy and the additional analysis to the Health Systems Committee. VA officials were unable to provide a timeframe for the Health Systems Committee’s approval. Some VIST coordinators have already provided outpatient services to legally blind veterans by referring them to state and private blind rehabilitation services. For example, in Florida a VIST coordinator referred veterans to the Lighthouse for the Blind for computer training at its outpatient facility if they did not live near the BRC and did not want to travel to it. A VIST coordinator in Oklahoma arranged contractor-provided computer training in the veteran’s home for veterans with a 20 percent or more service-connected disability. The coordinator issued the computer equipment to a local contractor; the contractor then set up the equipment in the veteran’s home and provided the training. Another VIST coordinator in North Carolina referred all legally blind veterans to state service agencies, including veterans waiting for admission to a BRC. Each county in that state had a social worker for the blind who referred its citizens to independent living programs for in-home training in orientation and mobility and living skills. The state provided this training at no charge to the veteran, and VA paid for the equipment. Recently, VA has begun to shift computer training from inpatient settings at BRCs to private sector outpatient settings. VA’s goal was to remove from the BRC waiting list by July 30, 2004, those veterans seeking admission to a BRC only for computer training.
In spring 2004, VA issued instructions stating that the prosthetic budget of each medical center, which already paid for computer equipment for legally blind veterans, would now pay for computer training. Additionally, the Blind Rehabilitation Service Program Office asked BRCs to identify all the veterans waiting for admission for computer training and refer them back to their VIST coordinator for local computer training. If BRC and VIST coordinator staff determined that local computer training was not available or appropriate for a veteran, they were to provide an explanation to the program office. On May 5, 2004, 674 veterans were waiting for admission to a BRC for computer training. As of July 1, 2004, 520 veterans were removed from the BRC waiting list because arrangements were made for them to receive computer training from non-VA sources or they no longer wanted the training. There are two factors that affect VA’s expansion of outpatient services systemwide. One factor is the agency’s long-standing belief that rehabilitation training for legally blind veterans can be best provided in a comprehensive inpatient setting. The second reported factor is VA’s method of allocating funds for blind rehabilitation outpatient services, which provides local medical center management discretion to provide funds for them. Some VA officials told us that one factor affecting veterans’ access to outpatient care has been the agency’s traditional focus on providing comprehensive inpatient training at BRCs. VA has historically considered the BRCs to be an exemplary model of care, and since 1948 BRCs have been the primary source of care for legally blind veterans. However, this delivery model has not kept pace with VA’s overall health care strategy that reduces reliance on inpatient care and emphasizes outpatient care. 
VA’s continued reliance on inpatient blind rehabilitation care is evident in its recent decision to build two additional BRCs in Long Beach, California, and Biloxi, Mississippi. We have, however, observed some recent changes that may affect this reliance on inpatient services. For example, VA has new leadership in its blind rehabilitation program that has expressed an interest in providing a broad range of inpatient and outpatient services to meet the training needs of legally blind veterans. Further, as previously discussed, the VIAB’s draft continuum of care policy recommends a full range of blind rehabilitation services, emphasizing more outpatient care, including VICTORS, VISOR, and BROS. VA blind rehabilitation officials also told us that they believe changes to VA’s resource allocation method could provide an incentive to expand blind rehabilitation services on an outpatient basis. The VIAB believes that the funds allocated for basic outpatient care for legally blind veterans do not cover the cost of providing blind rehabilitation services. Veterans Integrated Service Networks (networks) are allocated funds to provide basic outpatient care for veterans, which they then allocate to the medical centers in their regions. Both the networks and the medical centers have the discretion to prioritize the use of these funds for blind rehabilitation services or any other medical care. Some networks and medical centers have made outpatient blind rehabilitation training a priority and use these funds to provide outpatient services. For example, the network that covers Florida and Puerto Rico has used its allocations to fund seven BROS that are located throughout the region to provide outpatient blind rehabilitation services to legally blind veterans in their own homes or at VA facilities. 
Currently, the VIAB is working with VA’s Office of Finance and Allocation Resource Center to develop an allocation amount that would better reflect the cost of providing blind rehabilitation services on an outpatient basis, which could, in turn, provide an incentive for networks and medical centers to expand outpatient rehabilitation services for legally blind veterans. Many legally blind veterans have some vision, which frequently can be enhanced with optical low vision devices and training that includes learning to perform everyday activities such as cooking, reading prescription bottles, doing laundry, and paying bills. Since the 1940s, VA’s preferred method of providing training to these veterans has been through inpatient services offered by BRCs. Because of its predisposition toward inpatient care, VA has developed little capacity to provide this care on an outpatient basis uniformly throughout the country. For the last 10 years, VA has been transitioning its overall health care system from a delivery model based primarily on inpatient care to one incorporating more outpatient care. Outpatient services for legally blind veterans, however, have lagged behind this trend. Recently, VA drafted a uniform standard of care policy that recommends a full range of blind rehabilitation services, emphasizing more outpatient care, including more services provided by VISOR, VICTORS, and BROS-type programs. Making inpatient and outpatient blind rehabilitation training services available to meet the needs of legally blind veterans will help ensure that these veterans are provided with options to receive the right type of care, at the right time, in the right place.
We are recommending that the Secretary of Veterans Affairs direct the Under Secretary for Health to issue, as soon as possible in fiscal year 2005, a uniform standard of care policy that ensures that a broad range of inpatient and outpatient blind rehabilitation services are more widely available to legally blind veterans. We provided a draft of this testimony to VA for comment. In oral comments, an official in VA’s Office of the Deputy Under Secretary for Health informed us that VA concurred with our recommendation. Mr. Chairman, this concludes my prepared remarks. I will be glad to answer any questions you or other Members of the Committee may have. For further information regarding this testimony, please contact Cynthia A. Bascetta at (202) 512-7101. Michael T. Blair, Jr., Cherie Starck, Cynthia Forbes, and Janet Overton also contributed to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In fiscal year 2003, the Department of Veterans Affairs (VA) estimated that about 157,000 veterans were legally blind, and about 44,000 of these veterans were enrolled in VA health care. The Chairman of the Subcommittee on Health, House Veterans' Affairs Committee, and the Ranking Minority Member, Senate Veterans' Affairs Committee expressed concerns about VA's rehabilitation services for blind veterans. GAO reviewed (1) the availability of VA outpatient blind rehabilitation services, (2) whether legally blind veterans benefit from VA and non-VA outpatient services, and (3) what factors affect VA's ability to increase veterans' access to blind rehabilitation outpatient services. 
GAO reviewed VA's blind rehabilitation policies; interviewed officials from VA, the Blinded Veterans Association, and state and private nonprofit agencies; and visited five Blind Rehabilitation Centers (BRCs). VA provides three types of blind rehabilitation outpatient training services. These services, which are available at a small number of VA locations, range from short-term programs provided in VA facilities to services provided in the veteran's own home. They are Visual Impairment Services Outpatient Rehabilitation, Visual Impairment Center to Optimize Remaining Sight, and Blind Rehabilitation Outpatient Specialists. VA reported to GAO that some legally blind veterans could benefit from increased access to outpatient blind rehabilitation services. When VA reviewed all of the veterans who, as of March 31, 2004, were on the waiting list for admission to the five BRCs GAO visited, VA officials reported that 315 out of 1,501 of them, or 21 percent, could potentially be better served through access to outpatient blind rehabilitation services, if such services were available. GAO also identified two factors that may affect the expansion of VA's outpatient blind rehabilitation services. The first involves VA's longstanding position that training for legally blind veterans is best provided in a comprehensive inpatient setting. The second reported factor is VA's method of allocating funds for medical care. VA is currently working to develop an allocation amount that would better reflect the cost of providing blind rehabilitation services on an outpatient basis. |