In August 2004, the President issued HSPD-12, which directed the Department of Commerce to develop, by February 27, 2005, a new standard for secure and reliable forms of ID for federal employees and contractors to enable a common standard across the federal government. The directive defines secure and reliable ID as meeting four control objectives. Specifically, the identification credentials must be

* based on sound criteria for verifying an individual employee's or contractor's identity;
* strongly resistant to identity fraud, tampering, counterfeiting, and terrorist exploitation;
* able to be rapidly authenticated electronically; and
* issued only by providers whose reliability has been established by an official accreditation process.

HSPD-12 stipulates that the standard must include criteria that are graduated from "least secure" to "most secure" to ensure flexibility in selecting the appropriate level of security for each application. In addition, the directive requires agencies to implement the standard, to the maximum extent practicable, by October 27, 2005, for IDs issued to federal employees and contractors to gain physical access to controlled facilities and logical access to controlled information systems. In response to HSPD-12, NIST published FIPS 201, Personal Identity Verification of Federal Employees and Contractors, on February 25, 2005. The standard specifies the technical requirements for PIV systems to issue secure and reliable ID credentials to federal employees and contractors for gaining physical access to federal facilities and logical access to information systems and software applications. Smart cards are a primary component of the envisioned PIV system. The FIPS 201 standard is composed of two parts, PIV-I and PIV-II. PIV-I sets standards for PIV systems in three areas: (1) identity proofing and registration, (2) card issuance and maintenance, and (3) protection of card applicants' privacy. 
There are many steps to the identity proofing and registration process, such as completing a background investigation of the applicant, conducting and adjudicating a fingerprint check prior to credential issuance, and requiring applicants to provide two original forms of identity source documents from an OMB-approved list of documents. The card issuance and maintenance process should include standardized specifications for printing photographs, names, and other information on PIV cards and for other activities, such as capturing and storing biometric and other data, and issuing, distributing, and managing digital certificates. Finally, agencies are directed to perform activities to protect the privacy of the applicants, such as assigning an individual to the role of "senior agency official for privacy" to oversee privacy-related matters in the PIV system; providing full disclosure of the intended uses of the PIV card and related privacy implications to the applicants; and using security controls described in NIST guidance to accomplish privacy goals, where applicable. The second part of the FIPS 201 standard, PIV-II, provides technical specifications for interoperable smart card-based PIV systems. The components and processes in a PIV system, as well as the identity authentication information included on PIV cards, are intended to provide for consistent authentication methods across federal agencies. The PIV-II cards (see example in fig. 1) are intended to be used to access all federal physical and logical environments for which employees are authorized. The PIV cards contain a range of features--including photographs, cardholder unique identifiers (CHUID), fingerprints, and Public Key Infrastructure (PKI) certificates--to enable enhanced identity authentication at different assurance levels. To use these enhanced capabilities, specific infrastructure needs to be in place. 
This infrastructure may include biometric (fingerprint) readers, personal ID number (PIN) input devices, and connections to information systems that can process PKI digital certificates and CHUIDs. Once acquired, these various devices need to be integrated with existing agency systems, such as a human resources system. Furthermore, card readers that are compliant with FIPS 201 need to exchange information with existing physical and logical access control systems in order to enable doors and systems to unlock once a cardholder has been successfully authenticated and access has been granted. FIPS 201 includes specifications for three types of electronic authentication that provide varying levels of security assurance:

* The CHUID, or visual inspection, provides some confidence in the identity of the cardholder.
* A biometric check, without the presence of a security guard or attendant at the access point, offers a high level of assurance of the cardholder's identity.
* A PKI check, independently or in conjunction with both biometric and visual authentication, offers a very high level of assurance of the identity of the cardholder.

OMB guidance and FIPS 201 direct agencies to use risk-based methods to decide which type of authentication is appropriate in a given circumstance. In addition to the three authentication methods, PIV cards also support the use of PIN authentication, which may be used in conjunction with one of these capabilities. For example, the PIN can be used to control access to biometric data on the card when conducting a fingerprint check. NIST has issued several publications that provide supplemental guidance on various aspects of the FIPS 201 standard. NIST also developed a suite of tests to be used by approved commercial laboratories to validate whether commercial products for the PIV card and the card interface conform with the standard. In August 2005, OMB issued a memorandum to executive branch agencies with instructions for implementing HSPD-12 and the new standard. 
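The graduated model above can be sketched in code. This is an illustrative simplification, not part of FIPS 201; the level and mechanism names below are assumptions condensed from the discussion above.

```python
# Illustrative sketch of FIPS 201's graduated assurance model: each
# required assurance level admits its own mechanisms plus those of
# every stronger level. Names are simplified for illustration.
ASSURANCE_TO_MECHANISMS = {
    "some": ["CHUID", "visual inspection"],
    "high": ["biometric (fingerprint) check"],
    "very high": ["PKI", "PKI + biometric + visual"],
}

ORDER = ["some", "high", "very high"]


def select_mechanisms(required_assurance: str) -> list:
    """Return all mechanisms meeting at least the required level."""
    start = ORDER.index(required_assurance)
    mechanisms = []
    for level in ORDER[start:]:
        mechanisms.extend(ASSURANCE_TO_MECHANISMS[level])
    return mechanisms


# A risk assessment that calls for "high" confidence rules out
# CHUID-only and visual checks:
print(select_mechanisms("high"))
```

In a risk-based method of the kind OMB directs, the input to such a selection would come from an assessment of the facility or system being protected, not from a fixed table.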
The memorandum specifies to whom the directive applies; to what facilities and information systems FIPS 201 applies; and, as outlined in the following text, the schedule that agencies must adhere to when implementing the standard.

* October 27, 2005--For all new employees and contractors, adhere to the identity proofing, registration, card issuance, and maintenance requirements of the first part (PIV-I) of the standard.
* October 27, 2006--Begin issuing cards that comply with the second part (PIV-II) of the standard and implementing the privacy requirements.
* October 27, 2007--Verify and/or complete background investigations for all current employees and contractors who have been with the agency for 15 years or less. Issue PIV cards to these employees and contractors, and require that they begin using their cards by this date.
* October 27, 2008--Complete background investigations for all individuals who have been federal agency employees for more than 15 years. Issue cards to these employees and require them to begin using their cards by this date.

Figure 2 shows a timeline that illustrates when HSPD-12 and additional guidance were issued as well as the major deadlines for implementing HSPD-12. The General Services Administration (GSA) has also provided implementation guidance and product performance and interoperability testing procedures. In addition, GSA established a Managed Service Office (MSO) that offers shared services to civilian agencies to help reduce the costs of procuring FIPS 201-compliant equipment, software, and services by sharing some of the infrastructure, equipment, and services among participating agencies. According to GSA, the shared service offering--referred to as the USAccess Program--is intended to provide several services, such as producing and issuing the PIV cards. As of October 2007, GSA had 67 agency customers with more than 700,000 government employees and contractors to whom cards would be issued through shared service providers. 
In addition, as of December 31, 2007, the MSO had installed over 50 enrollment stations, with 15 agencies actively enrolling employees and issuing PIV cards. While there are several services offered by the MSO, it is not intended to provide support for all aspects of HSPD-12 implementation. For example, the MSO does not provide services to help agencies integrate their physical and logical access control systems with their PIV systems. In 2006, GSA's Office of Governmentwide Policy established the interagency HSPD-12 Architecture Working Group, which is intended to develop interface specifications for HSPD-12 system interoperability across the federal government. As of July 2007, the group had issued 10 interface specification documents, including a specification for exchanging data between an agency and a shared service provider. In February 2006, we reported that agencies faced several challenges in implementing FIPS 201, including constrained testing time frames and funding uncertainties as well as incomplete implementation guidance. We recommended that OMB monitor agencies' implementation process and completion of key activities. In response to this recommendation, beginning on March 1, 2007, OMB directed agencies to post to their public Web sites quarterly reports on the number of PIV cards they had issued to their employees, contractors, and other individuals. In addition, in August 2006, OMB directed each agency to submit an updated implementation plan.

Military servicemembers, federal workers, and industry personnel must obtain security clearances to gain access to classified information. Clearances are categorized into three levels: top secret, secret, and confidential. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably cause to national security. 
The degree of expected damage that unauthorized disclosure could reasonably be expected to cause is "exceptionally grave damage" for top secret information, "serious damage" for secret information, and "damage" for confidential information. We designated DOD's personnel security clearance program a high-risk area in January 2005 and continued that designation in the updated list of high-risk areas that we published in 2007. We identified this program as a high-risk area because of long-standing delays in determining clearance eligibility and other challenges. DOD represents about 80 percent of the security clearances adjudicated by the federal government, and problems in the clearance program can negatively affect national security. For example, delays in renewing security clearances for personnel who are already doing classified work can lead to a heightened risk of unauthorized disclosure of classified information. In contrast, delays in providing initial security clearances for previously non-cleared personnel can result in other negative consequences, such as additional costs and delays in completing national security-related contracts, lost opportunity costs, and problems retaining the best qualified personnel. DOD's Office of the Under Secretary of Defense for Intelligence has responsibility for determining eligibility for clearances for servicemembers, DOD civilian employees, industry personnel performing work for DOD and 23 other federal agencies, and employees in the federal legislative branch. That responsibility includes obtaining background investigations, primarily through the Office of Personnel Management (OPM). Within DOD, government employees use the information in OPM-provided investigative reports to determine the clearance eligibility of clearance subjects. 
Recent significant events affecting the clearance program of DOD and other federal agencies include the passage of the Intelligence Reform and Terrorism Prevention Act of 2004 and the issuance of the June 2005 Executive Order 13381, "Strengthening Processes Relating to Determining Eligibility for Access to Classified National Security Information." The act included milestones for reducing the time to complete clearances, general specifications for a database on security clearances, and requirements for reciprocity of clearances. Among other things, the executive order established as policy that agency functions relating to determining eligibility for access to classified national security information shall be appropriately uniform, centralized, efficient, effective, timely, and reciprocal and provided that the Director of OMB would ensure the policy's effective implementation. Agencies had made limited progress in implementing and using PIV cards. While the eight agencies we reviewed had generally taken steps to complete background checks on most of their employees and contractors and establish basic infrastructure, such as purchasing card readers, none of the agencies met OMB's goal of issuing PIV cards by October 27, 2007, to all employees and contractor personnel who had been with the agency for 15 years or less. In addition, for the limited number of cards that had been issued, agencies generally had not been using the electronic authentication capabilities on the cards. A key contributing factor for why agencies had made limited progress in adopting the use of PIV cards is that OMB, which is tasked with ensuring that federal agencies implement HSPD-12, focused agencies' attention on card issuance, rather than on full use of the cards' capabilities. 
Until OMB revises its approach to focus on the full use of card capabilities, HSPD-12's objective of increasing the quality and security of ID and credentialing practices across the federal government may not be fully achieved. As we have previously described, OMB had directed federal agencies to issue PIV cards by October 27, 2007, and require PIV card use by all employees and contractor personnel who have been with the agency for 15 years or less. HSPD-12 requires that the cards be used for physical access to federally controlled facilities and logical access to federally controlled information systems. In addition, to issue cards that fully meet the FIPS 201 specification, basic infrastructure--such as ID management systems, enrollment stations, PKI, and card readers--will need to be put in place. OMB also directed that agencies verify and/or complete background investigations by this date for all current employees and contractors who have been with the agency for 15 years or less. Agencies had taken steps to complete the background checks directed by OMB on their employees and contractors and to establish basic infrastructure to help enable the use of PIV capabilities. For example, Commerce, Interior, NRC, and USDA had established agreements with GSA's MSO to use its shared infrastructure, including its PKI and enrollment stations. Other agencies, including DHS, HUD, Labor, and NASA--which chose not to use GSA's shared services offering--had acquired and implemented other basic elements of infrastructure, such as ID management systems, enrollment stations, PKI, and card readers. However, none of the eight agencies had met the October 2007 deadline regarding card issuance. In addition, for the limited number of cards that had been issued, agencies generally had not been using the electronic authentication capabilities on the cards. 
Instead, for physical access, agencies were using visual inspection of the cards as their primary means to authenticate cardholders. While it may be sufficient in certain circumstances--such as in very small offices with few employees--in most cases, visual inspection will not provide an adequate level of assurance. OMB strongly recommends minimal reliance on visual inspection. Also, seven of the eight agencies we reviewed had not been using the cards for logical access control. Furthermore, most agencies did not have detailed plans in place to use the various authentication capabilities. For example, as of October 30, 2007, Labor had not yet developed plans for implementing the electronic authentication capabilities on the cards. Similarly, Commerce officials stated that they would not have a strategy or time frame in place for using the electronic authentication capabilities of PIV cards until June 2008. Table 1 provides details about the progress each of the eight agencies had made as of December 1, 2007. A key contributing factor to why agencies had made limited progress is that OMB--which is tasked with ensuring that federal agencies implement HSPD-12--had emphasized the issuance of the cards, rather than the full use of the cards' capabilities. Specifically, OMB's milestones were not focused on implementation of the electronic authentication capabilities that are available through PIV cards, and had not set acquisition milestones that would coincide with the ability to make use of these capabilities. Furthermore, despite the cost of the cards and associated infrastructure, OMB had not treated the implementation of HSPD-12 as a major new investment and had not ensured that agencies have guidance to ensure consistent and appropriate implementation of electronic authentication capabilities across agencies. Until these issues are addressed, agencies may continue to acquire and issue costly PIV cards without using their advanced capabilities to meet HSPD-12 goals. 
While OMB had established milestones for near-term card issuance, it had not established milestones to require agencies to develop detailed plans for making the best use of the electronic authentication capabilities of PIV cards. Consequently, agencies had concentrated their efforts on meeting the card issuance deadlines. For example, several of the agencies we reviewed chose to focus their efforts on meeting the next milestone--that cards be issued to all employees and contractor personnel and be in use by October 27, 2008. Understandably, meeting this milestone was perceived to be more important than making optimal use of the cards' authentication capabilities, because card issuance is the measure that OMB is monitoring and asking agencies to post on their public Web sites. The PIV card and the services involved in issuing and maintaining the data on the card, such as the PKI certificates, are costly. For example, PIV cards and related services offered by GSA through its shared service offering cost $82 per card for the first year and $36 per card for each of the remaining 4 years of the card's life. In contrast, traditional ID cards with limited or no electronic authentication capabilities cost significantly less. Therefore, agencies that do not implement electronic authentication techniques are spending a considerable amount per card for capabilities that they are not able to use. A more economical approach would be to establish detailed plans for implementing the technical infrastructure necessary to use the electronic authentication capabilities on the cards and time the acquisition of PIV cards to coincide with the implementation of this infrastructure. Without OMB focusing its milestones on the best use of the authentication capabilities available through PIV cards, agencies are likely to continue to implement minimum authentication techniques and not be able to take advantage of advanced authentication capabilities. 
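The per-card cost figures cited above imply a total lifecycle cost that is easy to work out. The following sketch reproduces that arithmetic; the traditional-card figure used for comparison is a placeholder assumption, not a number from this report.

```python
# Lifecycle cost of a GSA shared-service PIV card, per the figures
# cited above: $82 for the first year, then $36 per year for each of
# the remaining 4 years of the card's 5-year life.
def piv_lifecycle_cost(first_year=82, renewal=36, life_years=5):
    return first_year + renewal * (life_years - 1)


total = piv_lifecycle_cost()
print(total)  # 82 + 36 * 4 = 226 per card over its life

# ASSUMED figure for comparison only; traditional ID card costs vary
# and are not specified in the report.
traditional_card = 10
print(total - traditional_card)  # premium paid per card
```

The premium over a traditional card is what agencies forgo when they issue PIV cards but authenticate holders by visual inspection alone.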
Before implementing major new systems, agencies are generally directed to conduct thorough planning to ensure that costs and time frames are well understood and that the new systems meet their needs. OMB establishes budget justification and reporting requirements for all major information technology investments. Specifically, for such investments, agencies are directed to prepare a business case--OMB Exhibit 300--which is supported by a number of planning documents that are essential in justifying decisions regarding how, when, and the extent to which an investment would be implemented. However, OMB determined that, because agencies had ID management systems in place prior to HSPD-12 and the directive only directed agencies to "standardize" their systems, the implementation effort did not constitute a new investment. According to an OMB senior policy analyst, agencies should be able to fund their HSPD-12 implementations through existing resources and should not need to develop a business case or request additional funding. While OMB did not direct agencies to develop business cases for HSPD-12 implementation efforts, PIV card systems are likely to represent significant new investments at several agencies. For example, agencies such as Commerce, HUD, and Labor had not implemented PKI technology prior to HSPD-12, but they are now directed to do so. In addition, such agencies' previous ID cards were used for limited purposes and were not used for logical access. These agencies had no prior need to acquire or maintain card readers for logical access control or to establish connectivity with their ID management systems for logical access control and, consequently, had previously allocated very little money for the operations and maintenance of these systems. 
For example, according to Labor officials, operations and maintenance costs for its pre-HSPD-12 legacy system totaled approximately $169,000, while its fiscal year 2009 budget request for HSPD-12 implementation is approximately $3 million--roughly 17 times as much. While these agencies recognized that they are likely to face substantially greater costs in implementing PIV card systems, they had not always thoroughly assessed all of the expenses they are likely to incur. For example, agency estimates may not have included the cost of implementing advanced authentication capabilities where they are needed. The extent to which agencies need to use such capabilities could significantly affect an agency's cost for implementation. While the technical requirements of complying with HSPD-12 dictated that a major new investment be made, generally, agencies had not been directed by OMB to take the necessary steps to thoroughly plan for these investments. For example, six of the eight agencies we reviewed had not developed detailed plans regarding their use of PIV cards for physical and logical access controls. In addition, seven of the eight agencies had not prepared cost-benefit analyses that weighed the costs and benefits of implementing different authentication capabilities. Without treating the implementation of HSPD-12 as a major new investment by requiring agencies to develop detailed plans based on risk-based assessments of agencies' physical and logical access control needs that support the extent to which electronic authentication capabilities are to be implemented, OMB will continue to limit its ability to ensure that agencies properly plan and implement HSPD-12. 
OMB Had Not Provided Guidance for Determining Which PIV Card Authentication Capabilities to Implement for Physical and Logical Access Controls

Another factor contributing to agencies' limited progress is that OMB had not provided guidance to agencies regarding how to determine which electronic authentication capabilities to implement for physical and logical access controls. While the FIPS 201 standard describes three different assurance levels for physical access (some, high, and very high confidence) and associates PIV authentication capabilities with each level, it is difficult for agencies to link these assurance levels with existing building security assurance standards that are used to determine access controls for facilities. The Department of Justice has developed standards for assigning security levels to federal buildings, ranging from level I (typically, a leased space with 10 or fewer employees, such as a military recruiting office) to level V (typically, a building, such as the Pentagon or Central Intelligence Agency headquarters, with a large number of employees and a critical national security mission). While there are also other guidelines that agencies could use to conduct assessments of their buildings, several of the agencies we reviewed use the Justice guidance to conduct risk assessments of their facilities. Officials from several of the agencies we reviewed indicated that they had not been using the FIPS 201 guidance to determine which PIV authentication capabilities to use for physical access because they had not found the guidance to be complete. Specifically, they were unable to determine which authentication capabilities should be used for the different security levels. The incomplete guidance has contributed to several agencies--including Commerce, DHS, and NRC--not reaching decisions on what authentication capabilities they were going to implement. 
More recently, NIST has begun developing guidelines for applying the FIPS 201 confidence levels to physical access control systems. However, this guidance has not yet been completed and was not available to agency officials when we were conducting our review. Agencies also lacked guidance regarding when to use the enhanced authentication capabilities for logical access control. Similar to physical access control, FIPS 201 describes graduated assurance levels for logical access (some, high, and very high confidence) and associates PIV authentication capabilities with each level. However, as we have previously reported, neither FIPS 201 nor supplemental OMB guidance provides sufficient specificity regarding when and how to apply the standard to information systems. For example, such guidance does not inform agencies how to consider the risk and level of confidence needed when different types of individuals require access to government systems, such as a researcher uploading data through a secure Web site or a contractor accessing government systems from an off-site location. Until complete guidance is available, agencies will likely continue either to delay in making decisions on their implementations or to make decisions that may need to be modified later. As defined by OMB, one of the primary goals of HSPD-12 is to enable interoperability across federal agencies. As we have previously reported, prior to HSPD-12, there were wide variations in the quality and security of ID cards used to gain access to federal facilities. To overcome this limitation, HSPD-12 and OMB guidance direct that ID cards have standard features and means for authentication to enable interoperability among agencies. While steps had been taken to enable future interoperability, progress had been limited in implementing such capabilities in current systems, partly because key procedures and specifications had not yet been developed. 
As we have previously stated, NIST established conformance testing for the PIV card and interface, and GSA established testing for other PIV products and services to help enable interoperability. In addition, the capability exists for determining the validity and status of a cardholder from another agency via PKI. However, procedures and specifications to enable cross-agency interoperability using the CHUID--which is expected to be more widely used than PKI--had not been established. While PIV cards and FIPS 201-compliant readers may technically be able to read the information encoded on any PIV card--including cards from multiple agencies--this functionality is not adequate to allow one agency to accept another agency's PIV card, because there is no common interagency framework in place for agencies to electronically exchange status information on PIV credentials. For example, the agency that issued a PIV card could revoke the cardholder's authorization to access facilities or systems if the card is lost or if there has been a change in the cardholder's employment status. The agency attempting to process the card would not be able to access this information because a common framework to electronically exchange status information does not exist. The interfaces and protocols that are needed for querying the status of cardholders have not yet been developed. In addition, procedures and policies had not been established for sharing information on contractor personnel who work at multiple federal agencies. Without such procedures and policies, agencies will issue PIV cards to their contractor staff for access only to their own facilities. Contractors who work at multiple agencies may need to obtain separate PIV cards for each agency. GSA recognized the need to address these issues and has actions under way to do so. 
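The interoperability gap described above comes down to a missing query interface: a relying agency can read another agency's CHUID locally, but it has no standard way to ask the issuing agency whether the credential is still valid. A minimal sketch of what such a status check would involve follows; the `check_status` interface and the data behind it are hypothetical, since no such interagency framework existed.

```python
# Hypothetical sketch: reading a CHUID only proves what the card
# encodes. Accepting a visitor's card also requires asking the
# *issuing* agency whether the credential has been revoked (e.g.,
# lost card, change in employment status). The records and the
# check_status interface below are assumptions for illustration.
REVOKED_BY_ISSUER = {
    "agency-A": {"chuid-123"},  # e.g., a card reported lost
}


def check_status(issuer, chuid):
    """Return True if the credential is still valid at its issuer."""
    return chuid not in REVOKED_BY_ISSUER.get(issuer, set())


# A relying agency's reader can parse chuid-123 from the card, but
# without an interface like this it cannot learn that agency-A has
# revoked it.
print(check_status("agency-A", "chuid-123"))  # False (revoked)
print(check_status("agency-A", "chuid-456"))  # True
```

In practice, PKI-based checks already support this kind of cross-agency validation through certificate status services; the missing piece was an equivalent framework for the more widely used CHUID.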
According to GSA, the Federal Identity Credentialing Committee is developing guidance on the issuance and maintenance of PIV cards to the contractor community. GSA is also developing a standard specification that will enable interoperability in the exchange of identity information among agencies. According to GSA officials, they plan to complete and issue guidance by the end of September 2008. Additionally, NIST is planning to issue an update to a special publication that focuses on interfaces for PIV systems. Such guidance should help enable agencies to establish cross-agency interoperability--a primary goal of HSPD-12. To help ensure that the objectives of HSPD-12 are achieved, we made several recommendations in our report. First, we recommended that OMB establish realistic milestones for full implementation of the infrastructure needed to best use the electronic authentication capabilities of PIV cards in agencies. In commenting on a draft of our report, OMB stated that its guidance requires agencies to provide milestones for when they intend to leverage the capabilities of PIV credentials. However, in order to ensure consistent governmentwide implementation of HSPD-12, it is important for OMB to establish such milestones across agencies, rather than to allow individual agencies to choose their own milestones. Next, we recommended that OMB require each agency to develop a risk-based, detailed plan for implementing electronic capabilities. OMB stated that previous guidance required agencies to provide milestones for when they plan to fully leverage the capabilities of PIV credentials for physical and logical access controls. However, agencies were required to provide only the dates they plan to complete major activities, and not detailed, risk-based plans. Until OMB requires agencies to implement such plans, OMB will be limited in its ability to ensure agencies make the best use of their cards' electronic authentication capabilities. 
We also recommended that OMB require agencies to align the acquisition of PIV cards with plans for implementing the cards' electronic authentication capabilities. In response, OMB stated that HSPD-12 aligns with other information security programs. While OMB's statement is correct, it would be more economical for agencies to time the acquisition of PIV cards to coincide with the implementation of the technical infrastructure necessary for enabling electronic authentication techniques. This approach has not been encouraged by OMB, which instead measures agencies primarily on how many cards they issue. Lastly, we recommended that OMB ensure guidance is developed that maps existing physical security guidance to FIPS 201 guidance. OMB stated that NIST is in the process of developing additional guidance to clarify the relationship between facility security levels and PIV authentication levels. In March 2008, NIST released a draft of this guidance to obtain public comments. In our previous reports, we have also documented a variety of problems present in DOD's personnel security clearance program. Some of the problems that we noted in our 2007 high-risk report included delays in processing clearance applications and problems with incomplete investigative and adjudicative reports to determine clearance eligibility. Delays in the clearance process continue to increase costs and risk to national security, such as when new industry employees are not able to begin work promptly and employees with outdated clearances have access to classified documents. Moreover, DOD and the rest of the federal government provide limited information to one another on how they individually ensure the quality of clearance products and procedures. While DOD continues to face challenges in timeliness and quality in the personnel security clearance process, high-level governmentwide attention has been focused on improving the security clearance process. 
As we noted in February 2008, delays in the security clearance process continue to increase costs and risk to national security. An August 2007 DOD report to Congress noted that delays in processing personnel security clearances for industry have been reduced, yet that time continues to exceed requirements established by the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA). The act currently requires that adjudicative agencies make a determination on at least 80 percent of all applications for a security clearance within an average of 120 days after the date of receipt of the application, with 90 days allotted for the investigation and 30 days allotted for the adjudication. However, DOD's August 2007 report on industry clearances stated that, during the first 6 months of fiscal year 2007, the end-to-end processing of initial top secret clearances took an average of 276 days; renewal of top secret clearances, 335 days; and all secret clearances, 208 days. We also noted in February 2008 that delays in clearance processes can result in additional costs when new industry employees are not able to begin work promptly, and in increased risks to national security, because previously cleared industry employees are likely to continue working with classified information while the agency determines whether they should still be eligible to hold a clearance. To improve the timeliness of the clearance process, we recommended in September 2006 that OMB establish an interagency working group to identify and implement solutions for investigative and adjudicative information-technology problems that have resulted in clearance delays. In commenting on our recommendation, OMB's Deputy Director for Management stated that the National Security Council's Security Clearance Working Group had begun to explore ways to identify and implement improvements to the process.
DOD and the Rest of the Government Provide Limited Information on How to Ensure the Quality of Clearance Products and Procedures

As we reported in February 2008, DOD and the rest of the federal government provide limited information to one another on how they individually ensure the quality of clearance products and procedures. For example, DOD's August 2007 congressionally mandated report on clearances for industry personnel documented improvements in clearance processes but was largely silent regarding quality in clearance processes. While DOD described several changes to the processes and characterized the changes as progress, the department provided little information on (1) any measures of quality used to assess clearance processes or (2) procedures to promote quality during clearance investigation and adjudication processes. Specifically, DOD reported that the Defense Security Service, DOD's adjudicative community, and OPM are gathering and analyzing measures of quality for the clearance processes that could be used to provide the national security community with a better product. However, the DOD report did not include any of those measures. In September 2006, we reported that while eliminating delays in clearance processes is an important goal, the government cannot afford to achieve that goal by providing investigative and adjudicative reports that are incomplete in key areas. We additionally reported that the lack of full reciprocity--when one government agency fully accepts a security clearance granted by another government agency--is an outgrowth of agencies' concerns that other agencies may have granted clearances based on inadequate investigations and adjudications. Without fuller reciprocity of clearances, agencies could continue to require duplicative investigations and adjudications, which result in additional costs to the federal government.
In the report we issued in February 2008, we recommended that DOD develop measures of quality for the clearance process and include them in future reports to Congress. Statistics from such measures would help to illustrate how DOD is balancing quality and timeliness requirements in its personnel security clearance program. DOD concurred with that recommendation, indicating it had developed a baseline performance measure of the quality of investigations and adjudications and was developing methods to collect information using this quality measure.

Recent High-Level Governmentwide Attention Has Been Focused on Improving the Security Clearance Process

In February 2008, we reported that while DOD continues to face timeliness and quality challenges in the personnel security clearance program, high-level governmentwide attention has been focused on improving the security clearance process. For example, we reported that OMB's Deputy Director of Management has been responsible for a leadership role in improving the governmentwide processes since June 2005. During that time, OMB has overseen, among other things, the growth of OPM's investigative workforce and greater use of OPM's automated clearance-application system. In addition, an August 9, 2007, memorandum from the Deputy Secretary of Defense indicates that DOD's clearance program is drawing attention at the highest levels of the department. Streamlining security clearance processes is one of the 25 DOD transformation priorities identified in the memorandum. Another indication of high-level government attention we reported in February 2008 is the formation of an interagency security clearance process reform team in June 2007. Agencies included in the governmentwide effort are OMB, the Office of the Director of National Intelligence, DOD, and OPM.
The team's memorandum of agreement indicates that it seeks to develop, in phases, a reformed DOD and intelligence community security clearance process that allows the granting of high-assurance security clearances in the least time possible and at the lowest reasonable cost. The team's July 25, 2007, terms of reference indicate that the team plans to deliver "a transformed, modernized, fair, and reciprocal security clearance process that is universally applicable" to DOD, the intelligence community, and other U.S. government agencies. A further indication of high-level government attention is a memorandum issued by the President on February 5, 2008, which called for aggressive efforts to achieve meaningful and lasting reform of the processes to conduct security clearances. In the memorandum, the President acknowledged the work being performed by the interagency security clearance process reform team and directed that the team submit to the President an initial reform proposal not later than April 30, 2008. In closing, OMB, GSA, and NIST have made significant progress in laying the foundation for implementation of HSPD-12. However, agencies did not meet OMB's October 2007 milestone for issuing cards, and most have made limited progress in using the advanced security capabilities of the cards that have been issued. These agency actions have been largely driven by OMB's guidance, which has emphasized issuance of cards rather than the full use of the cards' capabilities. As a result, agencies are acquiring and issuing costly PIV cards without using the advanced capabilities that are critical to achieving the objectives of HSPD-12. Until OMB provides additional leadership by guiding agencies to perform the planning and assessments that will enable them to fully use the advanced capabilities of these cards, agencies will likely continue to make limited progress in using the cards to improve security over federal facilities and systems.
Regarding security clearances, in June 2005, OMB took responsibility for a leadership role for improving the governmentwide personnel security clearance process. The current interagency security clearance process reform team represents a positive step to address past impediments and manage security clearance reform efforts. Although the President has called for a reform proposal to be provided no later than April 30, 2008, much remains to be done before a new system can be implemented. Mr. Chairman and members of the subcommittee, this concludes our statement. We would be happy to respond to any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact Linda D. Koontz at (202) 512-6240 or Brenda S. Farrell at (202) 512-3604 or by e-mail at [email protected] or [email protected]. Other key contributors to this testimony include John de Ferrari (Assistant Director), Neil Doherty, Nancy Glover, James P. Klein, Rebecca Lapaze, Emily Longcore, James MacAulay, David Moser and Shannin O'Neill. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In an effort to increase the quality and security of federal identification (ID) practices, the President issued Homeland Security Presidential Directive 12 (HSPD-12) in August 2004. This directive requires the establishment of a governmentwide standard for secure and reliable forms of ID. GAO was asked to testify on its report, being released today, assessing the progress selected agencies have made in implementing HSPD-12. For this report, GAO selected eight agencies with a range of experience in implementing ID systems and analyzed actions these agencies had taken. GAO was also asked to summarize challenges in the DOD personnel security clearance process. This overview is based on past work including reviews of clearance-related documents. Military servicemembers, federal workers, and industry personnel must obtain security clearances to gain access to classified information. Long-standing delays in processing applications for these clearances led GAO to designate the Department of Defense's (DOD) program as a high-risk area in 2005. In its report on HSPD-12, GAO made recommendations to the Office of Management and Budget (OMB), to, among other things, set realistic milestones for implementing the electronic authentication capabilities. GAO has also made recommendations to OMB and DOD to improve the security clearance process. Much work had been accomplished to lay the foundations for implementation of HSPD-12--a major governmentwide undertaking. However, none of the eight agencies GAO reviewed--the Departments of Agriculture, Commerce, Homeland Security, Housing and Urban Development, the Interior, and Labor; the Nuclear Regulatory Commission; and the National Aeronautics and Space Administration--met OMB's goal of issuing ID cards by October 27, 2007, to all employees and contractor personnel who had been with the agency for 15 years or less. 
In addition, for the limited number of cards that had been issued, most agencies had not been using the electronic authentication capabilities on the cards and had not developed implementation plans for those capabilities. A key contributing factor for this limited progress is that OMB had emphasized issuance of the cards, rather than full use of the cards' capabilities. Furthermore, agencies anticipated having to make substantial financial investments to implement HSPD-12, since ID cards are considerably more expensive than traditional ID cards. However, OMB had not considered HSPD-12 implementation to be a major new investment and thus had not required agencies to prepare detailed plans regarding how, when, and the extent to which they would implement the electronic authentication mechanisms available through the cards. Until OMB revises its approach to focus on the full use of the capabilities of the new ID cards, HSPD-12's objectives of increasing the quality and security of ID and credentialing practices across the federal government may not be fully achieved. Regarding personnel security clearances, GAO's past reports have documented problems in DOD's program including delays in processing clearance applications and problems with the quality of clearance related reports. Delays in the clearance process continue to increase costs and risk to national security, such as when new DOD industry employees are not able to begin work promptly and employees with outdated clearances have access to classified documents. Moreover, DOD and the rest of the federal government provide limited information to one another on how they individually ensure the quality of clearance products and procedures. While DOD continues to face challenges in timeliness and quality in the personnel security clearance process, high-level government attention has been focused on improving the clearance process.
Since 2011, we have identified 11 areas across DHS where fragmentation, overlap, or potential duplication exists, and suggested 24 actions to the department and Congress to help strengthen the efficiency and effectiveness of DHS operations. In some cases, there is sufficient information available to show that if actions are taken to address individual issues, significant financial benefits may be realized. In other cases, precise estimates of the extent of potential unnecessary duplication, and the cost savings that can be achieved by eliminating any such duplication, are difficult to specify in advance of congressional and executive branch decision making. However, given the range of areas we identified at DHS and the magnitude of many of the programs, the cost savings associated with addressing these issues could be significant. In April 2013, we identified 2 new areas where DHS could take actions to address fragmentation, overlap, or potential duplication. First, we found that DHS does not have a department-wide policy defining research and development (R&D) or guidance directing how components are to report R&D activities. As a result, the department does not know its total annual investment in R&D, a fact that limits DHS's ability to oversee components' R&D efforts and align them with agency-wide R&D goals and priorities. DHS's Science and Technology Directorate, Domestic Nuclear Detection Office, and the U.S. Coast Guard--the only DHS components that report R&D-related budget authority to the Office of Management and Budget (OMB) as part of the budget process--reported $568 million in fiscal year 2011 R&D budget authority. However, we identified at least 6 components with R&D activities and an additional $255 million in R&D obligations in fiscal year 2011 by other DHS components that were not reported to OMB in the budget process. 
To address this issue, we suggested that DHS develop and implement policies and guidance for defining and overseeing R&D at the department. Second, we reported that the fragmentation of field-based information sharing can be disadvantageous if activities are uncoordinated, as well as if opportunities to leverage resources across entities are not fully exploited. We suggested that DHS and other relevant agencies develop a mechanism that will allow them to hold field-based information-sharing entities accountable for coordinating with each other and monitor and evaluate the coordination results achieved, as well as identify characteristics of entities and assess specific geographic areas in which practices that could enhance coordination and reduce unnecessary overlap could be adopted. DHS generally agreed with our suggestions and reported taking steps to address them. Moving forward, we will monitor DHS's progress in addressing these actions. Concurrent with the release of our 2013 annual report, we updated our assessments of the progress that DHS has made in addressing the actions we suggested in our 2011 and 2012 annual reports. Table 1 outlines the 2011-2012 DHS-related areas in which we identified fragmentation, overlap, or potential duplication, and highlights DHS's and Congress's progress in addressing them. In our March 2011 and February 2012 reports, in particular, we suggested that DHS or Congress take 21 actions to address the areas of overlap or potential duplication that we found. Of these 21 actions, 2 (approximately 10 percent) have been addressed, 13 (approximately 62 percent) have been partially addressed, and the remaining 6 (approximately 29 percent) have not been addressed.
For example, to address the potential for overlap among three information-sharing mechanisms that DHS funds and uses to communicate security-related information with public transit agencies, in March 2011, we suggested that DHS could identify and implement ways to more efficiently share security-related information by assessing the various mechanisms available to public transit agencies. We assessed this action as partially addressed because TSA has taken steps to streamline information sharing with public transit agencies, but the agency continues to maintain various mechanisms to share such information. In March 2011, we also found that TSA's security assessments for hazardous material trucking companies overlapped with efforts conducted by the Department of Transportation's (DOT) Federal Motor Carrier Safety Administration (FMCSA), and as a result, government resources were not being used effectively. After we discussed this overlap with TSA in January 2011, agency officials stated that, moving forward, they intend to only conduct reviews on trucking companies that are not covered by FMCSA's program, an action that, if implemented as intended, we projected could save more than $1 million over the next 5 years. We also suggested that TSA and FMCSA could share each other's schedules for conducting future security reviews, and avoid scheduling reviews on hazardous material trucking companies that have recently received, or are scheduled to receive, a review from the other agency. We assessed this action as addressed because in August 2011, TSA reported that it had discontinued conducting security reviews on trucking companies that are covered by the FMCSA program. Discontinuing such reviews should eliminate the short-term overlap between TSA's and FMCSA's reviews of hazardous material trucking companies.
Although the executive branch and Congress have made some progress in addressing the issues that we have previously identified, additional steps are needed to address the remaining areas and achieve associated benefits. For example, to eliminate potentially duplicative efforts of interagency forums in securing the northern border, in March 2011, we reported that DHS should provide guidance to and oversight of interagency forums to prevent duplication of efforts and help effectively utilize personnel resources to strengthen coordination efforts along the northern border. Further, the four DHS grant programs that we reported on in February 2012--the State Homeland Security Program, the Urban Areas Security Initiative, the Port Security Grant Program, and the Transit Security Grant Program--have multiple areas of overlap and can be sources of potential unnecessary duplication. These grant programs, which FEMA used to allocate about $20.3 billion to grant recipients from fiscal years 2002 through 2011, have similar goals and fund similar activities, such as equipment and training, in overlapping jurisdictions. To address these areas of overlap, we reported that Congress may want to consider requiring DHS to report on the results of its efforts to identify and prevent unnecessary duplication within and across these grant programs, and consider these results when making future funding decisions for these programs. Such reporting could help ensure that both Congress and FEMA steer scarce resources to homeland security needs in the most efficient, cost-effective way possible. See appendix I, table 4, for a summary of the fragmentation, overlap, and duplication areas and actions we identified in our 2011-2013 annual reports that are relevant to DHS. Our 2011-2013 annual reports also identified 13 areas where DHS or Congress should consider taking 29 actions to reduce the cost of operations or enhance revenue collection for the Department of the Treasury.
Most recently, in April 2013, we identified 4 cost-savings and revenue enhancement areas related to DHS. Table 2 provides a summary of these 2013 areas. In addition, in April 2013 we also reported on the steps that DHS and Congress have taken to address the cost savings and revenue enhancement areas identified in our 2011 and 2012 annual reports. Table 3 provides a summary of the 2011-2012 DHS-related areas in which we identified opportunities for cost savings or revenue enhancement, as well as the status of efforts to address these areas. Of the 21 related actions we suggested that DHS or Congress take in our March 2011 and February 2012 reports to either reduce the cost of government operations or enhance revenue collection, as of March 2013, 3 (about 14 percent) have been addressed, 11 (about 52 percent) have been partially addressed, and 7 (about 33 percent) have not been addressed. For example, in February 2012, we reported that to increase the likelihood of successful implementation of the Arizona Border Surveillance Technology Plan, minimize performance risks, and help justify program funding, the Commissioner of CBP should update the agency's cost estimate for the plan using best practices. This year, we assessed this action as partially addressed because CBP initiated action to update its cost estimate, using best practices, for the plan by providing revised cost estimates in February and March 2012 for the plan's two largest projects. However, CBP has not independently verified its life cycle cost estimates for these projects with independent cost estimates and reconciled any differences with each system's respective life cycle cost estimate, consistent with best practices. Such action would help CBP better ensure the reliability of each system's cost estimate.
Further, in March 2011, we stated that Congress may wish to consider limiting program funding pending receipt of an independent assessment of TSA's Screening of Passengers by Observation Techniques (SPOT) program. This year, we assessed this action as addressed because Congress froze the program funds at the fiscal year 2010 level and funded less than half of TSA's fiscal year 2012 request for full-time behavior detection officers. Although DHS and Congress have made some progress in addressing the issues that we have previously identified that may produce cost savings or revenue enhancements, additional steps are needed. For example, in February 2012, we reported that FEMA should develop and implement a methodology that provides a more comprehensive assessment of a jurisdiction's capability to respond to and recover from a disaster without federal assistance. As of March 2013, FEMA had not addressed this action. In addition, in the 2012 report, we suggested that Congress, working with the Administrator of TSA, may wish to consider increasing the passenger aviation security fee according to one of many options, including but not limited to the President's Deficit Reduction Plan option ($7.50 per one-way trip by 2017) or the Congressional Budget Office, President's Debt Commission, and House Budget Committee options ($5 per one-way trip). These options could increase fee collections over existing levels from about $2 billion to $10 billion over 5 years. However, as of March 2013, Congress had not passed legislation to increase the passenger security fee. For additional information on our assessment of DHS's and Congress's efforts to address our previously reported actions, see GAO's Action Tracker. Following its establishment in 2003, DHS focused its efforts primarily on implementing its various missions to meet pressing homeland security needs and threats, and less on creating and integrating a fully and effectively functioning department.
As the department matured, it has put into place management policies and processes and made a range of other enhancements to its management functions, which include acquisition, information technology, financial, and human capital management. However, DHS has not always effectively executed or integrated these functions. The department has made considerable progress in transforming its original component agencies into a single cabinet-level department and positioning itself to achieve its full potential; however, challenges remain for DHS to address across its range of missions. DHS has also made important strides in strengthening the department's management functions and in integrating those functions across the department. As a result, in February 2013, we narrowed the scope of the high-risk area and changed the focus and name from Implementing and Transforming the Department of Homeland Security to Strengthening the Department of Homeland Security Management Functions. Of the 31 actions and outcomes GAO identified as important to addressing this area, DHS has fully or mostly addressed 8, partially addressed 16, and initiated 7. Moving forward, continued progress is needed in order to mitigate the risks that management weaknesses pose to mission accomplishment and the efficient and effective use of the department's resources. For example: Acquisition management: Although DHS has made progress in strengthening its acquisition function, most of DHS's major acquisition programs continue to cost more than expected, take longer to deploy than planned, or deliver less capability than promised. We identified 42 programs that experienced cost growth, schedule slips, or both, with 16 of the programs' costs increasing from a total of $19.7 billion in 2008 to $52.2 billion in 2011--an aggregate increase of 166 percent. 
We reported in September 2012 that DHS leadership has authorized and continued to invest in major acquisition programs even though the vast majority of those programs lack foundational documents demonstrating the knowledge needed to help manage risks and measure performance. We recommended that DHS modify acquisition policy to better reflect key program and portfolio management practices and ensure acquisition programs fully comply with DHS acquisition policy. DHS concurred with our recommendations and reported taking actions to address some of them. Moving forward, DHS needs to, for example, validate required acquisition documents in a timely manner, and demonstrate measurable progress in meeting cost, schedule, and performance metrics for its major acquisition programs. Information technology management: DHS has defined and begun to implement a vision for a tiered governance structure intended to improve information technology (IT) program and portfolio management, which is generally consistent with best practices. However, the governance structure covers less than 20 percent (about 16 of 80) of DHS's major IT investments and 3 of its 13 portfolios, and the department has not yet finalized the policies and procedures associated with this structure. In July 2012, we recommended that DHS finalize the policies and procedures and continue to implement the structure. DHS agreed with these recommendations and estimated it would address them by September 2013. Financial management: DHS has, among other things, received a qualified audit opinion on its fiscal year 2012 financial statements for the first time since the department's creation. DHS is working to resolve the audit qualification to obtain an unqualified opinion for fiscal year 2013. 
However, DHS components are currently in the early planning stages of their financial systems modernization efforts, and until these efforts are complete, their current systems will continue to inadequately support effective financial management, in part because of their lack of substantial compliance with key federal financial management requirements. Without sound controls and systems, DHS faces challenges in obtaining and sustaining audit opinions on its financial statement and internal controls over financial reporting, as well as ensuring its financial management systems generate reliable, useful, and timely information for day-to-day decision making. Human capital management: In December 2012, we identified several factors that have hampered DHS's strategic workforce planning efforts and recommended, among other things, that DHS identify and document additional performance measures to assess workforce planning efforts. DHS agreed with these recommendations and stated that it plans to take actions to address them. In addition, DHS has made efforts to improve employee morale, such as taking actions to determine the root causes of morale problems. Despite these efforts, however, federal surveys have consistently found that DHS employees are less satisfied with their jobs than the government-wide average. In September 2012, we recommended, among other things, that DHS improve its root cause analysis efforts of morale issues. DHS agreed with these recommendations and noted actions it plans to take to address them. In conclusion, given DHS's significant leadership responsibilities in securing the homeland, it is critical that the department's programs and activities are operating as efficiently and effectively as possible; that they are sustainable; and that they continue to mature, evolve, and adapt to address pressing security needs.
Since it began operations in 2003, DHS has implemented key homeland security operations and achieved important goals and milestones in many areas. These accomplishments are especially noteworthy given that the department has had to work to transform itself into a fully functioning cabinet department while implementing its missions. However, our work has shown that DHS can take actions to reduce fragmentation, overlap, and unnecessary duplication to improve the efficiency of its operations and achieve cost savings in several areas. Further, DHS has taken steps to strengthen its management functions and integrate them across the department; however, continued progress is needed to mitigate the risks that management weaknesses pose to mission accomplishment and the efficient and effective use of the department's resources. DHS has indeed made significant strides in protecting the homeland, but has yet to reach its full potential. Chairman Duncan, Ranking Member Barber, and members of the subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that members of the subcommittee may have. For further information regarding this testimony, please contact Cathleen A. Berrick at (202) 512-3404 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Kathryn Bernet, Assistant Director; Elizabeth Luke; and Meg Ullengren. This enclosure presents a summary of the areas and actions we identified in our 2011-2013 annual reports that are relevant to the Department of Homeland Security (DHS). 
It also includes our assessment of the overall progress made in each of the areas and the progress made on each action that we identified in our 2011 and 2012 annual reports in which Congress and DHS could take actions to reduce or eliminate fragmentation, overlap, and potential duplication or achieve other potential financial benefits. As of April 26, 2013, we have not assessed DHS's progress in addressing the relevant 2013 areas. Table 4 presents our assessment of the overall progress made in implementing the actions needed in the areas related to fragmentation, overlap, or duplication. Table 5 presents our assessment of the overall progress made in implementing the actions needed in the areas related to cost savings or revenue enhancement.
Since beginning operations in 2003, DHS has become the third-largest federal department, with more than 224,000 employees and an annual budget of about $60 billion. Over the past 10 years, DHS has implemented key homeland security operations and achieved important goals to create and strengthen a foundation to reach its potential. Since 2003, GAO has issued more than 1,300 reports and congressional testimonies designed to strengthen DHS's program management, performance measurement efforts, and management processes, among other things. GAO has reported that overlap and fragmentation among government programs, including those of DHS, can cause potential duplication, and reducing it could save billions of tax dollars annually and help agencies provide more efficient and effective services. Moreover, in 2003, GAO designated implementing and transforming DHS as high risk because it had to transform 22 agencies into one department, and failure to address associated risks could have serious consequences. This statement addresses (1) opportunities for DHS to reduce fragmentation, overlap, and duplication in its programs; save tax dollars; and enhance revenue, and (2) opportunities for DHS to strengthen its management functions. Since 2011, GAO has identified 11 areas across the Department of Homeland Security (DHS) where fragmentation, overlap, or potential duplication exists and 13 areas of opportunity for cost savings or enhanced revenue collections. In these reports, GAO has suggested 53 total actions to the department and Congress to help strengthen the efficiency and effectiveness of DHS operations. In GAO's 2013 annual report on federal programs, agencies, offices, and initiatives that have duplicative goals or activities, GAO identified 6 new areas where DHS could take actions to address fragmentation, overlap, or potential duplication or achieve significant cost savings. 
For example, GAO found that DHS does not have a department-wide policy defining research and development (R&D) or guidance directing components on how to report R&D activities. Thus, DHS does not know its total annual investment in R&D, which limits its ability to oversee components' R&D efforts. In particular, GAO identified at least 6 components with R&D activities, as well as an additional $255 million in fiscal year 2011 R&D obligations by DHS components that were not centrally tracked. GAO suggested that DHS develop and implement policies and guidance for defining and overseeing R&D at the department. In addition, GAO reported that by reviewing the appropriateness of the federal cost share the Transportation Security Administration (TSA) applies to agreements financing airport facility modification projects related to the installation of checked baggage screening systems, TSA could, if a reduced cost share were deemed appropriate, achieve cost efficiencies of up to $300 million by 2030 and be positioned to install a greater number of optimal baggage screening systems. GAO has also updated its assessments of the progress that DHS and Congress have made in addressing the suggested actions from the 2011 and 2012 annual reports. As of March 2013, of the 42 actions from these reports, 5 have been addressed (12 percent), 24 have been partially addressed (57 percent), and the remaining 13 have not been addressed (31 percent). Although DHS and Congress have made some progress in addressing the issues that GAO has previously identified, additional steps are needed to address the remaining areas to achieve associated benefits. While challenges remain across its missions, DHS has made considerable progress since 2003 in transforming its original component agencies into a single department. 
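The action-status tallies above are simple shares of the 42 suggested actions; a minimal sketch of the arithmetic:

```python
# Action-status counts from the 2011 and 2012 annual reports,
# as assessed in March 2013: 5 addressed, 24 partially addressed,
# 13 not addressed, of 42 total.
statuses = {"addressed": 5, "partially addressed": 24, "not addressed": 13}

total = sum(statuses.values())  # 42 actions in all
shares = {k: round(100 * v / total) for k, v in statuses.items()}
# shares -> {'addressed': 12, 'partially addressed': 57, 'not addressed': 31}
```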
As a result, in its 2013 biennial high-risk update, GAO narrowed the scope of the area and changed its focus and name from Implementing and Transforming the Department of Homeland Security to Strengthening the Department of Homeland Security Management Functions. To more fully address this area, DHS needs to further strengthen its acquisition, information technology, and financial and human capital management functions. Of the 31 actions and outcomes GAO identified as important to addressing this area, DHS has fully or mostly addressed 8, partially addressed 16, and initiated 7. Moving forward, DHS needs to, for example, validate required acquisition documents in a timely manner, and demonstrate measurable progress in meeting cost, schedule, and performance metrics for its major acquisition programs. In addition, DHS has begun to implement a governance structure to improve information technology management consistent with best practices, but the structure covers less than 20 percent of DHS's major information technology investments. While this testimony contains no new recommendations, GAO previously made about 1,800 recommendations to DHS designed to strengthen its programs and operations. The department has implemented more than 60 percent of them and has actions under way to address others.
According to IRS data, about 60,000 FCCs and about 2.3 million USCCs filed income tax returns in 1995. FCCs and USCCs may not pay U.S. income tax for a variety of reasons. For instance, some corporations may have zero tax liabilities because of current-year operating losses; losses carried forward from preceding tax years; or sufficient tax credits available to offset tax liabilities. Other corporations may report no taxable income because of the improper pricing of intercompany transactions. Any company that has a related company with which it transacts business needs to establish transfer prices for those intercompany transactions. The pricing of intercompany transactions affects the distribution of profits and ultimately the taxable income of the companies. Research efforts have attempted to explain why, on average, USCCs appear to be more profitable than FCCs. Some researchers have concluded that part of the difference is attributable to differences in the characteristics of FCCs and USCCs, while acknowledging that transfer price abuses may also explain some of the difference. A 1997 study, for example, focused on why FCCs tended to report a lower ratio of net income to gross receipts--a measure of profitability--than USCCs did. According to this study, differences in investment income, industrial classification, age, and amount of interest expense explain much of the difference in the profitability between FCCs and USCCs. That study also found that corporations whose largest foreign shareholders owned only 25 to 50 percent of the corporations' stock had low profitability, similar to that of corporations that were 100-percent owned by a single foreign shareholder. 
The author suggested that, if income-shifting through transfer price abuses were an important factor in explaining the differences in profitability across corporations, then one would expect single shareholder corporations to be less profitable because they would seem to have less difficulty in shifting income between affiliates. We were unable to identify any studies that were able to control for all potential factors, other than transfer price abuse, that may explain the difference in profitability. To meet our objectives, we obtained data from IRS' Statistics of Income (SOI) Division on corporate tax returns for 1989 through 1995. We used these data to determine the percentages of corporations that did not pay taxes in each year and to obtain information about their characteristics. We did not audit the SOI data; however, we conducted reliability tests to ensure the consistency of the data with selected FCC and USCC corporate statistics published by the SOI Division. In this report, we defined a corporation as being large if its reported total assets in tax year 1995 were at least $250 million or its reported gross receipts totaled at least $50 million. For years preceding tax year 1995, we deflated the $250 million asset and $50 million receipt definition of large corporate size by the gross domestic product price deflator for those years. We made this adjustment in the dollar magnitude of the definition because changing price levels, over time, alter the purchasing power of gross receipts and assets. Our report also compares "new" and "older" corporations. New corporations are those whose income tax returns showed incorporation dates within 3 years of the tax year date. For example, for tax year 1995, new corporations are those with incorporation dates no earlier than 1993. Other corporations are "older" corporations. 
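The size-threshold deflation described above can be sketched as follows. The deflator values below are hypothetical placeholders (indexed so that 1995 = 100), not the series GAO actually used:

```python
# Hypothetical GDP price deflator series, indexed so that 1995 = 100.
# These are placeholder values for illustration, not GAO's figures.
GDP_DEFLATOR = {1989: 87.5, 1990: 90.9, 1991: 94.0,
                1992: 96.2, 1993: 98.1, 1994: 99.2, 1995: 100.0}

def large_corp_thresholds(tax_year):
    """Deflate the 1995 'large corporation' cutoffs ($250 million in
    assets or $50 million in gross receipts) into a prior year's dollars."""
    ratio = GDP_DEFLATOR[tax_year] / GDP_DEFLATOR[1995]
    return 250e6 * ratio, 50e6 * ratio

assets_1989, receipts_1989 = large_corp_thresholds(1989)
```

Deflating the cutoffs keeps the real size of a "large" corporation comparable across years, since a fixed nominal threshold would classify more corporations as large in later, higher-price-level years.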
A limitation on the precision of our comparison between "new" versus "older" FCCs and USCCs is that the definition of "new" relies on the dates of incorporation indicated on corporate tax returns. In some cases, corporations that merge may also reincorporate under a new corporate name. While the "new" entity may represent the combination of two mature corporations, our definition would count them as one "new" corporation. The SOI data in this report are based on SOI's probability sample of taxpayer returns and thus are subject to some imprecision owing to sampling variability. Using SOI's sampling weights, we estimated confidence intervals for the percentage of nontaxpaying FCCs and USCCs for tax years 1989 through 1995. We requested comments on a draft of this report from the Commissioner of Internal Revenue and the Director of the Department of the Treasury's Office of Tax Analysis. IRS and Treasury's comments are discussed near the end of this letter. We did our review from July through December 1998 in accordance with generally accepted government auditing standards. In each year from 1989 through 1995, a majority of corporations, both foreign- and U.S.-controlled, paid no U.S. income tax. However, in each of these years, a higher percentage of FCCs than USCCs paid no taxes. The percentage of FCCs not paying taxes ranged from 67 percent to 73 percent during those years, while the percentage of USCCs not paying taxes ranged from 59 to 62 percent, as shown in figure 1 and appendix I, table I.1. Large corporations, both FCCs and USCCs, were more likely to pay taxes than smaller corporations. Among large corporations, the percentage of FCCs that paid no tax exceeded that for USCCs from 1989 to 1993. However, in 1994, the difference between the two groups was not statistically significant, and in 1995, the percentage of large FCCs that paid no U.S. income tax was slightly less than that of large USCCs. 
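The confidence-interval estimation mentioned above can be sketched as a weighted proportion with a normal approximation. This is an illustrative simplification using Kish's effective-sample-size formula; a full SOI analysis would also account for stratification and finite-population corrections:

```python
import math

def weighted_proportion_ci(weights, nontaxpaying, z=1.96):
    """Approximate 95% CI for a weighted proportion, e.g. the estimated
    share of nontaxpaying corporations computed from sampling weights.
    Simplified sketch: ignores stratification and finite-population
    corrections that a full SOI analysis would apply."""
    total = sum(weights)
    p = sum(w for w, flag in zip(weights, nontaxpaying) if flag) / total
    n_eff = total ** 2 / sum(w * w for w in weights)  # Kish approximation
    se = math.sqrt(p * (1 - p) / n_eff)
    return p - z * se, p + z * se
```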
In 1989, 33 percent of large FCCs and 27 percent of large USCCs paid no tax, while in 1995, 29 percent of large FCCs and 32 percent of large USCCs paid no tax. (See fig. 1 and app. I, table I.2.) Although nontaxpaying corporations, both foreign- and U.S.-controlled, were the majority of all corporations that filed tax returns in 1995, they accounted for well under half of all corporate assets and receipts. The 67 percent of FCCs that paid no federal income tax in 1995 accounted for 24 percent of the assets and 25 percent of the gross receipts of all FCCs in that year, as shown in figure 2. Similarly, the 61 percent of USCCs that paid no U.S. income tax in 1995 accounted for 21 percent of the assets owned by all USCCs and 17 percent of their receipts. (See also table I.3 in app. I.) Large nontaxpaying FCCs and USCCs filed a small percentage of all returns filed by nontaxpaying corporations, yet they accounted for most of the assets of those corporations in 1995. Specifically, large nontaxpaying FCCs made up about 2 percent of all nontaxpaying FCCs but accounted for 84 percent of the assets of all nontaxpaying FCCs. Similarly, large nontaxpaying USCCs made up only about four-tenths of 1 percent of all nontaxpaying USCCs, yet they accounted for 80 percent of all nontaxpaying USCC assets. Also in 1995, large nontaxpaying FCCs accounted for 86 percent of the receipts generated by all nontaxpaying FCCs, while large nontaxpaying USCCs accounted for 48 percent of the receipts generated by all nontaxpaying USCCs. (See fig. I.1 in app. I.) The concentration of assets and receipts in the large nontaxpaying FCCs and USCCs was similar during the earlier years of our study period. This concentration was not unique to nontaxpaying corporations; taxpaying corporations were similarly concentrated. 
Other ways to compare large FCCs and USCCs include examining (1) the percentage of large FCCs and USCCs that paid relatively little tax and (2) the taxes paid relative to gross receipts by large corporations, as shown in table 1. In 1995, the percentage of large FCCs and USCCs that paid less than $100,000 in tax was 42 percent for FCCs and 40 percent for USCCs. In 1995, large FCCs, as a whole, paid significantly less tax per $1,000 in gross receipts than did large USCCs, despite the fact that a greater percentage of large USCCs paid no tax. The reason for this is that the large FCCs that paid relatively little or no tax had significantly greater average gross receipts than did the large USCCs that paid little or no tax. An earlier study of the relative profitability of FCCs and USCCs suggested that the lower relative age of FCCs partially explained their lower reported profitability. That study also showed that the reported profitability of both FCCs and USCCs varied across industrial sectors. From 1989 to 1993, a greater percentage of large FCCs than large USCCs were new (i.e., incorporated for 3 years or less). However, as shown in figure 3, this relationship was reversed for 1994 and 1995, and although not statistically significant, the percentage of new large USCCs exceeded the percentage of new large FCCs. This change could explain, in part, why the percentage of nontaxpaying large FCCs declined relative to the percentage of nontaxpaying large USCCs over the same time period. The IRS data that we examined for tax years 1989-95 also showed that, in each of those years, the percentage of new large corporations paying no tax exceeded the percentage of older large corporations paying no tax. This relationship held for large FCCs, large USCCs, and all large corporations together. (See table I.4 in app. I.) 
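The per-$1,000-of-receipts metric used in the comparison above is a simple ratio; a minimal sketch with hypothetical aggregates (not SOI figures):

```python
def tax_per_1000_receipts(total_tax, total_receipts):
    """Taxes paid per $1,000 of gross receipts -- the metric used to
    compare large FCCs and USCCs in table 1."""
    return 1000.0 * total_tax / total_receipts

# Hypothetical aggregates for illustration only (not SOI figures):
fcc_rate = tax_per_1000_receipts(total_tax=4.0e9, total_receipts=1.0e12)   # 4.0
uscc_rate = tax_per_1000_receipts(total_tax=9.0e9, total_receipts=1.0e12)  # 9.0
```

Because the metric is an aggregate ratio, a few very large corporations with little or no tax can pull a group's rate down even when a smaller share of that group pays no tax at all, which is the pattern described above.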
Large FCCs were more heavily concentrated in the manufacturing and wholesale trade sectors and less heavily concentrated in the financial services sector than were large USCCs. These differences in industrial concentration were found whether one compared large nontaxpaying FCCs to large nontaxpaying USCCs or all large FCCs to all large USCCs. Table 2 shows these comparisons for 1995. The ratios of the costs of goods sold and other costs to receipts varied significantly across industries, which could account for some of the difference between the amount of taxes that large FCCs paid per dollar of receipts and the amount that large USCCs paid. The ratio of taxable income per dollar of receipts should be inversely related to the ratio of costs per dollar of receipts. Corporations in the manufacturing, wholesale trade, and retail trade industries, on average, had significantly higher ratios of costs of goods sold to receipts than corporations in the financial and nonfinancial service industries. The largest component of costs of goods sold is purchases from other businesses, which, as table 3 on p. 10 indicates, are relatively unimportant for the two service industries. In contrast, corporations in the financial services industry, on average, had significantly higher ratios of interest expenses to receipts. This pattern of differences in cost ratios across industries was similar for both all large FCCs and all large USCCs. The pattern also was similar for large nontaxpaying FCCs and USCCs, with one exception: the ratio of interest expenses to receipts for nontaxpaying USCCs in the financial services sector was not significantly higher than that for nontaxpaying USCCs overall. (See table I.5 in app. I.) 
Wholesale trade, the industry with the highest ratio of costs of goods sold to receipts, had the lowest ratio of taxes paid to receipts, while financial services, the industry with the lowest ratio of costs of goods sold to receipts, had the highest ratio of taxes paid to receipts among the major industries. These relationships were similar for both large FCCs and large USCCs. The fact that a significantly larger percentage of all large USCCs were in the financial services industry and a significantly smaller percentage of them were in the wholesale trade industry (compared to large FCCs) may, in part, explain why the aggregate ratio of taxes paid to receipts, shown in table 3, was significantly higher for USCCs. Nevertheless, within each industry, the ratio of taxes paid to receipts was higher for large USCCs than for large FCCs in 1995. Moreover, in every major industry except the financial services industry, a greater percentage of large FCCs than large USCCs paid no tax at all for all the years that we examined. (See tables I.6 through I.8 in app. I.) The data in table 3 do not reveal any logical relationships across industries between (1) the ratios of either costs or taxes paid to receipts and (2) the percentage of corporations paying tax in each industry. For example, even though corporations in the financial services industry, on average, had the lowest ratio of costs of goods sold to receipts and the highest ratio of taxes paid to receipts, a higher percentage of corporations in that industry paid no tax compared with all the other industries. We requested comments on a draft of this report from the Commissioner of Internal Revenue and the Director of the Department of the Treasury's Office of Tax Analysis. On March 8, 1999, we received comments prepared by IRS' Chief Operations Officer through the Office of the National Director for Legislative Affairs. 
The Director of the Office of Tax Analysis and his staff provided comments in a February 25, 1999, meeting. Both IRS and Treasury were in overall agreement with the draft report. Both elaborated on issues we had raised, and both provided some technical comments, which we incorporated where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to Senator William V. Roth, Jr., Chairman, and Senator Daniel Patrick Moynihan, Ranking Minority Member, Senate Committee on Finance; Representative Bill Archer, Chairman, and Representative Charles B. Rangel, Ranking Minority Member, House Committee on Ways and Means; Representative Amo Houghton, Chairman, and Representative William J. Coyne, Ranking Minority Member, Subcommittee on Oversight, House Committee on Ways and Means; and other interested congressional committees. We will also send copies to The Honorable Robert E. Rubin, Secretary of the Treasury; The Honorable Charles O. Rossotti, Commissioner of Internal Revenue; and other interested parties. Copies will also be made available to others upon request. This report was prepared under the direction of Charlie W. Daniel, Assistant Director. Other major contributors are listed in appendix II. If you have any questions, please call Mr. Daniel or me at (202) 512-9110.

The tables and figure in this statistical compendium supplement those in the letter. All the values were obtained from IRS' SOI corporate data files for tax years 1989-95. "Other" includes transportation and public utilities; mining; construction; agriculture, forestry, and fishing; and other trades.

Shirley A. Jones, Senior Attorney

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touch-tone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO provided an update to its report on the nonpayment of U.S. income taxes by foreign-controlled corporations (FCC) and U.S.-controlled corporations (USCC), focusing on comparisons of: (1) the percentages of FCCs and USCCs that filed income tax returns showing no tax liabilities for 1989 through 1995, the latest years for which data were available; and (2) selected characteristics, including age, industrial sector, and certain cost ratios, of large corporations--those with assets of $250 million or more or gross receipts of $50 million or more. GAO noted that: (1) in each year between 1989 and 1995, a majority of corporations, both foreign- and U.S.-controlled, paid no U.S. income tax; (2) among large corporations, the percentage of FCCs that paid no tax exceeded that for USCCs from 1989 through 1993; (3) in 1994, the difference between the two groups was not statistically significant, and in 1995, the percentage of large FCCs that paid no U.S. income tax was slightly less than that of large USCCs; (4) differences in the characteristics of large FCCs and USCCs may account for part of the differences in the amount of taxes paid by the two groups; (5) one difference was the percentage of new corporations--3 years old or less--in each group; (6) the Internal Revenue Service data GAO reviewed indicate that newer corporations were less likely than older corporations to pay taxes; (7) from 1989 to 1993, a greater percentage of large FCCs than large USCCs were new, but from 1994 to 1995, a greater percentage of large USCCs than large FCCs were new; (8) another significant difference between large FCCs and large USCCs was in their distribution across industrial sectors; (9) in 1995, in comparison to large USCCs, large FCCs were more heavily concentrated in the manufacturing and wholesale trade sectors and less concentrated in the financial services sector; (10) aggregate ratios of costs to receipts for all large corporations differed 
significantly across industrial sectors; (11) the difference in cost ratios across industries, combined with the fact that large FCCs and USCCs were concentrated in different industries, could account for some of the difference in the amount of taxes that large FCCs paid per dollar of receipts and that large USCCs paid; and (12) the ratio of taxable income per dollar of receipts should be inversely related to the ratio of costs per dollar of receipts.
The federal-aid highway program is financed through motor fuel taxes and other levies on highway users. Federal aid for highways is provided largely on a cash basis from the Highway Trust Fund. States have financed roads primarily through a combination of state revenues and federal aid. Typically, states raise their share of the funds by taxing motor fuels and charging user fees. In addition, debt financing--issuing bonds to pay for highway development and construction--represents about 10 percent of total state funding for highways, although some states make greater use of borrowing than others. Federal-aid highway funding to states is typically in the form of grants. These grants are distributed from the Highway Trust Fund and apportioned to states based on a series of funding formulas. Funding is subject to grant-matching rules--for most federally funded highway projects, an 80-percent federal and 20-percent state funding ratio. States are subject to pay-as-you-go rules where they obligate all of the funds needed for a project up front and are reimbursed for project costs as they are incurred. In the mid-1990s, FHWA and the states tested and evaluated a variety of innovative financing techniques and strategies. Many financing innovations were approved for use through administrative action or legislative changes under NHS and TEA-21. Three of the techniques approved were SIBs, GARVEEs, and TIFIA loans. SIBs are state revolving loan funds that make loans or loan guarantees to approved projects; the loans are subsequently repaid, and recycled back into the revolving fund for additional loans. GARVEEs are any state issued bond or note repayable with future federal-aid highway funds. Through the issuance of GARVEE bonds, projects are able to meet the need for up-front capital as well as use future federal highway dollars for debt service. 
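The grant-matching rule above is a fixed split of project costs; a minimal sketch, assuming the typical 80-percent federal / 20-percent state ratio (actual ratios vary by program):

```python
def split_project_cost(total_cost, federal_ratio=0.80):
    """Split a federally funded highway project's cost under the typical
    80-percent federal / 20-percent state grant-matching rule."""
    federal = total_cost * federal_ratio
    state = total_cost - federal
    return federal, state

# A $10 million project: $8 million federal, $2 million state match.
federal_share, state_share = split_project_cost(10_000_000)
```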
TIFIA allows FHWA to provide credit assistance, up to 33 percent of eligible project costs, to sponsors of major transportation projects. Credit assistance can take the form of a loan, loan guarantee, or line of credit. See appendix II for additional information about these financing techniques. According to FHWA, the goals of its Innovative Finance Program are to accelerate projects by reducing inefficient and unnecessary constraints on states' management of federal highway funds; expand investment by removing barriers to private investment; encourage the introduction of new revenue streams, particularly for the purpose of retiring debt obligations; and reduce financing and related costs, thus freeing up the savings for investments into the transportation system itself. When Congress established the TIFIA program in TEA-21, it set out goals for the program to offer sponsors of large transportation projects a new tool to leverage limited Federal resources, stimulate additional investment in our nation's infrastructure, and encourage greater private sector participation in meeting our transportation needs. Over the last 8 years, many states have used one or more of the FHWA-sponsored alternative financing tools to fund their highway and transit infrastructure projects. As of June 2002, 32 states (including the Commonwealth of Puerto Rico) have established SIBs and have entered into 294 loan agreements with a dollar value of about $4.06 billion; 9 states (including the District of Columbia and Commonwealth of Puerto Rico) have entered into TIFIA credit assistance agreements for 11 projects, representing $15.4 billion in transportation investment; and 6 states have issued GARVEE bonds with face amounts totaling $2.3 billion. These mechanisms have given states additional options to accelerate the construction of projects and leverage federal assistance. They have also provided states with greater flexibility and a wider range of funding techniques. 
States' use of innovative financing techniques has resulted in projects being constructed more quickly than they would be under traditional pay-as-you-go financing. This is because techniques such as SIBs can provide loans to fill a funding gap, which allows the project to move ahead. For example, using a $25 million SIB loan for land acquisition in the initial phase of the Miami Intermodal Center, Florida accelerated the project by 2 years, according to FHWA. Similarly, South Carolina used an array of innovative finance tools when it undertook its "27 in 7" program--a plan to complete infrastructure investment projects expected to take 27 years in just 7 years. Officials in the states that we contacted that were using FHWA innovative finance tools noted that project acceleration was one of the main reasons for using them. Innovative finance--in particular the TIFIA program--can leverage federal funds by attracting additional nonfederal investments in infrastructure projects. For example, the TIFIA program funds a lower share of eligible project costs than traditional federal-aid programs, thus requiring a larger investment by other, non-federal funding sources. It also attracts private creditors by assuming a lower priority on revenues pledged to repay debt. Bond rating companies told us they view TIFIA as "quasi-equity" because the federal loan is subordinate to all other debt in terms of repayments and offers debt service grace periods, low interest costs, and flexible repayment terms. It is often difficult to measure precisely the leveraging effect of the federal investment. 
As a recent FHWA evaluation report noted, just comparing the cost of the federal subsidy with the size of the overall investment can overstate the federal influence--the key issue is whether the assisted projects were sufficiently creditworthy even without federal assistance, so that the federal impact was primarily to lower the sponsor's cost of capital. However, TIFIA's features, taken together, can enhance senior project debt ratings and thus make the project more attractive to investors. For example, the $3.2 billion Central Texas Turnpike project--a toll road to serve the Austin-San Antonio corridor--received a $917 million TIFIA loan and will use future toll revenues to repay debt on the project, including revenue bonds issued by the Texas Transportation Commission and the TIFIA loan. According to public finance analysts from two ratings firms, the project leaders were able to offset potential concerns about the uncertain toll road revenue stream by bringing the TIFIA loan to the project's financing. FHWA's innovative finance techniques provide states with greater flexibility when deciding how to put together project financing. By having access to various alternatives, states can finance large transportation projects that they may not have been able to build with pay-as-you-go financing. For example, faced with the challenge of Interstate highway needs of over $1.0 billion, the state of Arkansas determined that GARVEE bonds would make up for the lack of available funding. In June 1999, Arkansas voters approved the issuance of $575 million in GARVEE bonds to help finance this reconstruction on an accelerated schedule. The state will use future federal funds, together with the required state matching funds and the proceeds from a diesel fuel tax increase, to retire the bonds. The GARVEE bonds allow Arkansas to rebuild approximately 380 miles, or 60 percent of its total Interstate miles, within 5 years. 
Although FHWA's innovative financing tools have provided states with additional options for meeting their needs, a number of factors can limit the use of these tools. State DOTs are not always willing to use federal innovative financing tools, nor do they always see advantages to using them. For example, officials in two states indicated that they had a philosophy against committing their federal aid funding to debt service. Moreover, not all states see advantages to using FHWA innovative financing tools. For example, one official indicated that his state did not have a need to accelerate projects because the state has only a few relatively small urban areas and thus does not face the congestion problems that would warrant using innovative financing tools more often. Officials in another state noted that because their DOT has the authority to issue tax-exempt bonds as long as the state has a revenue stream to repay the debt, they could obtain financing on their own and at lower cost. Not all state DOTs have the authority to use certain financing mechanisms, and others have limitations on the extent to which they can issue debt. For example, California requires voter approval in order to use its allocations from the Highway Trust Fund to pay for debt servicing costs. In Texas, the state constitution prohibits using highway funds to pay the state's debt service. Other states limit the amount of debt that can be incurred. For example, Montana has a debt ceiling of $150 million and is still paying off bonds issued in the late 1970s and early 1980s; it plans to issue a GARVEE bond in the next few years. Some financing tools have limitations set in law. For example, five states are currently authorized to use TEA-21 federal-aid funding to capitalize their SIBs. Although other states have created SIBs and use them, they cannot use their TEA-21 federal-aid funding to capitalize them. Similarly, TIFIA credit assistance can be used only for certain projects. 
TIFIA's requirement that, in general, projects cost at least $100 million restricts its use to large projects. We assessed the costs that federal, state, and local governments (or special purpose entities they create) would incur to finance $10 billion in infrastructure investment using four current and newly proposed financing mechanisms for meeting infrastructure investment needs. To date, most federal funding for highways and transit projects has come through federal-aid highway grants--appropriated by Congress from the Highway Trust Fund. Through the TIFIA program, the federal government also provides subsidized loans for state highway and transit projects. In addition, the federal government subsidizes state and local bond financing of highways by exempting the interest paid on those bonds from federal income tax. Another type of tax preference--tax credit bonds--has been used, to a very limited extent, to finance certain school investments. Investors in tax credit bonds receive a tax credit against their federal income taxes instead of interest payments from the bond issuer. Proposals have been made to extend the use of this relatively new financing mechanism to other public investments, including transportation projects. Using these four mechanisms to finance $10 billion in infrastructure investment results in differences in (1) total costs--and how much of the cost is incurred within the short-term 5-year period and how much of it is postponed to the future; (2) cost sharing--the extent to which states must spend their own money, or obtain private investment, in order to receive the federal subsidy; and (3) risks--which level of government bears the risk associated with an investment (or compensates others for taking the risk). As a result of these differences, for any given amount of highway investment, combined and federal government budget costs will vary, depending on which financing mechanism is used. 
Total costs--and how much of the cost is incurred within the short-term 5-year period and how much of it is postponed to the future--differ under each of the four mechanisms. As figure 1 shows, grant funds are the lowest-cost method to finance a given amount of investment expenditure, $10 billion, because grants are the only alternative that does not involve borrowing from the private sector through the issuance of bonds. Bonds are more expensive than grants because governments have to compensate private investors for the risks they assume (in addition to paying them back the present value of the bond principal). However, because the grants alternative does not involve borrowing, all of the public spending on the project must be made up front. The TIFIA direct loan, tax credit bond, and tax-exempt bond alternatives involve increasing amounts of borrowing from the private sector and, therefore, increasing overall costs. Grants entail the highest short-term costs because these costs, in our example, are all incurred on a pay-as-you-go basis. The tax-exempt bond alternative, which involves the most borrowing and has the highest combined costs, also requires the least amount of public money up front. There are significant differences across the four alternatives in the cost sharing between federal and state governments. (See fig. 2.) Federal costs would be highest under the tax credit bond alternative, under which the federal government pays the equivalent of 30 years of interest on the bonds. Grants are the next most costly alternative for the federal government. Federal costs for the tax-exempt bond and TIFIA loan alternatives are significantly lower than for tax credit bonds and grants. 
In some past and current proposals for using tax credit bonds to finance transportation investments, the issuers of the bonds would be allowed to place the proceeds from the sales of some bonds into a "sinking fund" and, thereby, earn investment income that could be used to redeem bond principal. This added feature would reduce (or eliminate) the costs of the bond financing to the issuers, but it would come at a significant additional cost to the federal government. For example, in our example where states issue $8 billion of tax credit bonds to finance highway projects, if the states were allowed to issue an additional $2.4 billion of bonds to start a sinking fund, they would be able to earn enough investment income to pay back all of the bonds without raising any of their own money. However, this added benefit for the states could increase costs to the federal government by about 30 percent--an additional $2.7 billion (in present value), raising the total federal cost to $11.7 billion. In some cases private investors participate in highway projects, either by purchasing "nonrecourse" state bonds that will be repaid out of project revenues (such as tolls) or by making equity investments in exchange for a share of future toll revenues. By making these investments, the investors take the risk that project revenues will be sufficient to pay back their principal, plus an adequate return on their investment. In the case where the nonrecourse bond is a tax-exempt bond, the state must pay an interest rate that provides an adequate after-tax rate of return, including compensation for the risk assumed by the investors. By exempting this interest payment from income tax, the federal government is effectively sharing the cost of compensating investors for risk. 
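The sinking fund arrangement described above can be sketched in a few lines. This is a hedged illustration, not GAO's model: the bond amounts come from the example in the text, while the reinvestment rate is simply solved for, and GAO's actual rate and present-value treatment may differ.

```python
# Sketch of the tax credit bond sinking fund example from the text:
# $8 billion of bonds for projects plus $2.4 billion of extra bonds whose
# proceeds are invested in a sinking fund; all principal is due in 30 years.
principal_due = 8.0 + 2.4   # $ billions of bond principal maturing at year 30
sinking_fund = 2.4          # $ billions deposited in the fund at issuance
term = 30                   # years

# Taxable reinvestment rate the fund would need so that its future value
# covers the entire principal (an assumption solved for here, not a GAO input).
required_rate = (principal_due / sinking_fund) ** (1 / term) - 1
print(f"required annual return: {required_rate:.2%}")  # roughly 5 percent
```

At roughly a 5 percent taxable return, the $2.4 billion fund compounds to cover all $10.4 billion of principal, which is why the issuers would not need to raise any of their own money.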
Nevertheless, the state still bears some of the risk-related cost and, therefore, has an incentive either to select investment projects that have lower risks or to select riskier projects only if the expected benefits from those projects are large enough to warrant taking on the additional risk. In the case of a tax credit bond where project revenues would be the only source of financing to redeem the bonds and the federal government would be committed to paying whatever credit rate investors would demand to purchase bonds at par value, the federal government would bear all of the cost of compensating the investors for risk. States would no longer have a financial incentive to balance higher project risks against higher expected project benefits. Alternatively, the credit rate could be set equal to the interest rate that would be required to sell the average state bond (issued within the same timeframe) at par value. In that case, states would bear the additional cost of selling bonds for projects with above-average risks. In the case of a TIFIA loan for a project that has private sector participation, the federal loan does not compensate the private investors for their risk; instead, the federal government assumes some of the risk and, thereby, lowers the risk to the private investors and lowers the amount that states have to pay to compensate for that risk. In summary, Mr. Chairman, alternative financing mechanisms have accelerated the pace of some surface transportation infrastructure improvement projects and provided states additional tools and flexibility to meet their needs--goals of FHWA's Innovative Finance Program. FHWA and the states have made progress toward the goal Congress set for the TIFIA program--to stimulate additional investment and encourage greater private sector participation--but measuring success involves measuring the leverage effect of the federal investment, which is often difficult. 
Our work raises a number of issues concerning the potential costs and benefits of expanding alternative financing mechanisms to meet our nation's surface transportation needs. Congress likely will weigh these potential costs and benefits as it considers reauthorizing TEA-21. Expanding the use of alternative financing mechanisms has the potential to stimulate additional investment and private participation. But expanding investment in our nation's highways and transit systems raises basic questions of who pays, how much, and when. How alternative financing mechanisms are structured determines how much of the needs are met through federal funding and how much are met by the states and others. The structure of these mechanisms also determines how much of the cost of meeting our current needs is borne by current users and taxpayers versus future users and taxpayers. While alternative finance mechanisms can leverage federal investments, they are, in the final analysis, different forms of debt financing. This debt ultimately must be repaid, with interest, either by highway users--through tolls, fuel taxes, or licensing and vehicle fees--or by the general population through increases in general fund taxes or reductions in other government services. Proposals for tax credit bonds would shift the costs of highway investments away from the traditional user-financed sources, unless revenues from the Highway Trust Fund are specifically earmarked to pay for these tax credits. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or other members of the Committees have. For further information on this testimony, please contact JayEtta Z. Hecker ([email protected]) or Steve Cohen ([email protected]). Alternatively, they may be reached at (202) 512-2834. Individuals making key contributions to this testimony include Lynn Filla-Clark, Jennifer Gravelle, Gail Marnik, Jose Oyola, Eric Tempelis, Stacey Thompson, and Jim Wozny. 
We estimated the costs that the federal, state, or local governments (or special purpose entities they create) would incur if they financed $10 billion in infrastructure investment using each of four alternative financing mechanisms: grants, tax credit bonds, tax-exempt bonds, and direct federal loans. The following subsections explain our cost computations for each alternative. We converted all of our results into present value terms, so that dollars spent in the future are adjusted to make them comparable to dollars spent today. This adjustment is particularly important when comparing the costs of bond repayment that occur 30 years from now with the costs of grants that occur immediately. We estimated the cost to the federal and state governments of traditional grants with a state match. We assumed the state was responsible for 20 percent of the investment expenditures. We then found the amount of federal grants such that the federal grant plus the state match totaled $10 billion. This form of matching resulted in the state being responsible for $2 billion of the spending and the federal government being responsible for $8 billion. We estimated the cost to the federal and state governments of issuing $8 billion in tax credit bonds with a state match of $2 billion. The cost to the federal government equals the amount of tax credits that would be paid out over a given bond term. We estimated the amount of credit payments in a given year by multiplying the amount of outstanding bonds in that year by the credit rate. We assumed that the credit rate would be approximately equal to the interest rate on municipal bonds of comparable maturity, grossed up by the marginal tax rate of bond purchasers. For the results presented in figures 1 and 2, we assumed that the bonds would have a 30-year term and a credit rating between Aaa and Baa. 
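The tax credit bond cost computation just described can be sketched as follows. The 5 percent municipal rate, 30 percent marginal tax rate, and 5 percent discount rate are illustrative assumptions, not GAO's model inputs.

```python
# Illustrative sketch of the tax credit bond federal cost computation:
# annual credit payment = outstanding bonds x credit rate, where the credit
# rate is the municipal rate "grossed up" by the buyers' marginal tax rate.
muni_rate = 0.05          # assumed tax-exempt municipal bond rate
marginal_tax = 0.30       # assumed average marginal tax rate of bond buyers
bonds_outstanding = 8.0   # $ billions, assumed level over the full term
term = 30                 # years
discount_rate = 0.05      # assumed rate for the present-value conversion

# Grossed-up credit rate: the taxable-equivalent yield of the municipal rate.
credit_rate = muni_rate / (1 - marginal_tax)

# Federal cost: present value of 30 years of annual credit payments.
pv_credits = sum(
    bonds_outstanding * credit_rate / (1 + discount_rate) ** t
    for t in range(1, term + 1)
)
print(f"credit rate: {credit_rate:.2%}")
print(f"PV of federal credit payments: ${pv_credits:.1f} billion")
```

Under these assumed rates the present value of the credits exceeds the $8 billion federal grant cost, consistent with tax credit bonds being the most costly alternative for the federal government in figure 2.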
The cost to the issuing states would consist of the repayment of bond principal in future years, plus the upfront cost of $2 billion in state appropriations for the matching contribution. The cost of tax-exempt bonds to the state or local government (or special purpose entity) issuers would consist of the interest payments on the bonds and the repayment of bond principal. The cost to the federal government would equal the taxes forgone on the income that bond purchasers would have earned from the investments they would have made if the tax-exempt bonds were not available for purchase. For the results presented in figures 1 and 2, we made the same assumptions regarding the terms and credit rating of the bonds as we did for the tax credit bond alternative. We computed the cost of interest payments by the state by multiplying the amount of outstanding bonds by the current interest rate for municipal bonds with the same term and credit rating. We assumed that the pretax rate of return that bond purchasers would have earned on alternative investments would have been equal to the municipal bond rate divided by one minus the investors' average marginal tax rate. Consequently, the federal revenue loss was equal to that pretax rate of return, multiplied by the amount of tax-exempt bonds outstanding each year (in this example), and then multiplied by the investors' average marginal tax rate. In order to have our direct loan example reflect the financing packages typical of current TIFIA projects, we used data from FHWA's June 2002 Report to Congress to determine what shares of total project expenditures were financed by TIFIA direct loans, federal grants, bonds issued by state or local governments or by special purpose entities, private investment, and other sources. We assumed that the $10 billion of expenditures in our example was financed by these various sources in roughly the same proportions as they are used, on average, in current TIFIA projects. 
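The tax-exempt bond revenue-loss computation described above follows the same pattern; again, the municipal rate and marginal tax rate below are assumptions for illustration.

```python
# Sketch of the annual costs of the tax-exempt bond alternative as
# described in the text. All rates are illustrative assumptions.
muni_rate = 0.05      # assumed municipal bond interest rate
marginal_tax = 0.30   # assumed investors' average marginal tax rate
outstanding = 8.0     # $ billions of tax-exempt bonds outstanding in a year

# State cost: interest on the outstanding bonds.
state_interest = muni_rate * outstanding

# Pretax return investors forgo on alternative taxable investments.
pretax_return = muni_rate / (1 - marginal_tax)

# Federal revenue loss = forgone pretax income x the marginal tax rate.
annual_revenue_loss = pretax_return * outstanding * marginal_tax
print(f"state interest cost: ${state_interest:.3f} billion per year")
print(f"federal revenue loss: ${annual_revenue_loss:.3f} billion per year")
```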
We estimated the federal and nonfederal costs of the grants and bond financing components in the same manner as we did for the grants and tax-exempt bond examples above. To compute the federal cost of the direct loan component, we multiplied the dollar amount of the direct loan in our example by the average amount of federal subsidy per dollar of TIFIA loans, as reported in the TIFIA report. In the results presented in figure 1, this portion of the federal cost amounted to $130 million. The nonfederal costs of the loan component consist of the loan repayments and interest payments to the federal government. We assumed that the term of the loan was 30 years and that the interest rate was set equal to the federal cost of funds, which is TIFIA's policy. The private investment (other than through bonds), which accounted for less than 1 percent of the spending, and the "other" sources, which accounted for about 3 percent of the spending, were treated as money spent immediately on the project. A number of factors--including general interest rate levels, the terms of the bonds or loans, and the individual risks of the projects being financed--affect the relative costs of the various alternatives. For this reason, we examined multiple scenarios for each alternative. In particular, current interest rates are relatively low by historical standards. In our alternative scenarios we used higher interest rates, typical of those in the early 1990s. At higher interest rates, the combined costs of the alternatives that involve bond financing would be higher, while the costs of grants would remain the same. If we had used bonds with 20-year terms, instead of 30-year terms, in the examples, the costs of the three alternatives that involve bond financing would be lower, but they would still be greater than the costs of grants. One of the earliest techniques tested to fund transportation infrastructure was revolving loan funds. 
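The direct loan subsidy computation reduces to a single multiplication. The loan amount and subsidy rate below are assumptions chosen so the product matches the $130 million figure reported above; GAO's actual inputs came from the average subsidy per dollar in FHWA's June 2002 TIFIA report.

```python
# Sketch of the federal credit subsidy computation for the direct loan
# component. Both inputs are illustrative assumptions, not reported values.
loan_amount = 2.6    # $ billions of TIFIA loans (assumed share of the $10B)
subsidy_rate = 0.05  # assumed federal subsidy per dollar of loan face value

federal_subsidy_cost = loan_amount * subsidy_rate
print(f"federal subsidy cost: ${federal_subsidy_cost * 1000:.0f} million")
```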
Prior to 1995, federal law did not permit states to allocate federal highway funds to capitalize revolving loan funds. However, in the early 1990s, transportation officials began to explore the possibility of adding revolving loan fund capitalization to the list of eligible uses for certain federal transportation funds. Under such a proposal, federal funding is used to "capitalize," or provide seed money for, the revolving fund. Money from the revolving fund is then loaned out to projects, repaid and recycled back into the revolving fund, and subsequently reinvested in the transportation system through additional loans. In 1995, the federally capitalized transportation revolving loan fund concept took shape as the State Infrastructure Bank (SIB) pilot program, authorized under Section 350 of the NHS Act. This pilot program was originally available only to a maximum of 10 states but was then expanded under the 1997 U.S. DOT Appropriations Act, which appropriated $150 million in federal general funds for SIB capitalization. TEA-21 established a new SIB pilot program but limited participation to four states--California, Florida, Missouri, and Rhode Island. Texas subsequently obtained authorization under TEA-21. These states may enter into cooperative agreements with the U.S. DOT to capitalize their banks with federal-aid funds authorized in TEA-21 for fiscal years 1998 through 2003. Of the states currently authorized, only Florida and Missouri have capitalized their SIBs with TEA-21 funds. As part of TEA-21, Congress authorized the Transportation Infrastructure Finance and Innovation Act of 1998 (TIFIA) to provide credit assistance, in the form of direct loans, loan guarantees, and standby lines of credit, to projects of national significance. The TIFIA legislation authorized $10.6 billion in credit assistance and $530 million in subsidy cost to cover the expected long-term cost to the government of providing credit assistance. 
TIFIA credit assistance is available to highway, transit, passenger rail, and multimodal projects, as well as projects involving installation of intelligent transportation systems (ITS). The TIFIA statute sets forth a number of prerequisites for participation in the TIFIA program. The project's costs must be reasonably expected to total at least $100 million or at least 50 percent of the state's annual apportionment of federal-aid highway funds, whichever is less. For projects involving ITS, eligible project costs must be expected to total at least $30 million. Projects must be listed on the state's transportation improvement program, have a dedicated revenue source for repayment, and receive an investment grade rating for their senior debt. Finally, TIFIA assistance cannot exceed 33 percent of project costs, and the final maturity date of any TIFIA credit assistance cannot be later than 35 years after the project's substantial completion date.
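The statutory prerequisites above can be expressed as a simple eligibility check. This is an illustrative sketch, not FHWA's actual screening process; the function name and parameters are ours.

```python
# Sketch of the TIFIA statutory prerequisites described in the text.
# Dollar amounts are in millions.
def tifia_eligible(project_cost_m, state_apportionment_m,
                   tifia_share, maturity_years_after_completion,
                   is_its=False, has_dedicated_revenue=True,
                   on_state_tip=True, senior_debt_investment_grade=True):
    """Return True if a project meets the prerequisites listed above."""
    if is_its:
        min_cost = 30.0  # ITS projects: at least $30 million
    else:
        # At least $100 million or 50 percent of the state's annual
        # federal-aid apportionment, whichever is less.
        min_cost = min(100.0, 0.5 * state_apportionment_m)
    return (project_cost_m >= min_cost
            and has_dedicated_revenue
            and on_state_tip
            and senior_debt_investment_grade
            and tifia_share <= 0.33                      # 33 percent cap
            and maturity_years_after_completion <= 35)   # maturity limit

# The Central Texas Turnpike example: $3,200M project, $917M TIFIA loan
# (the $600M apportionment figure is an assumption for illustration).
print(tifia_eligible(3200, 600, 917 / 3200, 35))  # True
```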
As Congress considers reauthorizing the Transportation Equity Act for the 21st Century (TEA-21) in 2003, it does so in the face of a continuing need for the nation to invest in its surface transportation infrastructure at a time when both the federal and state governments are experiencing severe financial constraints. As transportation needs have grown, Congress provided states--in the National Highway System Designation Act of 1995 and TEA-21--additional means to make highway investments through alternative financing mechanisms. A number of states are using existing alternative financing tools such as State Infrastructure Banks, Grant Anticipation Revenue Vehicle (GARVEE) bonds, and loans under the Transportation Infrastructure Finance and Innovation Act. These tools can provide states with additional options to accelerate projects and leverage federal assistance, as well as greater flexibility in assembling project financing. Federal funding of surface transportation investments includes federal-aid highway program grant funding appropriated by Congress out of the Highway Trust Fund, loans and loan guarantees, and state-issued bonds that are exempt from federal taxation. Expanding the use of alternative financing mechanisms has the potential to stimulate additional investment and private participation. However, expanding investment in the nation's highways and transit systems raises basic questions of who pays, how much, and when.
Fiscal sustainability presents a national challenge shared by all levels of government. The federal government and state and local governments share in the responsibility of fulfilling important national goals, and these subnational governments rely on the federal government for a significant portion of their revenues. To provide Congress and the public with a broader perspective on our nation's fiscal outlook, we developed a fiscal model of the state and local sector. This model enables us to simulate fiscal outcomes for the entire state and local government sector in the aggregate for several decades into the future. Our state and local fiscal model projects the level of receipts and expenditures for the sector in future years based on current and historical spending and revenue patterns. This model complements GAO's long-term fiscal simulations of federal deficits and debt levels under varying policy assumptions. We have published long-term federal fiscal simulations since 1992. We first published the findings from our state and local fiscal model in 2007. Our model shows that the state and local government sector faces growing fiscal challenges. The model includes a measure of fiscal balance for the state and local government sector for each year until 2050. The operating balance net of funds for capital expenditures is a measure of the ability of the sector to cover its current expenditures out of current receipts. The operating balance measure has historically been positive most of the time, ranging from about zero to about 1 percent of gross domestic product (GDP). Thus, the sector usually has been able to cover its current expenses with incoming receipts. Our January 2008 report showed that this measure of fiscal balance was likely to remain within the historical range in the next few years, but would begin to decline thereafter and fall below the historical range within a decade. 
That is, the model suggested the state and local government sector would face increasing fiscal stress in just a few years. We recently updated the model to incorporate data available as of August 2008. As shown in figure 1, these more recent results show that the sector has begun to head out of balance. These results suggest that the sector is currently in an operating deficit. Our simulations show an operating balance measure well below the historical range and continuing to fall throughout the remainder of the simulation timeframe. Since most state and local governments are required to balance their operating budgets, the declining fiscal conditions shown in our simulations indicate the fiscal pressures the sector faces and foreshadow the extent to which these governments will need to make substantial policy changes to avoid growing fiscal imbalances. That is, absent policy changes, state and local governments would face an increasing gap between receipts and expenditures in the coming years. One way of measuring the long-term challenges faced by the state and local sector is through a measure known as the "fiscal gap." The fiscal gap is an estimate of the action needed today, and maintained for each and every year, to achieve fiscal balance over a certain period. We measured the gap as the amount of spending reduction or tax increase needed to maintain debt as a share of GDP at or below today's ratio. As shown in figure 2, we calculated that closing the fiscal gap would require action today equal to a 7.6 percent reduction in state and local government current expenditures. Closing the fiscal gap through revenue increases would require action of the same magnitude to increase state and local tax receipts. Growth in health-related costs is the primary driver of the fiscal challenges facing the state and local sector over the long term. Medicaid is a key component of those health-related costs. 
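The fiscal gap concept can be illustrated with a toy simulation: find the constant, immediately applied expenditure cut that keeps debt as a share of GDP at or below today's ratio over the whole horizon. The receipt, spending, interest, and growth paths below are invented for illustration and will not reproduce GAO's 7.6 percent figure.

```python
# Toy illustration of the "fiscal gap": the smallest constant expenditure
# cut that keeps the debt-to-GDP ratio at or below its starting value.
def max_debt_ratio(cut, years=40, gdp_growth=0.04, interest=0.05):
    gdp, debt = 100.0, 15.0            # starting ratio: 0.15
    receipts, spending = 12.0, 12.5    # spending grows faster than GDP
    worst = debt / gdp
    for _ in range(years):
        gdp *= 1 + gdp_growth
        receipts *= 1 + gdp_growth
        spending *= 1 + gdp_growth + 0.005   # assumed excess spending growth
        debt = debt * (1 + interest) + spending * (1 - cut) - receipts
        worst = max(worst, debt / gdp)
    return worst

# Bisect for the smallest cut holding the ratio at its starting 0.15.
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if max_debt_ratio(mid) > 0.15:
        lo = mid
    else:
        hi = mid
print(f"fiscal gap (toy paths): {hi:.1%} of expenditures")
```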
CBO's projections show federal Medicaid grants to states per recipient rising substantially more than GDP per capita in the coming years. Since Medicaid is a federal and state program with federal Medicaid grants based on a matching formula, these estimates indicate that expenditures for Medicaid by state governments will rise quickly as well. We also estimated future expenditures for health insurance for state and local employees and retirees. Specifically, we assumed that the excess cost factor--the growth in these health care costs per capita above GDP per capita--will average 2.0 percentage points per year through 2035 and then begin to decline, reaching 1.0 percentage point by 2050. The result is a rapidly growing burden from health-related activities in state and local budgets. Our simulations show that other types of state and local government expenditures--such as wages and salaries of state and local workers, pension contributions, and investments in infrastructure--are expected to grow slightly less than GDP. At the same time, most revenue growth is expected to be approximately flat as a percentage of GDP. The projected rise in health-related costs is the root of the long-term fiscal difficulties these simulations suggest will occur. Figure 3 shows our simulations for expenditure growth for state and local government health-related and other expenditures. On the receipt side, our model suggests that most of these tax receipts will show modest growth in the future--and some are projected to experience a modest decline--relative to GDP. We found that state personal income taxes show a small rise relative to GDP in coming years. This likely reflects the fact that some state governments have a small degree of progressivity in their income tax structures. Sales taxes of the sector are expected to experience a slight decline as a percentage of GDP in the coming years, reflecting trends in the sector's tax base. 
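The excess cost factor assumption above can be sketched as follows. The linear shape of the post-2035 decline is our assumption, since the text specifies only the 2035 and 2050 endpoints.

```python
# Sketch of the "excess cost factor" assumption: health costs per capita
# grow faster than GDP per capita by 2.0 percentage points per year through
# 2035, then the excess declines (assumed linearly) to 1.0 point by 2050.
def excess_cost_factor(year):
    if year <= 2035:
        return 2.0
    # assumed linear decline from 2.0 points in 2035 to 1.0 point in 2050
    return 2.0 - (year - 2035) * (2.0 - 1.0) / (2050 - 2035)

# Index of health costs relative to GDP per capita (2008 = 1.0).
index = 1.0
for y in range(2009, 2051):
    index *= 1 + excess_cost_factor(y) / 100
print(f"health costs relative to GDP per capita by 2050: {index:.2f}x")
```

Even this modest-looking wedge roughly doubles health costs relative to GDP per capita over four decades, which is why health spending dominates the sector's long-term outlook.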
While historical data indicate that property taxes--which are mostly levied by local governments--could rise slightly as a share of GDP in the future, recent events in the housing market suggest that the long-term outlook for property tax revenue could also shift downward. These differential tax growth projections indicate that any given jurisdiction's tax revenue prospects are uniquely tied to the composition of taxes it imposes. The only source of revenue expected to grow rapidly under current policy is federal grants to state governments for Medicaid. That is, we assume that current policy remains in place and the shares of Medicaid expenditures borne by the federal government and the states remain unchanged. Since Medicaid is a matching formula grant program, the projected escalation in federal Medicaid grants simply reflects expected increased Medicaid expenditures that will be shared by state governments. These long-term simulations do not attempt to assume how recent actions to stabilize the financial system and economy will be incorporated into the federal budget estimates in January 2009. The outlook presented by our state and local model is exacerbated by current economic conditions. During economic downturns, states can experience difficulties financing programs such as Medicaid. Economic downturns result in rising unemployment, which can lead to increases in the number of individuals who are eligible for Medicaid coverage, and in declining tax revenues, which can lead to less available revenue with which to fund coverage of additional enrollees. For example, during the most recent period of economic downturn prior to 2008, Medicaid enrollment rose 8.6 percent between 2001 and 2002, which was largely attributed to states' increases in unemployment. During this same time period, state tax revenues fell 7.5 percent. 
According to the Kaiser Commission on Medicaid and the Uninsured, in 2008 most states made policy changes aimed at controlling Medicaid costs. Recognizing the complex combination of factors affecting states during economic downturns--increased unemployment, declining state revenues, and increased downturn-related Medicaid costs--this Committee and several others asked us to assist them as they considered a legislative response that would help states cope with Medicaid cost increases. In response to this request, our 2006 report on Medicaid and economic downturns explored the design considerations and possible effects of targeting supplemental assistance to states when they are most affected by a downturn. We constructed a simulation model that adjusts the amount of funding a state could receive on the basis of each state's percentage increase in unemployment and per person spending on Medicaid services. Such a supplemental assistance strategy would leave the existing Medicaid formula unchanged and add a new, separate assistance formula that would operate only during times of economic downturn and use variables and a distribution mechanism that differ from those used for calculating matching rates. This concept is embodied in the health reform plan released by Chairman Baucus last week. Using data from the past three recessions, we simulated the provision of such targeted supplemental assistance to states. To determine the amount of supplemental federal assistance needed to help states address increased Medicaid expenditures during a downturn, we relied on research that estimated the relationship between changes in unemployment and changes in Medicaid spending. Our model incorporated a retrospective assessment, comparing each state's unemployment rate for a particular quarter with its rate in the same quarter of the previous year. 
Our simulation included an economic trigger that turned on when 23 or more states had an increase in the unemployment rate of 10 percent or more compared to the unemployment rate for the same quarter 1 year earlier (such as a given state's unemployment rate increasing from 5 percent to 5.5 percent). We chose these two threshold values--23 or more states and increased unemployment of 10 percent or more--to work in tandem to ensure that the national economy had entered a downturn and that the majority of states were not yet in recovery from the downturn. These parameters were based on our quantitative analysis of prior recessions. As shown in figure 4, for the 1990-1991 downturn, 6 quarters of assistance would have been provided, beginning with the third quarter of 1991 and ending after the fourth quarter of 1992. Analysis of recent unemployment data indicates that such a strategy would already be triggered based on changes in unemployment for 2007 and 2008. In other words, current data confirm the economic pressures currently facing the states. Considerations involved in such a strategy include:

* timing assistance so that it is delivered as soon as it is needed;

* targeting assistance according to the extent of each state's downturn;

* temporarily increasing federal funding so that it turns off when states' economic circumstances sufficiently improve; and

* triggering so the starting and ending points of assistance respond to indicators of states' economic distress.

Any potential legislative response would need to be considered within the context of broader health care and fiscal challenges--including continually rising health care costs, a growing elderly population, and Medicare and Medicaid's increasing share of the federal budget. Additional criteria could be established to accomplish other policy objectives, such as controlling federal spending by limiting the number of quarters of payments or stopping payments after predetermined spending caps are reached. 
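The trigger described above amounts to counting states whose unemployment rate rose at least 10 percent relative to the same quarter a year earlier, and checking whether that count reaches 23. A minimal sketch, with invented state data:

```python
# Sketch of the two-part economic trigger: assistance turns on when 23 or
# more states show a 10 percent or larger relative increase in their
# unemployment rate versus the same quarter one year earlier.
def trigger_on(rates_now, rates_year_ago,
               min_states=23, min_relative_increase=0.10):
    """rates_* map state -> unemployment rate for matching quarters."""
    affected = sum(
        1 for s in rates_now
        if (rates_now[s] - rates_year_ago[s]) / rates_year_ago[s]
        >= min_relative_increase
    )
    return affected >= min_states

# Example from the text: 5.0 -> 5.5 percent is exactly a 10 percent increase.
year_ago = {f"state{i}": 5.0 for i in range(50)}
now = {f"state{i}": (5.5 if i < 23 else 5.2) for i in range(50)}
print(trigger_on(now, year_ago))  # True: 23 states reach the threshold
```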
The federal government depends on states and localities to provide critical services, including health care for low-income populations. States and localities depend on the federal government to help fund these services. Accounting for the largest share of federal grant funding and a large and growing share of state budgets, Medicaid is a critical component of this intergovernmental partnership. The long-term structural fiscal challenges facing the state and local sector further complicate the provision of Medicaid services. These challenges are exacerbated during periods of economic downturn, when increased unemployment leads to increased eligibility for the Medicaid program. The current economic downturn presents additional challenges as states struggle to meet the needs of eligible residents in the midst of a credit crisis. Our work on the long-term fiscal outlook for state and local governments and on strategies for providing Medicaid-related fiscal assistance is intended to offer the Committee a useful starting point for considering strategic, evidence-based approaches to addressing these daunting intergovernmental fiscal issues. For information about this statement for the record, please contact Stanley J. Czerwinski, Director, Strategic Issues, at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony and related products include: Kathryn G. Allen, Director, Quality and Continuous Improvement; Thomas J. McCool, Director, Center for Economics; Amy Abramowitz, Meghana Acharya, Romonda McKinney Bumpus, Robert Dinkelmeyer, Greg Dybalski, Nancy Fasciano, Jerry Fastrup, Carol Henn, Richard Krashevski, Summer Lingard, James McTigue, Donna Miller, Elizabeth T. Morrison, Michelle Sager, Michael Springer, Jeremy Schwartz, Melissa Wolf, and Carolyn L. Yocom. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO was asked to provide its views on projected trends in health care costs and their effect on the long-term outlook for state and local governments in the context of the current economic environment. This statement addresses three key points: (1) the state and local government sector's long-term fiscal challenges; (2) the rapidly rising health care costs that drive the sector's long-term fiscal difficulties; and (3) the considerations involved in targeting supplemental funds to states through the Medicaid program during economic downturns. To provide Congress and the public with a broader perspective on our nation's fiscal outlook, GAO previously developed a fiscal model of the state and local sector. This model enables GAO to simulate fiscal outcomes for the sector in the aggregate for several decades into the future. GAO first published the findings from the state and local fiscal model in 2007. This statement includes August 2008 data to update the simulations. This Committee and others also asked GAO to analyze strategies to help states address increased Medicaid expenditures during economic downturns. GAO simulated the provision of such supplemental assistance to states. As we previously reported, the simulation model adjusts the amount of funding states would receive based on changes in unemployment and spending on Medicaid services. Rapidly rising health care costs are not simply a federal budget problem. Growth in health-related spending also drives the fiscal challenges facing state and local governments. The magnitude of these challenges raises long-term sustainability concerns for all levels of government. The current financial sector turmoil and broader economic conditions add to fiscal and budgetary challenges for these governments as they attempt to remain in balance. States and localities are facing increased demand for services during a period of declining revenues and reduced access to capital.
In the midst of these challenges, the federal government continues to rely on this sector for delivery of services such as Medicaid, the joint federal-state health care financing program for certain categories of low-income individuals. Our model shows that in the aggregate the state and local government sector faces growing fiscal challenges. Incorporation of August 2008 data shows that the position of the sector has worsened since our January 2008 report. The long-term outlook presented by our state and local model is exacerbated by current economic conditions. During economic downturns, states can experience difficulties financing programs such as Medicaid. Downturns result in rising unemployment, which can increase the number of individuals eligible for Medicaid, and declining tax revenues, which can decrease the revenue available to fund coverage of additional enrollees. GAO's simulation model to help states respond to these circumstances is based on assumptions under which the existing Medicaid formula would remain unchanged and a new, separate assistance formula would be added that would operate only during times of economic downturn. Considerations involved in such a strategy could include: (1) timing assistance so that it is delivered as soon as it is needed, (2) targeting assistance according to the extent of each state's downturn, (3) temporarily increasing federal funding so that it turns off when states' economic circumstances sufficiently improve, and (4) triggering so the starting and ending points of assistance respond to indicators of economic distress.
Carryover balances consist of unobligated funds and uncosted obligations. Each fiscal year, NASA requests obligational authority from the Congress to meet the costs of running its programs. Once NASA receives this authority, it can obligate funds by placing orders or awarding contracts for goods and services that will require payment during the same fiscal year or in the future. Unobligated balances represent the portion of its authority that NASA has not obligated. Uncosted obligations represent the portion of its authority that NASA has obligated for goods and services but for which it has not yet incurred costs. Through the annual authorization and appropriations process, the Congress determines the purposes for which public funds may be used and sets the amounts and time period for which funds will be available. Funding provided for NASA's Human Space Flight and Science, Aeronautics, and Technology programs is available for obligation over a 2-year period. Authority to obligate any remaining unobligated balances expires at the end of the 2-year period. Five years later, outstanding obligations are canceled and the expired account is closed. Some level of carryover balance is appropriate for government programs. In particular, NASA's Human Space Flight and Science, Aeronautics, and Technology appropriations are available for obligation over a 2-year period. In such circumstances, some funds are expected to be obligated during the second year of availability. Funds must travel through a series of approvals at headquarters and the field centers before the money is actually put on contracts so that work can be performed. According to NASA officials, it can be difficult to obligate funds that are released late in the year. In addition, the award of contracts and grants may sometimes be delayed. Once contracts and grants are awarded, costs may not be incurred or reported for some time thereafter. 
Expenditures, especially final payments on contract or grant closeouts, will lag still further behind. Finally, costs and expenditures for a multiyear contract or grant will be paced throughout the life of the contract. For these reasons, all NASA programs have carryover balances. The unobligated balances expire at the end of their period of availability, and according to NASA officials, uncosted obligations carried over will eventually be expended to cover contract costs. Carryover balances at the end of fiscal year 1995 for Human Space Flight and Science, Aeronautics, and Technology programs totaled $3.6 billion. Of this amount, $2.7 billion was obligated but not costed and $0.9 billion was unobligated. Table 1 shows the carryover balances by program. The balance carried over from fiscal year 1995 plus the new budget authority in fiscal year 1996 provides a program's total budget authority. The total budget authority less the planned costs results in the estimated balance at the end of fiscal year 1996. Table 2 starts with the carryover from fiscal year 1996 and ends with the balance that NASA estimates will carry over from fiscal year 1997 into 1998. The cost plans shown in the tables reflect the amount of costs estimated to be accrued during the fiscal year. The carryover balances will change if actual cost and budget amounts differ from projections. NASA program officials are in the process of updating their 1996 cost plan estimates. Officials in some programs now expect the actual costs to be less than planned, resulting in higher carryover balances at the end of 1996 than those shown in the tables. NASA often discusses and analyzes carryover balances in terms of equivalent months of a fiscal year's budget authority that will be carried into the next fiscal year. 
For example, the Aeronautical Research and Technology carryover balance of $217.9 million at the end of fiscal year 1996 is equivalent to 3 months of the $877.3 million new budget authority, based on an average monthly rate of $73.1 million. Table 3 shows each program's carryover in equivalent months of budget authority. The carryover balances at the end of fiscal year 1995 ranged from the equivalent of 1 month for the Space Shuttle to 16 months for Academic Programs. NASA officials gave several overall reasons for the large relative differences in carryover amounts. One major reason was that programs such as the Space Station and the Space Shuttle, which have fewer months of carryover, prepare budgets based on the amount of work estimated to be costed in a fiscal year. Other programs, such as MTPE and Space Science, have based their budgets on the phasing of obligations over a number of fiscal years. Another major reason given was that some programs fund a substantial number of grants, which are typically funded for a 12-month period regardless of what point in the fiscal year they are awarded. This practice, coupled with slow reporting and processing of grant costs, contributes to higher carryover balances. Science programs such as MTPE, Space Science, and Life and Microgravity Sciences and Applications, fund grants to a much greater extent than the Space Station and the Space Shuttle. NASA officials also said the size of contractors affects carryover balances, with larger contractors requiring less funding into next year than smaller contractors. NASA officials gave two major reasons for MTPE's carryover balance at the end of fiscal year 1995. First, the MTPE program has undergone several major restructurings since its inception in 1991. During the periods when the content of the program was being changed, selected program activities were restrained until the new baseline program was established. 
Since several contract start dates were delayed, the carryover balance grew. MTPE officials emphasized that all work for which funding was provided would be performed in accordance with the approved baseline and that, in most cases, the new baseline included the same end dates for major missions and ground systems. Officials expect the balances to decrease as delayed work is accomplished. The second reason given for the large carryover balance at the end of fiscal year 1995 is the large number of grants funded in the MTPE program. As discussed earlier, the process for awarding grants and delays in reporting costs on grants contributes to carryover balances. Officials from the Aeronautical Research and Technology program attributed their relatively low level of carryover to aggressively managing carryover balances. Officials have studied their carryover balances in detail and have greatly reduced their levels. In 1989, the program had a carryover balance of 43 percent, equivalent to 5 months of funding. Program financial managers analyzed their carryover and determined that it could be reduced substantially. By 1992, the carryover balance was about 25 percent, or 3 months, of new budget authority, and it is estimated to remain at that level through fiscal year 1996. In fiscal year 1997, program managers hope to achieve a 15-percent, or 2-month, carryover level. Officials attributed their improved performance to thoroughly understanding their carryover balances, emphasizing work to be accomplished and costed in preparing budgets, and carefully tracking projects' performance. They believe that some of their methods and systems for managing carryover balances could be applied to other NASA programs. Although carryover naturally occurs in the federal budget process, NASA officials became concerned that the balances were too high. NASA is taking actions to analyze and reduce these balances. 
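The carryover arithmetic running through this discussion can be sketched briefly. The FY 1995 sector figures ($2.7 billion uncosted, $0.9 billion unobligated) and the Aeronautical Research and Technology figures ($217.9 million carried over against $877.3 million in new budget authority) come from the report; the FY 1996 new-authority and planned-cost values below are hypothetical placeholders used only to show how the end-of-year balance is derived.

```python
# Sketch of NASA's carryover arithmetic. Sector and Aeronautics figures
# are from the report; the FY 1996 plan values are hypothetical.

def equivalent_months(carryover, new_budget_authority):
    """Express a carryover balance as months of the year's new budget authority."""
    monthly_rate = new_budget_authority / 12.0
    return carryover / monthly_rate

# Carryover = uncosted obligations + unobligated balance ($ billions).
uncosted = 2.7
unobligated = 0.9
carryover_fy1995 = uncosted + unobligated          # $3.6 billion

# Total authority = carryover + new authority; end balance = total - planned costs.
new_authority_fy1996 = 13.0   # hypothetical, $ billions
planned_costs_fy1996 = 13.5   # hypothetical, $ billions
end_balance_fy1996 = carryover_fy1995 + new_authority_fy1996 - planned_costs_fy1996

# Aeronautical Research and Technology, end of FY 1996 ($ millions):
months = equivalent_months(217.9, 877.3)
print(f"Sector carryover into FY 1996: ${carryover_fy1995:.1f} billion")
print(f"Aeronautics monthly rate: ${877.3 / 12:.1f} million")
print(f"Aeronautics carryover: about {months:.0f} months")
```

As the report notes, the end-of-year balance moves whenever actual costs or budget authority diverge from the plan values fed into this identity.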
NASA's Chief Financial Officer directed a study that recommended changes to reduce carryover balances. NASA's Comptroller will review justifications for carryover balances as part of the fiscal year 1998 budget development process. A NASA steering group was tasked by NASA's Chief Financial Officer to review carryover balances as part of a study to address NASA's increasing level of unliquidated budget authority. The study identified a number of reasons for the current balances, including NASA's current method of obligations-based budgeting, reserves held for major programs, delays in awarding contractual instruments, late receipt of funding issued to the centers, and grant reporting delays. The study recommended a number of actions to reduce carryover balances through improved budgeting, procurement, and financial management practices, including implementing cost-based budgeting throughout the agency and establishing thresholds for carryover balances. According to the study, cost-based budgeting takes into account the estimated level of cost to be incurred in a given fiscal year as well as unused obligation authority from prior years when developing a budget. The organization then goes forward with its budget based on new obligation authority and a level of proposed funding that is integrally tied to the amount of work that can be done realistically over the course of the fiscal year. However, the study cautioned that a cost-based budgeting strategy should recognize that cost plans are rarely implemented without changes. Therefore, program managers should have the ability to deal with contingencies by having some financial reserves. The study recommended that NASA implement thresholds for the amount of funds to be carried over from one fiscal year to the next. NASA had about 4 months of carryover at the end of fiscal year 1995, according to the study. 
It recommended that NASA implement a threshold of 3 months for total carryover: 2 months of uncosted obligations for forward funding on contracts and 1 month of unobligated balance for reserves. The study noted that carryover balances should be reviewed over the next several years to determine if this threshold is realistic. NASA's Chief Financial Officer said the next logical step is to analyze balances in individual programs in more depth. We agree that the appropriateness of the threshold should be examined over time and that further study is needed to more fully understand carryover balances in individual programs. We also believe that individual programs must be measured against an appropriate standard. One problem with looking at carryover balances in the aggregate is that programs substantially under the threshold in effect mask large carryover balances in other programs. For example, at the end of fiscal year 1996, the total amount of carryover in excess of 3 months for seven programs is estimated to be $1.05 billion. However, the carryover balance for the Space Shuttle and the Space Station programs in the same year is estimated to be $1.03 billion under the threshold, which almost completely offsets the excess amount. We compared the balances of individual Human Space Flight and Science, Aeronautics, and Technology programs to this 3-month threshold and found that at the end of fiscal year 1995, nine programs exceeded the threshold by a total of $1.3 billion. By the end of fiscal year 1997, only four programs are expected to significantly exceed the threshold by a total of $0.6 billion. Table 4 compares individual program carryover amounts with the 3-month threshold at the end of fiscal years 1995, 1996, and 1997. As mentioned earlier, the estimates are based on projected costs for fiscal year 1996 and projected budgets and costs for fiscal year 1997. If actual costs and budgets are different, the amount of carryover exceeding the threshold will change. 
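The masking effect just described can be made concrete with a small sketch. The program names and dollar figures below are hypothetical, chosen only to show how a single net aggregate figure lets under-threshold programs offset, and thereby hide, other programs' excesses over the 3-month threshold.

```python
# Hypothetical illustration of how aggregating carryover masks per-program
# excesses over the 3-month threshold; only the logic mirrors the report.

THRESHOLD_MONTHS = 3.0

# program -> (carryover in months, monthly budget authority in $ millions)
programs = {
    "Program A": (6.0, 100.0),   # 3 months over the threshold
    "Program B": (5.0, 150.0),   # 2 months over
    "Program C": (1.0, 400.0),   # 2 months under (a Shuttle-like program)
}

# Summing only the excesses reveals the balances needing attention.
excess = sum(max(0.0, m - THRESHOLD_MONTHS) * rate
             for m, rate in programs.values())

# A single net figure lets the under-threshold program offset the excess.
net = sum((m - THRESHOLD_MONTHS) * rate for m, rate in programs.values())

print(f"Per-program excess over threshold: ${excess:.0f} million")   # $600M
print(f"Net aggregate view: ${net:.0f} million")                     # excess masked
```

In the report's actual figures, the roughly $1.03 billion that the Shuttle and Station programs sat under the threshold at the end of fiscal year 1996 nearly cancels the $1.05 billion excess in seven other programs in exactly this way.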
The NASA Comptroller is planning to review carryover balances in each program. According to the Comptroller and program financial managers, carryover balances have always been considered a part of the budget formulation process, but factoring them into the process is difficult since budget submissions must be prepared well before the actual carryover balances are known. For example, NASA's fiscal year 1997 budget request was prepared in the summer of 1995 and submitted to the Office of Management and Budget in the fall. At that point, NASA's appropriations for fiscal year 1996 were not final and costs for 1996 could only be estimated. Under the Comptroller's budget guidance, estimates of budget authority, obligations, and accrued costs of program activities will be specifically scrutinized to ensure that the timing of budget authority relative to accrued costs is consistent with minimal, carefully justified balances of uncosted budget authority at fiscal year end. Carryover of uncosted balances in excess of 8 weeks of cost into the next fiscal year will have to be specifically justified. The carryover referred to by the Comptroller is the equivalent of 8 weeks, or about 15 percent, of the next fiscal year's cost. For example, the fiscal year 1996 budget, factoring in carryover from prior years, should include enough budget authority to cover all costs in 1996 plus 8 weeks of costs in fiscal year 1997. The Comptroller stressed that he was not attempting to set a threshold for the appropriate level of carryover, but instead was setting a criterion beyond which there should be a strong justification for carryover. The Comptroller also told us that although the guidance specifically addressed preparation of the fiscal year 1998 budget, he has asked programs to justify carryover balances in excess of 8 weeks starting with the end of fiscal year 1996. Table 5 compares program carryover balances at the end of fiscal years 1995, 1996, and 1997 to the 8-week criterion.
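The Comptroller's 8-week criterion described above reduces to simple arithmetic: the allowance is 8/52 (about 15 percent) of the next fiscal year's planned cost, and anything above it must be justified. The sketch below is illustrative; the carryover and cost-plan inputs are hypothetical, and only the 8-week rule itself comes from the report.

```python
# Sketch of the 8-week review criterion; dollar inputs are hypothetical.

def requires_justification(carryover, next_fy_cost, weeks=8):
    """Return the portion of carryover beyond `weeks` weeks of next year's cost."""
    allowance = next_fy_cost * weeks / 52.0   # about 15% of next year's cost
    return max(0.0, carryover - allowance)

print(f"8 weeks is {8 / 52:.0%} of a year's cost")
print(requires_justification(500.0, 2000.0))  # allowance ~307.7, remainder needs justification
print(requires_justification(100.0, 2000.0))  # under the allowance, nothing to justify
```

Note how the criterion scales with the next fiscal year's cost plan, which is why, as discussed below, a higher 1998 plan would shrink the balances requiring justification and a lower plan would enlarge them.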
NASA was not able to provide cost plan data for fiscal year 1998. To approximate the 1997 carryover balances in excess of 8 weeks, we used the fiscal year 1997 cost plan. If a program's cost plan for 1998 is higher than its 1997 plan, the 8-week criterion would also be higher and the carryover in excess of 8 weeks would be lower. On the other hand, a lower cost plan in 1998 would result in a higher balance in excess of 8 weeks. As shown in table 5, significant amounts of carryover funding would have to be justified. In fiscal year 1995, $1.9 billion would have had to be justified. In fiscal years 1996 and 1997, the amounts requiring justification are estimated at $1.5 billion and $1 billion, respectively. We discussed a draft of this report with NASA officials and have incorporated their comments where appropriate. We reviewed carryover balances for programs in the Human Space Flight and Science, Aeronautics, and Technology appropriations as of September 30, 1995, and estimated balances as of September 30, 1996, and 1997. We relied on data from NASA's financial management systems for our analyses and calculations and did not independently verify the accuracy of NASA's data. We reviewed budget and cost plans and discussed carryover balances with NASA's Chief Financial Officer; NASA's Comptroller and his staff; and financial management staff for the MTPE, Space Science, Space Station, Space Shuttle, and Aeronautics programs. We also reviewed NASA's internal study of carryover balances and discussed the study with the NASA staff responsible for preparing it. We performed our work at NASA headquarters, the Goddard Space Flight Center, the Jet Propulsion Laboratory, the Johnson Space Center, and the Marshall Space Flight Center. We conducted our work between April 1996 and July 1996 in accordance with generally accepted government auditing standards.
As arranged with your office, unless you publicly announce this report's contents earlier, we plan no further distribution of the report until 10 days after its issue date. We will then send copies to the Administrator of NASA; the Director, Office of Management and Budget; and other congressional committees responsible for NASA authorizations, appropriations, and general oversight. We will also provide copies to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report, listed in appendix I, are Frank Degnan, Vijay Barnabas, James Beard, Richard Eiserman, Monica Kelly, and Thomas Mills. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015; or Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the extent of carryover budget balances for the National Aeronautics and Space Administration's (NASA) Mission to Planet Earth (MTPE) program and other NASA programs. GAO found that: (1) carryover balances in NASA's Human Space Flight and Science, Aeronautics, and Technology programs totaled $3.6 billion by the end of fiscal year (FY) 1995; (2) individual programs carried over varying amounts, ranging from the equivalent of 1 month to 16 months of FY 1995 new budget authority; (3) MTPE carried $695 million, or more than 6 months, of budget authority into FY 1996; (4) under NASA's current budget and cost plans, these balances will be reduced in FY 1996 and 1997, but the actual reductions depend on the extent to which NASA's projected costs match the actual costs incurred and the amount of new budget authority received for FY 1997; (5) NASA officials are concerned that the current amounts are too high and are taking actions to reduce these balances; (6) a recent NASA study of carryover balances determined that no more than the equivalent of 3 months of budget authority should be carried into the next fiscal year and recommended actions to bring NASA programs within that threshold, and also noted that the threshold needs to be studied over time to determine if it is appropriate; (7) applying the initial 3-month threshold to estimated carryover balances at the end of FY 1996 shows that 7 of the 11 Human Space Flight and Science, Aeronautics, and Technology programs have a total carryover of $1.1 billion beyond the threshold; (8) NASA's Comptroller intends to carefully scrutinize carryover amounts as part of the FY 1998 budget development process, and formally requested program managers to justify carryover balances that exceed amounts necessary to fund program costs for 8 weeks of the next fiscal year; (9) the 8 weeks was not a threshold for the appropriate level of carryover, but rather a criterion for identifying balances for review; (10)
at the end of FY 1996, nine programs would need to justify $1.5 billion beyond the Comptroller's 8-week criterion; and (11) the three programs with the largest estimated balances requiring justification are Space Science with $558 million, MTPE with $435 million, and Life and Microgravity Sciences and Applications with $257 million.
Today's consumers are demanding more--and more detailed--health information, and are becoming more active in making medical and lifestyle decisions that affect them. The demand for health information has climbed steadily in the past 5 to 10 years. In the early 1990s, for example, mail inquiries to the Public Health Service's information clearinghouses rose by over 40 percent, and telephone inquiries more than doubled. Public libraries reported in 1994 that 10 percent of all reference questions were health-related, accounting for about 52 million inquiries annually. Despite this interest, however, in a 1994 survey published by the Medical Library Association, almost 70 percent of the respondents reported problems in gaining access to appropriate health information. When queried, 60 percent said that they would be willing to pay for an easy way to access an integrated resource providing such health and wellness information. The need for information is particularly apparent in self-care situations, for example, when dealing with one's own minor injury or illness. About 80 percent of all health care involves problems treated at home, according to the president of Healthwise, Inc., a nonprofit center for health care promotion and self-care research and development. Effective management of these problems can prevent the illness or injury from progressing to the point of needing professional intervention. However, consumers' self-treatment must be based on a correct self-diagnosis; otherwise, the benefits of automated information dissemination could be negated and overall health could be harmed. The increasing demand for health information has driven the development of consumer health informatics systems. In fact, a number of informatics systems were developed by individuals who were frustrated by their inability to find needed information about their own health conditions or those of family members or friends.
Several hundred informatics systems--using a range of technologies, from telephones to interactive on-line systems--have been developed in the past decade alone. Over half of the projects we identified had been in operation for 2 years or less, or were still in the very early stages of development. Advances in technology also make access to consumer health information easier, responding to this increasing consumer demand. In 1995, as reported by the Council on Competitiveness, 37 percent of U.S. households had computers; that number was expected to reach 40 percent by the beginning of 1996. The use of technology in schools is also on the rise. According to Quality Data, Inc., the number of computers in the nation's classrooms has grown steadily just in the past few years, reaching about 4.1 million for the 1994-1995 school year. (In contrast, about 2.3 million computers were in our nation's classrooms in the 1991-1992 school year.) Growth has likewise been rapid in the use of the Internet and commercial on-line computer services. The Congressional Research Service has called the Internet "the fastest growing communications medium in history." The number of Internet users has doubled yearly since 1988; between 1993 and 1994, that number rose from 15 million to 30 million people. Consumer health informatics is the union of health care content with the speed and ease of technology. Informatics systems provide health information to consumers in a wide range of settings. While many people access health information through personal computers in their homes, others access these systems in more public locations such as libraries, clinics, hospitals, and physicians' waiting rooms.
Informatics supports consumers' ability to obtain health-related information through three general types of systems--those that simply provide information (one-way communication), those that tailor specific information to a user's unique situation (customized information), and those that allow users to communicate and interact either with health care providers or other users (two-way communication). I'd like to offer some examples of each of these three general types of systems that are being used in informatics today. First, examples of providing information in one direction include on-line health-related articles, and computer software containing health encyclopedias or specific simple medical instructions, such as how to inject insulin; telephone-based systems that can be automatically connected to databases to call individuals with appointment reminders also fall into this category. Second, tailoring specific lifestyle recommendations aimed at improving one's health can be accomplished with automated systems that request information from the consumer--via a questionnaire dealing with current health habits (such as exercise or smoking) and individual and family health history, for example. Information obtained in this way can then be analyzed, scored according to a set standard, and fed back to the user in the form of recommendations for improved health management. Finally, interactive communication is available through on-line discussion groups, which offer the chance for those seeking information on certain health topics or concerns to communicate with other users or a physician or other health care provider. Systems vary a great deal in terms of the technology employed, costs, and sponsors. The kinds of technologies used in the 78 projects we surveyed included (1) telephones and voice systems, (2) computers, software, and on-line services, and (3) interactive televisions and videotape. 
(Attachment 1 at the end of this statement provides a sample showing the range of projects included in our review.) The system costs we were able to identify ranged from very little to $20 million for development, and maintenance costs at the high end reached $1.5 million annually (most cost information was proprietary). One factor affecting cost is whether existing equipment and personnel resources can be utilized. According to an expert from the University of Montana, a low-cost, Internet-type system was developed by students there as a class project, with the university providing the equipment. More complex systems that permit user interaction are usually among the most expensive. For example, Access Health, Inc., contracts with insurers, managed care programs, and employers to provide advice on illness prevention, disease management, and general health information to their enrollees and employees. The company employs close to 500 people, including nurses and technical support personnel; it reports that it has spent about $20 million on systems development over the last 7 years. Since informatics is a new field, only limited research has been performed to confirm its full monetary benefits. Some studies have shown, however, that informatics offers the potential to reduce some unnecessary medical services, thereby lowering health care costs. Information technologies also offer other advantages over hard-copy text material; for example, a consumer can more readily review material at his or her own pace and at the needed level of detail. The Shared Decision-making system, an interactive video program, was developed to help patients participate in treatment decisions; evaluators have also reported potential cost savings. According to its developer, the system helps educate the consumer, allowing patients and doctors to function together as a team.
An evaluation of one program in this system--dealing with noncancerous prostate enlargement--found a 40-percent drop in elective surgery rates. According to the Agency for Health Care Policy and Research, potential cost savings could be substantial, as this is the second most common surgical procedure performed in the Medicare population. Cleveland's ComputerLink--developed to help support Alzheimer's caregivers by reducing their feelings of isolation--can also help save money, according to researchers at Case Western Reserve University, where it was developed. This is because when caregivers are provided access to such systems and other community-based services, according to the researchers, they tend to need fewer traditional health services, potentially saving taxpayers thousands of dollars. Other advantages cited by developers and system users include anonymity--increased ability to remain unknown while dealing with personal or sensitive information, allowing a more accurate health picture to emerge; outreach--improved access by those in rural or underserved areas; convenience--ability to use the system at any time, day or night; scope--increased ability to reach large numbers of people; and support--ease of establishing on-line relationships with others. In response to our on-line survey of Internet consumers, we found that consumers value support groups for many different reasons. One Internet user said he gains support and understanding from his on-line friends, who know exactly what his disease--Chronic Fatigue Syndrome--is like. Another Internet user said she obtains information electronically that she cannot easily get from other sources about what she called "the true facts from real people living the nightmare of ovarian cancer." Similarly, a homebound caregiver of an Alzheimer's patient described ComputerLink as her "lifeline to sanity." 
Finally, an individual said he gained "immense benefit" from hearing of the experiences of fellow prostate-cancer sufferers, adding his belief that "accessing this material saved my life." Informatics systems do not and cannot replace visits with physicians; they can, however, make such encounters more productive, for both doctor and patient. Such systems can also prepare doctors to more effectively treat certain patients. For example, doctors were able to diagnose alcoholism with the help of a pre-visit questionnaire because patients tended to be more candid with the computer, which many see as "nonjudgmental." Specifically, in the case of one patient, doctors' notes indicated that the patient "uses alcohol socially"; in contrast, the computer found that the patient had monthly blackouts. Likewise, a computer questionnaire identified more potential blood donors who had HIV-related factors in their health histories than did personal interviews by health care providers. While it is not difficult to find consumers and groups who endorse this technology, there are--in the opinions of the experts we interviewed--several issues raised by the rapid growth of informatics, issues that need to be resolved in the coming years. In survey interviews and at our conference last winter, the experts identified specific issues that will need to be addressed concerning the future development of consumer health informatics, and options for addressing them. The three issues identified as most significant were access, cost, and information quality. The other five issues raised dealt with security and privacy, computer literacy, copyright, systems development, and consumer information overload. (Attachment 2 shows the experts' views on the significance of these issues.) Some health informatics systems are available only to those with available computers, modems, and telephones, which raises the issue considered central to many experts: access. About 60 percent of U.S. 
households lack computers, and at least 6 percent lack telephones. Other identified issues involving access include physical barriers, such as those affecting residents of remote or rural areas, and those affecting individuals dealing with physical handicaps. The next issue, cost, was seen as affecting the consumer's use of informatics in terms of expenses associated with purchasing software, fees for using on-line services, and, for some, transportation costs to a library or other public source of information. The costs of developing informatics systems were also important to the experts: these issues included how much funding is needed, where funding comes from, and the cost of keeping up-to-date with changes in technology. Information quality was also seen as a very significant issue because the information in informatics systems could be incomplete, inaccurate, or outdated. According to one expert, CD-ROMs in use with current dates could in reality be based on much earlier, out-of-date research. Also identified was the potential for biased information that may have been developed by a person or organization with a vested interest. Another risk is that consumers could take information out of context or misapply it to their own medical situations. If they were to act on such information without first checking with a qualified medical professional, harmful health consequences could result. Experts discussed several options for addressing these issues, ranging from applying broad practices to following more specific suggestions. One solution, establishing public- and private-sector partnerships, addresses all three issues, especially access. To illustrate: the Newark (N.J.) Public Schools joined with the University of Medicine and Dentistry of New Jersey and a private, nonprofit corporation to provide technology to people lacking access to computers.
In addition to using their own resources, in 1994 and 1995, this group was awarded a total of over $200,000 in federal grants. Public- and private-sector leaders noted that the project was an effective approach for ensuring access and one that could be replicated in other communities. Experts also indicated that federal, state, and local governments--as well as universities and venture capitalists--could support research to further demonstrate the costs and benefits of consumer informatics. Specific suggestions were also provided to address the quality issue. Peer reviews of informatics systems could help ensure quality, or a consortium of experts in a field could be used, involving government and private-sector representation, to establish quality guidelines. The experts also suggested that ways could be found to notify consumers if information is from an unknown source. Five other issues were seen as somewhat less critical but still needing attention. Security and privacy were seen as important, particularly with on-line networks, where consumers may wish to share personal information anonymously. Further, experts felt that while copyright laws protect the proprietary nature of systems so that others will not be able to unfairly reap the rewards that rightfully belong to developers, at the same time copyright restrictions can slow the immediate availability of information to the consumer. In the area of systems development, the experts noted issues with compatibility, infrastructure, and standardization. When hardware or software incompatibilities exist, information transfer among systems is hindered because it is difficult for the different media to communicate and exchange information without programming changes or additional hardware. Further, no nationwide infrastructure exists to link information from hospitals, clinics, and physicians' offices, making it difficult to share critical health-related and patient information. 
And standardization refers to the computer file formats in which patients' health data are stored; various providers use different information systems, further hindering data-sharing. Finally, information overload and computer literacy were identified as issues related to the consumer. Mr. Chairman, we are a nation with a wealth of information--and on-line information contributes to this situation. Experts indicated that on-line information could overwhelm the consumer and provide him or her with too much technical information to comfortably handle. Most experts also felt that although systems are becoming more user-friendly, some people still fear computers and other technologies. Experts also noted specific options for addressing these issues. Sound systems development practices, along with helping ensure that a project is well-designed, can also significantly help safeguard the data. Carefully assessing and identifying user needs will also help develop a system that is user-friendly and accommodates the target users' needs, which can increase consumers' comfort levels with using new technology. The federal government in general--and the Department of Health and Human Services (HHS) in particular--are actively involved in consumer health informatics. This involvement takes the form of project development and testing, providing sources of consumer health information, funding clearinghouses and information centers, and providing grants to organizations that produce informatics systems. (Attachment 3 lists a sample of government agencies involved in these activities.) HHS is charged with controlling disease and improving the health of Americans, and includes consumer information and education among its activities to accomplish this. Many agencies within HHS also have central roles related to consumer health information and services. 
These include the Health Care Financing Administration (HCFA), the Centers for Disease Control and Prevention, the National Institutes of Health, the Food and Drug Administration, and the Agency for Health Care Policy and Research. Outside of HHS, other agencies having components that deal with health information include the Departments of Agriculture, Commerce, Defense, Energy, and Labor. As an example of federal involvement, last December HCFA awarded a 1-year grant to the University of Wisconsin's Comprehensive Health Enhancement Support System (CHESS), which supports Medicare patients diagnosed with early-stage breast cancer. Patients choosing to participate are provided with computers in their homes containing the CHESS software, which includes detailed health-related articles and the ability to communicate with medical experts and support groups. The project will review the impact of this system on participants' health and treatment decisions and will help determine the appropriateness of this technology for the Medicare population. States and local communities are also supporting projects that use technology to disseminate health information to their residents. One large-scale undertaking is the John A. Hartford Foundation-sponsored Community Health Management Information System (CHMIS). Collaborating with several states and local health care organizations, CHMIS provides a community network of health care information and may provide an initial infrastructure that could be used to disseminate consumer information more widely. As an example of local involvement in informatics, Fort Collins, Colorado, has developed its own system, called FortNet, providing health and other kinds of information for city residents. Fort Collins contributed over $60,000, to which federal and private contributions were added. 
A similar project exists in Taos, New Mexico, where the local community enjoys free access to on-line resources that include directories of local health providers. The system is funded by federal, state, and local contributions, including those of the University of New Mexico. As for the future, HHS has sent a report to the Vice President containing recommendations for federal activities that will enhance the availability of health care information to consumers through the National Health Information Infrastructure, a project that is being jointly undertaken by 14 private companies and nonprofit institutions and the federal government. The National Institute of Standards and Technology has awarded the C. Everett Koop Foundation a grant totaling $30 million--half in government funds and half in matching private funds--to develop the tools needed for such an information network. On the state level, Washington plans to develop an automated system containing clinical information and other medical data; it will be made available to all state residents. Local involvement in consumer health informatics is expected to continue as well. For example, the local communities involved in CHMIS projects plan to provide expanded services over the established networks--additional content areas to serve the health information needs of their consumers. HHS and other consumer health experts have recognized that federal coordination of government activities in consumer health informatics needs to be improved; while no single, comprehensive inventory of all federal activity exists for this new field, many federal agencies have plans for greater coordination and evaluation of consumer health informatics. For example, HHS' National Institutes of Health plans to consolidate on-line information for its various institutes. Through its Gateway project, HHS is developing a database that is expected to contain hundreds of publications on health topics. 
The agency is also involved in developing guidelines for evaluating informatics projects; such an evaluation could be of value in helping the government determine how it is investing in technology in this area. Mr. Chairman, informatics is a young and emerging field, and systems have grown rapidly in a very short time; they are clearly providing benefits to many. As the use of informatics systems increases, the benefits and issues will become more apparent. Measuring these benefits and determining how we will deal with some of the issues raised by the experts will be necessary to ensure that consumers receive the best information possible. This concludes my prepared statement. I would be happy to respond to any questions you or other members of the Subcommittee may have at this time.
GAO discussed the emergence of consumer health informatics. GAO noted that: (1) the demand for health-related information has increased steadily in the past 5 to 10 years; (2) many consumers have reported problems in gaining access to appropriate health information, especially in self-care situations; (3) several hundred informatics systems have been developed in the past decade, but most systems are still in early stages of development; (4) consumers are able to obtain health-related information through one-way communications, tailor specific information to unique situations, or communicate with health care providers through two-way communications systems; (5) more complex systems that permit user interaction are usually the most expensive; (6) consumers are able to reduce unnecessary medical services and lower health care costs by accessing health informatics systems; (7) these systems also help health care providers to more effectively treat certain patients; (8) the most significant issues that need to be addressed include system access, system development cost, and information quality; (9) there is no nationwide infrastructure to link information from hospitals, clinics, and physicians' offices; (10) states and local communities are supporting projects to disseminate health information to their residents; and (11) many federal agencies are planning greater coordination and evaluation of consumer health informatics.
In our June 2006 report, we found that DOD and VA had taken actions to facilitate the transition of medical and rehabilitative care for seriously injured servicemembers who were being transferred from MTFs to PRCs. For example, in April 2004, DOD and VA signed a memorandum of agreement that established referral procedures for transferring injured servicemembers from DOD to VA medical facilities. DOD and VA also established joint programs to facilitate the transfer to VA medical facilities, including a program that assigned VA social workers to selected MTFs to coordinate transfers. Despite these coordination efforts, we found that DOD and VA were having problems sharing the medical records VA needed to determine whether servicemembers' medical conditions allowed participation in VA's vigorous rehabilitation activities. DOD and VA reported that as of December 2005 two of the four PRCs had real-time access to the electronic medical records maintained at Walter Reed Army Medical Center and only one of the two also had access to the records at the National Naval Medical Center. In cases where medical records could not be accessed electronically, the MTF faxed copies of some medical information, such as the patient's medical history and progress notes, to the PRC. Because this information did not always provide enough data for the PRC provider to determine if the servicemember was medically stable enough to be admitted to the PRC, VA developed a standardized list of the minimum types of health care information needed about each servicemember transferring to a PRC. Even with this information, PRC providers frequently needed additional information and had to ask for it specifically. 
For example, if the PRC provider notices that the servicemember is on a particular antibiotic therapy, the provider may request the results of the most recent blood and urine cultures to determine if the servicemember is medically stable enough to participate in strenuous rehabilitation activities. According to PRC officials, obtaining additional medical information in this way, rather than electronically, is very time consuming and often requires multiple phone calls and faxes. VA officials told us that the transfer could be more efficient if PRC medical personnel had real-time access to the servicemembers' complete DOD electronic medical records from the referring MTFs. However, problems existed even for the two PRCs that had been granted electronic access. During a visit to those PRCs in April 2006, we found that neither facility could access the records at Walter Reed Army Medical Center because of technical difficulties. As discussed in our January 2005 report, the importance of early intervention for returning individuals with disabilities to the workforce is well documented in vocational rehabilitation literature. In 1996, we reported that early intervention significantly facilitates the return to work but that challenges exist in providing services early. For example, determining the best time to approach recently injured servicemembers and gauge their personal receptivity to considering employment in the civilian sector is inherently difficult. The nature of the recovery process is highly individualized and requires professional judgment to determine the appropriate time to begin vocational rehabilitation. Our 2007 High-Risk Series: An Update designates federal disability programs as "high risk" because they lack emphasis on the potential for vocational rehabilitation to return people to work.
In our January 2005 report, we found that servicemembers whose disabilities are definitely or likely to result in military separation may not be able to benefit from early intervention because DOD and VA could work at cross purposes. In particular, DOD was concerned about the timing of VA's outreach to servicemembers whose discharge from military service is not yet certain. DOD was concerned that VA's efforts may conflict with the military's retention goals. When servicemembers are treated as outpatients at a VA or military hospital, DOD generally begins to assess whether the servicemember will be able to remain in the military. This process can take months. For its part, VA took steps to make seriously injured servicemembers a high priority for all VA assistance. Noting the importance of early intervention, VA instructed its regional offices in 2003 to assign a case manager to each seriously injured servicemember who applies for disability compensation. VA had detailed staff to MTFs to provide information on all veterans' benefits, including vocational rehabilitation, and reminded staff that they can initiate evaluation and counseling, and, in some cases, authorize training before a servicemember is discharged. While VA tries to prepare servicemembers for a transition to civilian life, VA's outreach process may overlap with DOD's process for evaluating servicemembers for a possible return to duty. In our report, we concluded that instead of working at cross purposes to DOD goals, VA's early intervention efforts could facilitate servicemembers' return to the same or a different military occupation, or to a civilian occupation if the servicemembers were not able to remain in the military. In this regard, the prospect for early intervention with vocational rehabilitation presents both a challenge and an opportunity for DOD and VA to collaborate to provide better outcomes for seriously injured servicemembers. 
In our May 2006 report, we described DOD's efforts to identify and facilitate care for OEF/OIF servicemembers who may be at risk for PTSD. To identify such servicemembers, DOD uses a questionnaire, the DD 2796, to screen OEF/OIF servicemembers after their deployment outside of the United States has ended. The DD 2796 is used to assess servicemembers' physical and mental health and includes four questions to identify those who may be at risk for developing PTSD. We reported that according to a clinical practice guideline jointly developed by DOD and VA, servicemembers who responded positively to at least three of the four PTSD screening questions may be at risk for developing PTSD. DOD health care providers review completed questionnaires, conduct face-to-face interviews with servicemembers, and use their clinical judgment in determining which servicemembers need referrals for further mental health evaluations. OEF/OIF servicemembers can obtain the mental health evaluations, as well as any necessary treatment for PTSD, while they are servicemembers--that is, on active duty--or when they transition to veteran status if they are discharged or released from active duty. Despite DOD's efforts to identify OEF/OIF servicemembers who may need referrals for further mental health evaluations, we reported that DOD cannot provide reasonable assurance that OEF/OIF servicemembers who need the referrals receive them. Using data provided by DOD, we found that 22 percent, or 2,029, of the 9,145 OEF/OIF servicemembers in our review who may have been at risk for developing PTSD were referred by DOD health care providers for further mental health evaluations. 
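As a rough illustration of the screening rule and the referral-rate arithmetic described above, the following sketch applies the at-risk threshold (three or more positive responses out of four) and checks the reported referral percentage. This is hypothetical Python, not DOD's actual DD 2796 scoring software; the function name and data layout are our own.

```python
def at_risk_for_ptsd(responses):
    """responses: list of 4 booleans, one per PTSD screening question
    on the DD 2796. Per the joint DOD/VA clinical practice guideline,
    three or more positive responses indicate possible risk of PTSD."""
    if len(responses) != 4:
        raise ValueError("the DD 2796 PTSD screen has exactly four questions")
    return sum(responses) >= 3

# Figures reported by GAO: 2,029 of the 9,145 at-risk servicemembers
# in the review were referred for further mental health evaluations.
referred, at_risk = 2029, 9145
referral_rate_pct = round(100 * referred / at_risk)

print(at_risk_for_ptsd([True, True, True, False]))  # True (meets threshold)
print(referral_rate_pct)  # 22
```

Note that the threshold only flags possible risk; under DOD's guidance, the actual referral decision still rests on the provider's clinical judgment, which is why the referral rate falls well below 100 percent.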
Across the military service branches, DOD health care providers varied in the frequency with which they issued referrals to OEF/OIF servicemembers with three or more positive responses to the PTSD screening questions: the Army referred 23 percent, the Air Force about 23 percent, the Navy 18 percent, and the Marines about 15 percent. According to DOD officials, not all of the OEF/OIF servicemembers with three or four positive responses on the screening questionnaire need referrals. As directed by DOD's guidance for using the DD 2796, DOD health care providers are to rely on their clinical judgment to decide which of these servicemembers need further mental health evaluations. However, at the time of our review DOD had not identified the factors its health care providers used to determine which OEF/OIF servicemembers needed referrals. Knowing these factors could explain the variation in referral rates and allow DOD to provide reasonable assurance that such judgments are being exercised appropriately. We recommended that DOD identify the factors its health care providers used in issuing referrals for further mental health evaluations, in order to explain the variation among providers. DOD concurred with the recommendation. Although OEF/OIF servicemembers may obtain mental health evaluations or treatment for PTSD through VA when they transition to veteran status, VA may face a challenge in meeting the demand for PTSD services. In September 2004 we reported that VA had intensified its efforts to inform new veterans from the Iraq and Afghanistan conflicts about the health care services--including treatment for PTSD--VA offers to eligible veterans. We observed that these efforts, along with expanded availability of VA health care services for Reserve and National Guard members, could result in an increased percentage of veterans from Iraq and Afghanistan seeking PTSD services through VA.
However, at the time of our review officials at six of seven VA medical facilities we visited explained that while they were able to keep up with the current number of veterans seeking PTSD services, they may not be able to meet an increase in demand for these services. In addition, some of the officials expressed concern because facilities had been directed by VA to give veterans from the Iraq and Afghanistan conflicts priority appointments for health care services, including PTSD services. As a result, VA medical facility officials estimated that follow-up appointments for veterans receiving care for PTSD could be delayed. VA officials estimated the delays to be up to 90 days. As discussed in our April 2006 testimony, problems related to military pay have resulted in overpayments and debt for hundreds of sick and injured servicemembers. These pay problems resulted in significant frustration for the servicemembers and their families. We found that hundreds of battle-injured servicemembers were pursued for repayment of military debts through no fault of their own, including at least 74 servicemembers whose debts had been reported to credit bureaus and private collections agencies. In response to our audit, DOD officials said collection actions on these servicemembers' debts had been suspended until a determination could be made as to whether these servicemembers' debts were eligible for relief. Debt collection actions created additional hardships on servicemembers by preventing them from getting loans to buy houses or automobiles or pay off other debt, and sending several servicemembers into financial crisis. Some battle-injured servicemembers forfeited their final separation pay to cover part of their military debt, and they left the service with no funds to cover immediate expenses while facing collection actions on their remaining debt. 
We also found that sick and injured servicemembers sometimes went months without paychecks because debts caused by overpayments of combat pay and other errors were offset against their military pay. Furthermore, the longer it took DOD to stop the overpayments, the greater the amount of debt that accumulated for the servicemember and the greater the financial impact, since more money would eventually be withheld from the servicemember's pay or sought through debt collection action after the servicemember had separated from the service. In our 2005 testimony about Army National Guard and Reserve servicemembers, we found that poorly defined requirements and processes for extending injured and ill reserve component servicemembers on active duty have caused servicemembers to be inappropriately dropped from active duty. For some, this has led to significant gaps in pay and health insurance, which has created financial hardships for these servicemembers and their families. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact Cynthia A. Bascetta at (202) 512-7101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Michael T. Blair, Jr., Assistant Director; Cynthia Forbes; Krister Friday; Roseanne Price; Cherie' Starck; and Timothy Walker made key contributions to this statement. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. VA and DOD Health Care: Efforts to Provide Seamless Transition of Care for OEF and OIF Servicemembers and Veterans. GAO-06-794R. Washington, D.C.: June 30, 2006. Post-Traumatic Stress Disorder: DOD Needs to Identify the Factors Its Providers Use to Make Mental Health Evaluation Referrals for Servicemembers. GAO-06-397. Washington, D.C.: May 11, 2006. 
Military Pay: Military Debts Present Significant Hardships to Hundreds of Sick and Injured GWOT Soldiers. GAO-06-657T. Washington, D.C.: April 27, 2006. Military Disability System: Improved Oversight Needed to Ensure Consistent and Timely Outcomes for Reserve and Active Duty Service Members. GAO-06-362. Washington, D.C.: March 31, 2006. Military Pay: Gaps in Pay and Benefits Create Financial Hardships for Injured Army National Guard and Reserve Soldiers. GAO-05-322T. Washington, D.C.: February 17, 2005. Vocational Rehabilitation: More VA and DOD Collaboration Needed to Expedite Services for Seriously Injured Servicemembers. GAO-05-167. Washington, D.C.: January 14, 2005. VA and Defense Health Care: More Information Needed to Determine If VA Can Meet an Increase in Demand for Post-Traumatic Stress Disorder Services. GAO-04-1069. Washington, D.C.: September 20, 2004. SSA Disability: Return-to-Work Strategies from Other Systems May Improve Federal Programs. GAO/HEHS-96-133. Washington, D.C.: July 11, 1996.
As of March 1, 2007, over 24,000 servicemembers have been wounded in action since the onset of Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF), according to the Department of Defense (DOD). GAO work has shown that servicemembers injured in combat face an array of significant medical and financial challenges as they begin their recovery process in the health care systems of DOD and the Department of Veterans Affairs (VA). GAO was asked to discuss concerns regarding DOD and VA efforts to provide medical care and rehabilitative services for servicemembers who have been injured during OEF and OIF. This testimony addresses (1) the transition of care for seriously injured servicemembers who are transferred between DOD and VA medical facilities, (2) DOD's and VA's efforts to provide early intervention for rehabilitation for seriously injured servicemembers, (3) DOD's efforts to screen servicemembers at risk for post-traumatic stress disorder (PTSD) and whether VA can meet the demand for PTSD services, and (4) the impact of problems related to military pay on injured servicemembers and their families. This testimony is based on GAO work issued from 2004 through 2006 on the conditions facing OEF/OIF servicemembers at the time the audit work was completed. Despite coordinated efforts, DOD and VA have had problems sharing medical records for servicemembers transferred from DOD to VA medical facilities. GAO reported in 2006 that two VA facilities lacked real-time access to electronic medical records at DOD facilities. To obtain additional medical information, facilities exchanged information by means of a time-consuming process resulting in multiple faxes and phone calls. In 2005, GAO reported that VA and DOD collaboration is important for providing early intervention for rehabilitation. 
VA has taken steps to initiate early intervention efforts, which could facilitate servicemembers' return to duty or to a civilian occupation if the servicemembers were unable to remain in the military. However, according to DOD, VA's outreach process may overlap with DOD's process for evaluating servicemembers for a possible return to duty. DOD was also concerned that VA's efforts may conflict with the military's retention goals. In this regard, DOD and VA face both a challenge and an opportunity to collaborate to provide better outcomes for seriously injured servicemembers. DOD screens servicemembers for PTSD but, as GAO reported in 2006, it cannot ensure that further mental health evaluations occur. DOD health care providers review questionnaires, interview servicemembers, and use clinical judgment in determining the need for further mental health evaluations. However, GAO found that only 22 percent of the OEF/OIF servicemembers in GAO's review who may have been at risk for developing PTSD were referred by DOD health care providers for further evaluations. According to DOD officials, not all of the servicemembers at risk will need referrals. However, at the time of GAO's review DOD had not identified the factors its health care providers used to determine which OEF/OIF servicemembers needed referrals. Although OEF/OIF servicemembers may obtain mental health evaluations or treatment for PTSD through VA, VA may face a challenge in meeting the demand for PTSD services. VA officials estimated that follow-up appointments for veterans receiving care for PTSD may be delayed up to 90 days. GAO's 2006 testimony pointed out that problems related to military pay have resulted in debt and other hardships for hundreds of sick and injured servicemembers. Some servicemembers were pursued for repayment of military debts through no fault of their own.
As a result, servicemembers have been reported to credit bureaus and private collection agencies, prevented from getting loans, left for months without paychecks, and sent into financial crisis. In a 2005 testimony, GAO reported that poorly defined requirements and processes for extending the active duty of injured and ill reserve component servicemembers have caused them to be inappropriately dropped from active duty, leading to significant gaps in pay and health insurance for some servicemembers and their families.
The export of domestically produced crude oil has generally been restricted since the 1970s. In particular, the Energy Policy and Conservation Act of 1975 (EPCA) led the Department of Commerce's Bureau of Industry and Security (BIS) to promulgate regulations that require crude oil exporters to obtain a license. These regulations provide that BIS will issue licenses for the following crude oil exports: exports from Alaska's Cook Inlet; exports to Canada for consumption or use therein; exports in connection with refining or exchange of Strategic Petroleum Reserve (SPR) crude oil; exports of certain California crude oil up to twenty-five thousand barrels per day; exports consistent with certain international energy supply agreements; exports consistent with findings made by the President under certain statutes; and exports of foreign origin crude oil that has not been commingled with crude oil of U.S. origin. Other than for these exceptions, BIS considers export license applications for exchanges involving crude oil on a case-by-case basis, and BIS can approve them if it determines that the proposed export is consistent with the national interest and purposes of EPCA. In addition to BIS's export controls, other statutes control the export of domestically produced crude oil, depending on where it was produced and how it is transported. In these cases, BIS can approve exports only if the President makes the necessary findings under applicable laws. Some of the authorized exceptions, outlined above, are the result of such presidential findings. As we previously found, recent increases in U.S. crude oil production have lowered the cost of some domestic crude oils. For example, prices for West Texas Intermediate (WTI) crude oil--a domestic crude oil used as a benchmark for pricing--were historically about the same as prices for Brent, an international benchmark crude oil from the North Sea between Great Britain and the European continent. However, from 2011 through 2014, the price of WTI averaged $12 per barrel lower than Brent (see fig. 1).
In 2014, prices for these benchmark crude oils narrowed as global oil prices declined, and WTI averaged $52 per barrel from January through May 2015, while Brent averaged $57. The development of U.S. crude oil production has created some challenges for crude oil transportation infrastructure because some production has been in areas with limited linkages to refining centers. According to EIA, these infrastructure constraints have contributed to discounted prices for some domestic crude oils. Much of the crude oil currently produced in the United States has characteristics that differ from historic domestic production. Crude oil is generally classified according to two parameters: density and sulfur content. Less dense crude oils are known as "light," and denser crude oils are known as "heavy." Crude oils with relatively low sulfur content are known as "sweet," and crude oils with higher sulfur content are known as "sour." As shown in figure 2, according to EIA, most domestic crude oil produced over the last 5 years has tended to be light oil. Specifically, according to EIA estimates, almost all of the 1.8 million barrels per day increase in production from 2011 to 2013 consisted of lighter sweet crude oils. Light crude oil differs from the crude oil that many U.S. refineries are designed to process. Refineries are configured to produce transportation fuels and other products (e.g., gasoline, diesel, jet fuel, and kerosene) from specific types of crude oil. Refineries use a distillation process that separates crude oil into different fractions, or interim products, based on their boiling points, which can then be further processed into final products. Many refineries in the United States are configured to refine heavier crude oils and have therefore been able to take advantage of historically lower prices of heavier crude oils.
For example, in 2013, the average American Petroleum Institute (API) gravity of crude oil used at domestic refineries was 30.8 degrees, while nearly all of the increase in production in recent years has been lighter crude oil with an API gravity of 35 degrees or above. According to EIA, additional production of light crude oil over the past several years has been absorbed into the market through several mechanisms, but the capacity of these mechanisms to absorb further increases in light crude oil production may be limited in the future for the following reasons: Reduced imports of similar grade crude oils: According to EIA, additional production of light oil in the past several years has primarily been absorbed by reducing imports of similar grade crude oils. Light crude oil imports fell from 1.7 million barrels per day in 2011 to 1 million barrels per day in 2013. As a result, there may be dwindling amounts of light crude oil imports that can be reduced in the future, according to EIA. Increased crude oil exports: Crude oil exports have increased recently, from less than 30,000 barrels per day in 2008 to 396,000 barrels per day in June 2014. Continued increases in crude oil exports will depend, in part, on the extent of any relaxation of current export restrictions, according to EIA. Increased use of light crude oils at domestic refineries: Domestic refineries have increased the average gravity of crude oils that they refine. The average API gravity of crude oil used in U.S. refineries increased from 30.2 degrees in 2008 to 30.8 degrees in 2013, according to EIA. Continued shifts to use additional lighter crude oils at domestic refineries can be enabled by investments to relieve constraints associated with refining lighter crude oils at refineries that were optimized to refine heavier crude oils, according to EIA. Increased use of domestic refineries: In recent years, domestic refineries have been run more intensively, allowing the use of more domestic crude oils.
Utilization--a measure of how intensively refineries are used that is calculated by dividing total crude oil and other inputs used at refineries by the amount refineries can process under usual operating conditions--increased from 86 percent in 2011 to 88 percent in 2013. There may be limits to further increases in utilization of refineries that are already running at high rates, according to EIA. In our September 2014 report, we reported that according to the studies we reviewed and the stakeholders we interviewed, removing crude oil export restrictions would likely increase some domestic crude oil prices but could decrease consumer fuel prices, although the extent of consumer fuel price changes is uncertain and may vary by region. As discussed earlier, increasing domestic crude oil production has resulted in lower prices of some domestic crude oils compared with international benchmark crude oils. Three of the studies we reviewed also concluded that, absent changes in crude oil export restrictions, the expected growth in crude oil production may not be fully absorbed by domestic refineries or through exports (where allowed), contributing to even wider differences in prices between some domestic and international crude oils. According to these studies, by removing the export restrictions, these domestic crude oils could be sold at prices closer to international prices, reducing the price differential and aligning the price of domestic crude oil with international benchmarks. The Department of Commerce's definition of crude oil includes condensates, which are light liquid hydrocarbons recovered primarily from natural gas wells; whether processed condensates remain subject to export restrictions is a source of uncertainty, and one stakeholder stated that this may lead to more condensate exports than expected. Within the context of these uncertainties, estimates of potential price effects vary in the four studies we reviewed, as shown in table 1.
Specifically, estimates in these studies of the increase in domestic crude oil prices due to removing crude oil export restrictions ranged from about $2 to $8 per barrel. At the beginning of June 2014, the WTI price was $103 per barrel, and these estimates represented 2 to 8 percent of that price. In addition, NERA Economic Consulting found that removing export restrictions would have no measurable effect in a case that assumes a low future international oil price of $70 per barrel in 2015, rising to less than $75 by 2035. According to the NERA Economic Consulting study, current production costs are close to these values, so that removing export restrictions would provide little incentive to produce more light crude oil. Unless otherwise noted, dollar estimates in the rest of this report have been converted to 2014 year dollars. These are average price effects over the study time frames, and some cases in some studies projected larger price effects in the near term that declined over time. The studies' estimates were as follows: ICF International estimated that West Texas Intermediate crude oil prices would increase $2.35 to $4.19 per barrel on average from 2015-2035; IHS estimated that prices would increase $7.89 per barrel on average from 2016-2030; and NERA Economic Consulting estimated that prices would increase $1.74 per barrel in the reference case and $5.95 per barrel in the high case on average from 2015-2035. Implications refer to the difference between the reference case and its baseline with export restrictions in place, and also the difference between the high oil and gas recovery case and its corresponding baseline. NERA Economic Consulting also found that removing crude oil export restrictions would have no measurable effect in the low world oil price case. Regarding consumer fuel prices, such as gasoline, diesel, and jet fuel, the studies we reviewed and most of the stakeholders we interviewed suggested that consumer fuel prices could decrease as a result of removing crude oil export restrictions.
A decrease in consumer fuel prices could occur because such prices tend to follow international crude oil prices rather than domestic crude oil prices, according to the studies reviewed and most of the stakeholders interviewed. If domestic crude oil exports caused international crude oil prices to decrease, consumer fuel prices could decrease as well. Table 2 shows that the estimates of the price effects on consumer fuels varied in the four studies we reviewed. Price estimates ranged from a decrease of 1.5 to 13 cents per gallon. These estimates represented 0.4 to 3.4 percent of the average U.S. retail gasoline price at the beginning of June 2014. In addition, NERA Economic Consulting found that removing export restrictions would have no measurable effect on consumer fuel prices when assuming a low future world crude oil price. Resources for the Future also estimated a decrease in consumer fuel prices, but this decrease is the result of increased refinery efficiency (even with an estimated slight increase in the international crude oil price). The studies' estimates were as follows: ICF International estimated that petroleum product prices would decline by 1.5 to 2.4 cents per gallon on average from 2015-2035; IHS estimated that gasoline prices would decline by 9 to 13 cents per gallon on average from 2016-2030; and NERA Economic Consulting estimated that petroleum product prices would decline by 3 cents per gallon on average from 2015-2035 in the reference case and 11 cents per gallon in the high case, while gasoline prices would decline by 3 cents per gallon in the reference case and 10 cents per gallon in the high case, and fuel prices would not be affected in a low world oil price case. Implications refer to the difference between the reference case and its baseline with export restrictions in place, and the difference between the high oil and gas recovery case and its corresponding baseline. The effect of removing crude oil export restrictions on domestic consumer fuel prices depends on several uncertainties, as we discussed in our September 2014 report.
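As a rough arithmetic check, the per-barrel and per-gallon estimates above can be restated as shares of benchmark prices. In the sketch below, the $103-per-barrel WTI figure is the one cited in the testimony, while the retail gasoline price is an assumed value chosen so that the cited 0.4 to 3.4 percent range is reproduced approximately; neither the function name nor the gasoline figure comes from the underlying studies.

```python
# Back-of-the-envelope restatement of the studies' estimates as shares of
# benchmark prices. WTI_PRICE is the early-June 2014 figure cited in the
# testimony; GASOLINE_PRICE is an assumption chosen so that the cited
# 0.4-3.4 percent range for consumer fuels is reproduced approximately.

WTI_PRICE = 103.0        # dollars per barrel (cited in the testimony)
GASOLINE_PRICE = 3.82    # dollars per gallon (assumed, not from the studies)

def share_of_price(change, price):
    """Express an absolute price change as a percentage of a base price."""
    return 100.0 * change / price

# Crude oil: estimated increase of $2 to $8 per barrel.
crude_low = share_of_price(2.0, WTI_PRICE)
crude_high = share_of_price(8.0, WTI_PRICE)

# Consumer fuels: estimated decrease of 1.5 to 13 cents per gallon.
fuel_low = share_of_price(0.015, GASOLINE_PRICE)
fuel_high = share_of_price(0.13, GASOLINE_PRICE)

print(f"crude: {crude_low:.1f}% to {crude_high:.1f}% of the WTI price")
print(f"fuels: {fuel_low:.1f}% to {fuel_high:.1f}% of the retail gasoline price")
```

Rounded to one decimal place, the crude range works out to roughly 2 to 8 percent and the fuel range to roughly 0.4 to 3.4 percent, matching the figures in the testimony.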
First, it would depend on the extent to which domestic versus international crude oil prices determine the domestic price of consumer fuels. A 2014 research study examining the relationship between domestic crude oil and gasoline prices concluded that low domestic crude oil prices in the Midwest during 2011 did not result in lower gasoline prices in that region. This research supports the assumption made in the four studies we reviewed that, to some extent, higher prices of some domestic crude oils as a result of removing crude oil export restrictions would not be passed on to consumer fuel prices. However, some stakeholders told us that this may not always be the case and that more recent or detailed data could show that lower prices for some domestic crude oils have influenced consumer fuel prices. Some stakeholders also suggested that some domestic refineries could face closure, especially those located in the Northeast. (The Merchant Marine Act of 1920, also known as the Jones Act, in general requires that any vessel, including barges, operating between two U.S. ports be U.S.-built, -owned, and -operated.) However, according to one stakeholder, domestic refiners still have a significant cost advantage in the form of less expensive natural gas, which is an important energy source for many refineries. For this and other reasons, one stakeholder told us they did not anticipate refinery closures as a result of removing export restrictions. The studies we reviewed for our September 2014 report generally suggested that removing crude oil export restrictions may increase domestic crude oil production and may affect the environment and the economy: Crude oil production. Removing crude oil export restrictions may increase domestic crude oil production. Even with current crude oil export restrictions, given various scenarios, EIA projected that domestic production will continue to increase through 2020.
If export restrictions were removed, according to the four studies we reviewed, the increased prices of domestic crude oil are projected to lead to further increases in crude oil production. Projections of this increase varied in the studies we reviewed--from a low of an additional 130,000 barrels per day on average from 2015 through 2035, according to the ICF International study, to a high of an additional 3.3 million barrels per day on average from 2015 through 2035 in NERA Economic Consulting's study. This is equivalent to 1.5 percent to almost 40 percent of production in April 2014. Environment. Two of the studies we reviewed stated that the increased crude oil production that could result from removing the restrictions on crude oil exports may affect the environment. Most stakeholders we interviewed echoed this statement. This is consistent with what we found in a September 2012 report, in which we found that crude oil development may pose certain inherent environmental and public health risks. However, the extent of the risk is unknown, in part, because the severity of adverse effects depends on various location- and process-specific factors, including the location of future shale oil and gas development and the rate at which it occurs. It also depends on geology, climate, business practices, and regulatory and enforcement activities. The stakeholders who raised concerns about the effect of removing the restrictions on crude oil exports on the environment identified risks including those related to the quality and quantity of surface water and groundwater sources; increases in greenhouse gas and other air emissions; and increases in the risk of spills from crude oil transportation. The economy. The four studies we reviewed suggested that removing crude oil export restrictions would increase the size of the economy. Three of the studies projected that removing export restrictions would lead to additional investment in crude oil production and increases in employment.
This growth in the oil sector would--in turn--have additional positive effects in the rest of the economy. For example, NERA Economic Consulting's study projected an average of 230,000 to 380,000 workers would be removed from unemployment through 2020 if export restrictions were eliminated in 2015. These employment benefits would largely disappear if export restrictions were not removed until 2020 because by then the economy would have returned to full employment. Two of the studies we reviewed suggested that removing export restrictions would increase government revenues, although the estimates of the increase vary. One study estimated that total government revenue would increase by a combined $1.4 trillion in additional revenue from 2016 through 2030, and another study estimated that U.S. federal, state, and local tax receipts combined with royalties from drilling on federal lands could increase by an annual average of $3.9 to $5.7 billion from 2015 through 2035. Chairman Conaway, Ranking Member Peterson, and Members of the Committee, this completes my prepared statement. I would be pleased to answer any questions that you may have at this time. If you or your staff members have any questions concerning this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions include Christine Kehr (Assistant Director), Quindi Franco, Alison O'Neill, and Kiki Theodoropoulos. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
After decades of generally falling U.S. crude oil production, technological advances in the extraction of crude oil from shale formations have contributed to increases in U.S. production. In response to these and other market developments, some have proposed removing the 4-decade-old restrictions on crude oil exports, underscoring the need to understand how allowing crude oil exports could affect crude oil prices and the prices of consumer fuels refined from crude oil, such as gasoline and diesel. This testimony discusses what is known about the pricing and other key potential implications of removing crude oil export restrictions. It is based on GAO's September 2014 report (GAO-14-807) and information on crude oil production and prices updated in June 2015. For that report, GAO reviewed four studies issued in 2014 on crude oil exports, including two sponsored by industry and conducted by consultants, one sponsored by a research organization and conducted by consultants, and one conducted at a research organization. Market conditions have changed since these studies were conducted, underscoring some uncertainties surrounding estimates of potential implications of removing crude oil export restrictions. For its 2014 report, GAO also summarized the views of a nongeneralizable sample of 17 stakeholders, including representatives of companies and interest groups with a stake in the outcome of decisions regarding crude oil export restrictions, as well as academic, industry, and other experts. In September 2014, GAO reported that according to studies it reviewed and stakeholders it interviewed, removing crude oil export restrictions would likely increase domestic crude oil prices, but could decrease consumer fuel prices, although the extent of price changes is uncertain and may vary by region. The studies identified the following implications for U.S. crude oil and consumer fuel prices: Crude oil prices.
The four studies GAO reviewed estimated that if crude oil export restrictions were removed, U.S. crude oil prices would increase by about $2 to $8 per barrel--bringing them closer to international prices. Prices for some U.S. crude oils have been lower than international prices--for example, one benchmark U.S. crude oil averaged $52 per barrel from January through May 2015, while a comparable international crude oil averaged $57. In addition, one study found that, when assuming low future crude oil prices overall, removing export restrictions would have no measurable effect on U.S. crude oil prices. Consumer fuel prices. The four studies suggested that U.S. prices for gasoline, diesel, and other consumer fuels follow international prices. If domestic crude oil exports caused international crude oil prices to decrease, consumer fuel prices could decrease as well. Estimates of the consumer fuel price implications in the four studies GAO reviewed ranged from a decrease of 1.5 to 13 cents per gallon. In addition, one study found that, when assuming low future crude oil prices, removing export restrictions would have no measurable effect on consumer fuel prices. Some stakeholders cautioned that estimates of the price implications of removing export restrictions are subject to several uncertainties, such as the extent of U.S. crude oil production increases, and how readily U.S. refiners are able to absorb such increases. Some stakeholders further told GAO that there could be important regional differences in the price implications of removing export restrictions. The studies GAO reviewed and the stakeholders it interviewed generally suggested that removing crude oil export restrictions may also have the following implications: Crude oil production. Removing export restrictions may increase domestic production--over 8 million barrels per day in April 2014--because of increasing domestic crude oil prices.
Estimates ranged from an additional 130,000 to 3.3 million barrels per day on average from 2015 through 2035. Environment. Additional crude oil production may pose risks to the quality and quantity of surface water and groundwater sources; increase greenhouse gas and other emissions; and increase the risk of spills from crude oil transportation. The economy. Three of the studies projected that removing export restrictions would lead to additional investment in crude oil production and increases in employment. This growth in the oil sector would--in turn--have additional positive effects in the rest of the economy, including for employment and government revenues.
CBP and APHIS have taken four major steps intended to strengthen the AQI program since the transfer of responsibilities following passage of the Homeland Security Act of 2002. To date, we have not done work to assess the implementation and effectiveness of these actions. First, CBP and APHIS expanded the hours of training on agricultural issues for CBP officers, whose primary duty is customs and immigration inspection, and for CBP agriculture specialists, whose primary duty is agricultural inspection. Specifically, newly hired CBP officers receive 16 hours of training on agricultural issues, whereas before the transfer to CBP, customs inspectors received 4 hours of agricultural training, and immigration inspectors received 2 hours. CBP and APHIS also expanded agriculture training for CBP officers at their respective ports of entry to help them make better-informed decisions on agricultural items at high-volume border traffic areas. Additionally, CBP and APHIS have standardized the in-port training program and have developed a national standard for agriculture specialists with a checklist of activities for agriculture specialists to master. These activities are structured into an 8-week module on passenger inspection procedures and a 10-week module on cargo inspection procedures. Based on our survey of agriculture specialists, we estimate that 75 percent of specialists hired by CBP believe that they received sufficient training (on the job and at the Professional Development Center) to enable them to perform their agriculture inspection duties. Second, CBP and APHIS have taken steps designed to better target shipments and passengers that potentially present a high risk to U.S. agriculture.
Specifically, some CBP agriculture specialists received training and were given access to CBP's Automated Targeting System, a computer system that, among other things, is designed to focus limited inspection resources on higher-risk passengers and cargo and facilitate expedited clearance or entry for low-risk passengers and cargo. This system gives agriculture specialists detailed information from cargo manifests and other documents that shipping companies are required to submit before the ship arrives in a port to help them select high-risk cargo for inspection. CBP and APHIS headquarters personnel also use this information to identify companies that had previously violated U.S. quarantine laws. For example, according to a senior APHIS official, the two agencies used this system to help identify companies that have used seafood containers to smuggle uncooked poultry products from Asia, which are currently banned because of concerns over avian influenza. Third, CBP and APHIS established a formal assessment process intended to ensure that ports of entry carry out agricultural inspections in accordance with the agricultural quarantine inspection program's regulations, policies, and procedures. The process, called Joint Agency Quality Assurance Reviews, covers topics such as (1) CBP coordination with other federal agencies; (2) agriculture specialist training; (3) specialist access to regulatory manuals; and (4) specialist adherence to processes for handling violations at the port, inspecting passenger baggage and vehicles, and intercepting, seizing, and disposing of confiscated materials. The reviews address best practices and deficiencies at each port and make recommendations for corrective actions to be implemented within 6 weeks. For example, regarding best practices, a review of two ports found that the placement of CBP, APHIS, and Food and Drug Administration staff in the same facility enhanced their coordination. 
This review also lauded their targeting of non-agricultural products that are packed with materials, such as wood, that may harbor pests or diseases that could pose a risk to U.S. agriculture. Regarding deficiencies, this review found that the number of CBP agriculture specialists in each port was insufficient, and that the specialists at one of the ports were conducting superficial inspections of commodities that should have been inspected more intensely. According to CBP, the agency took actions to correct these deficiencies, although we have not evaluated those actions. In September 2007, CBP said that the joint review team had conducted 13 reviews in fiscal years 2004 through 2006, and 7 reviews were completed or underway for fiscal year 2007. Seven additional reviews are planned for fiscal year 2008. Lastly, in May 2005, CBP required each director in its 20 district field offices to appoint an agriculture liaison, with background and experience as an agriculture specialist, to provide CBP field office directors with agriculture-related input for operational decisions and agriculture specialists with senior-level leadership. The agriculture liaisons are to, among other things, advise the director of the field office on agricultural functions; provide oversight for data management, statistical analysis, and risk management; and coordinate agriculture inspection alerts. CBP officials told us that all district field offices had established the liaison position as of January 2006. Since the creation of the position, agriculture liaisons have facilitated the dissemination of urgent alerts from APHIS to CBP. They also provide information back to APHIS. For example, following a large increase in the discovery of plant pests at a port in November 2005, the designated agriculture liaison sent notice to APHIS, which then issued alerts to other ports. 
APHIS and CBP subsequently identified this agriculture liaison as a contact for providing technical advice for inspecting and identifying this type of plant pest. In fiscal year 2006, we surveyed a representative sample of CBP agriculture specialists regarding their experiences and opinions since the transfer of the AQI program from APHIS to CBP. In general, the views expressed by these specialists indicate that they believe that the agricultural inspection mission has been compromised. We note that morale issues are not unexpected in a merger such as the integration of the AQI mission and staff into CBP's primary anti-terrorism mission. GAO has previously reported on lessons learned from major private and public sector experiences with mergers that DHS could use when combining its various components into a unified department. Among other things, productivity and effectiveness often decline in the period following a merger, in part because employees often worry about their place in the new organization. Nonetheless, based on the survey results, while 86 percent of specialists reported feeling very well or somewhat prepared for their duties as an agriculture specialist, many believed that the agriculture mission had been compromised by the transfer. Specifically, 59 percent of experienced specialists indicated that they are doing either somewhat or many fewer inspections since the transfer, and 60 percent indicated that they are doing somewhat or many fewer interceptions. In addition, 63 percent of agriculture specialists believed their port did not have enough specialists to carry out agriculture-related duties. Agriculture specialists reported that they spent 62 percent of their time on agriculture inspections, whereas 35 percent of their time was spent on non-agricultural functions such as customs and immigration inspections.
In addition, there appear to be morale issues based on the responses to two open-ended questions: (1) What is going well with respect to your work as an agriculture specialist? and (2) What would you like to see changed or improved with respect to your work as an agriculture specialist? Notably, the question about what needs improving generated a total of 185 pages of comments--roughly 4 times more than that generated by the responses to our question on what was going well. Further, "Nothing is going well" was the second-most frequent response to the question on what is going well. We identified common themes in the agriculture specialists' responses to our first question about what is going well with respect to their work as an agriculture specialist. The five most common themes were: Working relationships. An estimated 18 percent of agriculture specialists cited the working relationship among agriculture specialists and CBP officers and management as positive. These specialists cited increasing respect and interest by non-specialists in the agriculture mission, and the attentiveness of CBP management to agriculture specialists' concerns. Nothing. An estimated 13 percent of agriculture specialists reported that nothing is going well with their work. For example, some respondents noted that the agriculture inspection mission has been compromised under CBP and that agriculture specialists are no longer important or respected by management. Salary and Benefits. An estimated 10 percent of agriculture specialists expressed positive comments about their salary and benefits, with some citing increased pay under CBP, a flexible work schedule, increased overtime pay, and retirement benefits as reasons for their views. Training. An estimated 8 percent of agriculture specialists identified elements of classroom and on-the-job training as going well. 
Some observed that new hires are well trained and that agriculture-related classroom training at the Professional Development Center in Frederick, Maryland, is adequate for their duties. General job satisfaction. An estimated 6 percent of agriculture specialists were generally satisfied with their jobs, reporting, among other things, that they were satisfied in their working relationships with CBP management and coworkers and that they believed in the importance of their work in protecting U.S. agriculture from foreign pests and diseases. In contrast, agriculture specialists wrote nearly 4 times as much in response to our question about what they would like to see changed or improved with respect to their work as agriculture specialists. In addition, larger proportions of specialists identified each of the top five themes. Declining mission. An estimated 29 percent of agriculture specialists were concerned that the agriculture mission is declining because CBP has not given it adequate priority. Some respondents cited the increase in the number of cargo items and flights that are not inspected because of staff shortages, scheduling decisions by CBP port management, and the release of prohibited or restricted products by CBP officers. Working relationships. An estimated 29 percent of the specialists expressed concern about their working relationships with CBP officers and management. Some wrote that CBP officers at their ports view the agriculture mission as less important than CBP's other priorities, such as counternarcotics and anti-terrorism activities. Others noted that CBP management is not interested in, and does not support, agriculture inspections. CBP chain of command. An estimated 28 percent of agriculture specialists identified problems with the CBP chain of command that impede timely actions involving high-risk interceptions, such as a lack of managers with an agriculture background and the agency's rigid chain-of-command structure. 
For example, agriculture specialists wrote that requests for information from USDA pest identification experts must be passed up the CBP chain of command before they can be conveyed to USDA.

Training. An estimated 19 percent of agriculture specialists believed that training in the classroom and on the job is inadequate. For example, some respondents expressed concern about a lack of courses on DHS's targeting and database systems, which some agriculture specialists use to target high-risk shipments and passengers. Also, some agriculture specialists wrote that on-the-job training at their ports is poor, and that CBP officers do not have adequate agriculture training to recognize when to refer items to agriculture specialists for inspection.

Lack of equipment. An estimated 17 percent of agriculture specialists were concerned about a lack of equipment and supplies. Some respondents wrote that the process for purchasing items under CBP results in delays in acquiring supplies and that there is a shortage of agriculture-specific supplies, such as vials, gloves, and laboratory equipment.

These themes are consistent with responses to relevant multiple-choice questions in the survey. For example, in response to one of these questions, 61 percent of agriculture specialists believed their work was not respected by CBP officers, and 64 percent believed their work was not respected by CBP management.

Although CBP and APHIS have taken a number of actions intended to strengthen the AQI program since its transfer to CBP, several management problems remain that may leave U.S. agriculture vulnerable to foreign pests and diseases. Most importantly, CBP has not used available data to evaluate the effectiveness of the program. These data are especially important in light of many agriculture specialists' views that the agricultural mission has been compromised and can help CBP determine necessary actions to close any performance gaps.
Moreover, at the time of our May 2006 review, CBP had not developed sufficient performance measures to manage and evaluate the AQI program, and the agency had allowed the agricultural canine program to deteriorate. Furthermore, based on its staffing model, CBP does not have the agriculture specialists needed to perform its AQI responsibilities. CBP has not used available data to monitor changes in the frequency with which prohibited agricultural materials and reportable pests are intercepted during inspection activities. CBP agriculture specialists record monthly data in the Work Accomplishment Data System for each port of entry, including (1) arrivals of passengers and cargo to the United States via airplane, ship, or vehicle; (2) agricultural inspections of arriving passengers and cargo; and (3) inspection outcomes, i.e., seizures or detections of prohibited (quarantined) agricultural materials and reportable pests. As of our May 2006 report, CBP had not used these data to evaluate the effectiveness of the AQI program. For example, our analysis of the data for the 42 months before and 31 months after the transfer of responsibilities from APHIS to CBP shows that average inspection and interception rates have changed significantly in some geographical regions of the United States, with rates increasing in some regions and decreasing in others. (Appendixes I and II provide more information on average inspection and interception rates before and after the transfer from APHIS to CBP.) Specifically, average inspection rates declined significantly in the Baltimore, Boston, Miami, and San Francisco district field offices, and in preclearance locations in Canada, the Caribbean, and Ireland. Inspection rates increased significantly in seven other districts--Buffalo, El Paso, Laredo, San Diego, Seattle, Tampa, and Tucson. 
In addition, the average rate of interceptions decreased significantly at ports in six district field offices--El Paso, New Orleans, New York, San Juan, Tampa, and Tucson--while average interception rates increased significantly at ports in the Baltimore, Boston, Detroit, Portland, and Seattle districts. Of particular note are three districts that have experienced a significant increase in their rate of inspections and a significant decrease in their interception rates since the transfer. Specifically, since the transfer, the Tampa, El Paso, and Tucson districts appear to be more efficient at inspecting (e.g., inspecting a greater proportion of arriving passengers or cargo) but less effective at interceptions (e.g., intercepting fewer prohibited agricultural items per inspection). Also of concern are three districts--San Juan, New Orleans, and New York--that are inspecting at about the same rate, but intercepting less, since the transfer.

When we showed the results of our analysis to senior CBP officials, they were unable to explain these changes or determine whether the current rates were appropriate relative to the risks, staffing levels, and staff expertise associated with individual districts or ports of entry. These officials also noted that CBP has had problems interpreting APHIS data reports because CBP lacked staff with expertise in agriculture and APHIS's data systems in some district offices. As of our May 2006 report, CBP had not yet completed or implemented its plan to add agriculture-related data to its system for monitoring customs inspections. However, in September 2007, CBP said it had taken steps to use these data to evaluate the program's effectiveness. For example, CBP publishes a monthly report that includes analysis of inspection efficiency, arrivals, exams, and seizures of prohibited items, including agricultural quarantine material and pest interceptions, for each pathway.
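The report does not spell out formulas for these rates, but its examples imply the natural definitions: an inspection rate is inspections per arrival, and an interception rate is interceptions per inspection. A minimal sketch of the district-level comparison, using invented illustrative counts (not figures from the Work Accomplishment Data System):

```python
def rates(arrivals, inspections, interceptions):
    """Return (inspection rate, interception rate) as implied by the report's
    examples: inspections per arrival, and interceptions per inspection."""
    return inspections / arrivals, interceptions / inspections

# Hypothetical district matching the "more efficient but less effective" pattern.
before = rates(arrivals=100_000, inspections=20_000, interceptions=400)
after = rates(arrivals=100_000, inspections=30_000, interceptions=450)

more_efficient = after[0] > before[0]  # inspecting a greater share of arrivals
less_effective = after[1] < before[1]  # intercepting fewer items per inspection
print(more_efficient, less_effective)  # → True True
```

A district can thus raise its inspection rate while its interception rate falls, which is exactly the pattern CBP officials could not explain.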
CBP also conducts a mid-year analysis of APHIS and CBP data to assess agricultural inspection efficiency at ports of entry. While these appear to be positive steps, we have not assessed their adequacy to measure the AQI program's effectiveness.

A second management problem for the AQI program is an incomplete set of performance measures to balance multiple responsibilities and demonstrate results. As of our May 2006 report, CBP had not developed and implemented its own performance measures for the program. Instead, according to CBP officials, CBP carried over two measures that APHIS had used to assess the AQI program before the transfer: the percentages of international air passengers and border vehicle passengers that comply with program regulations. However, these measures addressed only two pathways for agricultural pests, neglecting other pathways such as commercial aircraft, vessels, and truck cargo. Further, these performance measures did not provide information about changes in inspection and interception rates, which could help assess the efficiency and effectiveness of agriculture inspections in different regions of the country or at individual ports of entry. They also did not address the AQI program's expanded mission--to prevent agro-terrorism while facilitating the flow of legitimate trade and travel.

In early 2007, a joint team from CBP and APHIS agreed to implement additional performance measures for AQI activities in all major pathways at ports of entry. Specifically, CBP said that in fiscal year 2007 it implemented measures for the percentages of land border, air, and maritime regulated cargo and shipments in compliance with AQI regulations. Furthermore, the agency plans to add measures, such as the percentage of passengers, vehicles, or mail in compliance, in fiscal years 2008 and 2009.
However, we have not evaluated the adequacy of these new performance measures for assessing the AQI program's effectiveness at intercepting foreign pests and diseases. Third, the number and proficiency of canine teams decreased substantially between the time of the transfer, March 2003, and the time of our review, May 2006. In the past, these dogs have been a key tool for targeting passengers and cargo for detailed inspections. Specifically, APHIS had approximately 140 canine teams nationwide at the time of the transfer, but CBP had only 80 such teams at the time of our review. With regard to proficiency, 60 percent of the 43 agriculture canine teams tested by APHIS in 2005 failed proficiency tests. These tests require the dog to respond correctly in a controlled, simulated work environment and ensure that dogs are working effectively to catch potential prohibited agricultural material. In general, canine specialists we interviewed expressed concern that the proficiency of their dogs was deteriorating due to a lack of working time. That is, the dogs were sidelined while the specialists were assigned to other duties. In addition, based on our survey results, 46 percent of canine specialists said they were directed to perform duties outside their primary canine duties daily or several times a week. Furthermore, 65 percent of canine specialists indicated that they sometimes or never had funding for training supplies. Another major change to the canine program, following the transfer, was CBP's elimination of all canine management positions. Finally, based on its staffing model, CBP lacks adequate numbers of agriculture specialists to accomplish the agricultural mission. The Homeland Security Act authorized the transfer of up to 3,200 AQI personnel from USDA to DHS. 
In March 2003, APHIS transferred a total of 1,871 agriculture specialist positions, including 317 vacancies, to CBP and distributed those positions across CBP's 20 district field offices, encompassing 139 ports of entry. Because of the vacancies, CBP lacked adequate numbers of agriculture specialists from the beginning and had little assurance that appropriate numbers of specialists were staffed at each port of entry. Although CBP has made some progress in hiring agriculture specialists since the transfer, we previously reported that CBP lacked a staffing model to ensure that more than 630 newly hired agriculture specialists were assigned to the ports with the greatest need, and to ensure that each port had at least some experienced specialists. Accordingly, in May 2006 we recommended that APHIS and CBP work together to develop a national staffing model to ensure that agriculture staffing levels at each port are sufficient. Subsequently, CBP developed a staffing model for its ports of entry and provided GAO with its results. Specifically, as of mid-August 2007, CBP said it had 2,116 agriculture specialists on staff, compared to 3,154 such specialists needed according to the model. The global marketplace of agricultural trade and international travel has increased the number of pathways for the movement and introduction into the United States of foreign and invasive agricultural pests and diseases such as foot-and-mouth disease and avian influenza. Given the importance of agriculture to the U.S. economy, ensuring the effectiveness of federal programs to prevent accidental or deliberate introduction of potentially destructive organisms is critical. Accordingly, effective management of the AQI program is necessary to ensure that agriculture issues receive appropriate attention. 
Although we have reported that CBP and APHIS have taken steps to strengthen agricultural quarantine inspections, many agriculture specialists believe that the agricultural mission has been compromised. While morale issues, such as the ones we identified, are to be expected in the merger establishing DHS, CBP had not used key data to evaluate the program's effectiveness and could not explain significant increases and decreases in inspections and interceptions. In addition, CBP had not developed performance measures to demonstrate that it is balancing its multiple mission responsibilities, and it does not have sufficient agriculture specialists based on its staffing model. Until the integration of agriculture issues into CBP's overall anti-terrorism mission is more fully achieved, U.S. agriculture may be left vulnerable to the threat of foreign pests and diseases.

Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have at this time.

Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Lisa Shames at (202) 512-3841 or [email protected]. Key contributors to this testimony were James Jones, Jr., Assistant Director, and Terrance Horner, Jr. Josey Ballenger, Kevin Bray, Chad M. Gorman, Lynn Musser, Omari Norman, Alison O'Neill, and Steve C. Rossman also made important contributions.

Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-1240T. Washington, D.C.: September 18, 2007.

Department of Homeland Security: Progress Report on Implementation of Mission and Management Functions. GAO-07-454. Washington, D.C.: August 17, 2007.

Customs Revenue: Customs and Border Protection Needs to Improve Workforce Planning and Accountability. GAO-07-529. Washington, D.C.: April 12, 2007.
Homeland Security: Agriculture Specialists' Views of Their Work Experiences after Transfer to DHS. GAO-07-209R. Washington, D.C.: November 14, 2006.

Invasive Forest Pests: Recent Infestations and Continued Vulnerabilities at Ports of Entry Place U.S. Forests at Risk. GAO-06-871T. Washington, D.C.: June 21, 2006.

Homeland Security: Management and Coordination Problems Increase the Vulnerability of U.S. Agriculture to Foreign Pests and Disease. GAO-06-644. Washington, D.C.: May 19, 2006.

Homeland Security: Much Is Being Done to Protect Agriculture from a Terrorist Attack, but Important Challenges Remain. GAO-05-214. Washington, D.C.: March 8, 2005.

Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003.

Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies. GAO-03-293SP. Washington, D.C.: November 14, 2002.

Homeland Security: Critical Design and Implementation Issues. GAO-02-957T. Washington, D.C.: July 17, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. agriculture generates over $1 trillion in economic activity annually, but concerns exist about its vulnerability to foreign pests and diseases. Under the agricultural quarantine inspection (AQI) program, passengers and cargo are inspected at U.S. ports of entry to intercept prohibited material and pests. The Homeland Security Act of 2002 transferred responsibility for inspections from the U.S. Department of Agriculture's (USDA) Animal and Plant Health Inspection Service (APHIS) to the Department of Homeland Security's (DHS) Customs and Border Protection (CBP). APHIS retained some AQI-related responsibilities, such as policy setting and training. This testimony is based on issued GAO reports and discusses (1) steps DHS and USDA took that were intended to strengthen the AQI program, (2) views of agriculture specialists of their work experiences since the transfer, and (3) management problems. As part of these reports, GAO surveyed a representative sample of agriculture specialists on their work experiences, analyzed inspection and interception data, and interviewed agency officials. CBP and APHIS have taken steps intended to strengthen the AQI program since transfer of inspection responsibilities from USDA to DHS in March 2003. Specifically, CBP and APHIS have expanded the hours and developed a national standard for agriculture training; given agricultural specialists access to a computer system that is to better target inspections at ports; and established a joint review process for assessing compliance with the AQI program on a port-by-port basis. In addition, CBP has created new agricultural liaison positions at the field office level to advise regional port directors on agricultural issues. We have not assessed the implementation and effectiveness of these actions. However, GAO's survey of CBP agriculture specialists found that many believed the agriculture inspection mission had been compromised by the transfer. 
Although 86 percent of agriculture specialists reported feeling very well or somewhat prepared for their duties, 59 and 60 percent of specialists answered that they were conducting fewer inspections and interceptions, respectively, of prohibited agricultural items since the transfer. When asked what is going well with respect to their work, agriculture specialists identified working relationships (18 percent), nothing (13 percent), salary and benefits (10 percent), training (10 percent), and general job satisfaction (6 percent). When asked what areas should be changed or improved, they identified working relationships (29 percent), priority given to the agriculture mission (29 percent), problems with the CBP chain of command (28 percent), training (19 percent), and inadequate equipment and supplies (17 percent). Based on private and public sector experiences with mergers, these morale issues are not unexpected because employees often worry about their place in the new organization. CBP must address several management problems to reduce the vulnerability of U.S. agriculture to foreign pests and diseases. Specifically, as of May 2006, CBP had not used available inspection and interception data to evaluate the effectiveness of the AQI program. CBP also had not developed sufficient performance measures to manage and evaluate the AQI program. CBP's measures focused on only two pathways by which foreign pests and diseases may enter the country and pose a threat to U.S. agriculture. However, in early 2007, CBP initiated new performance measures to track interceptions of pests and quarantine materials at ports of entry. We have not assessed the effectiveness of these measures. In addition, CBP has allowed the agricultural canine program to deteriorate, including reductions in the number of canine teams and their proficiency. Lastly, CBP had not developed a risk-based staffing model for determining where to assign agriculture specialists. 
Without such a model, CBP did not know whether it had an appropriate number of agriculture specialists at each port. Subsequent to our review, CBP developed a model. As of mid-August 2007, CBP had 2,116 agriculture specialists on staff, compared with 3,154 specialists needed, according to the staffing model.
Several measures of price are commonly used within the health care sector to measure the price of prescription drugs. These varying measures reflect the different prices that drug manufacturers and retail pharmacies charge different purchasers; drug prices can vary substantially depending on the purchaser. (See fig. 1.) The U&C price, the retail price for a drug, is the price an individual without prescription drug coverage would pay at a retail pharmacy. The U&C price includes the acquisition cost of the drug paid by the retail pharmacy and a markup charged by the pharmacy. AWP is the average of the list, or sticker, prices that a manufacturer of a drug suggests wholesalers charge pharmacies. AWP is typically less than the U&C price, which includes the pharmacy's own markup. AWP is not the actual price that large purchasers normally pay. Nevertheless, AWP is part of the formula used by many state Medicaid programs and private third-party payers to reimburse retail pharmacies. AMP is the average of prices paid to a manufacturer by wholesalers for a drug distributed to the retail pharmacy class of trade, after subtracting any cash discounts or other price reductions. CMS uses AMP in determining rebates drug manufacturers must provide, as required by the Omnibus Budget Reconciliation Act of 1990, to state Medicaid programs as a condition for the federal contribution to Medicaid spending for the manufacturers' outpatient prescription drugs. For brand drugs, the minimum rebate amount is the number of units of the drug multiplied by 15.1 percent of the AMP.

From January 2000 through December 2004, the average U&C prices for a typical 30-day supply of 96 prescription drugs frequently used by BCBS FEP Medicare and non-Medicare enrollees increased 24.5 percent.
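As a worked illustration of the minimum brand-drug rebate formula described above (the unit count and AMP here are invented for illustration, not actual drug data):

```python
def minimum_brand_rebate(units: int, amp_per_unit: float) -> float:
    """Minimum Medicaid rebate for a brand drug: number of units times
    15.1 percent of the AMP, per the OBRA 1990 formula cited above."""
    return units * amp_per_unit * 0.151

# Illustrative only: 1,000 units of a drug with an AMP of $2.00 per unit.
print(minimum_brand_rebate(1000, 2.00))  # → 302.0
```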
The average U&C prices for 75 prescription drugs frequently used by Medicare beneficiaries and for 76 prescription drugs frequently used by non-Medicare enrollees increased at similar rates. The average U&C prices for 50 frequently used brand drugs increased three times faster than the average U&C prices for 46 frequently used generic drugs.

From January 2000 through December 2004, the average U&C price collected from retail pharmacies by PACE and EPIC for a 30-day supply for 96 prescription drugs frequently used by BCBS FEP Medicare beneficiaries and non-Medicare enrollees increased 24.5 percent, a 4.6 percent average annual rate of increase. (See fig. 2.) During the same period, using nationwide data from the Bureau of Labor Statistics (BLS), prices for prescription drugs and medical supplies for all urban consumers increased 21.3 percent, a 4.0 percent average annual rate of increase. Additionally, using BLS data, prices for all consumer items for all urban consumers--the Consumer Price Index--increased 12.7 percent, a 2.5 percent average annual rate of increase from January 2000 through December 2004. While U&C prices increased each year from 2000 through 2004, the greatest annual rate of increase--6.1 percent--occurred from January 2002 to January 2003. (See fig. 3.) Since then, annual rates of increase have been less, increasing 5.2 percent from January 2003 to January 2004 and 4.2 percent from January 2004 to December 2004.

Twenty drugs, representing 33 percent of BCBS FEP prescriptions for the 96 drugs we reviewed, accounted for 64 percent of the total increase in the U&C price index from January 2000 through December 2004. The drug with the largest effect on the price index was Lipitor 10mg, which accounted for 6.6 percent of the total increase. Nineteen of the 20 drugs were brand drugs and 1 was a generic drug, Hydrocodone/Acetaminophen 5/500mg. The twenty drugs accounting for the largest changes in the U&C price index are listed below.
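The cumulative and average annual figures above can be reconciled by geometric annualization over the roughly 59-month span; the report does not state its method, so treating January 2000 through December 2004 as 59/12 years is our assumption, though it reproduces all three cited pairs:

```python
def annualized_rate(cumulative_pct: float, years: float) -> float:
    """Convert a cumulative percentage increase into a compound
    average annual rate of increase, in percent."""
    return ((1 + cumulative_pct / 100) ** (1 / years) - 1) * 100

YEARS = 59 / 12  # January 2000 through December 2004, assumed span

# Reproduces the cited pairs: 24.5% -> 4.6%, 21.3% -> 4.0%, 12.7% -> 2.5%.
for total in (24.5, 21.3, 12.7):
    print(round(annualized_rate(total, YEARS), 1))
```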
From January 2000 through December 2004, the average U&C prices collected by PACE and EPIC for 75 prescription drugs frequently used by BCBS FEP Medicare beneficiaries increased at a similar rate as the average U&C prices for 76 prescription drugs frequently used by BCBS FEP non-Medicare enrollees. (See fig. 4.) The prices of 75 Medicare drugs increased 24.0 percent, a 4.5 percent average annual rate of increase. The prices of 76 non-Medicare drugs increased 24.8 percent, a 4.6 percent average annual rate of increase. From January 2000 through December 2004, the average U&C price (based on PACE and EPIC data) for 50 frequently used brand drugs rose three times faster than the average U&C price for 46 frequently used generic drugs. (See fig. 5.) Specifically, the average U&C price for brand drugs increased 28.9 percent, a 5.3 percent average annual rate of increase, whereas U&C prices for generic drugs increased 9.4 percent, a 1.8 percent average annual rate of increase. From the first quarter of 2000 through the fourth quarter of 2004, AMPs and U&C prices for the 50 brand drugs increased at similar rates, but AWPs increased at a faster rate. The quarterly AWPs for 50 brand prescription drugs increased 31.6 percent, a 6.0 percent average annual rate of increase. For these same 50 drugs, the quarterly AMPs increased 28.2 percent, a 5.4 percent average annual rate of increase, while the average quarterly U&C prices increased 27.5 percent, a 5.2 percent average annual rate of increase. Over the entire period, the AWP index increased about 3 to 4 percentage points more than the AMP or U&C price indexes. (See fig. 6.) The difference between the levels of AWP and U&C prices for brand drugs narrowed slightly during the time period we analyzed. Whereas in the first quarter of 2000 AWP was on average about 91 percent of the U&C price for the same drug, by the fourth quarter of 2004 AWP was on average about 94 percent of the U&C price. 
In contrast, AMP remained a roughly constant share of the U&C price, averaging about 72 percent in both the first quarter of 2000 and the fourth quarter of 2004. Ten brand drugs in each index, representing one-third or more of the prescriptions for the 50 brand drugs, accounted for almost 50 percent of the increase for the quarterly AMP, AWP, and U&C price indexes. Eight of these 10 drugs were the same across all three price indexes. The drug accounting for the largest portion of the change in the AMP and AWP indexes was Celebrex 200mg, accounting for 8.6 percent of the increase for AMP and 7.5 percent for AWP. Lipitor 10mg was the drug accounting for the largest portion of the change in the quarterly U&C price index and accounted for 7.2 percent of the increase for the 50 brand drugs. (See fig. 7.)

From 2000 through 2004, retail prices for drugs frequently used by Medicare beneficiaries increased 24.0 percent--an average rate of 4.5 percent per year. In general, higher drug prices mean higher spending by consumers and health insurance sponsors, including employers and federal and state governments. With brand drug prices increasing three times as fast as generic drug prices, public and private health insurance sponsors will likely continue to focus on strategies to encourage increased use of generic drugs when available. Starting in 2006, with the introduction of the Medicare prescription drug benefit, Medicare will be paying claims for a wider array of drugs and, as a result, the federal government will be affected more than previously by rising drug prices. We found that from 2000 through 2004, on average the AWPs for 50 frequently used brand drugs rose 0.8 percent per year faster than the retail prices for these same drugs. A continuation of this difference between AWP and retail price increases could affect many Medicaid programs and private third-party payers that base their reimbursement of drug claims on AWPs.
We provided a draft of this report to CMS, PACE, EPIC, and BCBS FEP. In commenting on this report, CMS highlighted the discounts and price information tools that will be available under the Medicare drug benefit. CMS also stated that neither the U&C price nor AWP reflect discounts, such as manufacturers' discount programs, or other price concessions affecting a drug's price. We noted in the report that U&C represents the retail pharmacy price paid by consumers without insurance. The U&C does not reflect prices available from other sources, such as mail order pharmacies. We also noted that AWP is a list price that is not the actual price paid by large purchasers. We agree that consumers may be able to obtain lower prices than reflected by the U&C and AWP. However, the focus of our analysis was to examine price trends rather than price levels, and U&C and AWP are consistent measures used to assess price trends. Further, increases in the published AWP may increase what many public or private third-party purchasers pay for prescription drugs because AWP is often included in the formula to calculate payments to pharmacies. Additionally, CMS suggested that we examine the effect on prices when generic alternatives are introduced. We agree that the introduction of generic drugs can reduce consumer payments for drugs. Examining changes in consumer spending for drugs, which are also affected by changes in utilization and the introduction of new drug alternatives, would be useful, but was beyond the scope of this report in examining price trends for frequently-used brand and generic drugs. PACE and BCBS provided technical comments that we incorporated as appropriate; EPIC stated that it did not have any comments. As agreed with your offices, unless you publicly announce the contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send copies of this report to the Administrator of CMS and other interested parties. 
We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To examine the change in retail prices for prescription drugs frequently used by Medicare beneficiaries and other individuals with health insurance, we used data from the Blue Cross and Blue Shield (BCBS) Federal Employee Program (FEP) to select the 100 prescription drugs most frequently dispensed through retail pharmacies in 2003 for BCBS FEP Medicare enrollees and the 100 most frequently dispensed for BCBS FEP non-Medicare enrollees. Combined, these two lists included 133 unique drugs. We obtained average monthly usual and customary (U&C) prices reported by retail pharmacies to Pennsylvania's Pharmaceutical Assistance Contract for the Elderly (PACE) program from January 2000 through December 2004 and New York's Elderly Pharmaceutical Insurance Coverage (EPIC) program from August 2000 through December 2004. We collected prices based on a specific strength, dosage form, and common number of units (such as pills), typically for a 30-day supply. Based on combined PACE and EPIC data, 96 of the 133 drugs we selected had prices reported for every month from January 2000 through December 2004. We analyzed price trends on a monthly basis from January 2000 through December 2004 for these 96 drugs. Of the 96 drugs, 75 were among those most frequently used by BCBS FEP Medicare enrollees, and 76 were among those most frequently used by BCBS FEP non-Medicare enrollees. Fifty-five of the 96 drugs were frequently used by both BCBS Medicare enrollees and non-Medicare enrollees. 
We first determined the total number of prescriptions in 2003 for the drugs we selected dispensed to BCBS FEP Medicare enrollees and the total number of prescriptions dispensed to BCBS FEP non-Medicare enrollees. Separately for drugs frequently used by Medicare and by non-Medicare enrollees, we calculated the share of the total number of BCBS FEP prescriptions attributed to each drug. The price of each drug was then weighted by its relative share of total Medicare or total non-Medicare prescriptions in 2003 to calculate the average price for frequently used Medicare drugs and the average price for frequently used non-Medicare drugs for each month from January 2000 through December 2004. We standardized these averages to create a Medicare price index and a non-Medicare price index, each with a value of 100 as of January 2000.

We also separately analyzed monthly trends in U&C prices for brand and generic drugs frequently used by BCBS FEP enrollees. Of the 96 drugs, 50 were brand drugs and 46 were generic drugs. Similar to our calculation of Medicare and non-Medicare price indexes, we calculated indexes for brand drugs and generic drugs based on each drug's share of the total number of brand or generic prescriptions dispensed to BCBS FEP enrollees in 2003.

To examine the change in retail prices for frequently used drugs compared to other drug price benchmarks, we compared an index based on the U&C prices reported by PACE and EPIC for 50 brand drugs to indexes based on the average manufacturer prices (AMP) and average wholesale prices (AWP) for these 50 drugs on a quarterly basis from the first quarter of 2000 through the fourth quarter of 2004. The Centers for Medicare & Medicaid Services (CMS) requires manufacturers to report AMP within 30 days of the end of each calendar quarter. Manufacturers submit AWPs on a periodic basis to publishers of drug-pricing data, such as First DataBank.
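The prescription-weighted index construction described above can be sketched as follows; the two drugs, prices, and prescription counts are invented placeholders, not data from the study:

```python
# Monthly U&C prices per drug (hypothetical), and each drug's 2003
# prescription count, used as its fixed weight in the index.
prices = {
    "DrugA": [10.00, 10.50, 11.00],
    "DrugB": [5.00, 5.00, 5.10],
}
prescriptions = {"DrugA": 300, "DrugB": 100}

total_rx = sum(prescriptions.values())
weights = {d: n / total_rx for d, n in prescriptions.items()}

# Prescription-weighted average price each month, standardized so the
# first month (January 2000 in the report's indexes) equals 100.
months = len(next(iter(prices.values())))
avg = [sum(weights[d] * prices[d][m] for d in prices) for m in range(months)]
index = [100 * a / avg[0] for a in avg]

print([round(v, 2) for v in index])  # → [100.0, 104.29, 108.86]
```

Because the weights are fixed at 2003 prescription shares, the index tracks pure price change rather than shifts in utilization.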
Using the National Drug Codes (NDC) reported by PACE and EPIC for the U&C prices for the 50 brand drugs, we obtained per unit AMPs from CMS and per unit AWPs from First DataBank associated with each NDC. For each drug, we calculated a quarterly AMP and a quarterly AWP by multiplying the per unit price by the most common number of units for a 30-day supply. We created an AMP and AWP index by weighting the 50 brand drugs by the number of prescriptions in 2003 from BCBS FEP. Similarly, we recalculated the U&C price for the 50 brand drugs on a quarterly basis to make comparisons to AMP and AWP. We also determined how much each drug's change in price contributed to the overall change in price for the 50 brand drugs for AMPs, AWPs, and U&C prices. We measured the share each drug contributed to the overall index by comparing the ratio of (1) each drug's price change from January 2000 through December 2004 multiplied by its weight based on BCBS FEP prescriptions, to (2) the sum of all drugs' price changes multiplied by their associated weights. Our analyses are limited to drugs most frequently used by Medicare beneficiaries and by non-Medicare enrollees in the 2003 BCBS FEP. Additionally, our analyses using U&C prices are limited to prices reported by retail pharmacies in Pennsylvania to the PACE program and by retail pharmacies in New York to the EPIC program. We reviewed the reliability of data from BCBS FEP, CMS, First DataBank, EPIC, and PACE, including screening for outlier prices in the PACE and EPIC data and ensuring that the price trends and frequently used drugs were consistent with other data sources. We determined that these data were sufficiently reliable for our purposes. We performed our work from April 2004 through July 2005 in accordance with generally accepted government auditing standards. Table 1 lists the 96 drugs used in constructing monthly U&C price indexes from January 2000 through December 2004.
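The contribution-share ratio described above amounts to the following calculation; the price changes and weights shown are hypothetical, not the actual GAO figures:

```python
# Sketch of the contribution-share calculation: each drug's weighted price
# change as a fraction of the total weighted price change across all drugs.
# Price changes and weights are hypothetical.

def contribution_shares(price_changes, weights):
    weighted = {drug: price_changes[drug] * weights[drug]
                for drug in price_changes}
    total = sum(weighted.values())
    return {drug: value / total for drug, value in weighted.items()}

# Hypothetical dollar change in price per drug, Jan. 2000 - Dec. 2004
price_changes = {"drug_a": 18.00, "drug_b": 2.00, "drug_c": 25.00}
# Hypothetical weights from each drug's share of 2003 BCBS FEP prescriptions
weights = {"drug_a": 0.25, "drug_b": 0.60, "drug_c": 0.15}

shares = contribution_shares(price_changes, weights)
# drug_a contributes the largest share despite a smaller raw price change
# than drug_c, because its prescription weight is larger
```

This illustrates how a heavily prescribed drug with a moderate price increase can drive more of the overall index change than a drug with a larger increase but fewer prescriptions.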
Fifty of the 96 drugs are brand drugs and were also used in examining price changes in AMP, AWP, and U&C on a quarterly basis from first quarter 2000 through fourth quarter 2004. Of the 96 drugs, 75 were frequently used by Medicare beneficiaries and 76 were frequently used by non-Medicare enrollees, with 55 of these drugs frequently used by both Medicare beneficiaries and non-Medicare enrollees. In addition to the contact named above, John E. Dicken, Director; Rashmi Agarwal; Jessica L. Cobert; Martha Kelly; Matthew L. Puglisi; and Daniel S. Ries made key contributions to this report.
Prescription drug spending has been the fastest growing segment of national health expenditures. As the federal government assumes greater financial responsibility for prescription drug expenditures with the introduction of Medicare part D, federal policymakers are increasingly concerned about prescription drug prices. GAO was asked to examine the change in retail prices and other pricing benchmarks for drugs frequently used by Medicare beneficiaries and other individuals with health insurance from 2000 through 2004. To examine the change in retail prices from 2000 through 2004, we obtained usual and customary (U&C) prices from two state pharmacy assistance programs for drugs frequently used by Medicare beneficiaries and non-Medicare enrollees in the 2003 Blue Cross and Blue Shield (BCBS) Federal Employee Program (FEP). The U&C price is the price an individual without prescription drug coverage would pay at a retail pharmacy. Additionally, we compared the change in U&C prices for brand drugs from 2000 through 2004 to the change in two pricing benchmarks: average manufacturer price (AMP), which is the average of prices paid to manufacturers by wholesalers for drugs distributed to the retail pharmacy class of trade, and average wholesale price (AWP), which represents the average of list prices that a manufacturer suggests wholesalers charge pharmacies. We found the average U&C prices at retail pharmacies reported by two state pharmacy assistance programs for a 30-day supply of 96 drugs frequently used by BCBS FEP Medicare and non-Medicare enrollees increased 24.5 percent from January 2000 through December 2004. 
Of the 96 drugs:
* Twenty drugs accounted for nearly two-thirds of the increase in the U&C price index;
* the increase in average U&C prices for 75 prescription drugs frequently used by Medicare beneficiaries was similar to the increase for 76 prescription drugs frequently used by non-Medicare enrollees; and
* the average U&C prices for 50 frequently used brand prescription drugs increased three times as much as the average for 46 frequently used generic prescription drugs.

AWPs increased at a faster rate than AMPs and U&C prices for the 50 frequently used brand drugs from first quarter 2000 through fourth quarter 2004. Ten drugs in each index accounted for almost 50 percent of the increase for AMP, AWP, and U&C prices. Eight of these 10 drugs were consistent across the three price indexes. The Centers for Medicare & Medicaid Services (CMS), two state pharmacy assistance programs, and BCBS FEP reviewed a draft of this report. While CMS noted that U&C and AWP do not reflect discounts in a drug's price, this report's focus was to examine price trends rather than price levels. Technical comments were incorporated as appropriate.
The Airport and Airway Trust Fund was established by the Airport and Airway Revenue Act of 1970 (P.L. 91-258) to finance FAA's investments in the airport and airway system, such as construction and safety improvements at airports and technological upgrades to the air traffic control system. Historically, about 87 percent of the tax revenues for the Trust Fund have come from a tax on domestic airline tickets. The remainder of the Trust Fund is financed by a $6 per passenger charge on flights departing the United States for international destinations, a 6.25-percent charge on the amount paid to transport domestic cargo by air, a 15-cents-per-gallon charge on purchases of noncommercial aviation gasoline, and a 17.5-cents-per-gallon charge on purchases of noncommercial jet fuel. FAA is responsible for a wide range of functions, from certifying new aircraft and inspecting the existing fleet to providing air traffic services, such as controlling takeoffs and landings and managing the flow of aircraft between airports. Over the past decade, the growth of domestic and international air travel has greatly increased the demand for FAA's services. At the same time, FAA must operate in an environment of increasingly tight federal resources. In this context, we have generally supported FAA's consideration of charging commercial users for the agency's services. In particular, we have previously suggested that FAA examine the feasibility of charging fees to new airlines for the agency's certification activities and to foreign airlines for flights that pass through our nation's airspace. Similarly, we have reported our view that the various commercial users of the nation's airspace and airports should pay their fair share of the costs that they impose on the system. In addition, to ensure full cost recovery, we have suggested that FAA consider raising the fees that it charges for the certification and surveillance of foreign repair stations.
Because the various taxes that make up the Trust Fund are not based on factors that directly relate to the system's costs, the extent to which the current financing system charges users according to their demand on the system is open to question. For example, two airlines flying the same number of passengers on the same type of aircraft from Minneapolis, Minnesota, to Des Moines, Iowa, at the same time of day will impose the same costs on the airport and air traffic control system. However, because the ticket tax is based on the fares paid, the airline that charges the lower fares in this example will pay less for the system's use, even though both airlines had the same number of takeoffs and landings and flew the same number of passengers, the same type of aircraft, and the same distance. Motivated by their belief that the current system unfairly subsidizes their low-fare competitors, the nation's seven largest airlines have proposed that the ticket tax be replaced by user fees on domestic operations. Under the proposal, airlines would pay fees for domestic operations according to the following three-part formula: (1) $4.50 per originating passenger, (2) $2 per seat on jet aircraft with 71 or more seats and $1 per seat on jets and turboprop aircraft with 70 or fewer seats, and (3) $0.005 per nonstop passenger mile. By using two factors in particular--originating passengers and nonstop passenger miles--the formula tends to favor the larger airlines, which operate hub-and-spoke systems, at the expense of the low-fare and small airlines, which tend to operate point-to-point systems. This relationship can best be shown by example. Consider the two possible routings between St. Louis, Missouri, and Orlando, Florida, shown in figure 1. The "hubbing" airline first takes the passenger to a hub, such as Chicago's O'Hare Airport, to connect to another flight to Orlando. The point-to-point carrier takes the St. Louis passenger nonstop to Orlando. 
The airline that lands at O'Hare to transfer the passenger to another flight to Orlando has twice as many takeoffs and landings as the airline that flies nonstop between St. Louis and Orlando. As a result, the costs imposed by the hubbing airline on the air traffic control system are greater. However, by charging $4.50 per "originating" passenger, the airline that flies the passenger from St. Louis to Orlando via O'Hare would pay the same amount as the airline that flies the passenger nonstop between St. Louis and Orlando, even though the hubbing carrier puts a greater burden on the system. In addition, by charging $0.005 per "nonstop passenger mile"--or the straight-line distance between the points of origin and destination--the formula does not charge the hubbing airlines for the circuitous routings that are common to their hub-and-spoke operations. As a result, the airline transporting a passenger 297 miles from St. Louis to O'Hare and then flying that passenger 1,157 miles to Orlando would be charged the same as an airline flying a passenger nonstop from St. Louis to Orlando, even though the hubbing carrier placed a greater burden on the air traffic control system. Because the seven largest airlines operate hub-and-spoke systems and most low-fare and small airlines operate point-to-point systems, the proposed fee system would shift the financial burden away from the larger airlines and onto their competitors. For example, as figure 2 shows, on the basis of FAA's traffic forecasts for fiscal year 1997, if the ticket tax were replaced by this proposal, the cost to the nation's seven largest airlines would decrease by nearly $600 million, while the cost to Southwest Airlines, America West, and other low-fare and small airlines would increase by nearly $550 million. In addition, the coalition's proposal would charge commuter carriers $1 per seat while charging airlines $2 per seat. 
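The routing comparison can be made concrete with a short sketch. Only the originating-passenger and nonstop-passenger-mile components of the formula are modeled here; the straight-line St. Louis-Orlando mileage and the passenger load are assumed figures for illustration (the report gives only the segment mileages via O'Hare):

```python
# Passenger-based components of the coalition's proposed fee formula:
# $4.50 per originating passenger plus $0.005 per nonstop passenger mile,
# where "nonstop miles" is the straight-line origin-to-destination distance.
def passenger_fee(originating_passengers, nonstop_miles):
    return (4.50 * originating_passengers
            + 0.005 * originating_passengers * nonstop_miles)

STL_MCO_STRAIGHT_LINE = 880  # assumed straight-line miles, for illustration
passengers = 120             # assumed passenger load, for illustration

# Both routings are charged on the same origin-destination pair, so the fee
# is identical even though the hub routing flies 297 + 1,157 segment miles
# and doubles the takeoffs and landings.
nonstop_charge = passenger_fee(passengers, STL_MCO_STRAIGHT_LINE)
hub_charge = passenger_fee(passengers, STL_MCO_STRAIGHT_LINE)
assert hub_charge == nonstop_charge
```

The equality is the point of the report's example: the formula prices the itinerary's endpoints, not the burden the routing actually places on the air traffic control system.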
Most major commuter carriers are owned by or affiliated with one of the coalition airlines; Continental Express, for example, is a wholly-owned subsidiary of Continental Airlines. As a result, by charging commuter carriers less per seat, the proposal would provide the coalition airlines with an additional benefit. Implementing a proposal that would shift nearly $600 million in costs from one segment of the industry to another could have substantial competitive impacts and needs to be studied first. While the ticket tax might provide low-fare airlines with a competitive advantage, other public policies favor some large carriers. For example, a few large airlines control nearly all the takeoff and landing slots at the four "slot-controlled" airports, which gives them an advantage over their competitors. Simply eliminating the potential "subsidy" to low-fare airlines created by the ticket tax, while leaving the other policies in place that provide some large airlines with a competitive advantage, might result in higher fares and a reduction in service options for consumers. In addition, the proposal as written could shift costs dramatically, with different effects across regions. On the one hand, consumers in regions such as the West and Southwest that have benefited from the entry of low-fare airlines could pay more than they do under the ticket tax. On the other hand, consumers in the East and Upper Midwest, who have not experienced the entry of low-fare airlines to the same extent, could pay relatively less. Nevertheless, under any fee system that incorporated common measures of the system's usage, such as departures and aircraft miles flown, it is likely that the relative share paid by low-fare airlines would increase compared with what they pay now under the ticket tax. In 1995, for example, Southwest accounted for 6.3 percent of the airlines' payments under the ticket tax.
In that year, Southwest accounted for 10 percent of the industry's departures and 7 percent of the aircraft miles flown. However, if only these two measures were considered, Southwest's share would not increase to the same extent as under the large airlines' proposal. Under the coalition's proposal, Southwest's share of the industry's contribution to the Trust Fund would increase to 10.3 percent. A more precise fee system, however, would account for those costs incurred by FAA in managing the airport and airway system, which vary greatly by the amount, type, and timing of various airline operations. For example, the air traffic control costs imposed by a flight arriving at 5 p.m. at New York's congested LaGuardia Airport--regardless of whether that flight involves a large jet or a commuter aircraft--are much greater than those imposed by a flight arriving at noon at the noncongested airport in Des Moines. Likewise, hubbing operations at the nation's largest airports increase the peak service demands on the airway system and increase FAA's operating and staffing costs. Neither the 10-percent ticket tax nor the largest airlines' proposal accounts for these factors. Determining how best to finance FAA involves complex issues, requiring careful examination. In addition, an evaluation of alternative financing for FAA would need to involve the Department of Transportation's (DOT) Office of Aviation and International Affairs. This office is responsible for evaluating the potential competitive implications of any changes to our aviation system. By changing what each airline pays, any new funding mechanism will have ramifications for airline competition that DOT would be better positioned to examine for the Congress than FAA. Likewise, DOT may also be better positioned than FAA to determine the extent to which a new financing mechanism might otherwise affect the aviation system.
Recognizing the complexities associated with determining how best to finance FAA, the Congress recently directed that the issues involved be studied further. Specifically, the Federal Aviation Reauthorization Act (P.L. 104-264), enacted in October 1996, requires FAA to contract with an independent firm to assess the agency's funding needs and the costs occasioned by each segment of the aviation industry on the airport and airway system. This assessment, which the contractor is required to complete by February 1997, will be a critical piece in designing a new fee system if the Congress ultimately decides to replace the ticket tax. The 1996 act also created the National Civil Aviation Review Commission, which is charged with studying how best to finance FAA in light of the contractor's independent assessment of funding needs and system costs. The commission is to have 21 members--13 appointed by the Secretary of Transportation and 8 appointed by the Congress--and represent "a balanced view of the issues important to general aviation, major air carriers, air cargo carriers, regional air carriers, business aviation, airports, aircraft manufacturers, the financial community, aviation industry workers, and airline passengers." The commission must report its findings and recommendations to the Secretary of Transportation within 6 months of receiving the contractor's independent assessment--in other words, by August 1997. After receiving the commission's report, the Secretary of Transportation is required to consult with the Secretary of the Treasury and report to the Congress by October 1997 on the Administration's recommendations for funding the needs of the aviation system through 2002. We provided DOT with a draft copy of this report for review and comment. We discussed the draft with DOT officials, including the Deputy Assistant Secretary for Aviation and International Affairs, who stated that the agency was in complete agreement with the report. 
DOT also provided us with two comments, which we incorporated where appropriate. First, the agency noted that the coalition's proposal also benefits the largest airlines by charging commuter carriers $1 per seat while charging airlines $2 per seat. DOT pointed out that because most of the commuter carriers are owned by or affiliated with one of the coalition airlines, this differential would provide the coalition airlines with an additional benefit. Second, in our draft report, we stated that FAA was completing work on its own cost allocation study, which the agency expected to release by the end of the year. DOT commented, however, that because of the recent congressional mandate that FAA contract with an independent firm to undertake such an assessment, FAA would likely not release its study. We obtained information for this report from (1) documents and data provided by DOT, FAA, and the coalition airlines and (2) our discussions with representatives of the coalition as well as the executives of several large carriers, including the CEO of American Airlines, and representatives of low-fare and other smaller airlines, including the CEO of Southwest Airlines. For our analysis of the implications of reinstating the taxes, we used the rates in effect as of November 1996. For FAA's funding levels, we used the agency's enacted fiscal year 1997 budget. We performed our review from June through November 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Transportation; the Acting Administrator, FAA; the Director, Office of Management and Budget; and other interested parties. We will send copies to others upon request. If you or your staff have any questions, please call me at (202) 512-2834. Major contributors to this report are listed in appendix II. 
During fiscal years 1990 through 1996, the Airport and Airway Trust Fund financed 100 percent of three FAA accounts--Grants-in-Aid for Airports (the Airport Improvement Program); Facilities and Equipment; and Research, Engineering, and Development. Also, during this period, the Trust Fund, with the exception of fiscal year 1990, financed about half of FAA's fourth account--Operations--and the remainder of this account was financed by the General Fund. Under FAA's fiscal year 1997 budget, as enacted, the Trust Fund would continue to finance 100 percent of three accounts and would finance one-third of the Operations account if the taxes that finance the Trust Fund are extended beyond December 31, 1996. Table I.1 shows FAA's funding sources for fiscal years 1990 through 1996 and FAA's fiscal year 1997 budget as enacted. Table I.1: FAA Funding, Fiscal Years 1990-97 [Table I.1 data not reproduced: dollar amounts in millions by funding source for fiscal years 1990 through 1997.] Under Public Law 104-205, FAA must use up to $75 million in user fees charged for air traffic control and related services to nongovernmental aircraft that fly over but do not take off or land in the United States in lieu of General Fund financing. If the taxes that finance the Trust Fund are not extended beyond December 31, 1996, the Trust Fund's balance available (referred to as the uncommitted balance) will be below the level needed to finance FAA's fiscal year 1997 budget as enacted. Specifically, the Trust Fund will be about $1 billion short of the funding needed to finance its portion of FAA's fiscal year 1997 budget. Therefore, the total funding commitments that FAA can make are reduced by this amount.
According to FAA's estimates, the Trust Fund could provide about $4.28 billion for FAA's budget and $65 million for non-FAA expenditures, thereby bringing the Trust Fund's total contribution to about $4.35 billion. However, FAA's budget as enacted calls for $5.31 billion from the Trust Fund and $3.26 billion from the General Fund. FAA also estimates that, under current law, the Trust Fund balance available will be $0 by early July 1997 if the taxes are not reinstated (or the tax on airline tickets is not replaced by user fees). Table I.2 shows the Trust Fund's enacted share of FAA and non-FAA budgets and the potential funding shortfall for fiscal year 1997. Also, the authority to transfer the tax receipts from the Treasury to the Trust Fund will expire on December 31, 1996. As a result, some taxes imposed late in 1996 will not be deposited in the Treasury until 1997 and, therefore, cannot be transferred to the Trust Fund. FAA estimates that this amount will total about $300 million. If the Congress provides transfer authority for moving this $300 million to the Trust Fund, then FAA estimates that the Trust Fund balance available to finance FAA would not reach $0 until late July 1997 and the potential shortfall would be reduced to $724 million. FAA officials estimate that in order for the Trust Fund to finance $5.31 billion of FAA's fiscal year 1997 budget, the taxes would need to be reinstated by July 1997. However, according to FAA officials, reinstatement by this date allows for almost no margin of error in the agency's estimates of tax revenue. Consequently, if revenue is less than estimated, congressional action would be required to obtain additional financing from the General Fund. Also, the Trust Fund balance available to finance FAA's fiscal year 1998 budget would depend on when the taxes and transfer authority are reinstated, as shown in figure I.1. Major contributors to this report: Charles R. Chambers, Gerald L. Dillingham, Timothy F. Hannegan, Julian L. King, Robert E. Levin, Francis P. Mulvey, and John T. Noto.
Pursuant to a congressional request, GAO examined the proposal by a coalition of the seven largest U.S. airlines to replace the ticket tax with user fees, focusing on: (1) whether the ticket tax should be replaced by a different fee system; (2) what the potential competitive impacts of the fees proposed by the coalition airlines would be; (3) what factors need to be considered if a new fee system were to be developed; and (4) the implications on the Federal Aviation Administration's (FAA) budget of reinstating or not reinstating the taxes that finance the Airport and Airway Trust Fund. GAO found that: (1) because the ticket tax is based on the fares paid by travelers and not an allocation of actual FAA costs, it may not fairly allocate the system's costs among the users; (2) the coalition airlines' proposal to replace the ticket tax with user fees only incorporates factors that would substantially increase the fees paid by low-fare and small airlines and decrease the fees paid by the seven coalition airlines; (3) the proposal would dramatically redistribute the cost burden among airlines and could have substantial implications for domestic competition; (4) any replacement system for the ticket tax would need to account for the wide range of costs incurred by FAA in managing the airport and airway system; (5) the views of all affected parties, not just any particular group of airlines, would need to be included in assessing the mechanisms for financing the airport and airway system; and (6) Congress established a commission to study how best to meet FAA financing needs which will help ensure that, in the long term, FAA has a secure funding source, commercial users of the system pay their fair share, and a strong, competitive airline industry continues to exist.
DHS and its components have used various mechanisms over time to coordinate border security operations. In September 2013, we reported that the overlap in geographic and operational boundaries among DHS components underscored the importance of collaboration and coordination among these components. To help address this issue and mitigate operational inflexibility, DHS components, including those with border security-related missions such as CBP, Coast Guard, and ICE, employed a variety of collaborative mechanisms to coordinate their missions and share information. These mechanisms had both similarities and differences in how they were structured and on which missions or threats they focused, among other things, but they all had the overarching goal of increasing mission effectiveness and efficiencies. For example: In 2011, the Joint Targeting Team originated as a CBP-led partnership among the Del Rio area of Texas, including Border Patrol, CBP's Office of Field Operations, and ICE. This mechanism was expanded to support the South Texas Campaign (STC) mission to disrupt and dismantle transnational criminal organizations, and its membership grew to include additional federal, state, local, tribal, and international law enforcement agencies. In 2005, the first Border Enforcement Security Task Force (BEST) was organized and led by ICE, in partnership with CBP, in Laredo, Texas, and additional units were subsequently formed along both the southern and northern borders. The BESTs' mission was to identify, disrupt, and dismantle existing and emerging threats at U.S. land, sea, and air borders. In 2011, CBP, Coast Guard, and ICE established Regional Coordinating Mechanisms (ReCoM) to utilize the fusion of intelligence, planning, and operations to target the threat of transnational terrorist and criminal acts along the coastal border. Coast Guard served as the lead agency responsible for planning and coordinating among DHS components. 
In June 2014, we reported on STC border security efforts along with the activities of two additional collaborative mechanisms: (1) the Joint Field Command (JFC), which had operational control over all CBP resources in Arizona; and (2) the Alliance to Combat Transnational Threats (ACTT), which was a multiagency law enforcement partnership in Arizona. We found that through these collaborative mechanisms, DHS and CBP had coordinated border security efforts in information sharing, resource targeting and prioritization, and leveraging of assets. For example, to coordinate information sharing, the JFC maintained an operations coordination center and clearinghouse for intelligence information. Through the ACTT, interagency partners worked jointly to target individuals and criminal organizations involved in illegal cross-border activity. The STC leveraged assets of CBP components and interagency partners by shifting resources to high-threat regions and conducting joint operations. More recently, the Secretary of Homeland Security initiated the Southern Border and Approaches Campaign Plan in November 2014 to address the region's border security challenges by commissioning three DHS joint task forces to, in part, enhance collaboration among DHS components, including CBP, ICE, and Coast Guard. Two of DHS's joint task forces are geographically based, Joint Task Force - East and Joint Task Force - West, and one which is functionally based, Joint Task Force - Investigations. Joint Task Force - West is separated into geographic command corridors with CBP as the lead agency responsible for overseeing border security efforts to include: Arizona, California, New Mexico/West Texas, and South Texas. Coast Guard is the lead agency responsible for Joint Task Force - East, which is responsible for the southern maritime and border approaches. 
ICE is the lead agency responsible for Joint Task Force - Investigations, which focuses on investigations in support of Joint Task Force - West and Joint Task Force - East. Additionally, DHS has used these task forces to coordinate various border security activities, such as use of Predator B UAS, as we reported in February 2017 and discuss below. In September 2013, we reported on successful collaborative practices and challenges identified by participants from eight border security collaborative field mechanisms we visited--the STC, four BESTs, and three ReCoMs. Their perspectives were generally consistent with the seven key issues to consider when implementing collaborative mechanisms that we identified in our 2012 report on interagency collaboration. Among participants whom we interviewed, there was consensus that certain practices facilitated more effective collaboration, which, according to participants, contributed to the groups' overall successes. For example, participants identified three of the seven categories of practices as keys to success: (1) positive working relationships/communication, (2) sharing resources, and (3) sharing information. Specifically, in our interviews, BEST officials stated that developing trust and building relationships helped participants respond quickly to a crisis, and communicating frequently helped participants eliminate duplication of efforts. Participants from the STC, BESTs, and ReCoMs also reported that having positive working relationships built on strong trust among participants was a key factor in their law enforcement partnerships because of the sensitive nature of law enforcement information, and the risks posed if it is not protected appropriately. In turn, building positive working relationships was facilitated by another collaborative factor identified as important by a majority of participants: physical collocation of mechanism stakeholders.
Specifically, participants from the mechanisms focused on law enforcement investigations, such as the STC and BESTs, reported that being physically collocated with members from other agencies was important for increasing the groups' effectiveness. Participants from the eight border security collaborative field mechanisms we visited at the time also identified challenges or barriers that affected their collaboration across components and made it more difficult. Specifically, participants identified three barriers that most frequently hindered effective collaboration within their mechanisms: (1) resource constraints, (2) rotation of key personnel, and (3) lack of leadership buy-in. For example, when discussing resource issues, a majority of participants said funding for their group's operation was critical and identified resource constraints as a challenge to sustaining their collaborative efforts. These participants also reported that since none of the mechanisms receive dedicated funding, the participating federal agencies provided support for their respective representatives assigned to the selected mechanisms. Also, there was a majority opinion among mechanism participants we visited that rotation of key personnel and lack of leadership buy-in hindered effective collaboration within their mechanisms. For example, STC participants stated that the rotation of key personnel hindered the STC's ability to develop and retain more seasoned personnel with expertise in investigations and surveillance techniques. In addition, in June 2014, we identified coordination benefits and challenges related to the JFC, STC, and ACTT. For example, DHS and CBP leveraged the assets of CBP components and interagency partners through these mechanisms to conduct a number of joint operations and deploy increased resources to various border security efforts.
In addition, these mechanisms provided partner agencies with increased access to specific resources, such as AMO air support and planning assistance for operations. Officials involved with the JFC, STC, and ACTT also reported collaboration challenges at that time. For example, officials from 11 of 12 partner agencies we interviewed reported coordination challenges related to the STC and ACTT, such as limited resource commitments by participating agencies and lack of common objectives. In particular, one partner with the ACTT noted that there had been operations in which partners did not follow through with the resources they had committed during the planning stages. Further, JFC and STC officials cited the need to improve the sharing of best practices across the various collaborative mechanisms, and CBP officials we interviewed identified opportunities to more fully assess how the mechanisms were structured. We recommended that DHS establish written agreements for some of these coordination mechanisms and a strategic-level oversight mechanism to monitor interagency collaboration. DHS concurred and these recommendations were closed as not implemented due to planned changes in the collaborative mechanisms. In February 2017, we found that as part of using Predator B aircraft to support other government agencies, CBP established various mechanisms to coordinate Predator B operations. CBP's Predator B aircraft are national assets used primarily for detection and surveillance during law enforcement operations, independently and in coordination with federal, state, and local law enforcement agencies throughout the United States. For example, at AMO National Air Security Operations Centers (NASOC) in Arizona, North Dakota, and Texas, personnel from other CBP components are assigned to support and coordinate mission activities involving Predator B operations. 
Border Patrol agents assigned to NASOCs assist with directing agents and resources to support law enforcement operations and with collecting information on asset assists provided by Predator B operations. Further, two of DHS's joint task forces also help coordinate Predator B operations. Specifically, Joint Task Force - West, Arizona and Joint Task Force - West, South Texas coordinate air asset tasking and operations, including Predator B operations, and assist in transmitting requests for Predator B support and communicating with local field units, such as Border Patrol stations and AMO air branches, during operations. In addition to these mechanisms, CBP has documented procedures for coordinating Predator B operations among its supported or partner agencies in Arizona by developing a standard operating procedure for coordination of Predator B operations through its NASOC in Arizona. However, CBP has not documented procedures for coordination of Predator B operations among its supported agencies through its NASOCs in Texas and North Dakota. CBP has also established national policies for its Predator B operations that include policies for prioritization of Predator B missions and processes for submission and review of Predator B mission or air support requests. However, these national policies do not include coordination procedures specific to Predator B operating locations or NASOCs. Without documenting its procedures for coordination of Predator B operations with supported agencies, CBP does not have reasonable assurance that practices at NASOCs in Texas and North Dakota align with existing policies and procedures for joint operations with other government agencies. Among other things, we recommended that CBP develop and document procedures for Predator B coordination among supported agencies in all operating locations.
CBP concurred with our recommendation and stated that it plans to develop and implement an operations coordination structure and document its coordination procedures for Predator B operations through Joint Task Force - West, South Texas, and to document its coordination procedures through its NASOC in Grand Forks, North Dakota. In January 2017, we reported that Border Patrol agents use the CDS to classify each alien apprehended illegally crossing the border and then apply one or more post-apprehension consequences determined to be the most effective and efficient at discouraging recidivism, that is, further apprehensions for illegal cross-border activity. We found that Border Patrol uses an annual recidivism rate to measure performance of the CDS; however, methodological weaknesses limit the rate's usefulness for assessing CDS effectiveness. Specifically, Border Patrol's methodology for calculating recidivism--the percent of aliens apprehended multiple times along the southwest border within a fiscal year--does not account for an alien's apprehension history over multiple years. In addition, Border Patrol's calculation neither accounts for nor excludes apprehended aliens for whom there is no ICE record of removal from the United States. Our analysis of Border Patrol and ICE data showed that when calculating the recidivism rate for fiscal years 2014 and 2015, Border Patrol included in the total number of aliens apprehended tens of thousands of aliens for whom ICE did not have a record of removal after apprehension and who may have remained in the United States without an opportunity to recidivate. Specifically, our analysis of ICE enforcement and removal data showed that about 38 percent of the aliens Border Patrol apprehended along the southwest border in fiscal years 2014 and 2015 may have remained in the United States as of May 2016.
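The methodological critique here--that a within-fiscal-year recidivism rate misses repeat apprehensions across years and counts aliens with no record of removal--can be sketched in a toy calculation. The data and function names below are hypothetical illustrations, not Border Patrol's actual data or system:

```python
from collections import defaultdict

# Hypothetical apprehension records (not actual Border Patrol data):
# each record is (alien_id, fiscal_year_of_apprehension, has_removal_record).
apprehensions = [
    ("A", 2014, True), ("A", 2014, True),   # repeat within FY2014
    ("B", 2014, True), ("B", 2015, True),   # repeat only across fiscal years
    ("C", 2015, False),                     # no ICE removal record
    ("D", 2015, True),
]

def single_year_rate(records, fy):
    """Percent of aliens apprehended more than once within a single
    fiscal year (analogous to a within-year methodology)."""
    counts = defaultdict(int)
    for alien, year, _ in records:
        if year == fy:
            counts[alien] += 1
    total = len(counts)
    repeat = sum(1 for n in counts.values() if n > 1)
    return 100.0 * repeat / total if total else 0.0

def multi_year_rate(records):
    """Percent of aliens apprehended more than once across all years,
    excluding aliens with no removal record, who may have remained in
    the country without an opportunity to recidivate."""
    counts = defaultdict(int)
    for alien, _, removed in records:
        if removed:
            counts[alien] += 1
    total = len(counts)
    repeat = sum(1 for n in counts.values() if n > 1)
    return 100.0 * repeat / total if total else 0.0
```

On this synthetic data, the within-fiscal-year rate is 50 percent for 2014 and 0 percent for 2015, while the multi-year rate over removed aliens is about 67 percent: alien B's cross-year repeat is captured, and alien C (no removal record) is excluded from the denominator.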
To better assess the effectiveness of CDS implementation and border security efforts, we recommended that, among other things, (1) Border Patrol strengthen the methodology for calculating recidivism, such as by using an alien's apprehension history beyond one fiscal year and excluding aliens for whom there is no record of removal; and (2) the Assistant Secretary of ICE and Commissioner of CBP collaborate on sharing immigration enforcement and removal data to help Border Patrol account for the removal status of apprehended aliens in its recidivism rate measure. CBP did not concur with our first recommendation and stated that CDS uses annual recidivism rate calculations to measure annual change, which is not intended to be, or used as, a performance measure for CDS, and that Border Patrol annually reevaluates the CDS to ensure that the methodology for calculating recidivism provides the most effective and efficient post-apprehension outcomes. We continue to believe that Border Patrol should strengthen its methodology for calculating recidivism, as the recidivism rate is used as a performance measure by Border Patrol and DHS. DHS concurred with our second recommendation, but stated that collecting and analyzing ICE removal and enforcement data would not be advantageous to Border Patrol for CDS purposes since CDS is specific to Border Patrol. However, DHS also stated that Border Patrol and ICE have discussed the availability of the removal and enforcement data and ICE has agreed to provide Border Patrol with these data, if needed. DHS requested that we consider this recommendation resolved and closed. While DHS's planned actions are a positive step toward addressing our recommendation, DHS needs to provide documentation of completion of these actions for us to consider the recommendation closed as implemented. In February 2017, we reported on CBP's efforts to secure the border between U.S.
ports of entry using tactical infrastructure, including fencing, gates, roads, bridges, lighting, and drainage. For example, border fencing is intended to benefit border security operations in various ways, according to Border Patrol officials, including supporting Border Patrol agents' ability to execute essential tasks, such as identifying illicit cross-border activities. CBP collects data that could help provide insight into how border fencing contributes to border security operations, including the location of illegal entries. However, CBP has not developed metrics that systematically use these data, among other data it collects, to assess the contributions of its pedestrian and vehicle border fencing to its mission. For example, CBP could potentially use these data to determine the extent to which border fencing diverts illegal entrants into more rural and remote environments, and border fencing's impact, if any, on apprehension rates over time. Developing metrics to assess the contributions of fencing to border security operations could better position CBP to make resource allocation decisions with the best information available to inform competing mission priorities and investments. To ensure that Border Patrol has the best available information to inform future investments and resource allocation decisions among tactical infrastructure and other assets Border Patrol deploys for border security, we recommended, among other things, that Border Patrol develop metrics to assess the contributions of pedestrian and vehicle fencing to border security along the southwest border using the data Border Patrol already collects and apply this information, as appropriate, when making investment and resource allocation decisions. DHS concurred with our recommendation and plans to develop metrics and incorporate them into the Border Patrol's Requirements Management Process. These actions, if implemented effectively, should address the intent of our recommendation.
In February 2017, we found that CBP has taken actions to assess the effectiveness of its Predator B UAS and tactical aerostats for border security, but could improve its data collection efforts. CBP collects a variety of data on its use of the Predator B UAS, tactical aerostats, and TARS, including data on their support for the apprehension of individuals, seizure of drugs, and other events (asset assists). For Predator B UAS, we found that mission data--such as the names of supported agencies and asset assists for seizures of narcotics--were not recorded consistently across all operational centers, limiting CBP's ability to assess the effectiveness of the program. We also found that CBP has not updated its guidance for collecting and recording mission information in its data collection system to include new data elements added since 2014, and does not have instructions for recording mission information such as asset assists. In addition, not all users of CBP's system have received training for recording mission information. We reported that updating guidance and fully training users, consistent with internal control standards, would help CBP better ensure the quality of data it uses to assess effectiveness. For tactical aerostats, we found that Border Patrol's collection of asset assist information for seizures and apprehensions does not distinguish between its tactical aerostats and TARS. Data distinguishing between support provided by tactical aerostats and support provided by TARS would help CBP collect more complete information and guide resource allocation decisions, such as the redeployment of tactical aerostat sites based on changes in illegal cross-border activity, because the two types of systems provide distinct types of support when assisting with, for example, seizures and apprehensions.
To improve its efforts to assess the effectiveness of its Predator B and tactical aerostat programs, we recommended, among other things, that CBP (1) update guidance for recording Predator B mission information in its data collection system; (2) provide training to users of CBP's data collection system for Predator B missions; and (3) update Border Patrol's data collection practices to include a mechanism to distinguish and track asset assists associated with tactical aerostats from TARS. CBP concurred and identified planned actions to address the recommendations, including incorporating a new functionality in its data collection system to include tips and guidance for recording Predator B mission information and updating its user manual for its data collection system; and making improvements to capture data to ensure asset assists are properly reported and attributed to tactical aerostats and TARS, among other actions. In March 2014, we reported that CBP had identified mission benefits for technologies under its Arizona Border Surveillance Technology Plan--which included a mix of radars, sensors, and cameras to help provide security for the Arizona border--but had not yet developed performance metrics for the plan. CBP identified mission benefits such as improved situational awareness and agent safety. Further, a DHS database enabled CBP to collect data on asset assists, instances in which a technology--such as a camera, or other asset, such as a canine team--contributed to an apprehension or seizure, that, in combination with other relevant performance metrics or indicators, could be used to better determine the contributions of CBP's surveillance technologies and inform resource allocation decisions. However, we found that CBP was not capturing complete data on asset assists, as Border Patrol agents were not required to record and track such data.
We concluded that requiring the reporting and tracking of asset assist data could help CBP determine the extent to which its surveillance technologies are contributing to CBP's border security efforts. To assess the effectiveness of deployed technologies at the Arizona border and better inform CBP's deployment decisions, we recommended that CBP (1) require tracking of asset assist data in its Enforcement Integrated Database, which contains data on apprehensions and seizures, and (2) once data on asset assists are required to be tracked, analyze available data on apprehensions, seizures, and technological assists, in combination with other relevant performance metrics, to determine the contribution of surveillance technologies to CBP's border security efforts. DHS concurred with our first recommendation; Border Patrol issued guidance in June 2014, and Border Patrol officials confirmed with us in June 2015 that agents are required to enter this information into the database. These actions met the intent of our recommendation. DHS also concurred with our second recommendation and, as of September 2016, has taken some action to assess its technology assist data and other measures to determine contributions of surveillance technologies to its mission. However, until Border Patrol completes its efforts to fully develop and apply key attributes for performance metrics for all technologies to be deployed under the Arizona Border Surveillance Technology Plan, it will not be well positioned to fully assess its progress in determining when mission benefits have been fully realized. Chairwoman McSally, Ranking Member Vela, and members of the subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. For further information about this testimony, please contact Rebecca Gambler at (202) 512-8777 or [email protected].
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Kirk Kiester (Assistant Director), as well as Stephanie Heiken, David Lutter, Sasan "Jon" Najmi, and Carl Potenzieri. Southwest Border Security: Additional Actions Needed to Better Assess Fencing's Contributions to Operations and Provide Guidance for Identifying Capability Gaps. GAO-17-331. Washington, D.C.: February 16, 2017. Border Security: Additional Actions Needed to Strengthen Collection of Unmanned Aerial Systems and Aerostats Data. GAO-17-152. Washington, D.C.: February 16, 2017. Border Patrol: Actions Needed to Improve Oversight of Post-Apprehension Consequences. GAO-17-66. Washington, D.C.: January 12, 2017. Border Security: DHS Surveillance Technology Unmanned Aerial Systems and Other Assets. GAO-16-671T. Washington, D.C.: May 24, 2016. Southwest Border Security: Additional Actions Needed to Assess Resource Deployment and Progress. GAO-16-465T. Washington, D.C.: March 1, 2016. Border Security: Progress and Challenges in DHS's Efforts to Implement and Assess Infrastructure and Technology. GAO-15-595T. Washington, D.C.: May 13, 2015. Border Security: Opportunities Exist to Strengthen Collaborative Mechanisms along the Southwest Border. GAO-14-494. Washington, D.C.: June 27, 2014. Arizona Border Surveillance Technology Plan: Additional Actions Needed to Strengthen Management and Assess Effectiveness. GAO-14-368. Washington, D.C.: March 3, 2014. Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Securing U.S. borders is the responsibility of DHS, in collaboration with other federal, state, local, and tribal entities. Within DHS, CBP is the lead agency for border security and is responsible for, among other things, keeping terrorists and their weapons, criminals and their contraband, and inadmissible aliens out of the country. In recent years, GAO has reported on a variety of DHS collaborative mechanisms and efforts to assess its use of border security resources. This statement addresses (1) DHS's efforts to implement collaborative mechanisms along the southwest border and (2) DHS's efforts to assess its use of resources and programs to secure the southwest border. This statement is based on GAO reports and testimonies issued from September 2013 through February 2017 that examined DHS efforts to enhance border security and assess the effectiveness of its border security operations. GAO's reports and testimonies incorporated information GAO obtained by examining DHS collaborative mechanisms, reviewing CBP policies and procedures for coordinating use of assets, analyzing DHS data related to enforcement programs, and interviewing relevant DHS officials. The Department of Homeland Security (DHS) and its U.S. Customs and Border Protection (CBP) have implemented various mechanisms along the southern U.S. border to coordinate security operations, but could strengthen coordination of Predator B unmanned aerial system (UAS) operations to conduct border security efforts. In September 2013, GAO reported that DHS and CBP used collaborative mechanisms along the southwest border--including interagency Border Enforcement Security Task Forces and Regional Coordinating Mechanisms--to coordinate information sharing, target and prioritize resources, and leverage assets. GAO interviewed participants from the various mechanisms who provided perspective on successful collaboration, such as establishing positive working relationships, sharing resources, and sharing information. 
Participants also identified barriers, such as resource constraints, rotation of key personnel, and lack of leadership buy-in. GAO recommended that DHS take steps to improve its visibility over field collaborative mechanisms. DHS concurred and collected data related to the mechanisms' operations. Further, as GAO reported in June 2014, officials involved with mechanisms along the southwest border cited limited resource commitments by participating agencies and a lack of common objectives. Among other things, GAO recommended that DHS establish written interagency agreements with mechanism partners, and DHS concurred. Lastly, in February 2017, GAO reported that DHS and CBP had established mechanisms to coordinate Predator B UAS operations but could better document their coordination procedures. GAO made recommendations for DHS and CBP to improve coordination of UAS operations, and DHS concurred. GAO recently reported that DHS and CBP could strengthen efforts to assess their use of resources and programs to secure the southwest border. For example, in February 2017, GAO reported that CBP does not record mission data consistently across all operational centers for its Predator B UAS, limiting CBP's ability to assess program effectiveness. In addition, CBP has not updated its guidance for collecting and recording mission information in its data collection system since 2014. Updating guidance consistent with internal control standards would help CBP better ensure the quality of data it uses to assess effectiveness. In January 2017, GAO found that methodological weaknesses limit the usefulness of the recidivism rate Border Patrol uses to assess the effectiveness of its Consequence Delivery System. Specifically, Border Patrol's methodology for calculating recidivism--the percent of aliens apprehended multiple times along the southwest border within a fiscal year--does not account for an alien's apprehension history over multiple years.
Border Patrol could strengthen the methodology for calculating recidivism by using an alien's apprehension history beyond one fiscal year. Finally, CBP has not developed metrics that systematically use the data it collects to assess the contributions of its pedestrian and vehicle border fencing to its mission. Developing metrics to assess the contributions of fencing to border security operations could better position CBP to make resource allocation decisions with the best information available to inform competing mission priorities and investments. GAO made recommendations to DHS and CBP to update guidance, strengthen its recidivism calculation methodology, and develop metrics, and DHS generally concurred. GAO has previously made numerous recommendations to DHS to improve the function of collaborative mechanisms and use of resources for border security, and DHS has generally agreed. DHS has taken actions or described planned actions to address the recommendations, which GAO will continue to monitor.
While the decennial census has long collected data on race and ethnicity, a specific question on Hispanic origin was first added to the 1970 Census in response to the 1965 Voting Rights Act, which required the data to ensure equality in voting. Today, antidiscrimination provisions in a number of statutes require census data on race and Hispanic origin in order to monitor and enforce equal access to housing, education, employment, and other areas. The Office of Management and Budget (OMB), through its Federal Statistical Policy Directive No. 15, sets the standards governing federal agencies' collection and reporting of race and ethnicity data. At least seven cabinet-level government departments, the Federal Reserve, every state government, and a number of public and private organizations use Hispanic data. Although not required by federal legislation or OMB standards, Hispanic subgroup data are also used for many of these same purposes. In addition, subgroup data are especially important to communities with rapidly growing and diverse Hispanic populations. Collecting data on race and ethnicity has been a persistent challenge for the Bureau. Race and ethnicity are subjective characteristics, which makes measurement difficult. Moreover, the Bureau has found that some Hispanics equate their ethnicity--Hispanic--with race, and thus find it difficult to classify themselves by the standard race categories that include, for example, white, black, and Asian. The Bureau's preparations for the 2000 Census included an extensive research and testing program to improve the Hispanic count. In 1990, the Bureau estimated that it did not enumerate 5 percent of the Hispanic population. Further, the ethnicity question, which was posed to all respondents, appeared to confuse both Hispanics and non-Hispanics. For example, many non-Hispanics, thinking the question only pertained to Hispanics, did not answer the question. 
Overall, 10 percent of respondents failed to answer the 1990 Hispanic question--the highest of any short form item in 1990. As a result, the Bureau made improving the Hispanic count a major priority for the 2000 Census. Our objectives were to review (1) the Bureau's decision-making process that led to its dropping the list of subgroup examples from the Hispanic question on the 2000 Census form, (2) the research conducted by the Bureau to aid in this decision, and (3) the Bureau's future plans for collecting Hispanic subgroup data. To address each of these objectives, we interviewed key Bureau officials and examined Bureau, OMB, and other documents, including planning materials and internal memos. To obtain a local perspective of how municipal governments and community leaders use Hispanic subgroup data, we met with data users in New York City, including representatives of the New York Department of Planning and the Dominican and Puerto Rican communities. We also attended a meeting of the Dominican American National Round Table, a Dominican American advocacy group that discussed issues relating to the 2000 Census count of Dominican Hispanics. We also attended meetings of the Census Advisory Committee on Race and Ethnicity that addressed the issue of the quality of the Hispanic subgroup data. Finally, to examine the research behind the Bureau's decision to remove the example subgroups from the 2000 questionnaire, we reviewed the results of the Bureau's National Content Survey, Targeted Race and Ethnicity Test, and other research conducted throughout the 1990s in preparation for the 2000 Census. Additionally, we reviewed information from the Bureau's meetings with its Advisory Committee on the Decennial Census and its Advisory Committee on Race and Ethnicity. We also examined relevant materials from OMB's Interagency Committee for the Review of the Racial and Ethnic Standards. 
To review the Bureau's future plans for collecting Hispanic subgroup data, we attended meetings of the National Academy of Science Panel on Future Census Methods, the Decennial Census Advisory Committee, and the Census Advisory Committee on Race and Ethnicity. We also discussed these plans with Bureau officials. Our audit work was conducted in New York City and Washington, D.C., and at the Bureau's headquarters in Suitland, Maryland, from January through September 2002. Our work was done in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Commerce. On November 27, 2002, the Secretary forwarded the U.S. Census Bureau's written comments on the draft. The comments are reprinted in appendix I. We address these comments at the end of this report. Collecting accurate ethnic data has challenged the Bureau for over 30 years. Since the 1970 Census, when the Bureau first included a question on Hispanic origin, every census has had comparatively high Hispanic undercounts that reduced the quality of the data. As a result, the Bureau has modified the Hispanic question on every census since then as part of a continuing effort to improve the Hispanic count. (See fig. 1.) In addition, a Spanish language version of the census form has been available upon request since 1980. For the 2000 Census, Hispanics could identify themselves as Mexican, Puerto Rican, Cuban, or "other Spanish/Hispanic/Latino." Respondents who checked off this last category could write in a specific subgroup such as "Salvadoran." Although this approach was similar to that used for the 1990 Census, as shown in figure 1, the "other" category in the 1990 Census included examples of other Hispanic subgroups. The Bureau deleted these examples as one of several changes to the Hispanic question for the 2000 Census. 
Other changes included (1) adding the word "Latino" to the designation Spanish/Hispanic, (2) dropping the word "origin" from the question, and (3) moving the location of instructions on writing in an unlisted subgroup. According to Bureau officials, these latter three changes were made to improve the Hispanic count. The Bureau removed the subgroup examples as part of a broader effort to simplify the questionnaire and thus help reverse the downward trend in mail response rates that had been occurring since 1970. Indeed, evaluations of the 1990 Census indicated that the overall design of the form was confusing to many and contributed to lower response rates, particularly among some hard-to-enumerate groups such as Hispanics. In redesigning the questionnaire, the Bureau added as much white space as possible, and removed unnecessary words to make the questionnaire shorter and more readable. As shown in figure 2, the 2000 questionnaire appears more "respondent-friendly" compared to the 1990 questionnaire. The Bureau initially proposed removing the example write-in subgroups during 1990 through 1992. A first version of the questionnaire without the example subgroups was used in the 1992 National Census Test. However, as discussed in the next section, testing continued from 1992 to 1996 to ensure that removing the write-in example groups did not harm the overall count of Hispanics. From 1995 to 1997, after testing showed that removal of the write-in example groups would not harm the overall Hispanic count, the Bureau finalized its decision to remove the example subgroups. Although federal law and OMB standards only require information on whether an individual is Hispanic, Bureau officials told us they collect subgroup data to help improve the overall Hispanic count. According to the Bureau, many Hispanics do not view themselves as Hispanic, but identify instead with their country of origin or with a particular Hispanic subgroup. 
State and local governments, academic institutions, community organizations, and marketing firms, among other organizations, also use Hispanic subgroup data for a variety of purposes. For example, officials in the New York City Department of Planning told us that they need accurate information on the number and distribution of Hispanic subgroups in planning the delivery of numerous city services. According to a Bureau official, no data are available on the precise impact the questionnaire redesign had on overall response rates in part because it was made in conjunction with other efforts to improve the response rate, such as a more aggressive outreach and promotion campaign. However, the initial mail response rate was 64 percent, 3 percentage points higher than the Bureau's expectations, and comparable to the similar 1990 mail response rate. Moreover, evaluations conducted since the 2000 Census by the Bureau indicate that the Bureau obtained a more complete count of Hispanics in the 2000 Census than it did in 1990. For example, Bureau data show that the 2000 Census missed an estimated 2.85 percent of the Hispanic population compared to an estimated 4.99 percent in 1990--a 43 percent reduction of the undercount. The Bureau credits the improvement in part to the changes it made to the questionnaire. However, as discussed in the next section, removing the examples of Hispanic subgroups may have reduced the completeness of data on individual segments of the Hispanic population. Bureau guidance requires that any changes to the census form must first be thoroughly tested. For example, according to Bureau officials, before changing a question, the Bureau must first conduct research studies, cognitive tests, and field tests to determine how best to sequence and word the question, and to see if the proposed changes are likely to achieve the desired results. 
Additionally, the census questionnaire is to be reviewed by a variety of census advisory groups, OMB, and Congress before it is finalized. Nevertheless, while the Bureau conducted a number of tests of the sequencing and wording of the race and ethnicity questions, according to Bureau officials, it did not specifically design any tests to determine the impact of the changes on the quality of Hispanic subgroup data. Because OMB standards do not require data on Hispanic subgroups, Bureau officials said that the Bureau targeted its resources on testing and research aimed at improving the overall count of Hispanics. Throughout the 1990s, in revising the race and ethnicity questions, the Bureau sought input from several expert panels, including the Interagency Committee formed by OMB and the Census Advisory Committee on Racial and Ethnic Populations, one of several panels with which the Bureau consulted to help it plan the 2000 Census. In addition, the Bureau conducted several tests of the questionnaire to assess respondents' understanding of the questions and their ability to complete them properly. They included the 1992 National Census Test, which field tested potential questions; the 1996 National Content Survey, which examined a number of issues to improve race and ethnic reporting; and the 1996 Race and Ethnic Targeted Test, which tested alternative formats for asking race and ethnic questions. In addition, the Bureau analyzed the results of Hispanic data from the 1990 Census (which led to its conclusions about the undercount), but did not conduct any specific evaluations of the quality of the 1990 Hispanic subgroup data. The consultation, research, and testing played a key role in the Bureau's decisions to place the ethnicity question before the race question and make several other changes discussed earlier in this report. The test results also indicated that the example subgroups could produce conflicting results.
On the one hand, the Bureau found that providing the example subgroups could help prevent respondents' confusion over how to describe their ethnicity. On the other hand, the Bureau found that removing the example subgroups could help reduce the bias caused by the example effect, which occurs when a respondent erroneously selects a response because it is provided in the questionnaire. Although the Bureau conducted a dress rehearsal for the 2000 Census in 1998 in order to test its overall design, the dress rehearsal did not identify any problems with the Hispanic subgroup question. According to Bureau officials, this could have been because none of the three test sites--the city of Sacramento, California; Menominee County, Wisconsin, including the Menominee American Indian Reservation; and the city of Columbia, South Carolina, and its 11 surrounding counties--had a large and diverse enough Hispanic population for the problems to become evident. In May 2001, the Bureau released data on Hispanics and Hispanic subgroups as part of its first release summarizing the results of the 2000 Census, called the SF-1 file. The Bureau also published The Hispanic Population, a 2000 Census brief that provided an overview of the size and distribution of the Hispanic population in 2000 and highlighted changes in the population since the 1990 census. For the first time, the Bureau released data on Hispanic subgroups as a part of its release of the full count SF-1 data even though it had not fully tested the impact of questionnaire changes on the subgroup data and provided little discussion of the potential limitations of the data. Following the initial release of the Hispanic data, local government officials and Hispanic advocacy groups raised questions about the accuracy of the counts of Hispanic subgroups listed as examples on the 1990 census form, but not the 2000 form. 
The 2000 Census showed lower counts of several Hispanic subgroups than analysts had expected based on their own estimates using a variety of information sources such as vital statistics, immigration statistics, population surveys, and other data. In New York City, local government officials and representatives of Hispanic subgroups who partnered with the Bureau to improve the enumeration of Hispanics told us that they were particularly concerned about low subgroup counts in their communities, in part because they needed accurate numbers to plan and deliver specialized services to particular subgroups. Moreover, they said that because "official census numbers" are often considered definitive, problems with the released Hispanic subgroup numbers could lead to faulty decision making by data users. Since the release of the 2000 Census Hispanic data, the Bureau has conducted evaluations of the data that provided more information on how removing the subgroup examples may have affected the quality of Hispanic subgroup data. One key evaluation was the Alternative Questionnaire Experiment, in which the Bureau sent out 1990-style census forms to a sample of individuals as part of the 2000 Census. As shown in figure 3, the Bureau's research indicates that the 1990-style form elicited more reports of specific Hispanic subgroups than the 2000-style questionnaire. Indeed, 93 percent of Hispanics given the 1990-style form reported a specific subgroup, compared to 81 percent of Hispanics given the 2000-style form. Moreover, virtually every subgroup composed a smaller percentage of the overall Hispanic count in the 2000-style form than in the 1990-style form. Thus, although the Bureau accurately reported what respondents checked off on their questionnaires, respondents' confusion over the wording of the question means that the 2000 subgroup data could be misleading.
Figure 3 also suggests that one possible reason for this might be that many respondents did not understand what they were supposed to write in, as many more people on the 2000-style form wrote in "Hispanic," "Spanish," or "Latino" (as opposed to a specific subgroup) compared to the 1990-style questionnaire. Additionally, a higher percentage of the respondents did not provide codeable (useable) responses. Moreover, based on its analysis of the Census 2000 Supplementary Survey--an operational test for collecting long-form-type data based on a nationwide sample of 700,000 households--the Bureau estimated that there were about 150,000 more Dominican Hispanics than were counted in the 2000 Census. Some attribute the discrepancy to the fact that many respondents to the supplementary survey provided their answers by telephone, where enumerators were able to help them better understand the question on Hispanic subgroups. Because of concerns relating to the 2000 Census counts of Hispanic subgroups, Bureau officials said that they plan to focus testing and research on these questions in preparation for the 2010 Census. In particular, they stated that the Bureau would examine the likely impact of including Hispanic subgroup examples in the question again, as well as other aspects of the question that caused problems for some respondents. Before deciding on a new version of the Hispanic question, the Bureau must finish evaluating the results of the 2000 Census, conduct a number of cognitive tests, and field-test proposed changes to the question. The Bureau plans to begin testing the Hispanic question in 2003 and, as part of a field test in 2004, to administer the questionnaire in parts of Queens, New York, which the Bureau selected for its racial and ethnic diversity. The Bureau intends to complete its testing and decide on changes to the Hispanic question from 2006 through 2008. 
Any changes to the Hispanic question are relevant not only for the 2010 Census, but also for other Bureau questionnaires, such as the proposed ACS. Bureau officials told us that they expect that the ACS will continue to use the 2000 Census Hispanic question until research and testing on a new version is complete. While continued research could help the Bureau collect better-quality Hispanic subgroup data, it will also be important for the Bureau to address what led it to release data that could mislead users. A key factor in this regard is that the Bureau lacks adequate guidelines for making decisions about how data quality considerations affect the release of data to the public. Had such guidelines been in place prior to releasing the Hispanic subgroup data, they could have (1) prompted the Bureau to apply more rigorous quality checks on the Hispanic subgroup data, (2) provided a basis for either releasing, delaying, or suppressing the data, and (3) informed decisions on how to describe any limitations to data released. This is not the first time that the lack of Bureau-wide guidelines on the level of quality needed for census results to be released to the public has created difficulties for the Bureau and data users. As we noted in our companion report on the Bureau's methods for collecting and reporting data on the homeless and others without conventional housing, one cause of the Bureau's shifting position on reporting those data and the resulting public confusion appears to be its lack of documented, clear, transparent, and consistently applied guidelines on the level of quality needed to release data to the public. With the Hispanic subgroup data, the Bureau released the information as planned before it could properly assess its quality, identify problems, and report its limitations. More rigorous guidelines could help ensure that decisions about the quality of all census data the Bureau releases are more consistent and better understood by the public. 
In 2000, the Bureau initiated a program aimed at documenting Bureau-wide protocols designed to ensure the quality of data it collected and released. Because this effort is still in its early stages, we could not assess it. However, Bureau officials believe that the program is a significant first step in addressing the Bureau's lack of data quality guidelines. As the Bureau develops its protocols further, it will be important that they be well documented, transparent, clearly defined, consistently applied, and properly communicated to the public. Throughout the 1990s, the Bureau went to great lengths to improve response rates to the 2000 Census in general, and participation of Hispanics in particular. Although the unique contributions of the individual components of the Bureau's efforts cannot be determined, the mail response rate was similar to the 1990 level, and the Bureau's preliminary data suggest that the 2000 Census count of Hispanics was an improvement over the 1990 count. However, the counts of Hispanic subgroups do not appear to have been improved and, in fact, there is concern that some of these subgroup counts may be less accurate than the 1990 counts. Moreover, the Bureau's experience in simplifying the questionnaire in part by removing the examples of the Hispanic subgroups shows the challenge the Bureau faces in trying to improve one component of the census count without adversely and unintentionally affecting other aspects of the census count. In light of these findings, it will be important for the Bureau to continue with its planned research on how best to enumerate Hispanic subgroups. The Bureau's release of Hispanic subgroup numbers raised questions about the quality of the reported data and the Bureau's decision to report these data as a part of its release of the SF-1 data. 
Although the specific questions about the Hispanic subgroup data differed from those identified in our review of the Bureau's efforts to collect and report data on the homeless and others without conventional housing, a common cause of both sets of problems was the Bureau's lack of agencywide guidelines for its decisions on the level of quality needed to release data to the public. As we recommended in our report on homeless counts, the Bureau needs to develop well-documented guidelines that spell out how to characterize any limitations in the data, and when it is acceptable to suppress these data. The Bureau should also ensure that these guidelines are documented, transparent, clearly defined, consistently applied, and properly communicated to the public. To ensure that the 2010 Census will provide public data users with more accurate information on specific Hispanic subgroups, we recommend that the Secretary of Commerce ensure that the Director of the U.S. Census Bureau implements Bureau plans to research the Hispanic question, taking steps to properly test the impact of the wording, format, and sequencing on the completeness and accuracy of the data on Hispanic subgroups and Hispanics overall. In addition, as we also recommended in our companion report on the homeless and others without conventional housing, we recommend that the Bureau develop agencywide guidelines governing the level of quality needed to release data to the public, when and how to characterize any limitations, and when it is acceptable to delay or suppress data. The Secretary of Commerce forwarded written comments from the U.S. Census Bureau on a draft of this report (see app. I). The Bureau agreed with our conclusions and recommendations and, as indicated in the letter, is taking steps to implement them. However, it expressed several general concerns about our findings. The Bureau's principal concerns and our response are presented below. 
The Bureau also suggested minor wording changes to provide additional context and clarification. We accepted the Bureau's suggestions and made changes to the text as appropriate. The Bureau took exception to our findings concerning the adequacy of its data quality guidelines noting that it "conducted the review of the data on the Hispanic origin population using standard review techniques for reasonableness and quality." We do not question the Bureau's commitment to presenting quality data. Rather, our point is that the Bureau needs to translate its commitment to quality into well documented, transparent, clearly defined guidelines to provide a basis for consistent decision making on the level of quality needed to release data to the public, and on when and how to characterize any limitations. During our review, Bureau officials, including the Associate Director for Methodology and Standards, told us that the Bureau had few written guidelines, standards, or procedures related to the quality of data released to the public. A second general concern expressed by the Bureau dealt with our characterization of problems with the Hispanic subgroup counts. The Bureau said that the data met an acceptable level of quality because they accurately reflect what people reported and therefore cannot be characterized as erroneous. We agree with the Bureau on this specific point. However, we take a broader view of data quality. Specifically, we believe that questions about the accuracy of the Hispanic subgroup data must also take into account problems that the respondents had in understanding the meaning of the question. The Bureau challenged our assertion that the wording of the question "confused" some respondents, preferring to say that some respondents may have "interpreted" the question wording, instructions, and examples differently than expected. We agree with the Bureau that additional research will be required to understand the extent of this problem. 
Nevertheless, we believe there is sufficient evidence from the Bureau's subsequent research and from analysis of trends in the data to support our concerns about the accuracy of Hispanic example subgroup counts in the 2000 Census.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Chairman of the House Committee on Government Reform, the Secretary of Commerce, and the Director of the U.S. Census Bureau. Copies will be made available to others on request. This report will also be available at no charge on GAO's home page at http://www.gao.gov. Please contact me on (202) 512-6806 or by E-mail at [email protected] if you have any questions. Other key contributors to this report were Robert Goldenkoff, Christopher Miller, Elizabeth Powell, Timothy Wexler, Ty Mitchell, Benjamin Crawford, James Whitcomb, Robert Parker, and Michael Volpe.

Decennial Census: Methods for Reporting and Collecting Data on the Homeless and Others without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003.

2000 Census: Refinements to Full Count Review Program Could Improve Future Data Quality. GAO-02-562. Washington, D.C.: July 3, 2002.

2000 Census: Coverage Evaluation Matching Implemented As Planned, but Census Bureau Should Evaluate Lessons Learned. GAO-02-297. Washington, D.C.: March 14, 2002.

2000 Census: Best Practices and Lessons Learned for a More Cost-Effective Nonresponse Follow-Up. GAO-02-196. Washington, D.C.: February 11, 2002.

2000 Census: Coverage Evaluation Interviewing Overcame Challenges, but Further Research Needed. GAO-02-26. Washington, D.C.: December 31, 2001.

2000 Census: Analysis of Fiscal Year 2000 Budget and Internal Control Weaknesses at the U.S. Census Bureau. GAO-02-30. Washington, D.C.: December 28, 2001.
2000 Census: Significant Increase in Cost Per Housing Unit Compared to 1990 Census. GAO-02-31. Washington, D.C.: December 11, 2001.

2000 Census: Better Productivity Data Needed for Future Planning and Budgeting. GAO-02-4. Washington, D.C.: October 4, 2001.

2000 Census: Review of Partnership Program Highlights Best Practices for Future Operations. GAO-01-579. Washington, D.C.: August 20, 2001.

Decennial Censuses: Historical Data on Enumerator Productivity Are Limited. GAO-01-208R. Washington, D.C.: January 5, 2001.

2000 Census: Information on Short- and Long-Form Response Rates. GAO/GGD-00-127R. Washington, D.C.: June 7, 2000.
To help boost response rates of both the general and Hispanic populations, the U.S. Census Bureau (Bureau) redesigned the 2000 questionnaire, in part by deleting a list of examples of Hispanic subgroups from the question on Hispanic origin. While more Hispanics were counted in 2000 than in 1990, the counts for Dominicans and other Hispanic subgroups were lower than expected. Concerned that this was caused by the deletion of Hispanic subgroup examples, congressional requesters asked us to investigate the research and management activities behind the changes. In both the 1990 and 2000 censuses, Hispanics could identify themselves as Mexican, Puerto Rican, Cuban, or other Hispanic. Respondents checking off this latter category could write in a specific subgroup such as "Salvadoran." The "other" category in the 1990 Census included examples of subgroups to clarify the question. For the 2000 Census, the Bureau removed the subgroup examples as part of a broader effort to simplify the questionnaire and help improve response rates. The Bureau removed unnecessary words and added blank space to shorten the questionnaire and make it more readable. Although the Bureau conducted a number of tests on the sequencing and wording of the race and ethnicity questions, and sought input from several expert panels, no Bureau tests were designed specifically to measure the impact of the questionnaire changes on the quality of Hispanic subgroup data. According to Bureau officials, because federal laws and guidelines require data on Hispanics but not Hispanic subgroups, the Bureau targeted its resources on research aimed at improving the overall count of Hispanics. Bureau evaluations conducted after the census indicated that deleting the subgroup examples might have confused some respondents and produced less-than-accurate subgroup data.
A key factor behind the Bureau's release of the questionable subgroup data was its lack of adequate guidelines governing the quality needed before making data publicly available. As part of its planning for the 2010 Census, the Bureau intends to conduct further research on the Hispanic origin question, including a field test in parts of New York City. However, until research on a new version of the question is finalized, Bureau officials said that other census surveys will continue to use the 2000 Census format of the Hispanic origin question.
Posing as private citizens, our undercover investigators purchased several sensitive excess military equipment items that were improperly sold to the public at DOD liquidation sales. These items included three ceramic body armor inserts identified as small arms protective inserts (SAPI), which are the ceramic inserts currently in demand by soldiers in Iraq and Afghanistan; a time selector unit used to ensure the accuracy of computer-based equipment, such as global positioning systems and system-level clocks; 12 digital microcircuits used in F-14 Tomcat fighter aircraft; guided missile radar test sets used to check the operation of the data link antenna on the Navy's Walleye (AGM-62) air-to-ground guided missile; and numerous other electronic items. In instances where DOD required an EUC as a condition of sale, our undercover investigator was able to successfully defeat the screening process by submitting bogus documentation and providing plausible explanations for discrepancies in his documentation. In addition, we identified at least 79 buyers for 216 sales transactions involving 2,669 sensitive military items that DOD's liquidation contractor sold to the public between November 2005 and June 2006. We are referring information on these sales to the appropriate federal law enforcement agencies for further investigation. Our investigators also posed as DOD contractor employees, entered DRMOs in two east coast states, and obtained several other items that are currently in use by the military services. DRMO personnel even helped us load the items into our van. These items included two launcher mounts for shoulder-fired guided missiles, an all-band antenna used to track aircraft, 16 body armor vests, body armor throat and groin protectors, six circuit card assemblies used in computerized Navy systems, and two Palm V personal digital assistant (PDA) organizers.
Using a fictitious identity as a private citizen, our undercover investigator applied for and received an account with DOD's liquidation sales contractor. Our investigator was then able to purchase several sensitive excess military items noted above that were being improperly sold to the public. During our undercover purchases, our investigator engaged in numerous conversations with liquidation sales contractor staff during warehouse inspections of items advertised for sale and with DRMS and DLA's Criminal Investigative Activity (DCIA) staff during the processing of our EUCs. On one occasion our undercover investigator was told by a DCIA official that information provided on his EUC application had no match to official data and that he had no credit history. Our investigator responded with a plausible story and submitted a bogus utility bill to confirm his mailing address. Following these screening procedures, the EUC was approved by DCIA and our undercover investigator was able to purchase our targeted excess military items. Once our initial EUC was approved, our subsequent EUC applications were approved based on the information on file. Although the sensitive military items that we purchased had a reported acquisition cost of $461,427, we paid a liquidation sales price of $914 for them--less than a penny on the dollar. We observed numerous sales of additional excess sensitive military items that were improperly advertised for sale or sold to the public, including fire control components for weapon systems, body armor, and weapon system components. The demilitarization codes for these items required either key point or total destruction rather than disposal through public sale. Although we placed bids to purchase some of these items, we lost to higher bidders. We identified at least 79 buyers for 216 public liquidation sales transactions involving 2,669 sensitive military items. 
On July 13, 2006, we briefed federal law enforcement and intelligence officials on the details of our investigation. We are referring public sales of sensitive military equipment items to the federal law enforcement agencies for further investigation and recovery of the sensitive military equipment. During our undercover operations, we also noted 13 advertised sales events, including 179 items that were subject to demilitarization controls, where the items were not sold. In 5 of these sales involving 113 sensitive military parts, it appears that DOD or its liquidation sales contractor caught the error in demilitarization codes and pulled the items from sale. One of these instances involved an F-14 fin panel assembly that we had targeted for an undercover purchase. During our undercover inspection of this item prior to sale, a contractor official told our investigator that the government was in the process of changing demilitarization codes on all F-14 parts and it was likely that the fin panel assembly would be removed from sale. Of the remaining 8 sales lots containing 66 sensitive military parts, we could not determine whether the items were not sold because DOD or its contractor caught the demilitarization coding errors or because minimum bids were not received during the respective sales events. Our investigators used publicly available information to develop fictitious identities as DOD contractor personnel and enter DRMO warehouses (referred to as DRMO A and DRMO B) in two east coast states on separate occasions in June 2006, to requisition excess sensitive military parts and equipment valued at about $1.1 million. Our investigators were able to search for and identify excess items without supervision. In addition, DRMO personnel assisted our investigators in locating other targeted items in the warehouse and loading these items into our van. 
At no point during either visit did DRMO personnel attempt to verify with the actual contractor that our investigators were, in fact, contractor employees. During the undercover penetration at DRMO A, our investigators obtained numerous sensitive military items that were required to be destroyed when no longer needed by DOD to prevent them from falling into the wrong hands. These items included two guided missile launcher mounts for shoulder-fired missiles, six Kevlar body armor fragmentation vests, a digital signal converter used in naval electronic surveillance, and an all-band antenna used to track aircraft. Posing as employees for the same DOD contractor identity used during our June 2006 penetration at DRMO A, our investigators entered DRMO B a day later for the purpose of testing security controls at that location. DRMO officials appeared to be unaware of our security penetration at DRMO A the previous day. During the DRMO B undercover penetration, our investigators obtained 10 older technology body armor fragmentation vests, throat and groin protection armor, six circuit card assemblies used in Navy computerized systems, and two Palm V personal digital assistants (PDAs) that were certified as having their hard drives removed. Because PDAs do not have hard drives, after successfully requisitioning them we asked our Information Technology (IT) security expert to test them; the expert confirmed that all sensitive information had been properly removed. Shortly after leaving the second DRMO, our investigators received a call from a contractor official whose employees they had impersonated. The official had been monitoring his company's requisitions of excess DOD property and noticed transactions that did not appear to represent activity by his company. He contacted personnel at DRMO A, obtained the phone number on our bogus excess property screening letter, and called us.
Upon receiving the call from the contractor official, our lead investigative agent explained that he was with GAO and that we had performed a government test. Because significant numbers of new, unused A-condition excess items still being purchased or in use by the military services are being disposed of through liquidation sales, it was easy for our undercover investigator to pose as a liquidation sales customer and purchase several of these items for a fraction of what the military services are paying to obtain these same items from DLA supply depots. For example, we paid $1,146 for several wet-weather and cold-weather parkas, a portable field x-ray enclosure, high-security locks, a gasoline engine that can be used as part of a generator system or as a compressor, and a refrigerant recovery system used to service air conditioning systems on automobiles. The military services would have paid a total acquisition cost of $16,300 for these items if ordered from supply inventory, plus a charge for processing their order. Several of the items we purchased at liquidation sales events were being ordered from supply inventory by military units at or near the time of our purchase, and for one supply depot stocked item--the portable field x-ray enclosure--no items were in stock at the time we made our undercover purchase. At the time of our purchase, DOD's liquidation contractor sold 40 of these x-ray enclosures with a total reported acquisition cost of $289,400 for a liquidation sales price of $2,914--about a penny on the dollar. We paid a liquidation sales price of $87 for the x-ray enclosure, which had a reported acquisition cost of $7,235. In another example, we purchased a gasoline engine in March 2006 for $355. The Marine Corps ordered 4 of these gas engines from DLA supply inventory in June 2006 and paid $3,119 each for them.
At the time of our undercover purchase, 20 identical gasoline engines with a reported acquisition cost of $62,380 were sold to the public for a total liquidation sales price of $6,221--about a dime on the dollar. In response to recommendations in our May 2005 report, DOD has taken a number of actions to improve systems, processes, and controls over excess property. Most of these efforts have focused on improving the economy and efficiency of DOD's excess property reutilization program. However, as demonstrated by our tests of security controls over sensitive excess military equipment, DOD does not yet have effective controls in place to prevent unauthorized parties from obtaining these items. For example, although DLA and DRMS have emphasized policies that prohibit batch lotting of sensitive military equipment items, we observed many of these items being sold in batch lots during our investigation and we were able to purchase several of them. In addition, DLA and DRMS have not ensured that DRMO personnel and DOD's liquidation sales contractor are verifying demilitarization codes on excess property turn-in documentation to assure appropriate disposal actions for items requiring demilitarization. Further, although DLA and DRMS implemented several initiatives to improve the overall reutilization rate for excess A-condition items, our analysis of DRMS data found that the reported reutilization rate as of June 30, 2006, remained the same as we had previously reported--about 12 percent. This is primarily because DLA reutilization initiatives are limited to using available excess A-condition items to fill customer orders and to maintain established supply inventory retention levels. As a result, excess A-condition items that are not needed to fill existing orders or replenish supply inventory are disposed of outside of DOD through transfers, donations, and public sales, which made it easy for us to purchase excess new, unused DOD items.
Despite the limited supply systems approach for reutilization of A-condition excess items, DLA and DRMS data show that overall system and process improvements since the Subcommittee's June 2005 hearing have saved $38.1 million through June 2006. According to DLA data, interim supply system initiatives using the Automated Asset Recoupment Program, which is part of an old DOD legacy system, achieved reutilization savings of nearly $2.3 million since July 2005, and Business System Modernization supply system initiatives, implemented in January 2006 as promised at the Subcommittee's June 2005 hearing, have resulted in reutilization savings of nearly $1.1 million. In addition, DRMS reported that excess property marketing initiatives implemented in late March 2006 have resulted in reutilization savings of a little over $34.8 million through June 2006. These initiatives include marketing techniques using Web photographs of high-dollar items and e-mail notices to repeat customers about the availability of A-condition items that they had previously selected for reutilization. Our most recent work shows that sensitive military equipment items are still being improperly released by DOD and sold to the public, posing a significant national security risk. The sensitive nature of these items requires particularly stringent internal security controls. Our tests, which were performed over a short duration, were limited to our observations, meaning that the problem is likely more significant than what we identified. Although we have referred the sales of items identified during our investigation to federal law enforcement agencies for follow-up, the solution to this problem is to enforce controls for preventing improper release of these items outside DOD.
Further, liquidation sales of items that military units are continuing to purchase at full cost from supply inventory demonstrate continuing waste to the taxpayer and inefficiency in DOD's excess property reutilization program. Mr. Chairman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-7455 or [email protected]. Major contributors to this testimony include Mario L. Artesiano, Donald L. Bumgardner, Matthew S. Brown, Paul R. Desaulniers, Stephen P. Donahue, Lauren S. Fassler, Gayle L. Fischer, Cinnimon Glozer, Jason Kelly, John Ledford, Barbara C. Lewis, Richard C. Newbold, John P. Ryan, Lori B. Ryza, Lisa M. Warde, and Emily C. Wold. Technical expertise was provided by Keith A. Rhodes, Chief Technologist, and Harold Lewis, Assistant Director, Information Technology Security, Applied Research and Methods. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In light of GAO's past three testimonies and two reports on problems with controls over excess DOD property, GAO was asked to perform follow-up investigations to determine if (1) unauthorized parties could obtain sensitive excess military equipment that requires demilitarization (destruction) when no longer needed by DOD and (2) system and process improvements are adequate to prevent sales of new, unused excess items that DOD continues to buy or that are in demand by the military services. GAO investigators posing as private citizens purchased several sensitive military equipment items from DOD's liquidation sales contractor, indicating that DOD has not enforced security controls for preventing sensitive excess military equipment from release to the public. GAO investigators at liquidation sales purchased ceramic body armor inserts currently used by deployed troops, a cesium technology timing unit with global positioning capabilities, a universal frequency counter, two guided missile radar test sets, 12 digital microcircuits used in F-14 fighter aircraft, and numerous other items. GAO was able to purchase these items because controls broke down at virtually every step in the excess property turn-in and disposal process. GAO determined that thousands of military items that should have been demilitarized (destroyed) were sold to the public. Further, in June 2006, GAO undercover investigators posing as DOD contractor employees entered two excess property warehouses and obtained about $1.1 million in sensitive military equipment items, including two launcher mounts for shoulder-fired guided missiles, several types of body armor, a digital signal converter used in naval surveillance, an all-band antenna used to track aircraft, and six circuit cards used in computerized Navy systems. At no point during GAO's warehouse security penetration were its investigators challenged on their identity and authority to obtain DOD military property. 
GAO investigators posing as private citizens also bought several new, unused items currently being purchased or in demand by the military services from DOD's excess property liquidation sales contractor. Although military units paid full price for these items when they ordered them from supply inventory, GAO paid a fraction of this cost to purchase the same items, demonstrating continuing waste and inefficiency.
Federal agencies' contracting with private businesses is, in most cases, subject to goals for various types of small businesses, including SDVOSBs. The Small Business Act sets a government-wide goal for small business participation of not less than 23 percent of the total value of all prime contract awards--contracts that are awarded directly by agencies--for each fiscal year. The Small Business Act also sets annual prime contracting goals for participation by four other types of small businesses: small disadvantaged businesses (5 percent); women-owned (WOSB, 5 percent); service-disabled veteran-owned (3 percent); and businesses located in historically underutilized business zones (HUBZone, 3 percent). Although there is no government-wide prime contracting goal for participation by all VOSBs, VA had voluntarily set an internal goal for many years before the enactment of the 2006 Act. The Veterans Benefits Act of 2003 authorized agencies to set contracts aside and make sole-source awards of up to $3 million ($5 million for manufacturing) for SDVOSBs (but not other VOSBs). However, an agency can make a sole-source award to an SDVOSB only if the contracting officer expects just one SDVOSB to submit a reasonable offer. By contrast, VA's authorities under the 2006 Act apply both to SDVOSBs and other VOSBs. The 2006 Act provides VA authorities to make noncompetitive (sole-source) awards and to restrict competition for awards (set-asides) to SDVOSBs and VOSBs. VA is required to set aside contracts for SDVOSBs or other VOSBs (unless a sole-source award is used) if the contracting officer expects two or more such firms to submit offers and the award can be made at a fair and reasonable price that offers the best value to the United States. VA may make sole-source awards of up to $5 million. 
VA's Office of Small Disadvantaged Business Utilization (OSDBU) in conjunction with the Office of Acquisition and Logistics is responsible for development of policies and procedures to implement and execute the contracting goals and preferences under the 2006 Act. Additionally, OSDBU serves as VA's advocate for small business concerns; provides outreach and liaison support to businesses (large and small) and other members of the private sector for acquisition-related issues; and is responsible for monitoring VA's implementation of socioeconomic procurement programs, such as encouraging contracting with WOSBs and HUBZone businesses. The Center for Veterans Enterprise (CVE) within OSDBU seeks to help veterans interested in forming or expanding their own small businesses. For FY07, VA established a contracting goal for VOSBs at 7 percent--that is, VA's goal was to award 7 percent of its total procurement dollars to VOSBs. In FY07, VA exceeded this goal and awarded 10.4 percent of its contract dollars to VOSBs (see fig. 1). VA subsequently increased its VOSB contracting goals to 10 percent for FY08 and FY09, and exceeded those goals as well--awarding 14.7 percent of its contracting dollars to VOSBs in FY08 and 19.7 percent in FY09. For FY07, VA established a contracting goal for SDVOSBs equivalent to the government-wide goal of 3 percent and exceeded that goal by awarding 7.1 percent of its contract dollars to SDVOSBs (see fig. 2). VA subsequently increased this goal to 7 percent for FY08 and FY09, and exceeded the goal in those years as well. Specifically, VA awarded 11.8 and 16.7 percent of its contract dollars to SDVOSBs in FY08 and FY09, respectively. In nominal dollar terms, VA's contracting awards to VOSBs increased from $1.2 billion in FY07 to $2.8 billion in FY09, while at the same time, SDVOSB contracting increased from $832 million to $2.4 billion. 
The increase of awards to VOSBs and SDVOSBs largely was associated with the agency's greater use of the goals and preference authorities established by the 2006 Act. For example, veteran set-aside and sole-source awards represented 39 percent of VA's total VOSB contracting dollars in FY07. But in FY09, VA's use of these preference authorities increased to 59 percent of all VOSB contracting dollars. In nominal dollar terms, VA's use of these authorities increased by $1.2 billion over the past 3 years. According to SBA's Goaling Program, a small business can qualify for one or more small business categories and an agency may take credit for a contract awarded under multiple goaling categories. For example, if a small business is owned and controlled by a service-disabled, woman veteran, the agency may take credit for awarding a contract to this business under the SDVOSB, VOSB, and WOSB categories. All awards made to SDVOSBs also count towards VOSB goal achievement. In FY09, of the $2.8 billion awarded to VOSBs, the majority (63 percent) applied to both the VOSB and SDVOSB categories and no other (see fig. 3). Furthermore, of the $1.7 billion awarded through the use of veteran preference authorities (VOSB and SDVOSB set-aside and sole-source) in FY09, an even greater majority (77 percent) applied both to the VOSB and SDVOSB categories and no other (see fig. 3). In the Veterans' Benefits Improvement Act of 2008 (the 2008 Act), Congress enhanced the 2006 Act's provisions by requiring that any agreements VA enters with other government entities on or after January 1, 2009, to acquire goods or services on VA's behalf, must require the agencies to comply, to the maximum extent feasible, with VA's contracting goals and preferences for SDVOSBs and VOSBs. Since January 1, 2009, VA has entered into three interagency agreements (see table 1). 
According to agency officials, VA entered into agreements with additional federal agencies, such as the Army Corps of Engineers, before January 1, 2009, and therefore the provisions of the 2008 Act do not apply. VA issued guidance to all contracting officers about managing interagency acquisitions in March 2009. However, the agreement with DOI did not include the required language addressing VA's contracting goals and preferences until it was amended on March 19, 2010, after we informed the agency the agreement did not comply with the 2008 Act. According to VA officials, the agency's acquisition and contracting attorneys are responsible for reviewing interagency agreements for compliance with these requirements. VA uses Office of Management and Budget templates to develop its interagency agreements. However, VA did not ensure that all interagency agreements include the 2008 Act's required language or monitor the extent to which agencies comply with the requirements. For example, agency officials could not tell us whether contracts awarded under these agreements met the SDVOSB and VOSB preferences. Without a plan or oversight activity such as monitoring, VA cannot be assured that agencies have made maximum feasible efforts to contract with SDVOSBs or VOSBs. In May 2008--approximately a year and a half after the 2006 Act was enacted and a year after the provisions discussed here became effective--VA began verifying businesses and published interim final rules in the Federal Register, which included eligibility requirements and examination procedures, but did not finalize the rules until February 2010 (see fig. 4). According to VA officials, CVE initially modeled its verification program on SBA's HUBZone program; however, CVE reconsidered verification program procedures after we reported on fraud and weaknesses in the HUBZone program. 
More recently, in December 2009, the agency finalized changes to its acquisition regulations (known as VAAR) that included an order of priority (preferences) for contracting officers to follow when awarding contracts and trained contracting officers on the preferences and the VetBiz.gov database from January through March 2010. Leadership and staff vacancies plus a limited overall number of positions also have contributed to the slow pace of implementation. For approximately 1 year, leadership in VA's OSDBU was lacking because the former Executive Director retired and the position remained vacant from January 2009 until January 2010. Furthermore, one of two leadership positions directly below the Executive Director has been vacant since October 2008 and an Acting Director temporarily filled the other position. The agency also faced delays in obtaining contracting support. More than a year after the agency began verifying businesses, a contractor began conducting site visits (which further investigate control and ownership of businesses as part of the verification process). As of April 2010, CVE had 6.5 full-time equivalent position vacancies, and VA officials told us existing staff have increased duties and responsibilities that also contributed to slowed implementation. The slow implementation of the program appears to have contributed to VA's inability to meet the requirement in the 2006 Act that it use its veteran preference authorities to contract only with verified businesses. Currently, contracting officers can use the veteran preference authorities with both self-certified and verified businesses listed in VetBiz.gov. However, in its December 2009 rule VA committed to awarding contracts using these authorities only to verified businesses as of January 1, 2012. According to our analysis of FPDS-NG data, in FY09 the majority of contract awards (75 percent) made using veteran preferences went to unverified businesses. 
In March 2010, the recently appointed Executive Director of OSDBU acknowledged in a Congressional hearing before this committee how large an undertaking the verification program has been and some challenges associated with starting a new program. As of April 8, 2010, VA had verified about 2,900 businesses--approximately 14 percent of VOSBs and SDVOSBs in the VetBiz.gov database. VA has been processing an additional 4,701 applications but the number of incoming applications continues to grow (see fig. 5). As of March 2010, CVE estimates it had received more than 10,000 applications for verification since May 2008. As discussed previously, VA must maintain a database of verified businesses and in doing so must verify the veteran or service-disability status, control, and ownership of each business. The rules that VA developed pursuant to this requirement require VOSBs and SDVOSBs to register in VetBiz.gov to be eligible to receive contracts awarded using veteran preference authorities. An applicant's business must qualify as "small" under federal size standards and meet five eligibility requirements for verification: (1) be owned and controlled by a service-disabled veteran or veteran; (2) demonstrate good character (any small business that has been debarred or suspended is ineligible); (3) make no false statements (any small business that knowingly submits false information is ineligible); (4) have no federal financial obligations (any small business that has failed to pay significant financial obligations to the federal government is ineligible); and (5) have not been found ineligible due to an SBA protest decision. VA has a two-step process to make the eligibility determinations for verification. CVE staff first review veteran status (and, if applicable, service-disability status) and publicly available, primarily self-reported information about control and ownership for all applicants. 
Business owners submit applications (VA Form 0877), which ask for basic information about ownership, through VetBiz.gov. When applicants submit Form 0877, they also must be able to provide upon request other items for review, such as financial statements; tax returns; articles of incorporation or organization; lease and loan agreements; payroll records; and bank account signature cards. Typically, these items are reviewed at the business during the second step of the review, known as the site visit. Site visits further investigate control and ownership for select high-risk businesses. In September 2008, VA adopted risk guidelines to determine which businesses would merit the visits. Staff must conduct a risk assessment for each business and assign a risk level ranging from 1 to 4--with 1 being a high-risk business and 4 a low-risk one. The risk guidelines include criteria such as previous government contract dollars awarded, business license status, annual revenue, and percentage of veteran ownership. For example, if a business has previous VA contracts totaling more than $5 million, staff must assign it a risk level of 1 (high). According to VA, it intends to examine all businesses assigned a high or elevated risk level with a site visit or by other means, such as extensive document reviews and phone interviews with the business' key personnel. VA plans to refine its verification processes to address recommendations from an outside contractor's review of the program. VA hired the contractor to assess the verification program's processes, benchmark VA's program to other similar programs, and provide recommendations for improving it. VA received the contractor's report and recommendations in November 2009. VA officials told us that they plan to implement the contractor's recommendations to require business owners to submit additional documentation as part of their initial application and to upgrade their data systems. 
Based on our review of a random sample of the files for 112 businesses that VA had verified by the end of FY09, an estimated 48 percent of the files lacked required information or documentation showing that CVE staff followed key verification procedures. Specifically, 20 percent were missing some type of required information, such as evidence that veteran status had been checked or a quality review took place; 39 percent lacked information about how staff justified determinations that control and ownership requirements were met; and 14 percent either were missing evidence that a risk assessment had taken place or the risk assessment that occurred did not follow guidelines. Data system limitations also appear to be contributing factors to weaknesses we identified in our file review. For example, data entry into CVE's internal database largely is done manually, which can result in missing information or errors. Furthermore, CVE's internal database does not contain controls to ensure that only complete applications that have received a quality review move forward. Internal control standards for federal agencies require that agencies effectively use information technology in a useful, reliable, and continuous way. According to agency officials, two efforts are underway to enhance CVE's data systems. For example, CVE plans systems enhancements that would automatically check and store information obtained about veteran status and from some public databases. Additionally, CVE plans to adopt case management software--as recommended in the contractor's report--to help manage its verification program files. The new system will allow CVE to better track new and renewal verification applications and manage the corresponding case files. VA started verifying businesses in May 2008, but did not start conducting site visits until October 2009. As of April 8, 2010, VA has used contractors to conduct 71 site visits but an additional 654 high- and elevated-risk businesses awaited visits. 
Because of this delay, it currently has a large backlog of businesses awaiting site visits and some higher-risk businesses have been verified months before their site visits occurred or were scheduled to occur. According to VA officials, the agency plans to use contractors to conduct an additional 200 site visits between May and October 2010. However, the current backlog likely will grow over future months. According to site visit reports, approximately 40 percent of the visits resulted in evidence that control or ownership requirements had not been met, but as of April 2010, CVE had not cancelled any business' verification status. According to these reports, evidence of misrepresentation dates to October 2009, but VA had not taken actions against these businesses as of April 2010. According to VA's Office of Inspector General, it has received one referral (on April 5, 2010) as a result of the verification program. Staff have made no requests for debarment as a result of verification program determinations as of April 2010. Under the 2006 Act, businesses determined by VA to have misrepresented their status as VOSBs or SDVOSBs are subject to debarment for a reasonable period of time, as determined by VA for up to 5 years. Additionally, under the verification program rules, whenever CVE determines that a business owner submitted false information, the matter will be referred to the Office of Inspector General for review and CVE will request that debarment proceedings be initiated. However, beyond the directive to staff to make a referral and request debarment proceedings, VA does not have detailed guidance in place (either in the verification program procedures or the site visit protocol) that would instruct staff under which circumstances to make a referral or a debarment request. 
To summarize our observations concerning VA's verification efforts, the agency has been slow to implement a comprehensive program to verify the veteran status, ownership, and control of small businesses and maintain a database of such businesses. The weaknesses in VA's verification process reduce assurances that verified firms are, in fact, veteran owned and controlled. Such verification is a vital control to ensure that only eligible veteran-owned businesses benefit from the preferential contracting authorities established under the 2006 Act. These remarks are based on our ongoing work, which is exploring these issues in more detail. As required by the 2006 Act, we will issue a report on VA's contracting with VOSBs and SDVOSBs later this year. We anticipate the forthcoming report will include recommendations to the Department of Veterans Affairs to facilitate progress in meeting and complying with the 2006 Act's requirements. Madam Chairwoman and Members of the Subcommittee, I appreciate this opportunity to discuss these important issues and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Harry Medina, Assistant Director; Paola Bobadilla; Beth Ann Faraguna; Julia Kennon; John Ledford; Jonathan Meyer; Amanda Miller; Marc Molino; Mark Ramage; Barbara Roesmann; Kathryn Supinski; Paul Thompson; and William Woods. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Veterans Benefits, Health Care, and Information Technology Act of 2006 (the 2006 Act) requires the Department of Veterans Affairs (VA) to give priority to veteran-owned and service-disabled veteran-owned small businesses (VOSB and SDVOSB) when awarding contracts to small businesses. This testimony discusses preliminary views on (1) the extent to which VA met its prime contracting goals for SDVOSBs and VOSBs in fiscal years 2007-2009, and (2) VA's progress in implementing procedures to verify the ownership, control, and veteran status of firms in its mandated database. GAO obtained and analyzed data on VA's contracting activities, and reviewed a sample of verified businesses to assess VA's verification program. VA exceeded its contracting goals with SDVOSBs and VOSBs for the past 3 years, but faces challenges in monitoring agreements with other agencies that conduct contract activity on VA's behalf. The increase of awards to SDVOSBs and VOSBs was associated with the agency's use of the unique veteran preference authorities established by the 2006 Act. However, GAO's review of interagency agreements found that VA lacked an effective process to ensure that interagency agreements include required language that the other agencies comply to the maximum extent feasible with VA's contracting goals and preferences for SDVOSBs and VOSBs. VA has made limited progress in implementing its verification program. While the 2006 Act requires VA to use veteran preference authorities only to award contracts to verified businesses, VA's regulation does not require that this take place until January 1, 2012. To date, VA has verified about 2,900 businesses--approximately 14 percent of businesses in its mandated database of SDVOSBs and VOSBs. Among the weaknesses GAO identified in VA's verification program were files missing required information and explanations of how staff determined that control and ownership requirements had been met. 
VA's procedures call for site visits to investigate the ownership and control of higher-risk businesses, but the agency has a large and growing backlog of businesses awaiting site visits. Although site visit reports indicate a high rate of misrepresentation, VA has not developed guidance for referring cases of misrepresentation for enforcement action. Such businesses are subject to debarment under the 2006 Act.
The objective of the Executive Branch Management Scorecard is to provide a tool that can be used to track progress in achieving the President's Management Agenda. Using broad standards, the scorecards in the president's budget grade agencies' performance regarding five governmentwide initiatives, which are: strategic management of human capital, competitive sourcing, improved financial performance, expanded electronic government, and budget and performance integration. Central to effectively addressing the federal government's management problems is recognition that the five governmentwide initiatives cannot be addressed in an isolated or piecemeal fashion separate from the other major management challenges and high-risk areas facing federal agencies. As stated in the President's Management Agenda, they are mutually reinforcing. More generally, the initiatives must be addressed in an integrated way to ensure that they drive a broader transformation of the cultures of federal agencies. At its essence, this cultural transformation must seek to have federal agencies become less hierarchical, process oriented, stovepiped, and inwardly focused; and more flat, partnerial, results oriented, integrated, and externally focused. The focus that the administration's scorecard approach brings to improving management and performance is certainly a step in the right direction. As we have seen by your example, Chairman Horn, in calling attention to agencies' financial management, the year 2000 computer concerns, and computer security issues by grading agencies on their progress, this approach can create an incentive to improve management and performance. Similarly, we have found that our high-risk list has provided added emphasis on government programs and operations that warrant urgent attention to ensure our government functions in the most economical, efficient, and effective manner possible. 
The President's Management Agenda focuses on important challenges for the federal government. The items on the agenda are consistent in key aspects with the federal government's statutory framework of financial management, information technology, and results-oriented management reforms enacted during the 1990s. In crafting that framework, Congress sought to provide a basis for improving the federal government's effectiveness, financial condition, and operating performance. Moreover, I believe it is worth noting the clear linkages between the five governmentwide initiatives and the nine program-specific initiatives identified by the administration, and the high-risk areas and major management challenges that were covered in GAO's January 2001 Performance and Accountability Series and High-Risk Update. For example, we have designated strategic human capital management as a governmentwide high-risk area that presents a pervasive challenge throughout the federal government, and this is also one of the president's governmentwide initiatives. Our work has found strategic human capital management challenges in four key areas, which are: strategic human capital planning and organizational alignment; leadership continuity and succession planning; acquiring and developing staffs whose size, skills, and deployment meet agency needs; and creating results-oriented organizational cultures. In the area of improved financial performance, we have continued to point out that the federal government is a long way from successfully implementing the statutory reforms Congress enacted during the 1990s. Widespread financial management system weaknesses, poor recordkeeping and documentation, weak internal controls, and the lack of cost information have prevented the government from having the information needed to effectively and efficiently manage operations or accurately report a large portion of its assets, liabilities, and costs. 
Agencies need to take steps to continuously improve internal control and underlying financial and management information systems to ensure that managers and other decision makers have reliable, timely, and useful financial information to ensure accountability; measure, control, and manage costs; manage for results; and make timely and fully informed decisions about allocating limited resources. Another of the administration's initiatives is to integrate performance review with budget decisions, with a long-term goal of using information about program results in making decisions about which programs should continue and which to terminate or reform. The Office of Management and Budget (OMB) has changed the presentation of the president's budget to provide added focus on whether programs are effective, and a management focus is present throughout the budget document's discussions of the agencies. In our observations of agencies' efforts to implement the Government Performance and Results Act (GPRA) and the Chief Financial Officers Act, more agencies were able to show a direct link between expected performance, resources requested, and resources consumed. These linkages help promote agencywide performance management efforts and increase the need for reliable budget and financial data. However, our work has also shown that additional effort is needed to clearly describe the relationship between performance expectations, requested funding, and consumed resources. The uneven extent and pace of development should be seen in large measure as a reflection of the mission complexity and variety of operating environments across federal agencies. Describing the planned and actual use of resources in terms of measurable accurate results remains an essential action that will continue to require time and effort on the part of all agencies, working with OMB and Congress. The administration has identified areas where it believes the opportunity to improve performance is greater. 
However, as stated in the president's budget, "The marks that really matter will be those that record improvement, or lack of it, from these starting points." The administration has pledged to update the scores twice a year and to issue a mid-year report during the summer. Updates and future reports will be important in ensuring that progress continues as agencies attempt to improve their performance. It is key that rigorous criteria be applied to ensure that, in fact, progress has been made. According to the administration, the President's Management Agenda is a starting point for management reform. As such, we have drawn upon our wide-ranging work on federal management issues to identify elements that are particularly important in implementing and sustaining management improvement initiatives. These elements include: (1) demonstrate leadership and accountability for change, (2) integrate management improvement initiatives into programmatic decision making, (3) use thoughtful and rigorous planning to guide decisions, (4) involve and empower employees to build commitment and accountability, (5) align organizations to streamline operations and clarify accountability, and (6) maintain strong and continuing congressional involvement (which will be covered in the next section). These six elements have applicability for individual federal agencies, and the central management agencies, each of which plays a fundamental part in implementing reforms and improving federal government performance. One of the most important elements of successful management improvement initiatives is the demonstrated, sustained commitment of top leaders to change. Top leadership involvement and clear lines of accountability for making management improvements are critical to ensuring that the difficult changes that need to be made are effectively implemented throughout the organization. 
The unwavering commitment of top leadership in the agencies will be especially important to overcoming organizations' natural resistance to change, marshalling the resources needed in many cases to improve management, and building and maintaining the organizationwide commitment to new ways of doing business. Sustaining top leadership commitment to improvement is particularly challenging in the federal government because of the frequent turnover of senior agency political officials. As a result, sustaining improvement initiatives requires commitment by senior career executives, as well as political leaders. Career executives can help provide the long-term focus needed to institutionalize reforms that political executives' often more limited tenure does not permit. The Office of Personnel Management's (OPM) amended regulations that place increased emphasis on holding senior executives accountable for organizational goals provide an opportunity to reinforce leadership and accountability for management improvement. Specifically, the amended regulations require agencies to hold executives accountable for results; appraise executive performance on those results balanced against other dimensions, including customer satisfaction and employee perspectives; and use those results as the basis for performance awards and other personnel decisions. Agencies were to implement their policies for the senior executives for the appraisal cycles that began in 2001. Although the respective departments and agencies must have the primary responsibility and accountability to address their own issues, leaders of the central management agencies have the responsibility to keep everyone focused on the big picture by identifying the key issues across the government and ensuring that related efforts are complementary rather than duplicative. 
The top leadership of OMB, OPM, the General Services Administration (GSA), and the Department of the Treasury need to continue to be involved in developing and directing reform efforts and helping to provide the resources and expertise needed to further improve performance. To be successful, management improvement initiatives must be part of agencies' programs and day-to-day actions. Traditionally, the danger to any management reform is that it can become a hollow, paper-driven exercise in which management improvement initiatives are not integrated into the day-to-day activities of the organization. The administration has recognized this danger and encouraged agency leaders to take responsibility for improving the day-to-day management of the government. Integrating management issues with budgeting is absolutely critical for progress in government performance and management. Such integration is obviously important to ensuring that management initiatives obtain the resource commitments needed to be successful. More generally, however, the budget process is the only annual process we have in government in which programs and activities come up for regular review and reexamination. Integration also strengthens budget analysis by providing new tools to help analysts review the relative merits of competing agency claims and programs within the federal budget. The management issues in the president's agenda have both governmentwide and agency-specific components. Those aspects of the problem that are governmentwide and cut across agency boundaries demand crosscutting solutions as well. Interagency councils such as the President's Management Council, the Chief Financial Officers' Council, the Chief Information Officers' Council, the Human Resources Management Council, the President's Council on Integrity and Efficiency, and the Joint Financial Management Improvement Program can play central roles in addressing governmentwide management challenges. 
As I have noted in previous testimony, interagency councils provide a means to help foster communication across the executive branch, build commitment to reform efforts, tap talents that exist within agencies, focus attention on management issues, and initiate improvements. The magnitude of the challenges that many agencies face calls for thoughtful and rigorous planning to guide decisions about how to improve performance. We have found, for example, that annual performance plans that include precise and measurable goals for resolving mission-critical management problems are important to ensuring that agencies have the institutional capacity to achieve results-oriented programmatic goals. On the basis of our long experience examining agency-specific and governmentwide improvement efforts, we believe the improvement plans that agencies are to develop in conjunction with tracking their progress in achieving the goals of the President's Management Agenda should establish (1) clear goals and objectives for the improvement initiative, (2) the concrete management improvement steps that will be taken, (3) key milestones that will be used to track the implementation status, and (4) the cost and performance data that will be used to gauge overall progress in addressing the identified weaknesses. While agencies will have to undertake the bulk of the effort in addressing their respective management weaknesses, the improvements needed have important implications for the central management agencies as well. OMB, OPM, GSA, and Treasury will need to remain actively engaged throughout the planning and implementation of the president's initiatives to ensure that agencies bring to bear the resources and capabilities needed to make real progress. These four agencies, therefore, need to ensure that they have the capabilities in place to support and guide agencies' improvement efforts. 
These capabilities will be critical in helping agencies identify the root causes of their management challenges and pinpoint specific improvement actions, providing agencies with tools and additional support--including targeted investments where needed--to address shortcomings, and assisting agencies in monitoring and reporting progress. For example, OMB can assist agencies in developing and refining useful performance measures and in ensuring that performance information is used in deliberations and key decisions regarding agencies' programs. OPM can provide tools for agencies to use in better gauging the extent to which federal employees understand the link between their daily activities and agencies' results. In this regard, OPM has announced a major internal restructuring effort driven in large part by the need to provide better support and resources to agencies. Agencies can improve their performance through the way they treat and manage their people, building commitment and accountability by involving and empowering employees. All members of an organization must understand the rationale for making organizational and cultural changes because everyone has a stake in helping to shape and implement initiatives as part of agencies' efforts to meet current and future challenges. Allowing employees to bring their expertise and judgment to bear in meeting their responsibilities can help agencies capitalize on their employees' talents, leading to more effective and efficient operations and improved customer service. However, our most recent survey of federal managers found that at only one agency did more than half of the managers report that, to a great or very great extent, they had the decision-making authority they needed to help the agency accomplish its strategic goals. Effective changes can only be made and sustained through the cooperation of leaders, union representatives, and employees throughout the organization. 
We believe that agencies can improve their performance, enhance employees' morale and job satisfaction, and provide a working environment where employees have a better understanding of the goals and objectives of their organizations and how they are contributing to the results that American citizens want. In that regard, our work has identified six practices that agencies can consider as they seek to improve their operations and respond to the challenges they are facing. These are: demonstrating top leadership commitment; engaging employee unions; training employees to enhance their knowledge, skills, and abilities; using employee teams to help accomplish agency missions; involving employees in planning and sharing performance information; and delegating authorities to front-line employees. Successful management improvement efforts often entail organizational realignment to better achieve results and clarify accountability. Agencies will need to consider realigning their organizations in response to the initiatives in the President's Management Agenda. For example, as competitive sourcing, e-government, financial management, or other initiatives lead to changes in how an agency does business, agencies may need to change how they are organized to achieve results. In recent years, Congress has shown an interest in restructuring organizations to improve service delivery and program results and to address long-standing management weaknesses by providing authority and sharpening accountability for management. Most recently, Congress chartered the Transportation Security Administration in November 2001 and required: measurable goals to be outlined in a performance plan, with progress reported annually; an undersecretary who is responsible for aviation security, subject to a performance agreement, and entitled to a bonus based on performance; and a performance management system that includes goals for managers and employees. 
In implementing the President's Management Agenda, it will be important to ensure that information is available so that Congress, other interested parties, and the public can assess progress and help to identify solutions to enhance improvement efforts. As stated in the president's budget, "The Administration cannot improve the federal government's performance and accountability on its own. It is a shared responsibility that must involve the Congress." Therefore, transparency will be crucial in developing an effective approach to making needed changes. It will only be through the continued attention of Congress, the administration, and federal agencies that progress can be sustained and, more importantly, accelerated. Support from Congress has proven to be critical in sustaining interest in management initiatives over time. Congress has, in effect, served as the institutional champion for many of these initiatives, providing a consistent focus for oversight and reinforcement of important policies. Making pertinent and reliable information available will be necessary for Congress to be able to adequately assess agencies' progress and to ensure accountability for results. Key information to start with includes the agencies' improvement plans that are being developed to address the agencies' scores. Congress can use these improvement plans to engage agencies in discussions about the progress that is being made, additional steps that need to be taken, and additional actions Congress can take to help with improvement efforts. More generally, effective congressional oversight can help improve federal performance by examining the program structures agencies use to deliver products and services to ensure that the best, most cost-effective mix of strategies is in place to meet agency and national goals. 
As part of this oversight, Congress can identify agencies and programs that address similar missions and consider the associated policy and management implications of these crosscutting programs. This will present challenges to the traditional committee structures and processes. A continuing issue for Congress to consider is how best to focus on common results when mission areas and programs cut across committee jurisdictions. In summary, Mr. Chairman, serious and disciplined efforts are needed to improve the management and performance of federal agencies. Highlighting attention through the President's Management Agenda and the Executive Branch Management Scorecards are steps in the right direction. At the same time, it is well recognized that consistent progress in implementing these initiatives will be the key to achieving improved performance across the federal government. In implementing the President's Management Agenda, the elements highlighted in this testimony should be considered and adapted as appropriate, since experience has shown that when these elements are in place, lasting management reforms are more likely to be implemented and, ultimately, to lead to improvements. Finally, Congress must play a crucial role in helping develop and oversee management improvement efforts throughout the executive branch. Congress has proven to be critical in sustaining management reforms by monitoring implementation and providing the continuing attention necessary for management reform initiatives to be carried through to their successful completion. Mr. Chairman, we are pleased that you and your colleagues in Congress have often turned to GAO for assistance on federal management issues, and we look forward to continuing to assist Congress and agencies in this regard. We have issued a large body of reports, guides, and tools on issues directly relevant to the President's Management Agenda. 
We will be issuing additional such products in the future that should also prove helpful to Congress and agencies in improving federal management and performance. This concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have. For further contacts regarding this testimony, please contact J. Christopher Mihm at (202) 512-6806. Individuals making key contributions to this testimony included Jacqueline Nowicki, Susan Ragland, and Aonghas St Hilaire.
Federal agencies need to work with other governmental organizations, nongovernmental organizations, and the private sector, both domestically and internationally, to achieve results. By focusing on accountable, results-oriented management, the federal government can use this network to deliver economical, efficient, and effective programs and services to the American people. The administration's plan to use the Executive Branch Management Scorecard to highlight agencies' progress in achieving management and performance improvements outlined in the President's Management Agenda is a promising first step. However, many of the challenges facing the federal government are long-standing and complex and will require sustained attention. Using broad standards, the scorecards in the president's budget grade agencies on the following five governmentwide initiatives: (1) strategic management of human capital, (2) competitive sourcing, (3) improved financial performance, (4) expanded electronic government, and (5) budget and performance integration. These initiatives cannot be addressed in an isolated or piecemeal fashion separate from other management challenges and high-risk areas.
Breast cancer is the second leading cause of cancer deaths among American women. The American Cancer Society estimates that there will be 184,300 new cases of breast cancer diagnosed in U.S. women in 1996 and that 44,300 women will die from the disease. One in eight women will develop breast cancer during her lifetime. Breast cancer is generally classified into four main stages based on the size of the tumor and the spread of the cancer at the time of diagnosis. Mortality rates are strongly related to the stage of the disease at the time of detection. Stage I patients have an excellent chance of long-term survival, while stage IV (metastatic) breast cancer is usually fatal. A wide variety of treatments exists for breast cancer patients, including surgery, chemotherapy, radiation therapy, and hormone therapy. The particular treatments used depend on the stage and characteristics of the cancer and other aspects of the patient and her health. ABMT is a therapy that allows a patient to receive much higher dosages of chemotherapy than would ordinarily be possible. Because high-dose chemotherapy is toxic to the bone marrow (which supports the immune system), methods have been developed for restoring the bone marrow by reinfusing stem cells (the bone marrow cells that mature into blood cells) taken from the patient before chemotherapy. Stem cells are removed from the patient's blood or bone marrow, then concentrated, frozen, and sometimes purged in an attempt to remove any cancerous cells. The patient then undergoes chemotherapy at dosages 2 to 10 times the standard dosage. To restore the ability to produce normal blood cells and fight infections, the patient's concentrated stem cells are thawed and reinfused after chemotherapy. When the transplant is done from the blood rather than the bone marrow, the procedure is often referred to as peripheral blood stem cell transplantation. ABMT is an expensive treatment, although the cost per patient has been falling in recent years. 
Aside from financial costs, the treatment is usually very unpleasant for the patient and may pose significant risks. The high doses of chemotherapy are very toxic, leading to treatment-related morbidity and mortality rates that, while declining, are still higher than for conventional chemotherapy. There may also be problems in restoring the patient's ability to produce normal blood cells and thereby fight infections. ABMT is being evaluated in the treatment of a number of types of cancer other than breast cancer and is considered standard therapy for treating certain types of leukemia and lymphoma under certain conditions. Many clinical trials have been conducted to assess ABMT for breast cancer, but most of these studies have been phase I and phase II trials, which most experts agree have been of limited use in firmly establishing the effectiveness of ABMT compared with conventional therapy. NCI is currently sponsoring three randomized clinical trials that seek to determine whether ABMT is better than current standard therapy in comparable breast cancer patients. These trials seek to ultimately involve a total of about 2,000 women at more than 70 institutions around the country. Although most experts believe the clinical research has not yet established that ABMT is superior to conventional therapy, and for which patients, insurance coverage of the treatment has become relatively common and use of the treatment is diffusing rapidly. According to the Autologous Blood and Marrow Transplant Registry-North America, the number of breast cancer patients receiving ABMT has increased rapidly, growing from an estimated 522 in 1989 to an estimated 4,000 in 1994. About one-third of all ABMTs reported to the Registry in 1992 were for breast cancer, making it the most common cancer being treated with this therapy. 
The Registry reports that although the treatment is most commonly used in women with advanced disease, there is a growing trend to use it more frequently on patients with earlier stages of breast cancer. There has also been a dramatic increase in the number of patients undergoing this treatment in Europe. Many insurers, including some of the nation's largest, now routinely cover ABMT for breast cancer both inside and outside of clinical trials, although some still deny coverage for the treatment because they consider it experimental. One study looked at 533 breast cancer patients in clinical trials who requested coverage for ABMT from 1989 through 1992. It found that 77 percent of them received approval for coverage of the treatment after their initial request. We reviewed the current medical literature and spoke with several leading oncologists and technology assessment experts regarding ABMT for breast cancer. While there were differences of opinion, the consensus of most of the experts and the literature was that current data indicate ABMT may be beneficial for some breast cancer patients but that there is not yet enough information to establish that it is more effective than standard chemotherapy. The medical literature includes several studies showing longer periods before relapse and improved survival for some poor-prognosis, high-risk breast cancer patients receiving ABMT rather than conventional therapy. However, it is unclear whether the superior outcomes of patients receiving ABMT in these studies were the result of the treatment itself or the result of bias caused by the selection of patients chosen to receive the treatment. Most of the medical literature and nearly all of the experts we spoke with said that the current data are not yet sufficient to make definitive conclusions about the effectiveness of ABMT and about which groups of breast cancer patients would be most likely to benefit. 
Although there are wide differences of opinion about the appropriate use of ABMT, nearly all sides of the debate agree that the results of randomized clinical trials are needed to provide definitive data on the treatment's effectiveness. Several studies have reviewed and analyzed the extensive medical literature related to ABMT for breast cancer. In 1995, ECRI, an independent, nonprofit technology assessment organization, published an analysis stating that the weight of the evidence in the medical literature did not indicate greater overall survival for metastatic breast cancer patients receiving ABMT compared with conventional therapy. The Blue Cross and Blue Shield Association's Technology Evaluation Center, after reviewing the available data in 1994, concluded that the evidence was not yet sufficient to draw conclusions about the effectiveness of ABMT compared with conventional therapy for breast cancer patients. Similarly, NCI, at a congressional hearing, said that while ABMT has shown promise in some clinical studies, the results of the NCI randomized clinical trials were needed before conclusions could be reached about whether and for whom the treatment is more beneficial than conventional therapy. We interviewed the medical director, or another official who makes coverage decisions, at 12 U.S. health insurance companies. We discussed the insurer's coverage policies and the factors that influenced their coverage policy with regard to ABMT for breast cancer. The insurers' coverage policies regarding ABMT for breast cancer reflected some incongruity. In general, the insurers said they did not normally cover experimental or unproven treatments and that they believed ABMT for breast cancer fell into this category. Yet, with some restrictions, all 12 insurers nonetheless covered ABMT for breast cancer with only one requiring that patients enroll in clinical trials. 
In explaining this, most cited as the primary influence the fact that although until recently the treatment had not been tested in randomized trials, it has become widely used and that the existing research suggests it may be beneficial to certain patients. But insurers told us that a variety of nonclinical factors also strongly influenced their coverage policy, such as the threat of litigation, public relations concerns, and government mandates. All health insurers must decide whether and when they will cover a new or experimental treatment. To do this, they engage in some form of technology assessment, a process that seeks to assess the safety and effectiveness of a medical technology based on the best available information. For the most part, health insurers do not gather primary data but, rather, rely heavily on peer-reviewed medical literature and on the assessment of experts inside and outside of their companies. Some large health insurers have elaborate technology assessment units. One example is the Technology Evaluation Center, a collaboration of the Blue Cross and Blue Shield Association and Kaiser Permanente. The Center's staff includes physicians, research scientists, and other experts who review and synthesize existing scientific evidence to assess the safety and efficacy of specific medical technologies. The Center has published assessments for over 200 technologies since 1985, including several for ABMT for breast cancer. Other large insurers, including Aetna and Prudential, also have special programs that do formal assessments of specific technologies. Smaller insurers also do technology assessment, but on a smaller scale; for instance, they may have a small office that does literature searches or reviews the findings of larger technology assessment organizations. Using their assessments, insurers then decide whether they will cover a particular treatment and under what conditions. 
Whatever the overall policy, costly and complicated procedures may require special preapproval before they are covered. Among the insurers we spoke with, preapproval for ABMT was generally required by the office of the medical director or some other office that reviews claims for medical appropriateness. They said they wanted to ensure that a case meets any coverage restrictions and that ABMT is medically appropriate for that particular patient. For certain difficult cases, some insurers also use an outside panel of experts, serving as a mediation service, to determine whether ABMT is the appropriate treatment. Seven of the 12 insurers we spoke with explicitly characterized ABMT for breast cancer as experimental. Four others did not specifically term the treatment "experimental" but nonetheless said that ABMT for breast cancer should not yet be considered standard therapy since its effectiveness over conventional therapy had not yet been proven. One insurer did not express an opinion on the issue. Yet while the insurers said they typically do not cover experimental therapies, many said that in this case there was enough preliminary evidence that ABMT may be effective to justify covering it. Seven of the 12 insurers cited the clinical evidence as one of the primary reasons that they decided to cover ABMT. These insurers said that the existing data indicate that ABMT may hold promise for certain breast cancer patients and that flexibility was needed in paying for experimental treatments for seriously or terminally ill patients. Two insurers also said that they cover ABMT for breast cancer since, although its efficacy has not been established, it has become generally accepted medical practice in that it has become a common treatment for breast cancer throughout the United States and is covered by many other insurers. They said they would receive pressure from their beneficiaries if they were to deny coverage for a treatment that other insurers cover. 
While the medical evidence was an important factor in the coverage policy of a majority of the insurers, other factors were also clearly at work, with the threat of litigation being among the most important. When an insurer refuses to pay for a treatment requested by the patient or the patient's physician, coverage may ultimately be decided in the court system. Over the past several years, many breast cancer patients have sued their insurers after being denied coverage for ABMT. Nine of the 12 insurers that we spoke with specifically mentioned litigation, or the threat of litigation, as a factor in their ABMT coverage policy. For five of these insurers, legal concerns were characterized as among the most important reasons for choosing to cover ABMT for breast cancer. Before changing their policies to cover ABMT for breast cancer, six of the insurers we spoke with had been sued after denying coverage for the treatment. Overall, the insurers had not been very successful in these cases and had often either settled before judgment was rendered or had a judgment rendered against them. The insurers who had been sued on the issue said the financial costs of legal fees, settlements, and damages were high. For the most part, the insurers said they found different courts to be widely inconsistent in ruling whether ABMT is experimental and should be covered, a point also made in reviews of case law on the issue. In addition to the financial costs, insurers said the lawsuits were harmful to their public relations. Publicity of their coverage policy led to the impression that they were denying a gravely ill patient a beneficial therapy for economic reasons. The insurers we spoke with no longer face many lawsuits on the issue since they now generally cover ABMT. 
Court decisions on health insurance coverage disputes have usually turned on the language of the insurance contracts, which generally bar coverage for experimental treatments but are often ambiguous with regard to what is defined as "experimental." A recent review of such litigation noted that state courts have tended to favor policyholders in these coverage disputes, although federal courts, where disputes for self-insured companies are often decided, have been split on whether insurers must cover ABMT for breast cancer. The courts, in ruling whether an insurer must provide coverage for ABMT for breast cancer, have based their decisions on a number of factors. These have included whether ABMT is generally accepted in the medical community for the treatment of breast cancer, whether "experimental treatment" is defined clearly in the insurance policy, whether the treatment was intended primarily to benefit the patient or to further medical research, and whether the insurer's denial of coverage was influenced by its own economic self-interest. This last argument was the focus of Fox v. Health Net of California, a highly publicized case in which a California jury awarded $89 million in damages to a policyholder whose deceased wife had been denied coverage of ABMT for breast cancer. Plaintiffs in a number of recent cases have alleged that denial of coverage for ABMT constitutes discrimination against women in violation of civil rights laws or discrimination against a specific disease in violation of the Americans With Disabilities Act. Most of these cases are still pending. Insurers have had some success in court as well. Some state courts have ruled that ABMT is still widely considered to be experimental and that the health insurance contract clearly precluded coverage of experimental treatments. Courts in at least three federal circuits have also upheld insurers' coverage denials for ABMT to treat breast cancer. 
Courts in many of these cases permitted insurers wide discretion in making coverage decisions as long as the decisions were not arbitrary or capricious. The controversy over access to ABMT for breast cancer patients has led several states to propose or enact legislation regarding insurance coverage of the treatment. As of June 1995, at least seven states had enacted legislation that, under certain parameters, requires that insurers provide coverage for ABMT for breast cancer. At least seven additional states have similar legislation pending. Some of these laws are mandates requiring that coverage of ABMT for breast cancer be part of any basic package of health insurance. Other laws simply require that the treatment be made available as a coverage option, at perhaps a higher premium. The laws in six of the states require coverage whether or not the patient is enrolled in a clinical trial, while one state requires patients with certain types of breast cancer to join well-designed randomized or nonrandomized trials. Three of the 12 insurers we spoke with said they were required by a state mandate to cover ABMT for breast cancer for most of their beneficiaries. One of these three said it would not cover the treatment if it were not for the mandate. Those who advocate passage of the state laws argue that they are necessary to make a promising therapy available to breast cancer patients. Among the arguments used is that insurers classified ABMT for breast cancer as "experimental" as much for economic as medical reasons because ABMT is an expensive treatment. Insurers respond that ABMT for breast cancer is an experimental treatment still being evaluated in clinical trials and they should not be in the business of paying for research. Furthermore, insurers say that legislation mandating coverage of specific treatments is a poor way to make medical policy and that it distorts the market because self-funded plans are exempt from state mandates. 
The National Association of Insurance Commissioners (NAIC) is considering a model act for states that would set minimum standards of coverage for health insurers. The model act, which has not yet been approved by the full NAIC membership, would require insurers to cover an experimental treatment if the peer-reviewed medical literature has established that the treatment is an effective alternative to conventional treatment. A representative from NAIC told us that in a state that passed such an act, insurers would normally be required to cover ABMT for breast cancer if the treating physician considered it the medically appropriate treatment. Programs such as Medicaid, Medicare, and the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) have varying policies regarding coverage of ABMT for breast cancer. Coverage criteria for Medicaid, a jointly financed federal and state program that provides medical care to the poor, vary by state, but some states' Medicaid programs will cover ABMT for breast cancer under at least some circumstances. Of nine state Medicaid programs we contacted, five provided coverage for ABMT for breast cancer. The Medicare program, which provides health coverage primarily for the elderly, specifically excludes ABMT coverage for solid tumors such as breast cancer because the Health Care Financing Administration, which administers the Medicare program, considers the treatment experimental. The practical impact of the Medicare policy is limited since the elderly are not normally appropriate candidates for ABMT treatment. CHAMPUS, the Department of Defense's health care program for active duty and retired military personnel, and their dependents and survivors, considers ABMT for breast cancer experimental but provides coverage through a demonstration project in which beneficiaries may receive ABMT by enrolling in one of three NCI randomized clinical trials. 
The Federal Employees Health Benefits Program (FEHBP), run by OPM, provides health insurance coverage for over 9 million federal employees, retirees, and dependents through over 300 independent health plans. In September 1994, OPM imposed a requirement that participating health insurers must cover ABMT for breast cancer for all FEHBP beneficiaries both in and outside of clinical trials. OPM acknowledged to us that the evidence is mixed on the effectiveness of ABMT for breast cancer. They said they decided to mandate coverage largely because so many insurers were already covering the procedure and they wanted to make the benefit uniform across all of their carriers. Insurers we spoke with said they complied with the OPM mandate, although they criticized the mandate as a political rather than clinical decision. Two of the 12 insurers we spoke with specifically mentioned the OPM decision as having influenced their own coverage policy, largely because it brought so much publicity to the issue. Medical experts, insurers, and others have debated whether ABMT has become too widely used before there is convincing evidence of its efficacy. While the medical community seeks to learn whether ABMT is more effective for some breast cancer patients than conventional chemotherapy, the number of patients receiving the treatment and the number of facilities providing it continue to grow. If ABMT were a new drug, it would be restricted mostly to patients on clinical trials until its efficacy were established and the Food and Drug Administration (FDA) had approved its use in general medical practice. Yet because ABMT is a procedure, rather than a drug, it does not require approval from FDA, making it easier for it to be widely used while its effectiveness is still being tested in clinical trials. The rapid diffusion of ABMT for breast cancer has implications for patient care, health care costs, and research. 
There is debate over whether patients benefit from the rapid diffusion of a new technology that is still being tested in clinical trials. In the case of ABMT, the high doses of chemotherapy administered in conjunction with the treatment can make it a particularly difficult treatment for patients. This is evidenced both by the extreme sickness and side effects that patients may experience and by the higher rate of treatment mortality for ABMT than for conventional chemotherapy. If the clinical research ultimately shows ABMT to be preferable to conventional therapy for some groups of patients, then some of those patients will have benefited from the early diffusion of this technology. If it is shown not to be more effective, however, or if it is shown to be effective for a much smaller subset of patients than are currently being treated with the therapy, then many patients will have been unnecessarily subjected to an aggressive treatment that can be risky and produce many severe side effects. In addition, while ABMT formerly was available only at a select number of cancer research centers across the country, it is now being performed by a rapidly growing number of smaller hospitals and bone marrow transplant centers. Many physicians we talked with, including researchers and insurance company medical directors, expressed concerns that there may be some facilities that perform too few transplants to ensure sufficient staff expertise or that do not have the infrastructure needed to support this complicated procedure. Partly to address these concerns, several medical societies have developed guidelines that set out specific criteria for facilities that perform bone marrow transplants. ABMT is an expensive treatment, costing anywhere from $80,000 to over $150,000 per treatment, depending on the drugs used, any medical complications, and the length of hospital stay required. Conventional chemotherapy, by contrast, typically costs between about $15,000 and $40,000. 
The cost of ABMT has been decreasing over the years and is expected to decrease further as the technology is refined and becomes more common. Some medical centers have already been able to reduce the cost of the procedure by offering the treatment on more of an outpatient basis. While the cost per individual treatment is likely to decrease, total spending nationwide on the procedure is likely to increase. More patients in different stages of breast cancer are being treated with ABMT, a trend that is expected to continue. The fact that ABMT can be a highly profitable procedure for the institution that performs it, many experts say, has created further incentive for the diffusion of the treatment. Virtually all sides of the debate agree that ABMT is worth the cost if it is shown to be the best available treatment. But some worry that the research has not yet established which breast cancer patients, if any, are likely to benefit from ABMT and that the rapid diffusion of this costly treatment outside of research settings before its effectiveness has been proven may not be the best use of health care resources. There is clear consensus among the scientific community that, if possible, the best way to compare the effectiveness of a new treatment with conventional treatment is through randomized clinical trials. A randomized trial assigns patients either to a control group receiving conventional treatment or to one or more experimental groups receiving the treatment being tested. Random allocation helps ensure that differences in the outcome of the groups can be attributed to differences in the treatment and not differences in patient characteristics. In the case of ABMT, some experts have argued that early research showing favorable results for ABMT may have been due to the fact that the breast cancer patients receiving ABMT had more favorable characteristics than those who were not receiving the treatment. 
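The random-allocation step described above can be sketched in a few lines of Python (a hypothetical illustration only; the patient identifiers, arm names, and `randomize` function are invented for this example and are not part of any trial protocol):

```python
import random

def randomize(patients, arms=("ABMT", "conventional"), seed=None):
    """Randomly allocate patients to treatment arms in roughly equal numbers.

    Shuffling the enrollment list before round-robin assignment helps ensure
    that arm membership is independent of patient characteristics such as
    prognosis -- the property that makes outcome differences attributable
    to the treatment rather than to the patients.
    """
    rng = random.Random(seed)
    shuffled = list(patients)
    rng.shuffle(shuffled)
    return {p: arms[i % len(arms)] for i, p in enumerate(shuffled)}

# Hypothetical example: six enrollees split evenly between the two arms.
assignment = randomize(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

With an even number of enrollees, the round-robin step guarantees equal arm sizes, while the shuffle supplies the randomness.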
NCI has three large-scale randomized clinical trials ongoing to compare ABMT with conventional therapy for breast cancer. These trials randomly assign patients who fit certain criteria either to an experimental group that receives ABMT or to a control group that instead receives a more conventional form of therapy. NCI has had difficulty accruing enough patients to its randomized trials. Two of the three ongoing NCI trials are accruing patients at about half the rate researchers originally anticipated, and a fourth trial was closed because of low enrollment. NCI expanded the enrollment goal of the third trial to improve the statistical power of the results, and results from all three trials are not expected until nearly the turn of the century. NCI says patient accrual to the trials, although slow, appears to be progressing adequately, but many experts we spoke with questioned whether the NCI trials will ever be completed as planned. Many medical experts believe that the wide availability of the treatment is one reason researchers are having problems accruing patients to the randomized trials. ABMT is now widely available to many breast cancer patients either through other clinical trials or outside of a research trial. Under most circumstances, insurers that cover ABMT do not require that the patient enter a randomized trial, and many patients are reluctant to do so. Patients who believe ABMT is their best hope for survival may not be willing to enter a trial where they may be randomly assigned to a group receiving conventional chemotherapy. The ABMT Registry estimates that only about 5 percent of all breast cancer patients receiving ABMT are enrolled in the randomized clinical trials. 
Proponents of ABMT that we spoke with pointed out that most procedures in common medical practice today have not been subjected to the strict scrutiny of randomized trials and that this potentially lifesaving therapy should not be withheld until the NCI trials are completed many years from now. Other medical experts, insurers, and patient advocates we spoke with said that ABMT for breast cancer should only be available to patients enrolled in clinical trials, possibly only randomized trials. They argued that the proliferation of ABMT outside of randomized trials--or outside of any research setting at all--is making it difficult to gather the data necessary to assess whether and for whom ABMT may be a beneficial treatment. A large number of clinical trials are being conducted on ABMT for breast cancer apart from the NCI randomized trials. Many major cancer research centers are conducting nonrandomized trials, and numerous clinical trials are also under way at smaller hospitals and private transplant centers. Yet some experts have argued that many of these trials will contribute little useful information because the study population is too small, the trial is not sufficiently well designed, or the results will not be published. These experts are concerned that the proliferation of smaller clinical trials may be diverting patients from larger clinical trials, including the NCI randomized clinical trials, that are more likely to yield meaningful results about the effectiveness of ABMT for breast cancer. The controversy over ABMT has also highlighted the issue of the extent to which health insurers should pay for the costs of clinical research. Clinical research in the United States has been financed primarily by the federal government, private research institutions, the pharmaceutical industry, and insurers. Insurers have often paid the patient care costs for certain clinical trials. 
But given federal funding constraints and other economic pressures, many researchers and other experts we spoke with believe that health insurers should assume the costs of more clinical trials, especially the patient care costs of well-designed trials that offer promising treatments in an advanced stage of testing. They say the insurers would have to pay for patient care costs even if the patient were not in a trial and that the trials will ultimately benefit everyone by helping identify effective treatments. The insurance industry's position has been that insurers should pay only for standard medical care and that insurers should not be in the business of financing research. But insurers have made exceptions, especially for clinical trials involving promising treatments for patients with terminal illnesses. Many insurance industry officials we spoke with said they would be open to paying the costs of some clinical trials for promising treatments, as long as the costs were to be spread equitably among all insurers and health providers, and as long as there were strict standards to ensure that the research being funded was of high quality. The controversy over insurance coverage of ABMT for breast cancer illustrates several issues related to the dissemination and insurance coverage of new technologies. The rapid diffusion of new, often expensive, medical technologies puts in conflict several goals of the U.S. health care system: access to the best available care, the ability to control health care costs, and the ability to conduct research adequate to assess the efficacy of a new treatment. Specifically, the ABMT controversy illustrates the challenge health insurers in the United States face in determining whether and when to provide coverage for a new technology of unknown efficacy, given the decentralized process for assessing new medical technologies. 
Insurers have less clear direction regarding coverage of medical procedures than they do for drugs because of FDA's role in drug approval. Insurers thus have wide discretion, and little nationwide guidance, in determining whether and when a medical procedure should no longer be considered "experimental" and should be covered. The result can be great disparity in the coverage policies of insurers, with coverage decisions being influenced not just by the medical data and clinical judgments, but also by factors such as lawsuits and public relations concerns. Furthermore, the lack of a systematic process for the dissemination of new technologies in the United States raises issues for the health care system. Those who advocate widespread access to experimental technologies argue that patients should not be denied access to promising therapies, especially when clinical trials for those therapies may take many years. Those who advocate restricting access to new technologies argue that the rapid diffusion of a new treatment before its effectiveness has been definitively proven is not ultimately beneficial to patient care, may waste resources, and may impede controlled research on the treatment. NIH provided us with comments on a draft of this report. They agreed with the conclusions and stated that the report presented a balanced, thoughtful discussion of the controversial issues. NIH also noted that in the past, many insurers provided coverage only in the context of clinical trials, but this became untenable because of the factors discussed in the report, particularly the OPM decision to require FEHBP coverage of the treatment both inside and outside of clinical trials. NIH also recommended some technical changes, which we incorporated in the report where appropriate. (See app. I for a copy of the NIH comments.) 
OPM also reviewed the draft report and provided comments regarding the decision to require that all FEHBP health insurance plans provide coverage for ABMT for breast cancer. Their comments reemphasized that (1) many FEHBP plans were already providing this coverage; (2) the OPM decision was based on a desire to broaden coverage to all FEHBP enrollees; and (3) each plan retains the flexibility to determine when and how the treatment will be covered, but plans that limit coverage to patients enrolled in clinical trials have to offer coverage in nonrandomized as well as randomized trials. (See app. II for a copy of OPM's comments.) As agreed with your office, unless you release its contents earlier, we plan no further distribution of this report for 30 days. At that time, we will send copies to other congressional committees and members with an interest in this matter; the Secretary of Health and Human Services; the Director, NIH; and the Director, OPM. This report was prepared by William Reis, Assistant Director; Joan Mahagan; and Jason Bromberg under the direction of Mark Nadel, Associate Director. Please contact me on (202) 512-7119 or Mr. Reis on (617) 565-7488 if you or your staff have any questions on this report. 
Pursuant to a congressional request, GAO reviewed insurance coverage of autologous bone marrow transplantation (ABMT) for breast cancer, focusing on: (1) the factors insurers consider when deciding whether to cover the treatment; (2) the effectiveness of the treatment; and (3) the consequences of the increased use and insurance coverage of the treatment while it is still in clinical trials. GAO found that: (1) the use of ABMT has become widespread and many insurers cover ABMT; (2) sufficient data do not exist to establish that ABMT is more effective than traditional chemotherapy; (3) despite the lack of data, many insurers cover ABMT because the research results of its effectiveness are promising, its use is widespread, and they fear costly litigation battles with their customers; (4) as of June 1995, seven states had enacted laws that mandate insurance coverage for ABMT and seven other states had similar laws pending; (5) among the federally funded health insurance programs, Medicaid coverage for ABMT varies by state, Medicare does not cover ABMT for solid tumors such as breast cancer, and the Civilian Health and Medical Program of the Uniformed Services covers ABMT through a demonstration project in which beneficiaries may receive the treatment by enrolling in a randomized clinical trial; and (6) the widespread use of ABMT prior to conclusive data about its effectiveness may jeopardize patients unresponsive to the treatment, raise health care costs, and deter participation in randomized clinical trials.
Customs' responsibility includes (1) enforcing the laws governing the flow of goods and persons across the borders of the United States and (2) assessing and collecting duties, taxes, and fees on imported merchandise. To speed the processing of imports and improve compliance with trade laws, the Congress in 1993 enacted legislation that enabled Customs to streamline import processing through automation. The legislation also eliminated certain legislatively mandated paper requirements, allowing Customs to move from a paper-intensive to an automated import environment. Further, it required Customs to establish NCAP and specified critical functions that this program must provide, including the ability to electronically file import entries at remote locations and process drawback claims. In response to the authorizing legislation, Customs launched a major initiative in 1994 to reorganize the agency, streamline operations, and modernize the automated systems that support operations. In the process, Customs identified its core business processes as trade compliance (imports), outbound goods (exports), and passengers. In 1992, prior to redesigning its operations, Customs decided to move from centralized to distributive computing and selected a suite of hardware, software, and telecommunications products to enable it to do so. Customs refers to its effort to move to decentralized computing using these products as the Customs Distributed Computing for the Year 2000 (CDC-2000) project. The agency plans to implement ACE and its other modernized systems applications on these products. According to Customs, as of October 1, 1995, it had spent $63 million purchasing these products including upgrading its personal computers, installing local area networks, and acquiring minicomputers and related peripherals. Although no detailed analysis has been prepared, the CDC-2000 project director estimated that when completed, total purchases could reach $500 million. 
In January 1995, Customs hired Gartner Group Consulting Services to review the adequacy of this approach, and the contractor issued its report in April 1995. About this same time, Customs engaged another contractor--IBM Consulting Group--to determine whether the agency was technically capable of developing ACE. IBM reported its findings in February 1995. Customs' strategy for implementing NCAP consists of three initiatives. First, Customs is redesigning the import process to better meet customer needs and improve operational efficiency and effectiveness. In doing so, the agency identified and prioritized the needs of its internal and external customers involved in import processing. Using this information, Customs determined how the new import process will work and is testing this new process at selected ports of entry. Customs plans to complete the definition of its redesigned import process by September 1997. Second, Customs is developing its new automated import processing system (ACE) applications to support the new import process and comply with NCAP-mandated functions. Customs is in the early stages of system development. Specifically, the agency has recently issued user requirements and is in the process of determining functional requirements. Customs estimates that when completed, the system will cost $125 million over its 10-year planned life. As of March 1996, Customs had spent $25 million on ACE. Customs plans to begin deploying ACE in October 1998. Finally, until ACE is deployed, Customs plans to enhance its existing import processing system--the Automated Commercial System--which operates in the existing centralized computing environment, to provide selected NCAP-mandated functions critical to meeting agency and trade community needs. For example, Customs is modifying this system to allow importers to file documentation at a port of entry other than where the goods are to arrive or be examined. 
Rather than wait for this function to be deployed with ACE, Customs plans to add this function to (1) facilitate inspections and import processing and (2) reduce the importers' administrative burden by eliminating the need to have importer staff at the port of entry. Customs is currently testing this capability with seven importers at selected locations. Customs is also enhancing its current Automated Commercial System to provide electronic filing capabilities for drawback claims. To date, Customs has modified the system to enable electronic (1) filing of such claims by the trade community and (2) comparison of key information on drawback claims to the original import entries. Customs also plans to improve its controls over duplicate and excessive drawback payments, which we previously noted were a problem, by enhancing this system to maintain a cumulative record of drawback amounts paid against individual line items on import entries. This enhancement is scheduled to be completed by October 1997. In implementing its NCAP strategy, Customs has not adhered to strategic information management best practices that help organizations (1) mitigate the risks associated with modernizing automated systems and (2) better position themselves to achieve success. Specifically, Customs did not (1) conduct the requisite analyses (e.g., cost-benefit, feasibility, alternatives) before committing to the CDC-2000 project, (2) redesign its import and other business processes before the agency selected the hardware for ACE and other systems, (3) manage ACE as an investment, and (4) designate strict accountability for ensuring that it successfully incorporates all NCAP-mandated functions into the agency's modernization effort. Organizations that have successfully modernized operations and systems use a structured approach to identify the architecture that most efficiently and effectively meets their information needs. First, they redesign their old business processes. 
Then they analyze the new processes to identify (1) the information needs of the entire organization and (2) alternative ways of meeting them, including consideration of costs and benefits. Finally, the organizations use this analysis to select an optimal businesswide configuration, which specifies where and how processing will occur and identifies the hardware, software, telecommunications, and other elements needed to support new automated systems. This configuration is commonly referred to as an architecture and serves as a guide for modernizing automated systems. Organizations that do not follow this disciplined approach risk (1) automating the wrong processes and (2) developing systems that do not function well or that cannot be readily integrated with other systems. Consequently, the agency may develop systems that do not enhance the agency's mission performance or that reach only a fraction of their potential to do so. However, Customs selected its CDC-2000 approach for ACE and other systems without using this disciplined approach. Specifically, the agency began buying minicomputers, software, and other equipment to support decentralized processing in 1993, but did not start to redesign its first critical business process (imports) until late 1994 and the other two processes (passenger, exports) until January and August 1995. In addition, Customs does not plan to complete these redesign efforts until September 1997, October 1996, and December 1996, respectively. In formulating the CDC-2000 project, Customs did not identify the information needs of the entire organization and consider alternative ways of meeting them as well as the respective costs and benefits. These shortcomings were also reported by Gartner. In this regard, the contractor stated that Customs' selected products were primarily a "buy list" and were largely identified without taking into consideration the information needs of agency processes and systems. 
While Gartner stated that "the CDC-2000 architecture is, in general, valid and reasonable," Gartner recommended that Customs use a disciplined approach to fully identify its needs and only then select products to meet those needs. Customs officials said they had selected the products included in the CDC-2000 initiative before the import process was redesigned because they needed to move from their current centralized system to decentralized processing and believed that the products selected would meet any future system needs. They also said that, at the time of selection, they did not believe a rigorous supporting analysis was needed because the products chosen were widely used by industry. Further, although CDC-2000 was adopted over 4 years ago, Customs does not believe it has wasted its time and resources because, according to the agency, only $4 million of the $63 million CDC-2000 funds spent to date have been used to buy minicomputers, software, and other equipment to support decentralized processing. Customs officials noted that, to date, $59 million has been used to upgrade and install personal computers and local area networks, which needed to be acquired regardless of the architecture that was ultimately formulated. We recognize Customs' need to improve office automation using personal computers and local area networks. However, Customs' rationale for purchasing minicomputers, software, and other equipment is based on several faulty assertions. First, Customs risks wasting hundreds of millions of dollars it plans to spend in the future on the CDC-2000 project should it continue purchasing hardware and software to support decentralized processing without conducting a thorough analysis. Second, while decentralized processing and the products Customs selected may be widely used, this has no bearing on whether they are a cost-effective approach to meeting Customs' needs. 
Further, since the agency does not yet know how it plans to conduct its business in the future or what automated systems would best support these new business processes, it is in no position to commit to CDC-2000. Third, the Federal Information Resources Management Regulation and Office of Management and Budget Circular A-130 require thorough analyses to justify major systems efforts such as CDC-2000. Finally, best practice organizations have learned that using a structured approach can help them effectively use resources and lead to order-of-magnitude gains in productivity. Successful organizations manage information system projects as investments rather than expenses. This includes (1) creating an investment review board of senior program and automated systems managers to select, monitor, and evaluate system projects, (2) establishing explicit criteria to assess the merits of each project relative to others, including the use of cost, benefit, and risk analyses, and (3) following structured systems development methodologies throughout the system's life. Such disciplined control processes are required by the Office of Management and Budget to help federal agencies decide which planned systems are worthwhile investments and ensure that the risks associated with building those systems are adequately controlled. Although its annual automated systems expenditures total about $150 million, Customs does not manage ACE and its other systems as investments. First, while Customs has a systems steering committee, composed of senior officials who meet periodically to monitor automation projects such as ACE, the committee functions primarily as a sounding board that addresses concerns raised by project managers as well as committee members rather than as an investment review board. For example, the committee has not developed explicit decision criteria to assess mission cost, benefits, and risk of both ongoing and planned projects. 
Instead, the committee makes decisions on ACE and other systems, including Automated Commercial System enhancements, without considering such critical information as the merits of each project relative to others, how well these systems will contribute to improving mission performance, whether their value will exceed their cost, and how likely they are to succeed. Customs officials acknowledged the steering committee's shortcomings and told us that, while they had initiated an effort in January 1995 to redefine the steering committee's role, including managing systems as investments, not much progress has been made since then. Customs' Deputy Commissioner said he intends to restart efforts to establish an investment subcommittee under the steering committee but has not established a target date to do so. Second, although Customs' system development policies require cost-benefit analyses to be performed prior to developing critical and costly systems, we found that Customs had not performed such analyses for ACE and the CDC-2000 project. Gartner and IBM also reported that such analyses were lacking. In this regard, Gartner stated that Customs needed to assess the costs and benefits of CDC-2000 because (1) the agency had only a limited understanding of what it will ultimately cost and (2) if Customs waited much longer, the cost of purchases of selected products could mushroom beyond the agency's ability to control it. Similarly, IBM stated that to be successful with ACE, Customs needed to identify and continuously monitor the costs and benefits of this system. Customs officials told us they recognize that until the agency conducts these analyses, it will not know whether these major system investments are worthwhile. In response to these findings, Customs hired contractors to help perform these analyses, but it continues to develop ACE on CDC-2000 hardware and plans to continue making CDC-2000 purchases. These analyses are scheduled to be completed by July 1996. 
Third, in developing ACE, Customs also skipped or has not completed other required system development steps necessary to control development risks. Specifically, Customs has not resolved how to incorporate into ACE critical functions mandated over 2 years ago in NCAP. These functions include reconciling adjustments to importers' duties and processing drawback claims. It also did not prepare a security plan, although Customs has had problems in the past implementing effective internal controls to protect systems and data. Customs officials acknowledged that, given where they are in the ACE development process, they should have determined how to deliver NCAP-mandated functions and completed their security plan. In addition, they told us that it is their intention to complete the security plan in July 1996 and update the user requirements in June 1996. Assigning clear accountability and responsibility for information management decisions and results is another important practice identified by successful organizations. As we pointed out in our January 1995 testimony on Customs' plan to modernize the agency, Customs is in the midst of a major reorganization and during this time of change, it needs to clarify roles and responsibilities to reinforce accountability and facilitate mission success. We found, however, that clear accountability for meeting NCAP requirements is lacking. Customs has established a board called the Trade Compliance Board of Directors to redesign its import process. This board consists of senior officials who represent the import process and related systems. However, while the board's charter makes it accountable for the redesigned import process, it does not establish accountability for successfully implementing NCAP. Customs' Deputy Commissioner agreed that the agency needs to assign accountability and requisite authority to ensure that the functions mandated in NCAP are successfully implemented. 
Customs recognizes that it (1) cannot afford to fail in its effort to redesign and automate critical NCAP processes and (2) needs to make a more concentrated effort to implement best practices. However, Customs has not assigned responsibility for ensuring that NCAP is successfully implemented. Further, Customs has no assurance that continued buying of CDC-2000 equipment is the best way to accomplish its mission or that the hardware selected for ACE and other systems is appropriate. Customs is in the early stages of its modernization and has time to implement these best practices. While Customs is starting to take corrective action, the agency is at serious risk and vulnerable to failure until such action is completed. We recommend that, prior to additional CDC-2000 equipment purchases (except those for office automation needs) and before beginning to develop any applications software that will run on this equipment, the Commissioner of Customs:

* Assign accountability and responsibility for implementing NCAP.

* Ensure that the export and passenger business processes are completed and that the requirements generated from these two tasks, along with the import process requirements, are used to determine how Customs should accomplish its mission in the future, including (1) who will perform operations and where they will be performed, (2) what functions must be performed as part of these operations, (3) what information is needed to perform these functions and where data should be created and processed to produce such information, (4) what alternative processing approaches could be used to satisfy Customs' requirements, (5) what the costs, benefits, and risks of each approach are, and (6) what processing approach is optimal.

* Not resume CDC-2000 purchases unless CDC-2000 is determined to be the optimal approach.
Complete the agency's effort to redefine the role of the systems steering committee to include managing systems as investments as required by the Office of Management and Budget's Circular A-130 and information technology investment guide. This effort should include developing and using explicit criteria to guide system development decisions and using the criteria to revisit whether Customs' planned investments, including ACE and Automated Commercial System enhancements, are appropriate. Direct the steering committee to ensure that all systems being developed strictly adhere to Customs' system development steps. As part of this oversight, we recommend that before applications are developed for ACE, the steering committee ensure that Customs resolves how to incorporate NCAP-mandated functions into ACE and prepares a security plan. In commenting on a draft of this report, Customs agreed with all of our recommendations and said it plans to or has acted to implement them. First, Customs agreed to clarify and document accountability and responsibility for implementing NCAP. Second, Customs agreed to perform the requisite analyses to determine the optimal architecture and to cease CDC-2000 purchases, except those for office automation needs and prototyping, until this determination is made, which is fully responsive to our recommendation. Third, according to Customs, the agency has formally established its investment subcommittee and is studying best investment practices of federal and private sector organizations, which the investment subcommittee plans to use to develop operating procedures and investment criteria for reviewing system decisions. Finally, Customs agreed to have the systems steering committee address compliance with agency system development procedures at the committee's next meeting. 
We are sending copies of this letter to the Chairmen and the Ranking Minority Members of the Senate Committee on Finance; the Subcommittees on Treasury, Postal Service and General Government of the Senate and House Appropriations Committees; the Senate Committee on Governmental Affairs; and the House Committee on Government Reform and Oversight. We are also sending copies to the Secretary of the Treasury, Commissioner of Customs, and Director of the Office of Management and Budget. Copies will also be available to others upon request. If you have questions about this letter, please contact me at (202) 512-6240. Major contributors are listed in appendix III. To determine the status of Customs' strategy for implementing the National Customs Automation Program (NCAP), we reviewed the law--and its legislative history--establishing NCAP. We interviewed key Customs program and information system officials regarding process improvement and systems modernization efforts for the import process. We examined Customs' People, Processes, and Partnerships report of September 1994, which outlines the agency's vision for organizational and process change, and examined the 5-year information systems plan of April 1995 for fiscal years 1997-2001. We also reviewed background information on Customs' existing automated import processing system and documents supporting current enhancements to that system as well as the (1) annual business plan, (2) project plan, and (3) user requirements documents for Customs' planned ACE system. To assess the adequacy of Customs' strategy for implementing NCAP, we assessed Customs' strategic information management processes for developing ACE. 
In analyzing Customs' processes, we applied fundamental best practices used by successful private and public sector organizations as discussed in our report, Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994), and our related guide Strategic Information Management (SIM) Self-Assessment Toolkit (GAO/Version 1.0, October 28, 1994, exposure draft). We also made our assessment using the (1) Office of Management and Budget's Circular A-130 Revised, Transmittal 2 (July 1994) and investment guide Evaluating Information Technology Investments, A Practical Guide (Version 1.0, November 1995) and (2) General Services Administration's guide Critical Success Factors for Systems Modernization (October 1988). Specifically, to determine if information resources management plans supported the agency mission and customer needs for imports, we interviewed planning officials and examined 5-year and annual business and information management plans. To assess whether the business process is being considered in developing ACE, we conducted interviews and examined documentation for the redesigned import process, including the structured methodology used to conduct this initiative. At user conferences held by Customs, we also interviewed internal and external users of the current import system to determine whether customer information requirements are being identified in developing ACE. To determine whether ACE was guided by an architecture, we reviewed internal studies evaluating Customs' distributed computing environment. We also analyzed commissioned studies, interviewed the contractors performing the studies, and obtained Customs' response to the technical studies. In assessing whether CDC-2000 meets agencywide information needs, we examined agency documents and interviewed all three core business process owners as well as information systems officials. 
To determine if ACE is managed as an investment, we interviewed members of Customs' systems steering committee and examined its minutes and an agenda book with background information for a committee meeting. Also, we reviewed Customs' systems development life cycle procedures and compared ACE to applicable procedures to determine if required steps were completed at this initial stage of ACE development. Finally, to determine whether a single official was designated to ensure that NCAP requirements are met, we interviewed members of the Trade Compliance Board of Directors, which provides oversight of the redesign of the import process. We also examined the board's charter, identified which Customs organizations were represented on the board, and reviewed minutes of meetings. Our work was performed at Customs headquarters in Washington, D.C., and its Data Center in Newington, Virginia.

Mark E. Heatwole, Senior Assistant Director
Antionette Cattledge, Assistant Director
Brian C. Spencer, Technical Assistant Director
Agnes I. Spruill, Senior Information Systems Analyst
Gary N. Mountjoy, Senior Information Systems Analyst
Cristina T. Chaplain, Communications Analyst
Pursuant to a congressional request, GAO reviewed the Customs Service's efforts to modernize its automated systems, focusing on: (1) the status and adequacy of Customs' implementation of the National Customs Automation Program (NCAP); and (2) whether Customs is using a best-practices approach to improve mission performance through strategic information management and technology in implementing NCAP. GAO found that: (1) Customs is redesigning its import process and plans to develop a new automated import system while it enhances its present system to meet NCAP mandates in the interim; (2) the new import process will serve customer needs better and improve operational efficiency and effectiveness; (3) Customs plans to deploy its new import system in October 1998; (4) Customs' modernization efforts are vulnerable to failure because it has not effectively applied best practices to the implementation of its NCAP strategy; (5) Customs selected new systems before it redesigned its key business processes and is not applying specific criteria in assessing projects, alternatives, costs and benefits, and systems architecture; (6) Customs has not managed its new automated system acquisition as an investment nor planned how to incorporate NCAP requirements into it; (7) two contractors' studies have highlighted the weaknesses in Customs' modernization plans and recommended ways to improve its efforts; (8) Customs plans to hire additional contractors to perform the needed modernization analyses, but it also intends to continue with system development and equipment purchases before these analyses are completed; and (9) Customs has not established clear accountability for ensuring that NCAP requirements are successfully implemented.
Category I special nuclear materials are present at the three design laboratories--the Los Alamos National Laboratory in Los Alamos, New Mexico; the Lawrence Livermore National Laboratory in Livermore, California; and the Sandia National Laboratory in Albuquerque, New Mexico--and two production sites--the Pantex Plant in Amarillo, Texas, and the Y-12 Plant in Oak Ridge, Tennessee, operated by NNSA. Special nuclear material is also present at former production sites, including the Savannah River Site in Savannah River, South Carolina, and the Hanford Site in Richland, Washington. These former sites are now being cleaned up by DOE's Office of Environmental Management (EM). Furthermore, NNSA's Office of Secure Transportation transports these materials among the sites and between the sites and DOD bases. Contractors operate each site for DOE. NNSA and EM have field offices collocated with each site. In fiscal year 2004, NNSA and EM expect to spend nearly $900 million on physical security at their sites. Physical security combines security equipment, personnel, and procedures to protect facilities, information, documents, or material against theft, sabotage, diversion, or other criminal acts. In addition to NNSA and EM, DOE has other important security organizations. DOE's Office of Security develops and promulgates orders and policies, such as the DBT, to guide the department's safeguards and security programs. DOE's Office of Independent Oversight and Performance Assurance supports the department by, among other things, independently evaluating the effectiveness of contractors' performance in safeguards and security. It also performs follow-up reviews to ensure that contractors have taken effective corrective actions and appropriately addressed weaknesses in safeguards and security. Under a recent reorganization, these two offices were incorporated into the new Office of Security and Safety Performance Assurance. 
Each office, however, retains its individual missions, functions, structure, and relationship to the other. The risks associated with Category I special nuclear materials vary but include the nuclear detonation of a weapon or test device at or near design yield, the creation of improvised nuclear devices capable of producing a nuclear yield, theft for use in an illegal nuclear weapon, and the potential for sabotage in the form of radioactive dispersal. Because of these risks, DOE has long employed risk-based security practices. The key component of DOE's well-established, risk-based security practices is the DBT, a classified document that identifies the characteristics of the potential threats to DOE assets. The DBT has been traditionally based on a classified, multiagency intelligence community assessment of potential terrorist threats, known as the Postulated Threat. The DBT considers a variety of threats in addition to the terrorist threat. Other adversaries considered in the DBT include criminals, psychotics, disgruntled employees, violent activists, and spies. The DBT also considers the threat posed by insiders, those individuals who have authorized, unescorted access to any part of DOE facilities and programs. Insiders may operate alone or may assist an adversary group. Insiders are routinely considered to provide assistance to the terrorist groups found in the DBT. The threat from terrorist groups is generally the most demanding threat contained in the DBT. DOE counters the terrorist threat specified in the DBT with a multifaceted protective system. While specific measures vary from site to site, all protective systems at DOE's most sensitive sites employ a defense-in-depth concept that includes sensors, physical barriers, hardened facilities and vaults, and heavily armed paramilitary protective forces equipped with such items as automatic weapons, night vision equipment, body armor, and chemical protective gear.
Depending on the material, protective systems at DOE Category I special nuclear material sites are designed to accomplish the following objectives in response to the terrorist threat: Denial of access. For some potential terrorist objectives, such as the creation of an improvised nuclear device, DOE may employ a protection strategy that requires the engagement and neutralization of adversaries before they can acquire hands-on access to the assets. Denial of task. For nuclear weapons or nuclear test devices that terrorists might seek to steal, DOE requires the prevention and/or neutralization of the adversaries before they can complete a specific task, such as stealing such devices. Containment with recapture. Where the theft of nuclear material (instead of a nuclear weapon) is the likely terrorist objective, DOE requires that adversaries not be allowed to escape the facility and that DOE protective forces recapture the material as soon as possible. This objective requires the use of specially trained and well-equipped special response teams. The effectiveness of the protective system is formally and regularly examined through vulnerability assessments. A vulnerability assessment is a systematic evaluation process in which qualitative and quantitative techniques are applied to detect vulnerabilities and arrive at effective protection of specific assets, such as special nuclear material. To conduct such assessments, DOE uses, among other things, subject matter experts, such as U.S. Special Forces; computer modeling to simulate attacks; and force-on-force performance testing, in which the site's protective forces undergo simulated attacks by a group of mock terrorists. The results of these assessments are documented at each site in a classified document known as the Site Safeguards and Security Plan. 
In addition to identifying known vulnerabilities, risks, and protection strategies for the site, the Site Safeguards and Security Plan formally acknowledges how much risk the contractor and DOE are willing to accept. Specifically, for more than a decade, DOE has employed a risk management approach that seeks to direct resources to its most critical assets--in this case Category I special nuclear material--and mitigate the risks to these assets to an acceptable level. Levels of risk--high, medium, and low--are assigned classified numerical values and are derived from a mathematical equation that compares a terrorist group's capabilities with the overall effectiveness of the crucial elements of the site's protective forces and systems. Historically, DOE has striven to keep its most critical assets at a low risk level and may insist on immediate compensatory measures should a significant vulnerability develop that increases risk above the low risk level. Compensatory measures could include such things as deploying additional protective forces or curtailing operations until the asset can be better protected. In response to a September 2000 DOE Inspector General's report recommending that DOE establish a policy on what actions are required once high or moderate risk is identified, in September 2003, DOE's Office of Security issued a policy clarification stating that identified high risks at facilities must be formally reported to the Secretary of Energy or Deputy Secretary within 24 hours. In addition, under this policy clarification, identified high and moderate risks require corrective actions and regular reporting. Through a variety of complementary measures, DOE ensures that its safeguards and security policies are being complied with and are performing as intended. Contractors perform regular self-assessments and are encouraged to uncover any problems themselves. 
DOE Orders also require field offices to comprehensively survey contractors' operations for safeguards and security every year. DOE's Office of Independent Oversight and Performance Assurance provides yet another check through its comprehensive inspection program. All deficiencies identified during surveys and inspections require the contractors to take corrective action. In the immediate aftermath of September 11, 2001, DOE officials realized that the then current DBT, issued in April 1999 and based on a 1998 intelligence community assessment, was obsolete. The September 11, 2001, terrorist attacks suggested larger groups of terrorists, larger vehicle bombs, and broader terrorist aspirations to cause mass casualties and panic than were envisioned in the 1999 DOE DBT. However, formally recognizing these new threats by updating the DBT was difficult and took 21 months because of delays in issuing the Postulated Threat, debates over the size of the future threat and the cost to meet it, and the DOE policy process. As mentioned previously, DOE's new DBT is based on a study known as the Postulated Threat, which was developed by the U.S. intelligence community. The intelligence community originally planned to complete the Postulated Threat by April 2002; however, the document was not completed and officially released until January 2003, about 9 months behind the original schedule. According to DOE and DOD officials, this delay resulted from other demands placed on the intelligence community after September 11, 2001, as well as from sharp debates among the organizations developing the Postulated Threat over the size and capabilities of future terrorist threats and the resources needed to meet these threats. While waiting for the new Postulated Threat, DOE developed several drafts of its new DBT. During this process, debates, similar to those that occurred during the development of the Postulated Threat, emerged in DOE. 
Like the participants responsible for developing the Postulated Threat, during the development of the DBT, DOE officials debated the size of the future terrorist threat and the costs to meet it. DOE officials at all levels told us that concern over resources played a large role in developing the 2003 DBT, with some officials calling the DBT the "funding basis threat," or the maximum threat the department could afford. This tension between threat size and resources is not a new development. According to a DOE analysis of the development of prior DBTs, political and budgetary pressures and the apparent desire to reduce the requirements for the size of protective forces appear to have played a significant role in determining the terrorist group numbers contained in prior DBTs. Finally, DOE developed the DBT using DOE's policy process, which emphasizes developing consensus through a review and comment process by program offices, such as EM and NNSA. However, many DOE and contractor officials found that the policy process for developing the new DBT was laborious and not timely, especially given the more dangerous threat environment that has existed since September 11, 2001. As a result, during the time it took DOE to develop the new DBT, its sites were only required to defend against the terrorist group defined in the 1999 DBT, which, in the aftermath of September 11, 2001, DOE officials realized was obsolete. While the May 2003 DBT identifies a larger terrorist group than did the previous DBT, the threat identified in the new DBT, in most cases, is less than the terrorist threat identified in the intelligence community's Postulated Threat. The Postulated Threat estimated that the force attacking a nuclear weapons site would probably be a relatively small group of terrorists, although it was possible that an adversary might use a greater number of terrorists if that was the only way to attain an important strategic goal. 
In contrast to the Postulated Threat, DOE is preparing to defend against a significantly smaller group of terrorists attacking many of its facilities. Specifically, only for its sites and operations that handle nuclear weapons is DOE currently preparing to defend against an attacking force that approximates the lower range of the threat identified in the Postulated Threat. For its other Category I special nuclear material sites, all of which fall under the Postulated Threat's definition of a nuclear weapons site, DOE is requiring preparations to defend against a terrorist force significantly smaller than was identified in the Postulated Threat. DOE calls this a graded threat approach. Some of these other sites, however, may have improvised nuclear device concerns that, if successfully exploited by terrorists, could result in a nuclear detonation. Nevertheless, under the graded threat approach, DOE requires these sites only to be prepared to defend against a smaller force of terrorists than was identified by the Postulated Threat. Officials in DOE's Office of Independent Oversight and Performance Assurance disagreed with this approach and noted that sites with improvised nuclear device concerns should be held to the same requirements as facilities that possess nuclear weapons and test devices since the potential worst-case consequence at both types of facilities would be the same--a nuclear detonation. Other DOE officials and an official in DOD's Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence disagreed with the overall graded threat approach, believing that the threat should not be embedded in the DBT by adjusting the number of terrorists that might attack a particular target. DOE Office of Security officials cited three reasons for why the department departed from the Postulated Threat's assessment of the potential size of terrorist forces. 
First, these officials stated that they believed that the Postulated Threat only applied to sites that handled completed nuclear weapons and test devices. However, both the 2003 Postulated Threat, as well as the preceding 1998 Postulated Threat, state that the threat applies to nuclear weapons and special nuclear material without making any distinction between them. Second, DOE Office of Security officials believed that the higher threat levels contained in the 2003 Postulated Threat represented the worst potential worldwide terrorist case over a 10-year period. These officials noted that while some U.S. assets, such as military bases, are located in parts of the world where terrorist groups receive some support from local governments and societies, thereby allowing for an expanded range of capabilities, DOE facilities are located within the United States, where terrorists would have a more difficult time operating. Furthermore, DOE Office of Security officials stated that the DBT focuses on a nearer-term threat of 5 years. As such, DOE Office of Security officials said that they chose to focus on what their subject matter experts believed was the maximum, credible, near-term threat to their facilities. However, while the 1998 Postulated Threat made a distinction between the size of terrorist threats abroad and those within the United States, the 2003 Postulated Threat, reflecting the potential implications of the September 2001 terrorist attacks, did not make this distinction. Finally, DOE Office of Security officials stated that the Postulated Threat document represented a reference guide instead of a policy document that had to be rigidly followed. The Postulated Threat does acknowledge that it should not be used as the sole consideration to dictate specific security requirements and that decisions regarding security risks should be made and managed by decision makers in policy offices. However, DOE has traditionally based its DBT on the Postulated Threat.
For example, the prior DBT, issued in 1999, adopted exactly the same terrorist threat size as was identified by the 1998 Postulated Threat. Finally, the department's criteria for determining the severity of radiological, chemical, and biological sabotage may be insufficient. For example, the criterion used for protection against radiological sabotage is based on acute radiation dosages received by individuals. However, this criterion may not fully capture or characterize the damage that a major radiological dispersal at a DOE site might cause. For example, according to a March 2002 DOE response to a January 23, 2002, letter from Representative Edward J. Markey, a worst-case analysis at one DOE site showed that while a radiological dispersal would not pose immediate, acute health problems for the general public, the public could experience measurable increases in cancer mortality over a period of decades after such an event. Moreover, releases at the site could also have environmental consequences requiring hundreds of millions to billions of dollars to clean up. Contamination could also affect habitability for tens of miles from the site, possibly affecting hundreds of thousands of residents for many years. Likewise, the same response showed that a similar event at an NNSA site could result in a dispersal of plutonium that could contaminate several hundred square miles and ultimately cause thousands of cancer deaths. For chemical sabotage standards, the 2003 DBT requires sites to protect to industry standards. However, we reported in March 2003 that such standards do not exist. Specifically, we found that no federal laws explicitly require chemical facilities to assess vulnerabilities or take security actions to safeguard their facilities against a terrorist attack. Finally, the protection criteria for biological sabotage are based on laboratory safety standards developed by the U.S. Centers for Disease Control and not physical security standards.
While DOE issued the final DBT in May 2003, it has only recently resolved a number of significant issues that may affect the ability of its sites to fully meet the threat contained in the new DBT in a timely fashion and is still addressing other issues. Fully resolving all of these issues may take several years, and the total cost of meeting the new threats is currently unknown. Because some sites will be unable to effectively counter the higher threat contained in the new DBT for up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. In order to undertake the necessary range of vulnerability assessments to accurately evaluate their level of risk under the new DBT and implement necessary protective measures, DOE recognized that it had to complete a number of key activities. DOE only recently completed three of these key activities. First, in February 2004, DOE issued its revised Adversary Capabilities List, which is a classified companion document to the DBT, that lists the potential weaponry, tactics, and capabilities of the terrorist group described in the DBT. This document has been amended to include, among other things, heavier weaponry and other capabilities that are potentially available to terrorists who might attack DOE facilities. DOE is continuing to review relevant intelligence information for possible incorporation into future revisions of the Adversary Capabilities List. Second, DOE also only recently provided additional DBT implementation guidance. In a July 2003 report, DOE's Office of Independent Oversight and Performance Assurance noted that DOE sites had found initial DBT implementation guidance confusing. For example, when the Deputy Secretary of Energy issued the new DBT in May 2003, the cover memo said the new DBT was effective immediately but that much of the DBT would be implemented in fiscal years 2005 and 2006. 
According to a 2003 report by the Office of Independent Oversight and Performance Assurance, many DOE sites interpreted this implementation period to mean that they should, through fiscal year 2006, only be measured against the previous, less demanding 1999 DBT. In response to this confusion, the Deputy Secretary issued further guidance in September 2003 that called for, among other things:
* DOE's Office of Security to issue more specific guidance by October 22, 2003, regarding DBT implementation expectations, schedules, and requirements. DOE issued this guidance on January 30, 2004.
* Quarterly reports showing sites' incremental progress in meeting the new DBT for ongoing activities. The first series of quarterly progress reports may be issued in July 2004.
* Immediate compliance with the new DBT for new and reactivated operations.
A third important DBT-related issue was resolved in early April 2004. A special team created in the 2003 DBT, composed of weapons designers and security specialists, finalized its report on each site's improvised nuclear device vulnerabilities. The results of this report were briefed to senior DOE officials in March 2004, and the Deputy Secretary of Energy issued guidance, based on this report, to DOE sites in early April 2004. As a result, some sites may be required under the 2003 DBT to shift to enhanced protection strategies, which could be very costly. This special team's report may most affect EM sites because their improvised nuclear device potential had not previously been explored. Finally, DOE's Office of Security has not completed all of the activities associated with the new vulnerability assessment methodology it has been developing for over a year. 
DOE's Office of Security believes this methodology, which uses a new mathematical equation for determining levels of risk, will result in a more sensitive and accurate portrayal of each site's defenses-in-depth and the effectiveness of sites' protective systems (i.e., physical security systems and protective forces) when compared with the new DBT. DOE's Office of Security decided to develop this new equation because its old mathematical equation had been challenged on technical grounds and did not give sites credit for the full range of their defenses-in-depth. While DOE's Office of Security completed this equation in December 2002, officials from this office believe it will probably not be completely implemented at the sites for at least another year for two reasons. First, site personnel who implement this methodology will require additional training to ensure they are employing it properly. DOE's Office of Security conducted initial training in December 2003, as well as a prototype course in February 2004, and has developed a nine-course vulnerability assessment certification program. Second, sites will have to collect additional data to support the broader evaluation of their protective systems against the new DBT. Collecting these data will require additional computer modeling and force-on-force performance testing. Because of the slow resolution of some of these issues, DOE has not developed any official long-range cost estimates or developed any integrated, long-range implementation plans for the May 2003 DBT. Specifically, neither the fiscal year 2003 nor 2004 budgets contained any provisions for DBT implementation costs. However, during this period, DOE did receive additional safeguards and security funding through budget reprogramming and supplemental appropriations. DOE is using most of these additional funds to cover the higher operational costs associated with the increased security condition (SECON) measures. 
DOE has gathered initial DBT implementation budget data and has requested additional DBT implementation funding in the fiscal year 2005 budget: $90 million for NNSA, $18 million for the Secure Transportation Asset within the Office of Secure Transportation, and $26 million for EM. However, DOE officials believe the budget data collected so far has been of generally poor quality because most sites have not yet completed the necessary vulnerability assessments to determine their resource requirements. Consequently, the fiscal year 2006 budget may be the first budget to begin to accurately reflect the safeguards and security costs of meeting the requirements of the new DBT. Reflecting these various delays and uncertainties, in September 2003, the Deputy Secretary changed the deadline for DOE program offices, such as EM and NNSA, to submit DBT implementation plans from the original target of October 2003 to the end of January 2004. NNSA and EM approved these plans in February 2004. DOE's Office of Security has reviewed these plans and is planning to provide implementation assistance to sites that request it. DOE officials have described these plans as being ambitious in terms of the amount of work that has to be done within a relatively short time frame and dependent on continued increases in safeguards and security funding, primarily for additional protective force personnel. However, some plans may be based on assumptions that are no longer valid. Revising these plans could require additional resources, as well as add time to the DBT implementation process. A DOE Office of Budget official told us that current DBT implementation cost estimates do not include items such as closing unneeded facilities, transporting and consolidating materials, completing line item construction projects, and other important activities that are outside of the responsibility of the safeguards and security program. 
For example, EM's Security Director told us that for EM to fully comply with the DBT requirements in fiscal year 2006 at one of its sites, it will have to close and de-inventory two facilities, consolidate excess materials into remaining special nuclear materials facilities, and move consolidated Category I special nuclear material, which NNSA's Office of Secure Transportation will transport, to another site. Likewise, the EM Security Director told us that to meet the DBT requirements at another site, EM will have to accelerate the closure of one facility and transfer special nuclear material to another facility on the site. The costs to close these facilities and to move materials within a site are borne by the EM program budget and not by the EM safeguards and security budget. Similarly, the costs to transport the material between sites are borne by NNSA's Office of Secure Transportation budget and not by EM's safeguards and security budget. A DOE Office of Budget official told us that a comprehensive, department-wide approach to budgeting for DBT implementation that includes such important program activities as described above is needed; however, such an approach does not currently exist. The department plans to complete DBT implementation by the end of fiscal year 2006. However, most sites estimate that it will take 2 to 5 years, if they receive adequate funding, to fully meet the requirements of the new DBT. During this time, sites will have to conduct vulnerability assessments, undertake performance testing, and develop Site Safeguards and Security Plans. Consequently, full DBT implementation could occur anywhere from fiscal year 2005 to fiscal year 2008. Some sites may be able to move more quickly and meet the department's deadline of the end of fiscal year 2006. 
Because some sites will be unable to effectively counter the threat contained in the new DBT for a period of up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. For example, the Office of Independent Oversight and Performance Assurance has concluded in recent inspections that at least two DOE sites face fundamental and not easily resolved security problems that will make meeting the requirements of the new DBT difficult. For other DOE sites, their level of risk under the new DBT remains largely unknown until they can conduct the necessary vulnerability assessments. In closing, while DOE struggled to develop its new DBT, the DBT that DOE ultimately developed is substantially more demanding than the previous one. Because the new DBT is more demanding and because DOE wants to implement it by the end of fiscal year 2006--a period of about 29 months--DOE must press forward with a series of additional actions to ensure that it is fully prepared to provide a timely and cost-effective defense of its most sensitive facilities. First, because the September 11, 2001, terrorist attacks suggested larger groups of terrorists with broader aspirations for causing mass casualties and panic, we believe that the DBT development process that was used requires reexamination. While DOE may point to delays in the development of the Postulated Threat as the primary reason for the almost 2 years it took to develop a new DBT, DOE was also working on the DBT itself for most of that time. We believe the difficulty associated with developing a consensus using DOE's traditional policy-making process was a key factor in the time it took to develop a new DBT. During this extended period, DOE's sites were only being defended against what was widely recognized as an obsolete terrorist threat level. Second, we are concerned about two aspects of the resulting DBT. 
We are not persuaded that there is sufficient difference, in its ability to achieve the objective of causing mass casualties or creating public panic, between the detonation of an improvised nuclear device and the detonation of a nuclear weapon or test device at or near design yield that warrants setting the threat level at a lower number of terrorists. Furthermore, while we applaud DOE for adding additional requirements to the DBT such as protection strategies to guard against radiological, chemical, and biological sabotage, we believe that DOE needs to reevaluate its criteria for terrorist acts of sabotage, especially in the chemical area, to make them more defensible from a physical security perspective. Finally, because some sites will be unable to effectively counter the threat contained in the new DBT for a period of up to several years, these sites should be considered to be at higher risk under the new DBT than they were under the old DBT. As a result, DOE needs to take a series of actions to mitigate these risks to an acceptable level as quickly as possible. To accomplish this, it is important for DOE to go about the hard business of a comprehensive department-wide approach to implementing needed changes in its protective strategy. Because the consequences of a successful terrorist attack on a DOE site could be so devastating, we believe it is important for DOE to better inform Congress about what sites are at high risk and what progress is being made to reduce these risks to acceptable levels. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information on this testimony, please contact Robin M. Nazzaro at (202) 512-3841. James Noel and Jonathan Gill also made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A successful terrorist attack on Department of Energy (DOE) sites containing nuclear weapons or the material used in nuclear weapons could have devastating consequences for the site and its surrounding communities. Because of these risks, DOE needs an effective safeguards and security program. A key component of an effective program is the design basis threat (DBT), a classified document that identifies, among other things, the potential size and capabilities of terrorist forces. The terrorist attacks of September 11, 2001, rendered the then-current DBT obsolete, resulting in DOE issuing a new version in May 2003. GAO (1) identified why DOE took almost 2 years to develop a new DBT, (2) analyzed the higher threat in the new DBT, and (3) identified remaining issues that need to be resolved in order for DOE to meet the threat contained in the new DBT. DOE took a series of actions in response to the terrorist attacks of September 11, 2001. While each of these has been important, in and of themselves, they are not sufficient to ensure that all of DOE's sites are adequately prepared to defend themselves against the higher terrorist threat present in the post September 11, 2001 world. Specifically, GAO found that DOE took almost 2 years to develop a new DBT because of (1) delays in developing an intelligence community assessment--known as the Postulated Threat--of the terrorist threat to nuclear weapon facilities, (2) DOE's lengthy comment and review process for developing policy, and (3) sharp debates within DOE and other government organizations over the size and capabilities of future terrorist threats and the availability of resources to meet these threats. While the May 2003 DBT identifies a larger terrorist threat than did the previous DBT, the threat identified in the new DBT, in most cases, is less than the threat identified in the intelligence community's Postulated Threat, on which the DBT has been traditionally based. 
The new DBT identifies new possible terrorist acts such as radiological, chemical, or biological sabotage. However, the criteria that DOE has selected for determining when facilities may need to be protected against these forms of sabotage may not be sufficient. For example, for chemical sabotage, the 2003 DBT requires sites to protect to "industry standards;" however, such standards currently do not exist. DOE has been slow to resolve a number of significant issues, such as issuing additional DBT implementation guidance, developing DBT implementation plans, and developing budgets to support these plans, that may affect the ability of its sites to fully meet the threat contained in the new DBT in a timely fashion. Consequently, DOE's deadline to meet the requirements of the new DBT by the end of fiscal year 2006 is probably not realistic for some sites.
In other cases, the files contained no evidence of OIRA changes, and we could not tell if that meant that there had been no such changes to the rules or whether the changes were just not documented. Also, the information in the dockets for some of the rules was quite voluminous, and many did not have indexes to help the public find the required documents. Therefore, we recommended that the OIRA Administrator issue guidance to the agencies on how to implement the executive order's transparency requirements and how to organize their rulemaking dockets to best facilitate public access and disclosure. The OIRA Administrator's comments in reaction to our recommendations appeared at odds with the requirements and intent of the executive order. Her comments may also signal a need for ongoing congressional oversight and, in some cases, greater specificity as Congress codifies agencies' public disclosure responsibilities and OIRA's role in the regulatory review process. For example, in response to our recommendation that OIRA issue guidance to agencies on how to improve the accessibility of rulemaking dockets, the Administrator said that "it is not the role of OMB to advise other agencies on general matters of administrative practice." However, section 2(b) of the executive order states that "[t]o the extent permitted by law, OMB shall provide guidance to agencies...," and that OIRA "is the repository of expertise concerning regulatory issues, including methodologies and procedures that affect more than one agency...." We believe that OIRA has a clear responsibility under the executive order to exercise leadership and provide the agencies with guidance on such crosscutting regulatory issues, so we retained our recommendation. The OIRA Administrator also indicated in her comments that she believed the executive order did not require agencies to document changes made at OIRA's suggestion before a rule is formally submitted to OIRA. 
However, the Administrator also said that OIRA can become deeply involved in important agency rules well before they are submitted to OIRA for formal review. Therefore, adherence to her interpretation of the order would result in agencies' failing to document OIRA's early involvement in the rulemaking process. These transparency requirements were put in place because of earlier congressional concerns regarding how rules were changed during the regulatory review process. Congress was clearly interested in making OIRA's role in that process as transparent as possible. In response to the Administrator's comments, we retained our original recommendation but specified that OIRA's guidance should require agencies to document changes made at OIRA's suggestion whenever they occur. Finally, the OIRA Administrator said that "an interested individual" could identify changes made to a draft rule by comparing drafts of the rule. This position seems to change the focus of responsibility in Executive Order 12866. The order requires agencies to identify for the public changes made to draft rules. It does not place the responsibility on the public to identify changes made to agency rules. Also, comparison of a draft rule submitted for review with the draft on which OIRA concluded review would not indicate which of the changes were made at OIRA's suggestion, which is a specific requirement of the order. We believe that enactment of the public disclosure requirements in S. 981 would provide a statutory foundation for the public's right to regulatory review information. In particular, the bill's requirement that these rule changes be described in a single document would make it easier for the public to understand how rules change during the review process. We are also pleased to see that the new version of S. 981 requires agencies to document when no changes are suggested or recommended by OIRA. 
As I said earlier, the absence of documentation could indicate that either no changes were made to the rule or that the changes were not documented. Additional refinements to the bill may be needed in light of the OIRA Administrator's comments responding to our report. For example, S. 981 may need to state more specifically that agencies must document the changes made to rules at the suggestion or recommendation of OIRA whenever they occur, not just the changes made during the period of OIRA's formal review. Similarly, if Congress wants OIRA to issue guidance on how agencies can structure rulemaking dockets to facilitate public access, S. 981 may need to specifically instruct the agency to do so. During last September's hearing on S. 981, one of the witnesses indicated that Congress should determine the effectiveness of previously enacted regulatory reforms before enacting additional reforms. We recently completed a broad review of one of the most recent such reform efforts--title II of the Unfunded Mandates Reform Act of 1995 (UMRA). Title II of UMRA is similar to S. 981 in that it requires agencies to take a number of analytical and procedural steps during the rulemaking process. Therefore, analysis of UMRA's implementation may prove valuable in determining both the need for further reform and how agency requirements should be crafted. We concluded that UMRA's title II requirements had little effect on agencies' rulemaking actions because those requirements (1) did not apply to many large rulemaking actions, (2) permitted agencies not to take certain actions if the agencies determined they were duplicative or unfeasible, and (3) required agencies to take actions that they were already required to take. 
For example, title II of UMRA requires agencies to prepare "written statements" containing information on regulatory costs, benefits, and other matters for any rule (1) for which a proposed rule was published, (2) that includes a federal mandate, and (3) that may result in the expenditure of $100 million or more in any 1 year by state, local, or tribal governments, in the aggregate, or the private sector. We examined the 110 economically significant rules that were promulgated during the first 2 years of UMRA (March 22, 1995, until March 22, 1997) by agencies covered by the Act and concluded that UMRA's written statement requirements did not apply to 78 of these 110 rules. Some of the rules had no associated proposed rule. Others were not technically "mandates"--i.e., "enforceable duties" unrelated to a voluntary program or federal financial assistance. Some rules were "economically significant" in that they would have a $100 million effect on the economy, but did not require "expenditures" by state, local, or tribal governments or the private sector of $100 million in any 1 year. Certain sections of UMRA permitted agencies to decide what actions to take. For example, subsection 202(a)(3) says agencies' written statements must contain estimates of future compliance costs and any disproportionate budgetary effects "if and to the extent that the agency determines that accurate estimates are reasonably feasible." UMRA also permitted agencies to prepare the written statement as part of any other statement or analysis. Because the agencies' rules commonly contain the information required in the written statements (e.g., the provision of federal law under which the rule is being promulgated), the agencies only rarely prepared a separate UMRA written statement. UMRA also requires agencies to develop a process to permit elected officers of state, local, and tribal governments to provide input into the development of regulatory proposals containing significant federal intergovernmental mandates. However, Executive Order 12875 required almost exactly the same sort of process when it was issued in 1993. 
Like UMRA, S. 981 contains some of the same requirements contained in Executive Orders 12866 and 12875, and in previous legislation. However, the requirements in the bill are also different from existing requirements in many respects. For example, S. 981 appears to cover all of the economically significant rules that UMRA did not cover, as well as rules by many independent regulatory agencies that were not covered by the executive orders. S. 981 would also address a number of topics that are not addressed by either UMRA or the executive orders, including risk assessments and peer review. These requirements could have the effect of improving the quality of the cost-benefit analyses that agencies are currently required to perform under Executive Order 12866. The new version of S. 981 contains one set of requirements that was not in the bill introduced last year--that agencies develop a plan for the periodic review of rules issued by the agency that have or will have a significant economic impact on a substantial number of small entities. Each agency is also required to publish in the Federal Register a list of rules that will be reviewed under the plan in the succeeding fiscal year. In one sense, these requirements are not really "new." They are a refinement and underscoring of requirements originally put in place by section 610 of the Regulatory Flexibility Act (RFA) of 1980. Our recent work related to the RFA suggests that at least some of the RFA's requirements are not being properly implemented. In 1997, we reported that only three agencies identified regulations that they planned to review within the next year in the November 1996 edition of the Unified Agenda of Federal Regulatory and Deregulatory Actions. Of the 21 entries in that edition of the Unified Agenda that these 3 agencies listed, none met the requirements in the RFA. 
For example, although section 610 requires agencies to notify the public about an upcoming review of an existing rule to determine whether and, if so, what changes to make, many of the "section 610" entries in the Agenda announced regulatory actions that the agencies had taken or planned to take. Earlier this month we updated our 1997 report by reviewing agencies' use of the October 1997 Unified Agenda. We reported that seven agencies had used the Agenda to identify regulations that they said they planned to review. However, of the 34 such entries in that edition of the Agenda, only 3 met the requirements of the statute. Although the Unified Agenda is a convenient and efficient mechanism by which agencies can satisfy the notice requirements in section 610 of the RFA, agencies can print those notices in any part of the Federal Register. We did an electronic search of the 1997 Federal Register to determine whether it contained any other references to a "section 610 review." We found no such references. There is no way to know with certainty how many regulations in the Code of Federal Regulations have a "significant economic impact on a substantial number of small entities," or how many of those regulations the issuing agencies have reviewed pursuant to section 610. Agencies differ in their interpretation of this phrase, and we have recommended that a governmentwide definition be developed. Nevertheless, the relatively small number of section 610 notices in the Unified Agendas, combined with the fact that nearly all of those notices did not meet the requirements of the statute, suggests that agencies may not be conducting the required section 610 rule reviews. 
Although many federal agencies reviewed all of their regulations as part of the administration's "page-by-page review" effort to eliminate and revise regulations, those reviews would not meet the requirements of section 610 unless the agencies utilized the steps delineated in that section of the RFA that were designed to allow the public to be part of the review process. Therefore, we believe that the reaffirmation and refinement of the section 610 rule review process in S. 981 can serve to underscore Congress' commitment to periodic review of agencies' rules and the public's involvement in that process. Another critical element of S. 981 is its emphasis on cost-benefit analysis for major rules in the rulemaking process. Mr. Chairman, at your and Senator Glenn's request, we have been examining 20 economic analyses at 5 agencies to determine the extent to which those analyses contain the "best practices" elements recommended in OMB's January 1996 guidance for conducting cost-benefit analyses. We are also attempting to determine the extent to which the analyses are used in the agencies' decisionmaking processes. Although our review is continuing, we have some tentative results that are relevant to this Committee's consideration of S. 981. The 20 economic analyses varied significantly in the extent to which they contained the elements that OMB recommended. For example, although the guidance encourages agencies to monetize the costs and benefits of a broad range of regulatory alternatives, about half of the analyses did not monetize the costs of all alternatives and about two-thirds did not monetize the benefits. Several of the analyses did not discuss any alternatives other than the proposed regulatory action. The OMB guidance also stresses the importance of explicitly presenting the assumptions, limitations, and uncertainties in economic analyses. 
However, the 20 analyses that we reviewed frequently did not explain why certain assumptions or values were used, such as the discount rates used to determine the present-value of costs and benefits and the values assigned to a human life. Also, about a third of the analyses did not address the uncertainties associated with the analyses. For the most part, the analyses played a somewhat limited role in the agencies' decisionmaking process--examining the cost-effectiveness of various approaches an agency could use within a relatively narrow range of alternatives, or helping the agency define the regulations' coverage or implementation date. The analyses did not fundamentally affect agencies' decisions on whether or not to regulate, nor did they cause the agencies to select significantly different regulatory alternatives than the ones that had been originally considered. One factor that agency officials cited as limiting the role of the analyses in decisionmaking was the need to issue the regulations quickly due to emergencies, statutory deadlines, and court orders. Enactment of the analytical transparency and executive summary requirements in S. 981 would extend and underscore Congress' previous statutory requirements that agencies identify how regulatory decisions are made. We believe that Congress and the public have a right to know what alternatives the agencies considered and what assumptions they made in deciding how to regulate. Although those assumptions may legitimately vary from one analysis to another, the agencies should explain those variations. Mr. Chairman, S. 981 contains a number of provisions designed to improve regulatory management. These provisions strive to make the regulatory process more intelligible and accessible to the public, more effective, and better managed. Passage of S. 981 would provide a statutory foundation for such principles as openness, accountability, and sound science in rulemaking. This Committee has been diligent in its oversight of the federal regulatory process. 
However, our reviews of current regulatory requirements suggest that, even if S. 981 is enacted into law, Congress will need to carefully oversee its implementation to ensure that the principles embodied in the bill are faithfully implemented. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed its work on the Regulatory Improvement Act of 1998, focusing on federal agencies' implementation of: (1) the transparency requirements in Executive Order 12866; (2) title II of the Unfunded Mandates Reform Act (UMRA) of 1995; (3) the public notification in section 610 of the Regulatory Flexibility Act (RFA) of 1980; and (4) Office of Management and Budget's (OMB) best practices guide for economic analyses used in rulemaking. GAO noted that: (1) GAO reviewed four major rulemaking agencies' public dockets and concluded that it was usually very difficult to locate the documentation that the executive order required; (2) in many cases, the dockets contained some evidence of changes made during or because of the Office of Information and Regulatory Affairs (OIRA) review, but GAO could not be sure that all such changes had been documented; (3) in other cases, the files contained no evidence of OIRA changes, and GAO could not tell if there had been no such changes to the rule or whether the changes were just not documented; (4) UMRA's title II requirements had little effect on agencies' rulemaking actions because those requirements: (a) did not apply to many large rulemaking actions; (b) permitted agencies not to take certain actions if the agencies determined they were duplicative or unfeasible; and (c) required agencies to take actions that they were already required to take; (5) the new version of S. 
981 contains one set of requirements that was not in the bill introduced last year--that agencies develop a plan for the periodic review of rules issued by the agency that have or will have a significant economic impact on a substantial number of small entities; (6) each agency is also required to publish in the Federal Register a list of rules that will be reviewed under the plan in the succeeding fiscal year; (7) although the Unified Agenda is a convenient and efficient mechanism by which agencies can satisfy the notice requirements in section 610 of the RFA, agencies can print those notices in any part of the Federal Register; (8) GAO believes that the reaffirmation and refinement of the section 610 rule review process in S. 981 can serve to underscore Congress' commitment to periodic review of agencies' rules and the public's involvement in that process; (9) another critical element of S. 981 is its emphasis on cost-benefit analysis for major rules in the rulemaking process; (10) GAO has been examining 20 economic analyses at 5 agencies to determine the extent to which those analyses contain the best practices elements recommended in OMB's January 1996 guidance for conducting cost-benefit analyses; (11) the 20 economic analyses varied significantly in the extent to which they contained the elements that OMB recommended; and (12) agency officials stated that the variations in the degree to which the economic analyses followed OMB guidance and the limited use of the economic analyses were primarily caused by the limited degree of discretion that the underlying statutes permitted.
In July 2002, President Bush issued the National Strategy for Homeland Security. The strategy set forth overall objectives to prevent terrorist attacks within the United States, reduce America's vulnerability to terrorism, and minimize the damage and assist in the recovery from attacks that occur. The strategy further identified a plan to strengthen homeland security through the cooperation and partnering of federal, state, local, and private sector organizations on an array of functions. It also specified a number of federal departments, as well as nonfederal organizations, that have important roles in securing the homeland, with DHS having key responsibilities in implementing established homeland security mission areas. This strategy was updated and reissued in October 2007. In November 2002, the Homeland Security Act of 2002 was enacted into law, creating DHS. The act defined the department's missions to include preventing terrorist attacks within the United States; reducing U.S. vulnerability to terrorism; and minimizing the damage from, and assisting in the recovery from, attacks that occur within the United States. The act further specified major responsibilities for the department, including the analysis of information and protection of infrastructure; development of countermeasures against chemical, biological, radiological, nuclear, and other emerging terrorist threats; securing U.S. borders and transportation systems; and organizing emergency preparedness and response efforts. DHS began operations in March 2003. Its establishment represented a fusion of 22 federal agencies to coordinate and centralize the leadership of many homeland security activities under a single department. We have evaluated many of DHS's management functions and programs since the department's establishment, and have issued over 400 related products.
In particular, in August 2007, we reported on the progress DHS had made since its inception in implementing its management and mission functions. We also reported on broad themes that have underpinned DHS's implementation efforts, such as agency transformation, strategic planning, and risk management. Over the past five years, we have made over 900 recommendations to DHS on ways to improve operations and address key themes, such as to develop performance measures and set milestones for key programs and implement internal controls to help ensure program effectiveness. DHS has implemented some of these recommendations, taken actions to address others, and taken other steps to strengthen its mission activities and facilitate management integration. DHS has made progress in implementing its management functions in the areas of acquisition, financial, human capital, information technology, and real property management. Overall, DHS has made more progress in implementing its mission functions--border security; immigration enforcement; immigration services; and aviation, surface transportation, and maritime security; for example--than its management functions, reflecting an initial focus on implementing efforts to secure the homeland. DHS has had to undertake these critical missions while also working to transform itself into a fully functioning cabinet department--a difficult undertaking for any organization and one that can take, at a minimum, 5 to 7 years to complete even under less daunting circumstances. As DHS continues to mature as an organization, we have reported that it will be important that it works to strengthen its management areas since the effectiveness of these functions will ultimately impact its ability to fulfill its mission to protect the homeland. Acquisition Management. 
DHS's acquisition function includes managing and overseeing nearly $16 billion in acquisitions to support its broad and complex missions, such as information systems, new technologies, aircraft, ships, and professional services. DHS has recognized the need to improve acquisition outcomes and taken some positive steps to organize and assess the acquisition function, but continues to lack clear accountability for the outcomes of acquisition dollars spent. A common theme in our work on acquisition management is DHS's struggle to provide adequate support for its mission components and resources for departmentwide oversight. DHS has not yet accomplished its goal of integrating the acquisition function across the department. For example, the structure of DHS's acquisition function creates ambiguity about who is accountable for acquisition decisions because it depends on a system of dual accountability and cooperation and collaboration between the Chief Procurement Officer (CPO) and the component heads. In June 2007, DHS officials stated that they were in the process of modifying the lines of business management directive, which exempts the Coast Guard and the Secret Service from complying, to ensure that no contracting organization is exempt. This directive has not yet been revised. In September 2007, we reported on continued acquisition oversight issues at DHS, identifying that the department has not fully ensured proper oversight of its contractors providing services closely supporting inherently governmental functions. The CPO has established a departmentwide program to improve oversight; however, DHS has been challenged to provide the appropriate level of oversight and management attention to its service contracting and major investments, and we continue to be concerned that the CPO may not have sufficient authority to effectively oversee the department's acquisitions. DHS still has not developed clear and transparent policies and processes for all acquisitions.
Concerns have been raised about how the investment review process has been used to oversee its largest acquisitions, and the investment review process is still under revision. We have ongoing work reviewing oversight of DHS's major investments that follows up on our prior recommendations. Regarding the acquisition workforce, our work and that of the DHS IG have found acquisition workforce challenges across the department; we have ongoing work in this area as well. Financial Management. DHS's financial management efforts include consolidating or integrating component agencies' financial management systems. DHS has made progress in addressing financial management and internal control weaknesses and has designated a Chief Financial Officer, but the department continues to face challenges in these areas. However, since its establishment, DHS has been unable to obtain an unqualified or "clean" audit opinion on its financial statements. For fiscal year 2007, the independent auditor issued a disclaimer on DHS's financial statements and identified eight significant deficiencies in DHS's internal control over financial reporting, seven of which were so serious that they qualified as material weaknesses. DHS has taken steps to prepare corrective action plans for its internal control weaknesses by, for example, developing and issuing a departmentwide strategic plan for the corrective action plan process and holding workshops on corrective action plans. While these are positive steps, DHS and its components have not yet fully implemented corrective action plans to address all significant deficiencies--including the material weaknesses--identified by previous financial statement audits. According to DHS officials, the department has developed goals and milestones for addressing these weaknesses in its internal control over financial reporting.
Until these weaknesses are resolved, DHS will not be in a position to provide reliable, timely, and useful financial data to support day-to-day decision making. Human Capital Management. DHS's key human capital management areas include pay, performance management, classification, labor relations, adverse actions, employee appeals, and diversity management. DHS has significant flexibility to design a modern human capital management system, and in October 2004 DHS issued its human capital strategic plan. DHS and the Office of Personnel Management jointly released the final regulations on DHS's new human capital system in February 2005. Although DHS intended to implement the new personnel system in the summer of 2005, court decisions enjoined the department from implementing certain labor management portions of the system. DHS has since taken actions to implement its human capital system. In July 2005, DHS issued its first departmental training plan, and in April 2007, it issued its Fiscal Year 2007 and 2008 Human Capital Operational Plan. This plan identifies five department priorities--hiring and retaining a talented and diverse workforce; creating a DHS-wide culture of performance; creating high-quality learning and development programs for DHS employees; implementing a DHS-wide integrated leadership system; and being a model of human capital service excellence. DHS has met some of the goals identified in the plan, such as developing a hiring model and a communication plan. However, more work remains for DHS to fully implement its human capital system. For example, DHS has not yet taken steps to fully link its human capital planning to overall agency strategic planning, nor has it established a market-based and more performance-oriented pay system. DHS has also faced difficulties in developing and implementing effective processes to recruit and hire employees.
Although DHS has developed its hiring model and provided it to all components, we reported in August 2007 that DHS had not yet assessed components' practices against the model. Furthermore, employee morale at DHS has been low, as measured by the results of the 2006 U.S. Office of Personnel Management Federal Human Capital Survey. DHS has taken steps to seek feedback from employees and involve them in decision making by, for example, expanding its communication strategy and developing an overall strategy for addressing employee concerns reflected in the survey results. In addition, although DHS has developed a department-level training strategy, it has faced challenges in fully implementing this strategy. Information Technology Management. DHS's information technology management efforts should include: developing and using an enterprise architecture, or corporate blueprint, as an authoritative frame of reference to guide and constrain system investments; defining and following a corporate process for informed decision making by senior leadership about competing information technology investment options; applying system and software development and acquisition discipline and rigor when defining, designing, developing, testing, deploying, and maintaining systems; establishing a comprehensive, departmentwide information security program to protect information and systems; having sufficient people with the right knowledge, skills, and abilities to execute each of these areas now and in the future; and centralizing leadership for extending these disciplines throughout the organization with an empowered Chief Information Officer. DHS has undertaken efforts to establish and institutionalize the range of information technology management controls and capabilities noted above that our research and past work have shown are fundamental to any organization's ability to use technology effectively to transform itself and accomplish mission goals.
For example, DHS has organized roles and responsibilities for information technology management under the Chief Information Officer. DHS has also developed an information technology human capital plan that is largely consistent with federal guidance and associated best practices. In particular, we reported that the plan fully addressed 15 and partially addressed 12 of 27 practices set forth in the Office of Personnel Management's human capital framework. However, we reported that DHS's overall progress in implementing the plan had been limited. With regard to information technology investment management, DHS has established a management structure to help manage its investments. However, DHS has not fully implemented any of the key practices that our information technology investment management framework specifies as needed to actually control investments. Furthermore, DHS has developed an enterprise architecture, but we have reported that major DHS information technology investments have not been fully aligned with DHS's enterprise architecture. In addition, DHS has not fully implemented a comprehensive information security program. While it has taken actions to ensure that its certification and accreditation activities are completed, the department has not shown the extent to which it has strengthened incident detection, analysis, and reporting and testing activities. Real Property Management. DHS's responsibilities for real property management are specified in Executive Order 13327, "Federal Real Property Asset Management," and include the establishment of a Senior Real Property Officer, development of an asset inventory, and development and implementation of an asset management plan and performance measures.
In June 2006, the Office of Management and Budget upgraded DHS's Real Property Asset Management Score from red to yellow after DHS developed an Asset Management Plan, developed a generally complete real property data inventory, submitted this inventory for inclusion in the governmentwide real property inventory database, and established performance measures consistent with Federal Real Property Council standards. DHS also designated a Senior Real Property Officer. However, in August 2007 we reported that DHS had yet to demonstrate full implementation of its asset management plan and full use of asset inventory information and performance measures in management decision making. Our work has identified various cross-cutting issues that have hindered DHS's progress in its management areas. We have reported that while it is important that DHS continue to work to strengthen each of its core management functions, it is equally important that these key issues be addressed from a comprehensive, departmentwide perspective to help ensure that the department has the structure and processes in place to effectively address the threats and vulnerabilities that face the nation. These issues include agency transformation, strategic planning and results management, and accountability and transparency. Agency Transformation. In 2007 we reported that DHS's implementation and transformation remained high risk because DHS had not yet developed a comprehensive management integration strategy and its management systems and functions--especially those related to acquisition, financial, human capital, and information technology management--were not yet fully integrated and wholly operational. We have recommended, among other things, that agencies on the high-risk list produce a corrective action plan that defines the root causes of identified problems, identifies effective solutions to those problems, and provides for substantially completing corrective measures in the near term.
Such a plan should include performance metrics and milestones, as well as mechanisms to monitor progress. In March 2008 we received a draft of DHS's corrective action plan and have provided the department with some initial feedback. We will continue to review the plan and expect to be able to provide additional comments on the plan in the near future. Strategic Planning and Results Management. DHS has not always implemented effective strategic planning efforts, has not yet issued an updated strategic plan, and has not yet fully developed adequate performance measures or put into place structures to help ensure that the agency is managing for results. DHS has developed performance goals and measures for some of its programs and reports on these goals and measures in its Annual Performance Report. However, some of DHS's components have not developed adequate outcome-based performance measures or comprehensive plans to monitor, assess, and independently evaluate the effectiveness of their plans and performance. Since issuance of our August 2007 report, DHS has begun to develop performance goals and measures for some areas in an effort to strengthen its ability to measure its progress in key management and mission areas. We commend DHS's efforts to measure its progress in these areas and have agreed to work with the department to provide input to help strengthen established measures. Accountability and Transparency. Accountability and transparency are critical to the department effectively integrating its management functions and implementing its mission responsibilities. We have reported that it is important that DHS make its management and operational decisions transparent enough so that Congress can be sure that it is effectively, efficiently, and economically using the billions of dollars in funding it receives annually.
We have encountered delays at DHS in obtaining access to needed information, which have impacted our ability to conduct our work in a timely manner. Since we highlighted this issue last year to this subcommittee, our access to information at DHS has improved. For example, TSA has worked with us to improve its process for providing us with access to documentation. DHS also provided us with access to its national level preparedness exercise. Moreover, in response to the provision in the DHS Appropriations Act, 2008, that restricts a portion of DHS's funding until DHS certifies and reports that it has revised its guidance for working with GAO, DHS has provided us with a draft version of its revised guidance. We have provided DHS with comments on this draft and look forward to continuing to collaborate with the department. DHS is now 5 years old, a key milestone for the department. Since its establishment, DHS has had to undertake actions to secure the border and the transportation sector and defend against, prepare for, and respond to threats and disasters while simultaneously working to transform itself into a fully functioning cabinet department. Such a transformation is a difficult undertaking for any organization and can take, at a minimum, 5 to 7 years to complete even under less daunting circumstances. Nevertheless, DHS's 5-year anniversary provides an opportunity for the department to review how it has matured as an organization. As part of our broad range of work reviewing DHS management and mission programs, we will continue to assess in the coming months DHS's progress in addressing high-risk issues. In particular, we will continue to assess the progress made by the department in its transformation and information sharing efforts, and whether any progress made is sustainable over the long term.
Further, as DHS continues to evolve and transform, we will review its progress and performance and provide information to Congress and the public on its efforts. This concludes my prepared statement. I would be pleased to answer any questions you and the Subcommittee Members may have. For further information about this testimony, please contact Norman J. Rabkin, Managing Director, Homeland Security and Justice, at 202-512-8777 or [email protected]. Other key contributors to this statement were Cathleen A. Berrick, Anthony DeFrank, Rebecca Gambler, and Thomas Lombardi. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) began operations in March 2003 with missions that include preventing terrorist attacks from occurring within the United States, reducing U.S. vulnerability to terrorism, minimizing damages from attacks that occur, and helping the nation recover from any attacks. GAO has reported that the implementation and transformation of DHS is an enormous management challenge. GAO's prior work on mergers and acquisitions found that successful transformations of large organizations, even those faced with less strenuous reorganizations than DHS, can take at least 5 to 7 years to achieve. This testimony addresses (1) the progress made by DHS in implementing its management functions; and (2) key issues that have affected the department's implementation efforts. This testimony is based on GAO's August 2007 report evaluating DHS's progress between March 2003 and July 2007; selected reports issued since July 2007; and GAO's institutional knowledge of homeland security and management issues. Within each of its management areas--acquisition, financial, human capital, information technology, and real property management--DHS has made some progress, but has also faced challenges. DHS has recognized the need to improve acquisition outcomes and taken some positive steps to organize and assess the acquisition function, but continues to lack clear accountability for the outcomes of acquisition dollars spent. The department also has not fully ensured proper oversight of its contractors providing services closely supporting inherently government functions. DHS has designated a Chief Financial Officer and taken actions to prepare corrective action plans for its internal control weaknesses. However, DHS has been unable to obtain an unqualified audit opinion of its financial statements, and for fiscal year 2007 the independent auditor identified significant deficiencies in DHS's internal control over financial reporting. 
DHS has taken actions to implement its human capital system by, for example, issuing a departmental training plan and human capital operational plan. Among other things, DHS still needs to implement a human capital system linked to its strategic plan, establish a market-based and more performance-oriented pay system, and seek more routine feedback from employees. DHS has taken actions to develop information technology management controls, such as developing an information technology human capital plan and developing policies to ensure the protection of sensitive information. However, DHS has not yet fully implemented a comprehensive information security program or a process to effectively manage information technology investments. DHS has developed an Asset Management Plan and established performance measures consistent with Federal Real Property Council standards. However, DHS has yet to demonstrate full implementation of its Asset Management Plan or full use of asset management inventory information. Various cross-cutting issues have affected DHS's implementation efforts. For example, DHS has not yet updated its strategic plan and put in place structures to help it manage for results. Accountability and transparency are critical to effectively implementing DHS's management functions. GAO has experienced delays in obtaining access to needed information from DHS, though over the past year, GAO's access has improved. GAO is hopeful that planned revisions to DHS's guidance for working with GAO will streamline GAO's access to documents and officials. DHS's 5-year anniversary provides an opportunity for the department to review how it has matured as an organization. As part of its broad range of work, GAO will continue to assess DHS's progress in addressing high-risk issues. In particular, GAO will continue to assess the progress made by the department in its transformation efforts and whether any progress made is sustainable over the long term.
DOD faces a number of longstanding and systemic challenges that have hindered its ability to achieve more successful acquisition outcomes--obtaining the right goods and services, at the right time, at the right cost. These challenges include addressing the issues posed by DOD's reliance on contractors, ensuring that DOD personnel use sound contracting approaches, and maintaining a workforce with the skills and capabilities needed to properly manage acquisitions and oversee contractors. The issues encountered in Iraq and Afghanistan are emblematic of these systemic challenges, though their significance and effect are heightened in a contingency environment. Our concerns about DOD's acquisition of services, including the department's reliance on contractors and the support they provide to deployed forces, predate the operations in Iraq and Afghanistan. We identified DOD contract management as a high-risk area in 1992, and since then we have continued to identify a need for DOD to better manage services acquisitions at both the strategic and individual contract levels. Similarly, in 1997 we raised concerns about DOD's management and use of contractors to support deployed forces in Bosnia. We have issued a number of reports on operational contract support since that time, and our recent high-risk update specifically highlighted the need for increased management attention to address operational contract support. Contractors can provide many benefits, such as unique skills, expertise, and flexibility to meet unforeseen needs, but relying on contractors to support core missions can place the government at risk of transferring government responsibilities to contractors. In 2008, we concluded that the increased reliance on contractors required DOD to engage in a fundamental reexamination of when and under what circumstances it should use contractors versus civil servants or military personnel.
Earlier this year, we reported that the department lacked good information on the roles and functions fulfilled by contractors. Our work has concluded that DOD's reliance on contractors is still not fully guided by either an assessment of the risks using contractors may pose or a systematic determination of which functions and activities should be contracted out and which should be performed by civilian employees or military personnel. The absence of systematic assessments of the roles and functions that contractors should perform is also evident in contingency environments. For example, in June 2010 we reported that DOD had not fully planned for the use of contractors in support of operations in Iraq and Afghanistan and needed to improve planning for operational contract support in future operations. In addition, we reported that while U.S. Forces-Iraq had taken steps to identify all the Army's Logistics Civil Augmentation Program (LOGCAP) contract support needed for the drawdown in Iraq, it had not identified the other contractor support it may need. We found that the May 2009 drawdown plan had delegated responsibility for determining contract support requirements to contracting agencies rather than to operational personnel. However, DOD contracting officials told us that they could not determine the levels of contractor services required or plan for reductions based on those needs because they lacked sufficient, relevant information on requirements for contractor services during the drawdown. Similarly for Afghanistan, we found that despite the additional contractors that would be needed to support the troop increase, U.S. Forces-Afghanistan was engaged in very little planning for contractors with the exception of planning for the increased use of LOGCAP. Further, we have reported on limitations in DOD's ability to track contractor personnel deployed with U.S. forces. 
In January 2007, DOD designated the Synchronized Predeployment and Operational Tracker (SPOT) as its primary system for tracking data on contractor personnel deployed with U.S. forces. SPOT was designed to account for all U.S., local, and third-country national contractor personnel by name and to contain a summary of services being provided and information on government-provided support. Our reviews of SPOT, however, have highlighted shortcomings in the system's implementation in Iraq and Afghanistan. For example, we found that varying interpretations by DOD officials as to which contractor personnel should be entered into the system resulted in SPOT not presenting an accurate picture of the total number of contractor personnel in Iraq or Afghanistan. In addition, we reported in 2009 that DOD's lack of a departmentwide policy for screening local or third-country nationals--who constitute the majority of DOD contractor personnel in Iraq and Afghanistan--poses potential security risks. We are currently assessing DOD's process for vetting firms that are supporting U.S. efforts in Afghanistan. Regarding planning for the use of contractors in future operations, since February 2006 DOD guidance has called for the integration of an operational contract support annex--Annex W--into certain combatant command operation plans, if applicable to the plan. However, 4 years later we reported that of the potential 89 plans that may require an Annex W, only 4 operation plans with Annex Ws had been approved by the department. As a result, DOD risks not fully understanding the extent to which it will be relying on contractors to support combat operations and being unprepared to provide the necessary management and oversight of deployed contractor personnel. Moreover, the combatant commanders are missing an opportunity to fully evaluate and react to the potential risks of reliance on contractors. 
While the strategic level defines the direction and manner in which an organization pursues improvements in services acquisition, it is through the development, execution, and oversight of individual contracts that the strategy is implemented. Keys to doing so are having clearly defined and valid requirements, a sound contract, and effective contractor management and oversight. In short, DOD, like all organizations, needs to assure itself that it is buying the right thing in the right way and that doing so results in the desired outcome. Our work over the past decade identified weaknesses in each of these key areas, whether for services provided in the United States or abroad, as illustrated by the following examples: In June 2007, we reported that DOD understated the extent to which it used time-and-materials contracts, which can be awarded quickly and adjusted when requirements or funding are uncertain. We found few attempts to convert follow-on work to less risky contract types and found wide discrepancies in DOD's oversight. That same month we also reported that DOD personnel failed to definitize--or reach final agreement on--contract terms within required time frames in 60 percent of the 77 contracts we reviewed. Until contracts are definitized, DOD bears increased risk because contractors have little incentive to control costs. We then reported in July 2007 that DOD had not completed negotiations on certain task orders in Iraq until more than 6 months after the work began and after most of the costs had been incurred, contributing to its decision to pay the contractor nearly all of the $221 million questioned by auditors. We subsequently reported in 2010 that DOD had taken several actions to enhance departmental insight into and oversight of undefinitized contract actions; however, data limitations hindered DOD's full understanding of the extent to which they are used. 
As early as 2004, we raised concerns about DOD's ability to effectively administer and oversee contracts in Iraq. We noted that effective contract administration and oversight remained challenging in part because of the continued expansion of reconstruction efforts, staffing constraints, and need to operate in an unsecure and threatening environment. In 2008, we reported that the lack of qualified personnel hindered oversight of contracts to maintain military equipment in Kuwait and provide linguistic services in Iraq and questioned whether DOD could sustain increased oversight of its private security contractors. During our 2010 visits with deployed and recently returned units, we found that units continue to deploy to Afghanistan without designating contracting officer's representatives beforehand and that those representatives often lacked the technical knowledge and training needed to effectively oversee certain contracts. Several units that had returned from Afghanistan told us that contracting officer's representatives with no engineering background were often asked to oversee construction projects and were unable to ensure that the buildings and projects they oversaw met the technical specifications required in the drawing plans. We are currently assessing the training on the use of contract support that is provided to military commanders, contracting officer's representatives, and other nonacquisition personnel before they deploy. Underlying the ability to properly manage the acquisition of goods and services is having a workforce with the right skills and capabilities. DOD recognizes that the defense acquisition workforce, which was downsized considerably through the 1990s, faces increases in the volume and complexity of work because of increases in services contracting, ongoing contingency operations, and other critical missions. 
For example, while contract spending dramatically increased from fiscal years 2001 through 2008, DOD reported that its acquisition workforce decreased by 2.6 percent over the same period. In April 2010, DOD issued an acquisition workforce plan that identified planned workforce growth, specified recruitment and retention goals, and forecasted workforce-wide attrition and retirement trends. As part of that plan, DOD announced that it would increase the size of two oversight organizations--the Defense Contract Audit Agency and the Defense Contract Management Agency--over the next several years to help reduce the risk of fraud, waste, and abuse in DOD contracts. However, we reported in September 2010 that DOD had not completed its assessment of the critical skills and competencies of its overall acquisition workforce and that it had not identified the funding needed for its initiatives until the conclusion of our review. The current budget situation raises questions as to whether DOD will be able to sustain its projected workforce growth and related initiatives. We are currently reviewing the Defense Contract Management Agency's capacity for oversight and surveillance of contracting activity domestically in light of its role in contingency operations. DOD has recognized the need to take action to address the challenges it faces regarding contract management and its reliance on contractors, including those related to operational contract support. Over the past several years, the department has announced new policies, guidance and training initiatives, but not all of these actions have been implemented and their expected benefits have not yet been fully realized. While these actions are steps in the right direction, we noted in our February 2011 high-risk update that to improve outcomes on the billions of dollars spent annually on goods and services, sustained DOD leadership and commitment are needed to ensure that policies are consistently put into practice. 
Specifically, we concluded that DOD needs to take steps to
* strategically manage services acquisition, including defining and measuring against desired outcomes, and developing the data needed to do so;
* determine the appropriate mix, roles, and responsibilities of contractor, federal civilian, and military personnel;
* assess the effectiveness of efforts to address prior weaknesses with specific contracting arrangements and incentives;
* ensure that its acquisition workforce is adequately sized, trained, and equipped to meet the department's needs; and
* fully integrate operational contract support throughout the department through education and predeployment training.
DOD has generally agreed with the recommendations we have previously made and has actions under way to implement them. I would like to touch on a few of the actions already taken by DOD. On a broad level, for example, improved DOD guidance, DOD's initiation and use of independent management reviews for high-dollar services acquisitions, and other steps to promote the use of sound business arrangements have begun to address several weaknesses, such as the department's management and use of time-and-materials contracts and undefinitized contract actions. Further, DOD has identified steps to promote more effective competition in its acquisitions, such as requiring contracting officers to take additional actions when DOD receives only one bid in response to a solicitation and revising its training curriculum to help program and acquisition personnel develop and better articulate the department's requirements. Similarly, efforts are under way to reduce the department's reliance on contractors. In April 2009, the Secretary of Defense announced his intent to reduce the department's reliance on contractors by hiring new personnel and by converting, or in-sourcing, functions currently performed by contractors to DOD civilian personnel.
To help provide better insights into, among other things, the number of contractors providing services to the department and the functions they perform and to help make informed workforce decisions, Congress enacted legislation in 2008 requiring DOD to annually compile and review an inventory of activities performed pursuant to contracts for services. In January 2011, we reported that while DOD had taken actions to reduce prior inconsistencies resulting from DOD components using different approaches to compile the inventory, it still faced data and estimating limitations that raised questions about the accuracy and usefulness of the data. Given this early state of implementation, the inventory and associated review processes are being used to various degrees by the military departments to help inform workforce decisions, with the Army generally using the inventories to a greater degree than the other military departments. Later this year we will review DOD's strategic human capital plans for both its civilian and acquisition workforces, the status of efforts to in-source functions previously performed by contractor personnel, and DOD's upcoming inventory of services. Furthermore, DOD has taken several steps intended to improve planning for the use of contractors in contingencies and to improve contract administration and oversight. For example, in the area of planning for the use of contractors, in October 2008 the department issued Joint Publication 4-10, Operational Contract Support, which establishes doctrine and provides standardized guidance for and information on planning, conducting, and assessing operational contract support integration, contractor management functions, and contracting command and control organizational options in support of joint operations. 
DOD also provided additional resources for deployed contracting officers and their representatives through the issuance of the Joint Contingency Contracting Handbook in 2007 and the Deployed Contracting Officer's Representative Handbook in 2008. In 2009, the Army issued direction to identify the need for contracting officer's representatives, their roles and responsibilities, and their training when coordinating operational unit replacements. Our work found that beyond issuing new policies and procedures, DOD needs to fundamentally change the way it approaches operational contract support. In June 2010, we called for a cultural change in DOD that emphasizes an awareness of operational contract support throughout all aspects of the department to help it address the challenges it faces in ongoing and future operations. This view is now apparently shared by the department. In a January 2011 memorandum, the Secretary of Defense expressed concern about the risks introduced by DOD's current level of dependency on contractors, future total force mix, and the need to better plan for operational contract support in the future. Toward that end, he directed the department to undertake a series of actions related to force mix, contract support integration, planning, and resourcing. According to the Secretary, his intent was twofold: to initiate action now and to subsequently codify the memorandum's initiatives in policy and through doctrine, organization, training, materiel, leadership, education, personnel, and facilities changes and improvements. He concluded that the time was at hand, while the lessons learned from recent operations were fresh, to institutionalize the changes necessary to influence a cultural shift in how DOD views, accounts for, and plans for contractors and personnel support in contingency environments. 
The Secretary's recognition and directions are significant steps, yet cultural change will require sustained commitment from senior leadership for several years to come. While my statement has focused on the challenges confronting DOD, our work involving State and USAID has found similar issues, particularly related to not planning for and not having insight into the roles performed by contractors and workforce challenges. The need for visibility into contracts and contractor personnel to inform decisions and oversee contractors is critical, regardless of the agency, as each relies extensively on contractors to support and carry out its missions in Iraq and Afghanistan. Our work has identified gaps in USAID and State's workforce planning efforts related to the role and extent of reliance on contractors. We noted, for example, in our 2004 and 2005 reviews of Afghanistan reconstruction efforts that USAID did not incorporate information on the contractor resources required to implement the strategy, hindering its efforts to make informed resource decisions. More generally, in June 2010, we reported that USAID's 5-year workforce plan for fiscal years 2009 through 2013 had a number of deficiencies, such as lacking supporting workforce analyses that covered the agency's entire workforce, including contractors, and not containing a full assessment of the agency's workforce needs, including identifying existing workforce gaps and staffing levels required to meet program needs and goals. Similarly, in April 2010, we noted that State's departmentwide workforce plan generally does not address the extent to which contractors should be used to perform specific functions, such as contract and grant administration. As part of State's fiscal year 2011 budget process, State asked its bureaus to focus on transitioning some activities from contractors to government employees. 
State officials told us, however, that departmentwide workforce planning efforts generally have not addressed the extent to which the department should use contractors because those decisions are left up to individual bureaus. State noted that in response to Office of Management and Budget guidance, a pilot study was under way regarding the appropriate balance of contractor and government positions, to include a determination as to whether or not the contracted functions are inherently governmental, closely associated with inherently governmental, or mission critical. In the absence of strategic planning, we found that it was often individual contracting or program offices within State and USAID that made case-by-case decisions on the use of contractors to support contract or grant administration functions. For example, USAID relied on a contractor to award and administer grants in Iraq to support community-based conflict mitigation and reconciliation projects, while State relied on a contractor to identify and report on contractor performance problems and assess contractor compliance with standard operating procedures for its aviation program in Iraq. State and USAID officials generally cited a lack of a sufficient number of government staff, a lack of in-house expertise, or frequent rotations among government personnel as key factors contributing to their decisions to use contractors. Our work over the past three years to provide visibility into the number of contractor personnel and contracts associated with the U.S. efforts in Iraq and Afghanistan found that State and USAID continue to lack good information on the number of contractor personnel working under their contracts. State and USAID had agreed to use the SPOT database to track statutorily required information. The system still does not reliably track the agencies' information on contracts, assistance instruments, and associated personnel in Iraq or Afghanistan.
As a result, the agencies relied on other data sources, which had their own limitations, to respond to our requests for information. We plan to report on the agencies' efforts to track and use data on contracts, assistance instruments, and associated personnel in Iraq or Afghanistan later this year. The agencies have generally agreed with the recommendations we have made to address these challenges. To their credit, senior agency leaders acknowledged that they came to rely on contractors and other nongovernmental organizations to carry out significant portions of State and USAID's missions. For example, the Quadrennial Diplomacy and Development Review (QDDR), released in December 2010, reported that much of what used to be the exclusive work of government has been turned over to private actors, both for profit and not for profit. As responsibilities mounted and staffing levels stagnated, State and USAID increasingly came to rely on outsourcing, with contracts and grants to private entities often representing the default option to meet the agencies' growing needs. Further, the QDDR recognized the need for the agencies to rebalance the workforce by determining what functions must be conducted by government employees and what functions can be carried out by nongovernment entities working on behalf of and under the direction of the government. As part of this effort, the QDDR called for State and USAID to ensure that work that is critical to carrying out their core missions is performed by an adequate number of government employees. The review also recommended that for contractor-performed functions, the agencies develop well-structured contracts with effective contract administration and hold contractors accountable for performance and results. Along these lines, the Administrator of USAID recently announced a series of actions intended to improve the way USAID does business, including revising its procurement approach. 
The acknowledgment of increased contractor reliance and the intention to examine their roles is important, as is developing well-structured contracts and effectively administering contracts. Left unaddressed, these challenges may pose potentially serious consequences to achieving the U.S. government's policy objectives in Iraq and Afghanistan. For example, in March 2011, the Secretary of State testified that the department is not in an "optimal situation," with contractors expected to comprise 84 percent of the U.S. government's workforce in Iraq. We recently initiated a review of State's capacity to plan for, award, administer, and oversee contracts with performance in conflict environments, such as Iraq and Afghanistan. As part of this review, we will assess the department's workforce both in terms of number of personnel and their expertise to carry out acquisition functions, including contractor oversight. We will also assess the status of the department's efforts to enhance its workforce to perform these functions. The issues I discussed today--contract management, the use of contractors in contingency environments, and workforce challenges--are not new and will not be resolved overnight, but they need not be enduring or intractable elements of the acquisition environment. The challenges encountered in Iraq and Afghanistan are the result of numerous factors, including poor strategic and acquisition planning, inadequate contract administration and oversight, and an insufficient number of trained acquisition and contract oversight personnel. These challenges manifest in various ways, including higher costs, schedule delays, and unmet goals, but they also increase the potential for fraud, waste, abuse, and mismanagement in contingency environments such as Iraq and Afghanistan. While our work has provided examples that illustrate some effects of such shortcomings, in some cases, estimating their financial effect is not feasible or practicable. 
The inability to quantify the financial impact should not, however, detract from efforts to achieve greater rigor and accountability in the agencies' strategic and acquisition planning, internal controls, and oversight efforts. Stewardship over contingency resources should not be seen as conflicting with mission execution or the safety and security of those so engaged. Toward that end, the agencies have recognized that the status quo is not acceptable and that proactive, strategic, and deliberate analysis and sustained commitment and leadership are needed to produce meaningful change and make the risks more manageable. DOD has acknowledged the need to institutionalize operational contract support and set forth a commitment to encourage cultural change in the department. State and USAID must address similar challenges, including the use and role of contractors in contingency environments. The recent QDDR indicates that the agencies have recognized the need to do so. These efforts are all steps in the right direction, but agreeing that change is needed at the strategic policy level must be reflected in the decisions made by personnel on a day-to-day basis. Chairman Thibault, Chairman Shays, this concludes my prepared statement. I would be happy to respond to any questions you or the other commissioners may have. For further information about this statement, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Johana R. Ayers, Vince Balloon, Jessica Bull, Carole Coffey, Timothy DiNapoli, Justin Jaynes, Sylvia Schatz, Sally Williamson, and Gwyneth Woolwine. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Defense (DOD) obligated about $367 billion in fiscal year 2010 to acquire goods and services to meet its mission and support its operations, including those in Iraq and Afghanistan. GAO's work, as well as that of others, has documented shortcomings in DOD's strategic and acquisition planning, contract administration and oversight, and acquisition workforce. These are challenges that need to be addressed by DOD and by the Department of State and the U.S. Agency for International Development (USAID) as they carry out their missions in Iraq and Afghanistan and prepare for future contingencies. Today's statement discusses (1) contract management challenges faced by DOD, including those that take on heightened significance in a contingency environment; (2) actions DOD has taken and those needed to address these challenges; and (3) similar challenges State and USAID face. The statement is drawn from GAO's body of work on DOD contingency contracting, contract management, and workforce, as well as prior reports on State and USAID's contracting and workforce issues. DOD faces a number of longstanding and systemic challenges that hinder its ability to achieve more successful acquisition outcomes--obtaining the right goods and services, at the right time, at the right cost. These challenges include addressing the issues posed by DOD's reliance on contractors, ensuring that DOD personnel use sound contracting approaches, and maintaining a workforce with the skills and capabilities needed to properly manage acquisitions and oversee contractors. The issues encountered with contracting in Iraq and Afghanistan are emblematic of these systemic challenges, though their significance and impact are heightened in a contingency environment. GAO's concerns regarding DOD contracting predate the operations in Iraq and Afghanistan. 
GAO identified DOD contract management as a high-risk area in 1992 and raised concerns in 1997 about DOD's management and use of contractors to support deployed forces in Bosnia. In the years since then, GAO has continued to identify a need for DOD to better manage and oversee its acquisition of services. DOD has recognized the need to address the systemic challenges it faces, including those related to operational contract support. Over the past several years, DOD has announced new policies, guidance, and training initiatives, but not all of these actions have been implemented and their expected benefits have not yet been fully realized. While DOD's actions are steps in the right direction, DOD needs to (1) strategically manage services acquisition, including defining desired outcomes; (2) determine the appropriate mix, roles, and responsibilities of contractor, federal civilian, and military personnel; (3) assess the effectiveness of efforts to address prior weaknesses with specific contracting arrangements and incentives; (4) ensure that its acquisition workforce is adequately sized, trained, and equipped; and (5) fully integrate operational contract support throughout the department through education and predeployment training. In that regard, in June 2010 GAO called for a cultural change in DOD that emphasizes an awareness of operational contract support throughout all aspects of the department. In January 2011, the Secretary of Defense expressed concerns about DOD's current level of dependency on contractors and directed the department to take a number of actions. The Secretary's recognition and directions are significant steps, yet instilling cultural change will require sustained commitment and leadership. State and USAID face contracting challenges similar to DOD's, particularly with regard to planning for and having insight into the roles performed by contractors. 
In April 2010, GAO reported that State's workforce plan did not address the extent to which contractors should be used to perform specific functions. Similarly, GAO reported that USAID's workforce plan did not contain analyses covering the agency's entire workforce, including contractors. The recently issued Quadrennial Diplomacy and Development Review recognized the need for State and USAID to rebalance their workforces and directed the agencies to ensure that they have an adequate number of government employees to carry out their core missions and to improve contract administration and oversight. GAO has made multiple recommendations to the agencies to address contracting and workforce challenges. The agencies have generally agreed with the recommendations and have efforts under way to implement them.
From fiscal year 2007 through 2012, the total number of female servicemembers has grown from 200,941 to 208,905, with female servicemembers comprising about 14 percent of the total active duty force. During this time, the largest number of active-duty female servicemembers resided in the Army. (See fig. 1.) In fiscal year 2012, more than three-quarters of the Army's female servicemember population was age 35 and under, with the largest group being between 18 and 24 years old. Recommendations for female-specific preventative health screenings are based on age, such as cervical cancer screening, which would be applicable for female servicemembers from an early age, while others, such as mammograms, are currently not recommended until age 50, absent any personal history of health problems of this nature. (See fig. 2.) DOD operates its own large, complex health system--the Military Health System--that provides health care to approximately 9.7 million beneficiaries across a range of venues, from MTFs located on military installations to the battlefield. These beneficiaries include active-duty servicemembers and their dependents, eligible National Guard and Reserve servicemembers and their dependents, and retirees and their dependents or survivors. The Military Health System has a dual health care mission: supporting wartime and other deployments, known as the readiness mission, and providing peacetime care, known as the benefits mission. The readiness mission provides medical services and support to the armed forces during military operations and involves deploying medical personnel and equipment, as needed, around the world to support military forces. The benefits mission provides medical services and support to members of the armed forces, their family members, and others eligible for DOD health care. The care of the eligible beneficiary population is spread across the military departments--Army, Navy, and Air Force.
Each military department delivers care directly through its own MTFs, which are managed by their medical departments, including the Army's Medical Command (MEDCOM); the Navy's Bureau of Medicine and Surgery, which is also responsible for providing health care to members of the Marine Corps and their beneficiaries; and the Air Force Medical Service. Servicemembers obtain health care through the military services' system of MTFs, which is supplemented by participating civilian health care providers, institutions, and pharmacies to facilitate access to health care services when necessary. Active-duty servicemembers receive most of their care from MTFs, where they receive priority access over other beneficiaries. Within the continental United States, the Army is organized into three medical regions--Northern, Southern, and Western--each headed by a subordinate regional medical command, which exercises authority over the MTFs in its region. Across the three regions, there are 27 domestic Army installations with a primary MTF, which report directly to the regional medical commands and are responsible for reporting information for other associated MTFs, which may include smaller MTFs, such as clinics, on the same installation, as well as MTFs on different Army installations or at installations operated by other military services. For example, at Fort Benning, there are multiple facilities located on the installation, including Martin Army Community Hospital--the primary MTF--as well as several clinics. In addition to reporting for all of those facilities, Martin Army Community Hospital also reports to the regional medical command for other Army facilities located off the installation, including an Army clinic at Eglin Air Force Base in Florida. All Army MTFs, both primary and associated, can be classified under one of three categories on the basis of their size: Army Health Centers/Clinics are generally the smallest facilities only offering outpatient primary care. 
Army Community Hospitals are larger than clinics and offer primary and secondary care, such as inpatient care and surgery under anesthesia. Army Medical Centers are generally the largest facilities offering primary and secondary care as well as other care, such as cancer treatments, neonatal care, and specialty diagnostics. Each of the military services is responsible for maintaining the medical readiness of its active-duty force. DOD's IMR policy establishes six elements for the military services to assess in order to determine a servicemember's medical readiness to deploy: (1) deployment-limiting conditions, (2) dental readiness, (3) immunization status, (4) individual medical equipment, (5) medical readiness laboratory tests, and (6) periodic health assessments (PHA). DOD's policy establishes a baseline of standards for continuously assessing each of the IMR elements. In addition to this, each of the military services establishes its own policy that may include more specific criteria. Each military service is responsible for assessing and categorizing a servicemember's IMR as follows:
* Fully medically ready: current in all categories.
* Partially medically ready: lacking one or more immunizations, readiness laboratory tests, or medical equipment.
* Not medically ready: existence of a chronic or prolonged deployment-limiting condition, including servicemembers who are hospitalized or convalescing from serious illness or injury, or individuals who require urgent dental care.
* Medical readiness indeterminate: inability to determine the servicemember's current health status because of missing health information, such as a lost medical record, an overdue PHA, or an overdue dental exam.
All of the military services use different systems to collect information about IMR status. In addition, DOD requires that each of the services provide quarterly reports about the IMR status of their servicemembers.
DOD and the military services have a number of organizations that fund or conduct research, including research on health care issues that affect those who have served in a combat zone. The Defense Health Program within DOD's Office of the Assistant Secretary of Defense for Health Affairs receives significant funding for this research through its annual appropriation. Through an interagency agreement, the Army Medical Research and Materiel Command manages the day-to-day execution of this funding through joint program committees. There are several joint program committees that focus on specific research areas, including clinical and rehabilitative medicine and military operational medicine. Officials from the other military services participate in these committees. Research organizations from the military services, such as the Naval Medical Research Center, the Office of Naval Research, and the Air Force Medical Support Agency, also manage funds from the Defense Health Program for research. In addition to the military services, other organizations within DOD also fund or conduct research, including the TriService Nursing Research Program, which funds and supports research on military nursing. DOD's policy establishes six elements for assessing the IMR of a servicemember to deploy, most of which are gender-neutral. Four of the six elements--immunization status, medical readiness laboratory tests, individual medical equipment, and dental readiness--are gender-neutral; they apply equally to female and male servicemembers. In order to pass these elements of the IMR assessment, servicemembers must be current for each element, by having immunizations, including MMR (measles, mumps, and rubella); medical readiness laboratory tests, such as a human immunodeficiency virus test and results current within the past 24 months; individual medical equipment, such as gas mask inserts for all personnel needing visual correction; and an annual dental exam.
The remaining elements of IMR--deployment-limiting conditions and PHAs--include some aspects that are specific to female servicemembers. The Army, Navy, Air Force, and Marine Corps have policies that define pregnancy as a deployment-limiting condition. In addition, they also have policies that establish a postpartum deferment period--generally 6 months after delivery--when a female servicemember is not required to deploy. The deferment period was established in order to provide for medical recovery from childbirth and to allow additional time to prepare family care plans and child care. However, each of the military services has a policy that allows the servicemember to voluntarily deploy before the period has expired. In addition, cancer that requires continuing treatment and specialty evaluations can also be a deployment-limiting condition. Although cancer treatment could affect both male and female servicemembers, there are some cancers that would be specific to female servicemembers, such as ovarian cancer, while other cancers are specific to male servicemembers, such as prostate cancer. The PHA includes a review of information about preventative screenings and counseling for each servicemember. Some of the preventative screenings that are reviewed as part of the PHA are female-specific, such as mammograms and pap smears. To satisfy this element of IMR, a servicemember's PHA must be current--the assessment of any changes in health status must have occurred within the past year--for both female and male servicemembers. The results of these preventative screenings do not negatively affect this element of a servicemember's readiness assessment even when follow-on studies, labs, referrals or additional visits may be pending or planned. Nonetheless, these screenings could identify a health issue that would be considered a deployment-limiting condition--a separate element of IMR--and could therefore limit readiness. 
For example, the results of a mammogram may identify cancer that requires treatment or specialized medical evaluations that could be determined to be a deployment-limiting condition. On the basis of our survey, we found that most routine female-specific health care services--including pelvic examinations, clinical breast examinations, pap smears, screening mammographies, prescription of contraceptives, and pregnancy tests--were available through the MTFs at the 27 domestic Army installations with a primary MTF. Screening mammography services were not available at two of these installations; however, in those instances, this service was available from a civilian network provider. We did not survey about the availability of male and female providers to provide pregnancy tests, as senior health care officials stated that this health care service is not one that is necessarily administered by health care providers. One female servicemember we interviewed told us that she would have to wait 3 months to see a female provider, so she opted to see a male provider. On the basis of our survey, the availability of specialized health care services varied by the type of primary MTF on the domestic Army installation; however, when services were not available at the installation, they were available through other MTFs or from a civilian network provider. Specifically, more types of specialized health care services were available on installations with a larger Army Medical Center as the primary MTF than at installations with a smaller Army Health Center/Clinic. For example, none of the installations where the primary MTF was an Army Health Center/Clinic offered surgical, medical, or radiation treatments for breast, ovarian, cervical, and uterine cancers, whereas some installations where the primary MTFs were Army Community Hospitals and Army Medical Centers did make these treatments available. (See table 1.)
Additionally, both male and female providers were available to provide specialized female-specific services-- such as treatment of abnormal pap smear, prenatal care, labor and delivery, benign gynecological disorders, and postpartum care--that were offered at the 27 domestic Army installations. In addition, when asked about the availability of other programs, officials from 25 of the 27 domestic Army installations we surveyed reported offering female-specific health care programs or activities, including female-specific groups for breast cancer, pregnancy education, pregnancy physical training, postpartum care, women's clinics, and health care fairs. Five of the six installations that we visited reported having female-specific programs, such as breast cancer awareness activities, lactation consultants, a women's clinic or health care team, annual women's health care fair, and a pregnancy physical training program. With respect to privacy for individuals, including female servicemembers, who seek care at domestic Army MTFs, Army MEDCOM officials noted that reasonable safeguards should be in place to limit incidental, and avoid prohibited, uses and disclosures of information.cubicles, dividers, shields, curtains, or similar barriers should be utilized in an area where multiple patient-staff communications routinely occur. DOD provides space-planning criteria for health facilities that assert that private space be made available to counsel patients, including facilities for outpatient women's health services. When asked to report on the challenges MTFs face in ensuring the physical privacy of female servicemembers, senior health care officials at most domestic Army installations (18 of 27) we surveyed did not report examples of any challenges. However, officials from the remaining nine installations cited two privacy-related challenges--physical layout of the exam rooms and auditory issues. 
For example, officials from three installations reported that some exam rooms were configured such that some examination tables face the door. Officials from two of these installations reported the use of a privacy curtain to overcome this room limitation. Nonetheless, all of the female servicemembers that we interviewed at the six sites that we visited felt that adequate steps were taken to ensure their physical privacy during health care visits. Officials from another installation reported on the survey that the layout of a waiting room may allow for conversations at the reception desk to be overheard, which may compromise patient privacy. Additionally, 3 of the 39 female servicemembers that we interviewed stated that they had concerns regarding auditory privacy in the waiting or exam rooms. At three of the six installations that we visited, we observed clinics that had waiting areas with separate check-in bays, such as those for walk-in appointments, pharmacy, and laboratory tests. The separation of these check-in areas spread people out and provided more distance between those checking in and those sitting in the waiting rooms. Behavioral health illnesses affect both men and women, and with the exception of postpartum depression, are not easily distinguished by gender. Consequently, behavioral health services are not inherently gender-specific. Behavioral health services were provided in a variety of settings, such as through outpatient, inpatient, residential, and telebehavioral settings. We found in our survey that the availability of behavioral health services at domestic Army installations varied; however, when these services were not available at the installation, they were available from other sources, including from other MTFs or from civilian network providers. (See table 2.) All of the 27 domestic Army installations we surveyed offered individual and group outpatient treatments, and most (23 of 27) offered family outpatient treatment. 
If treatments were not available at the installation, they were offered from another MTF or a civilian network provider. About one-third (10 of 27) of domestic Army installations offered inpatient treatment, and fewer offered residential treatment (5 of 27). In addition to general behavioral health services, all of the domestic Army installations included in our survey offered some type of behavioral health services for substance abuse. (See table 3.) With regard to the availability of substance abuse treatment options, all 27 domestic Army installations we surveyed offered individual outpatient treatment. All but one domestic Army installation offered group outpatient treatment and more than a third (11 of 27) offered family outpatient treatment. Few domestic Army installations offered inpatient treatment (5 of 27) or residential treatment (4 of 27) for substance abuse. If these treatments were not available on the installations, they were available from another MTF, a civilian network provider, or both. As a way to increase access to behavioral health services, telebehavioral health services--medically supervised behavioral health treatment using secured two-way telecommunications technology to link patients from an originating site for treatment with providers who are at another site--can be used to connect servicemembers and behavioral health providers. This service was available at 22 of the 27 domestic installations we surveyed. Telebehavioral health services can be used to provide treatment to servicemembers in remote locations, where providers may not be readily available, and to ensure continuity of care for servicemembers who change duty stations. While behavioral health services are not inherently gender-specific, a number of Army installations we surveyed reported offering programs or activities that were specific to women. 
Officials from 18 of 27 domestic Army installations provided examples of female-specific behavioral health programs or activities, including a post-deployment group for female servicemembers, postpartum groups, and specific therapy groups for female servicemembers. Four of the six installations we visited reported having female-specific behavioral health programs or activities, such as postpartum, post-deployment, and general women's support groups. The importance of female-specific groups was echoed by most (34 of 39) of the female servicemembers that we interviewed. These female servicemembers told us that there was a need for female-specific groups for certain topics, such as post-traumatic stress disorder (PTSD), postpartum depression, parenting, and general female servicemember issues. With respect to privacy when providing behavioral health services, officials from 17 of the 27 domestic Army installations that we surveyed did not report any challenges to ensuring physical privacy when providing behavioral health services to female servicemembers when asked to report on the challenges MTFs face in ensuring the physical privacy of female servicemembers. Officials from the other 10 installations reported two main challenges to ensuring physical privacy--having mixed gender waiting rooms and concerns regarding auditory privacy in the waiting or exam rooms. Three of the 27 installations reported using white noise machines in an effort to help mask noise and address any potential auditory concerns. All of the female servicemembers we interviewed at the six installations that we visited felt that adequate steps were taken to ensure their physical privacy during behavioral health visits. The Women's Health Research Interest Group, which is supported by the TriService Nursing Research Program, is currently in the process of identifying research gaps on health issues affecting female servicemembers. 
As part of this effort, they are comparing a compiled list of existing research with data on health care issues for female servicemembers to determine if there are any existing gaps in research. Interest group officials said that the goal is to develop a repository for peer-reviewed research articles related to health issues for female servicemembers, including those who served in combat, and to use this repository to identify research that could enhance the health care of female servicemembers, including those who have served in a combat zone. To ensure that researchers will have access to the results of their work, officials have plans to distribute their results in presentations at local and national conferences. In addition, officials told us that they will disseminate their findings through peer-reviewed publications and post this information on the TriService Nursing Research Program website, which is available to the public. However at the time of our review, only one DOD research organization that we spoke with was aware of their work. Specifically, an official from the Air Force Medical Support Agency told us that it was aware of the efforts by the Women's Health Research Interest Group. In addition, other DOD research organizations told us that they would be interested in the results of this work even though they were not aware of it at the time of our discussion. While none of the other DOD research organizations that we spoke with are trying to identify gaps in research on female servicemembers, officials from each organization told us that they conduct research based on needs and capabilities. For example, one organization said that it reviews health care issues experienced during deployments and speaks with health care providers to determine what research is needed to better restore a servicemember's ability to function. 
DOD research organizations said that while they focus their research on needs or capabilities, they consider gender in their research efforts. For example, officials from one division of the Army Medical Research and Materiel Command told us that when developing a research announcement based on genitourinary injuries sustained during deployments, they contacted the services to determine the type and extent of injuries encountered. At the time of the inquiry, only one female was reported by the services as having a significant genitourinary injury and this led to the development of an announcement that did not specifically mention females or males. While this announcement was not gender-specific, officials said that research proposals could include female servicemembers. In addition, officials from another division of the Army Medical Research and Materiel Command told us that when discussing proposed research to examine blood markers for PTSD, the original proposal did not include female servicemembers because researchers believed that female hormones would make detecting blood biomarkers for PTSD more difficult. Officials from Army Medical Research and Materiel Command found this justification for leaving out female servicemembers unsatisfactory so they required researchers to include both genders in this study. We provided a draft of this report to DOD for comment. DOD responded that it did not have any comments on the draft report. We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. 
Other major contributors to this report are listed in appendix IV. To describe the availability of routine, specialized, and behavioral health care services to female servicemembers at domestic Army installations and from other sources, we surveyed senior health care officials at the 27 domestic Army installations that had a primary military treatment facility (MTF). Through this survey, we collected information on the availability of these services to female servicemembers at installations to which more than two-thirds of the Army's female servicemembers were attached as of August 1, 2012. In developing the survey, we conducted pre-tests to refine and validate the specific health care services as those services available to female servicemembers in the Army and checked that (1) the terminology was used correctly; (2) the questionnaire did not place undue burden on agency officials; (3) the information could be feasibly obtained; and (4) the survey was complete and unbiased. We chose the four pretest sites to include at least one installation with a primary MTF that was a medical center, a community hospital, and a health center/clinic. We conducted one of the pretests where all GAO participants were present and three pretests with some GAO participants in person and others participated by telephone. We made changes to the content and format of the survey on the basis of the feedback we received during the pretests. On August 20, 2012, Army Medical Command (MEDCOM) officials sent the survey to senior health care officials at the 27 domestic Army installations as a Word document by email that the respondents were requested to return after marking checkboxes or entering responses to open-answer boxes. All surveys were returned by September 26, 2012, for a 100 percent response rate. We conducted follow-up with senior health care officials about missing or inconsistent responses, through Army MEDCOM officials, between September 2012 and December 2012. 
The survey is presented in appendix III. In addition to the contact named above, Bonnie Anderson, Assistant Director; Jennie Apter; Danielle Bernstein; Natalie Herzog; Ron La Due Lake; Amanda K. Miller; Lisa Motley; Mario Ramsey; and Laurie F. Thurber made key contributions to this report.
|
Female servicemembers are serving in more complex occupational specialties and are being deployed to combat operations, potentially leading to increased health risks. Similar to their male counterparts, female servicemembers must maintain their medical readiness; however, they have unique health care needs that require access to gender-specific services. The National Defense Authorization Act for Fiscal Year 2012 directed GAO to review a variety of issues related to health care for female servicemembers. This report describes (1) the extent that DOD's policies for assessing individual medical readiness include unique health care issues of female servicemembers; (2) the availability of health care services to meet the unique needs of female servicemembers at domestic Army installations; and (3) the extent that DOD's research organizations have identified a need for research on the specific health care needs of female servicemembers who have served in combat. GAO reviewed DOD and military-service policies on individual medical readiness and surveyed senior health care officials about the availability of specific health services at the 27 domestic Army installations with MTFs that report directly to the domestic regional medical commands. GAO focused on the Army because it has more female servicemembers than the other military services. GAO also visited six Army installations--two from each of the Army's three domestic regional medical commands--and interviewed DOD officials who conduct research on health issues for servicemembers. The Department of Defense's (DOD) policy for assessing the individual medical readiness of a servicemember to deploy establishes six elements to review, most of which are gender-neutral. Four of the six elements--immunization status, medical readiness laboratory tests, individual medical equipment, and dental readiness--apply equally to female and male servicemembers. 
The remaining elements of individual medical readiness--deployment-limiting conditions and periodic health assessments--include aspects that are specific to female servicemembers. For example, the Army, Navy, Air Force, and Marine Corps have policies that define pregnancy as a deployment-limiting condition. Officials surveyed by GAO reported that female-specific health care services and behavioral health services were generally available through domestic Army installations. Specifically, according to GAO's survey results: Most routine female-specific health care services--pelvic examinations, clinical breast examinations, pap smears, prescription of contraceptives, and pregnancy tests--were available at the 27 surveyed domestic Army installations. The availability of specialized health care services--treatment of abnormal pap smears, prenatal care, labor and delivery, benign gynecological disorders, postpartum care, and surgical, medical, and radiation treatment of breast, ovarian, cervical, and uterine cancers--at the 27 surveyed domestic Army installations varied. However, when these services were not available at the installation, they could be obtained through either another military treatment facility (MTF) or from a civilian network provider. The availability of behavioral health services, such as psychotherapy or substance abuse treatment, which were not gender-specific, varied across the 27 domestic Army installations; however, similar to specialty care, these services could be obtained from other MTFs or civilian network providers. In addition, 18 of the 27 surveyed Army installations reported offering female-specific programs or activities, such as a post-deployment group for female servicemembers or a postpartum group. One DOD organization, the Women's Health Research Interest Group, is currently in the process of identifying research gaps on health issues affecting female servicemembers. 
Interest group officials said that the goal is to develop a repository for peer-reviewed research articles related to health issues for female servicemembers, including those who served in combat, and to use this repository to identify research that could enhance the health care of female servicemembers, including those who have served in a combat zone. To ensure that researchers will have access to the results of their work, officials have plans to distribute their results in presentations at local and national conferences. In addition, officials told GAO that they will disseminate their findings through peer-reviewed publications and post this information on the Internet to make it available to the public. GAO provided a draft of this report to DOD for comment. DOD responded that it did not have any comments on the draft report.
| 5,033 | 926 |
Budget-scorekeeping rules were developed by the executive and legislative branches in connection with the Budget Enforcement Act of 1990. These rules are to be used by the scorekeepers to assure compliance with budget laws. Their purpose is to ensure that the scorekeepers measure the effects of legislation consistent with scorekeeping conventions and specific legal requirements. The rules are reviewed annually and revised as necessary to achieve that purpose. Leases may be of two general types--operating and capital. The Office of Management and Budget (OMB) identifies six criteria that a lease must meet in order to be considered an operating lease rather than a capital lease:
* Ownership of the asset remains with the lessor during the term of the lease and is not transferred to the government at or shortly after the end of the lease term.
* The lease does not contain a bargain-price purchase option.
* The lease term does not exceed 75 percent of the estimated economic life of the asset.
* The asset is a general purpose asset rather than being for a special purpose of the government and is not built to unique specifications of the government lessee.
* There is a private sector market for the asset.
* The present value of the minimum lease payments over the life of the lease does not exceed 90 percent of the fair market value (FMV) of the asset at the beginning of the lease term.
If a lease does not meet all six criteria above, it must be treated as a capital lease for budget-scoring purposes. For a capital lease, the net present value of the total cost of the lease is scored as budget authority in the year budget authority is first made available for the lease. For GSA operating leases, only the budget authority needed to cover the annual payment is required to be scored. As we previously reported, in general, capital facilities should be funded up front at the time the federal government enters into the commitment.
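The six criteria above can be sketched as a simple pass/fail check. The sketch below is illustrative only; the dollar figures, discount rate, and function names are assumptions for the example, not actual OMB parameters or methods.

```python
# Illustrative sketch of OMB's six operating-lease criteria.
# All numeric inputs below are assumed example values.

def present_value(payments, rate):
    """Present value of a stream of annual payments, paid at year end."""
    return sum(p / (1 + rate) ** (t + 1) for t, p in enumerate(payments))

def is_operating_lease(lease_term, economic_life, payments, fmv, rate,
                       ownership_stays_with_lessor=True,
                       bargain_purchase_option=False,
                       general_purpose_asset=True,
                       private_market_exists=True):
    """Return True only if all six criteria are met; otherwise the
    lease must be scored as a capital lease."""
    checks = [
        ownership_stays_with_lessor,              # ownership stays with lessor
        not bargain_purchase_option,              # no bargain-price option
        lease_term <= 0.75 * economic_life,       # term <= 75% of economic life
        general_purpose_asset,                    # general purpose asset
        private_market_exists,                    # private sector market exists
        present_value(payments, rate) <= 0.90 * fmv,  # PV <= 90% of FMV
    ]
    return all(checks)

# Example: 20-year lease, 30-year building life, $1M/yr payments,
# $15M FMV, 5% discount rate (all assumed figures).
payments = [1_000_000] * 20
print(is_operating_lease(20, 30, payments, 15_000_000, 0.05))  # prints True
```

Under these assumed figures the lease passes: 20 years is below 75 percent of a 30-year life (22.5 years), and the present value of payments (about $12.5 million) is below 90 percent of FMV ($13.5 million).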
In June 1991, GSA wrote to OMB generally describing the policies and procedures it would follow to ensure the proper implementation of the new budget scoring rules. These rules were incorporated in OMB Circular A-11. Appendix B of the circular contains the scoring rules for lease-purchases and leases of capital assets. In March 1992, GSA wrote to OMB saying that after reviewing its nonprospectus inventory, as well as OMB policies and procedures, GSA concluded that nonprospectus leases should be considered operating leases for scoring purposes without the necessity of a case-by-case determination. In this letter, GSA stated that there was no practical way to implement a policy of determining whether each nonprospectus lease met the criteria for being considered an operating lease without severely damaging its ability to meet client-agency needs. GSA considered this view consistent with OMB's intent, as well as an operational necessity. In April 1992, GSA issued guidance on lease scoring in which it stated that all nonprospectus leases are to be considered operating leases unless the lease is a lease-purchase, the lease contains a nominal or bargain purchase price, or the lease is on government-owned land. All nonprospectus leases that met one of these exceptions were to be scored as capital leases by the regions. All prospectus-level leases were to be scored at GSA's central office. In October 1998, GSA announced it was no longer following the policy of considering most nonprospectus leases as operating leases. Since then, according to GSA officials, GSA has required regional offices to apply the appropriate criteria to all prospectus and nonprospectus leases and to retain copies of the resulting scoring in the regionally maintained lease file. GSA headquarters is to review the scoring of all prospectus-level leases.
Two of the six scoring criteria used to determine an operating lease concern the term of a lease: that the lease term not exceed 75 percent of the estimated economic life of the asset and that the present value of the minimum lease payments over the life of the lease not exceed 90 percent of the FMV of the asset at the beginning of the lease term. According to GSA officials, GSA's leases generally meet the first of these two criteria. If GSA rents new space, it meets this criterion because it only has 20-year leasing authority and tax law specifies that a new building's economic life is longer than 30 years (75 percent of 30 years is 22.5 years, which exceeds the 20-year maximum term). If GSA rents older space, it generally requires it to be upgraded, which extends the building's estimated economic life, thereby meeting this criterion. Thus, the remaining criterion that could affect the lease term is that the present value of the minimum lease payments over the life of the lease not exceed 90 percent of the FMV of the asset at the beginning of the lease term. For example, if the present value of the minimum lease payments on a 20-year lease exceeds 90 percent of FMV, shortening the term reduces the present value of the minimum lease payments while the FMV of the asset remains the same. Lowering the ratio of the present value of the minimum lease payments to FMV in this way can allow a lease to meet this scoring criterion. However, if a lease does not meet any one of the other four scoring criteria, the lease would be a capital lease no matter what the term. Six of GSA's 11 regions identified 12 projects or leases for which the scoring process had affected the term of the lease. In one other region, according to a GSA official, GSA thought that the terms of about eight other leases had been affected in the last 2 years, but officials could identify neither those leases nor the impact of budget scoring on their terms.
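The effect of shortening the term on the 90-percent criterion can be illustrated numerically. The sketch below uses assumed rent, FMV, and discount-rate figures chosen for illustration; it is not based on any actual GSA lease.

```python
# Illustrative only: shortening a lease term lowers the present value (PV)
# of the payment stream while FMV stays fixed, which can bring a lease
# back under the 90%-of-FMV threshold. All figures are assumed.

def pv_of_payments(annual_rent, years, rate):
    """Present value of a level annual rent stream, paid at year end."""
    return sum(annual_rent / (1 + rate) ** t for t in range(1, years + 1))

fmv = 12_000_000         # fair market value at lease start (assumed)
annual_rent = 950_000    # assumed annual rent
rate = 0.05              # assumed discount rate

for years in (20, 15, 10):
    pv = pv_of_payments(annual_rent, years, rate)
    scoring = "capital" if pv > 0.90 * fmv else "operating"
    print(f"{years}-year term: PV ${pv:,.0f} = {pv / fmv:.0%} of FMV -> {scoring}")
```

With these assumed numbers, a 20-year term scores as a capital lease (PV is roughly 99 percent of FMV), while a 15-year term falls to about 82 percent of FMV and scores as an operating lease--the same kind of adjustment the report describes for the Department of Transportation lease.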
GSA officials from the other four regions said they could not identify any projects affected by budget scoring. Only 2 of the identified 12 projects--a lease for the Immigration and Naturalization Service and a lease for the Secret Service--were among the 39 prospectus-level projects reviewed, and none came from the 102 lease files we reviewed. According to GSA officials, other factors, such as the agency or the market, determined the term of these other leases. Table 1 lists the leases or lease projects that we or GSA identified as being affected by scoring. According to GSA officials, during the planning for the Department of Transportation lease, it was realized that due to the rental rates in the District of Columbia, a 20-year lease would probably not satisfy the 90 percent scoring criterion. In order to address this issue, GSA reduced the lease term to 15 years, estimating that the present value of the minimum lease payments for a 15-year lease would not exceed 90 percent of the FMV. According to officials, the SSA lease was originally submitted as a new construction project but was not approved. GSA then decided to do it as a 20-year build-to-suit lease, but when reviewed it was determined that it would be a capital lease because it did not satisfy the 90-percent scoring criterion. At OMB's direction, the lease was awarded as a 10-year lease because OMB thought that SSA space needs might be reduced in the future because of automation. Four factors limited the identification of leases affected by budget scoring, according to GSA officials. First, GSA did not begin determining whether each nonprospectus lease met the scoring criteria for being considered an operating lease until about October 1998. GSA issued guidance in 1992 that stated there was no practical way to implement a policy of determining whether each nonprospectus lease met the criteria for being considered an operating lease without severely damaging its ability to meet client-agency needs.
Nonprospectus leases were to be considered operating leases unless the lease was a lease-purchase, the lease contained a nominal or bargain purchase price, or the lease was on government-owned land. Thus, it is unknown if nonprospectus leases would have been affected by scoring between 1992 and 1998. Second, prospectus-level leases were scored in headquarters until September 1998, and scoring records were not kept in the lease files that are maintained by GSA's regional offices. Third, GSA headquarters does not maintain documentation on whether the scoring process affected the lease term. According to a headquarters official, although GSA kept copies of scoring for prospectus leases, the records do not show whether the term was directly affected by scoring. Fourth, according to GSA officials, budget-scoring rules affect an unknown number of leases because if staff believe a project will be affected by budget-scoring rules, they reduce the term to avoid the potential scoring conflict. However, they do not formally score the lease and do not use the scoring rules as a tool to identify the best term. As of October 1998, GSA's regional offices were to score and document the scoring of both prospectus and nonprospectus leases, according to officials. However, the officials said that the files will contain only the final scoring sheets and not preliminary runs that might identify situations where a lease term was adjusted in order for the lease to score as an operating lease. We could not determine the actual monetary impact of reducing the lease term. However, we found two leases for which GSA requested comparative costs--10-year and 20-year costs for one, and 15-year and 20-year costs for the other. GSA also provided a consultant's report showing the difference between 10- and 20-year lease costs for another project, and the SEC lease file contained both 15- and 20-year lease costs. GSA did not identify these two lease terms as being affected by budget scoring. However, the SEC lease term was affected by scoring.
According to GSA officials, GSA does not generally seek comparisons of short- and long-term lease costs in the solicitation process. Also, GSA officials stated that the use of a 20-year lease is only appropriate in certain situations, such as if the agency has a long-term need and the federal presence is large enough in the market to backfill the space with other federal employees if the needs of the requesting agency change over time. Also, they pointed out that in most cases it would be less costly to construct a federal facility to meet a long-term need than it is to lease. We previously reported that construction was usually the least costly approach for meeting long-term space needs. Further, GSA pointed out that other factors, such as market, location, and the agency's desires, affect the selection of the lease term. While reviewing files, we identified two leases for which GSA had solicited offers for both 10- and 20-year terms in one case and both 15- and 20-year terms in the other. The first lease was for a 20-year lease structured as either a 10-year lease with a 10-year option or a 20-year lease. The 20-year lease term was 3.24 percent less expensive per NUSF than the 10-year lease that was awarded. This lease was awarded as a 10-year lease with a 10-year option because the agency's long-range plans were unknown. Eight final offers showed that the 20-year lease ranged from 0 to 12.9 percent less expensive per NUSF than the 10-year lease. However, for two other final offers the 20-year lease ranged from 0.06 percent to 1.19 percent more expensive per NUSF than the 10-year lease. The second lease was for a 20-year lease structured as a 20-year lease with cancellation rights at 15 years or a 20-year lease. The 20-year lease term was 5.56 percent less expensive per NUSF than the 20-year lease with cancellation rights at 15 years for the offer selected for award. The contract was awarded as a 20-year lease with cancellation rights at 15 years.
It is not clear from the file why this option was chosen. Four final offers showed that the 20-year lease ranged from 5.56 percent to 7.75 percent less expensive per NUSF than the 20-year lease with cancellation rights at 15 years. However, for two other final offers the 20-year lease was 5.99 percent and 7.97 percent more expensive per NUSF than the 20-year lease with cancellation rights after 15 years. Furthermore, a consultant's report on locating an FBI building in Texas showed that a 20-year lease was 32 percent less expensive per square foot than a 10-year lease. The consultant pointed out that the cost difference might be due to the specialized nature of the FBI building. The SEC lease project had offers ranging from 10 to 20 years. For the successful offer, the 20-year lease costs and the 15-year lease costs were the same per RSF. Three other final offers showed that the 20-year lease costs ranged from 1.3 percent to 4.1 percent less expensive per RSF than the 15-year lease costs. One final offer included 10-year lease costs. This offer showed that 20-year lease costs were 8.8 percent less expensive per RSF than 10-year lease costs. Further, 15-year lease costs were 7.4 percent less expensive per RSF than 10-year lease costs. The cost differences identified in these three examples are not projectable to other leases because other factors, such as market conditions (whether rental rates are high or low), affect the cost of a lease. In testimony before the Subcommittee on Public Buildings and Economic Development, House Committee on Transportation and Infrastructure, on May 15, 1997, a private industry real estate official testified that a 20-year lease term could have annual rental rates as much as 33 percent less expensive than a 10-year lease and 13 percent less expensive than a 15-year lease. Also, he testified that a 15-year lease term can be as much as 23 percent less expensive than a 10-year lease.
He further stated that renewal options in a lease are more advantageous than having to renegotiate a new lease for the same location. While GSA officials agree that a long-term lease generally has a lower cost than a short-term lease, they could not quantify the difference between a 20-year or 15-year lease and a 10-year lease. Also, they stated that it is generally less costly to construct a federal facility to meet a long-term need--20 years or more--than it is to lease. Furthermore, they pointed out that other factors, such as the desires of the agency and the market, must be considered along with cost. For nine GSA lease acquisitions, we previously reported that construction would have been less costly in eight of the nine cases, with the range of cost differences being from a negative $0.2 million to a positive $48.1 million for construction. For 11 cities throughout the country, we reported that to build a hypothetical 100,000-square-foot office building versus obtaining a 20-year lease, the estimated range of cost savings for construction versus leasing was from $0.3 million in St. Louis, MO, to $14 million in Washington, D.C. Also, we previously reported that the budget scoring rules favor leasing and that one option for scorekeeping that could be considered would be to recognize that many operating leases are used for long-term needs and should be treated on the same basis as purchases. This would entail scoring up front the present value of lease payments covering the same time period used to analyze ownership options. Applying the principle of up-front full recognition of long-term costs to all options for satisfying long-term space needs--purchases, lease-purchases, or operating leases--is more likely than the current scoring rules to result in selection of the most cost-effective alternative.
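The scorekeeping option described above would score up front the present value of the lease payments over the same period used to analyze ownership. A minimal sketch of that comparison follows; the discount rate, payment stream, and construction cost are all assumptions for illustration (OMB Circular A-94 prescribes the actual discount rates agencies must use).

```python
def present_value(annual_payment: float, years: int, rate: float) -> float:
    """Present value of a level stream of annual lease payments,
    discounted at a constant rate (payments at the end of each year)."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

# Illustrative up-front scoring comparison; all figures are assumptions.
lease_score = present_value(annual_payment=2_500_000, years=20, rate=0.05)
construction_score = 28_000_000  # assumed up-front construction cost

print(f"Operating lease, scored up front: ${lease_score:,.0f}")
print(f"Construction, scored up front:    ${construction_score:,.0f}")
```

Under these assumed figures, the present value of the 20-year payment stream (about $31.2 million) exceeds the construction cost, illustrating how scoring both options up front over the same period can reveal ownership as the least-cost choice.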
According to GSA officials, while scoring does affect the term of some leases, the term of most leases is determined by various factors other than budget scoring, such as the type of space (existing or build-to-suit), the lease term desired by the agency, rental market conditions, and the location of the structure. The importance of each variable may be different for each lease. GSA officials said that for existing space, lease terms do not usually exceed 10 years. This has been a standard practice for some time. If the requirement is for build-to-suit space, then the term of the lease may have to be longer than 10 years to accommodate the lessor's ability to finance the building. It is these build-to-suit leases that are most likely to be affected by scoring because the lessor must have a longer term lease to get financing for a new structure. The lease term to which the agency is willing to commit is another important factor. GSA officials stated that some agencies told GSA that they only have authority to commit to a maximum of a 10-year lease. Other agencies want only leases of 10 years or less because of changes occurring within the agency, such as downsizing or consolidation. One example, according to GSA officials, is the Internal Revenue Service; because of downsizing, it does not want to sign a lease longer than 10 years. Rental market conditions also affect a lease's term. GSA does not want to commit to a long-term lease when the market rent is considered high. Conversely, if market rent is low, GSA will consider a longer term lease, according to officials. An example is a lease for the Customs Service in Seattle, WA, for which GSA did not want a long-term lease because current rental rates were high. Location becomes an important factor because GSA is required to take space back from an agency with only 120 days' notice.
So in areas with a limited federal presence, GSA does not want to commit to leases where the space cannot be easily backfilled with other federal agency employees, according to GSA officials. For example, in a small town where an agency is the only federal tenant, GSA would not want to commit to a lease term longer than that agency wanted, because GSA would not be able to find another federal tenant for the space. Although efforts to address budget-scoring rules did result in shorter term leases in some cases, we could not determine the total number of leases where the term was actually affected by budget scoring because of GSA's documentation process for scoring leases. Further, while a shorter term lease can be more costly than a longer term lease, we could not determine the actual overall monetary impact of shorter lease terms because GSA does not generally seek comparisons of short- and long-term lease costs in the solicitation process. In addition to having some effect on the lease term, our previous work has shown that budget scoring can affect the government's decision whether to construct or lease a facility. Also, we have previously reported that the budget-scoring rules have the effect of favoring leasing and that one option for scorekeeping that could be considered would be to recognize that many operating leases are used for long-term needs and should be treated on the same basis as purchases or construction. Because of the overall effect budget scoring appears to be having on the acquisition of real property, we plan to address the effects of budget scoring on real property acquisition as part of a governmentwide review of real property management we recently initiated.
To identify leases affected by scoring, we reviewed OMB's guidance on scoring leases (Circular A-11, Appendix B, and Circular A-94), interviewed GSA officials in headquarters and all 11 regions, and reviewed 102 active lease files with terms from 10 to 19 years and 100,000 RSF or more in GSA regions 3, 7, 8, and 11, the 4 regions with the most leases meeting these criteria. We dropped 8 lease files from our original selection of 110 files because the files could not be located during our visit or had been moved to other locations prior to our visiting the region. We did not verify the accuracy of the data used to select the lease files. To determine the monetary impact of scoring on the lease term, we reviewed congressional testimony, previous GAO reports, 102 GSA lease files, and 8 final offers for the SEC lease, and we interviewed officials in GSA headquarters, all 11 GSA regions, and SEC. To identify other factors influencing the lease term, we reviewed 102 active GSA lease files and interviewed GSA headquarters and regional officials. We conducted our review at GSA and SEC between October 2000 and July 2001 in accordance with generally accepted government auditing standards. We obtained comments on a draft of the report from GSA and SEC. On August 10 and 14, 2001, we received written comments from the Associate Executive Director, SEC, and the Commissioner of GSA's Public Buildings Service (PBS), respectively. The SEC official provided clarifying information, which has been included in the report. The PBS Commissioner basically agreed with us that budget scoring is affecting the lease term and provided additional comments, which he believes support this position. The first comment stated that seasoned leasing specialists said that the use of 20-year leases had declined since the Congress passed the Budget Enforcement Act of 1990.
While this may be true, GSA did not have documentation on the impact of budget scoring on the lease term, other than for the cases cited. Also, it is possible that other factors, such as market conditions, contributed to the decline in the use of 20-year leases. Second, GSA stated that the National Capital Region sets the term of all the above-prospectus leases it submits as part of its capital plan at 10 years, except in certain cases, to avoid budget-scoring problems. For the fiscal year 2000 and 2001 prospectus-level leases that we reviewed, this is accurate. However, prior to fiscal year 2000, both the Patent and Trademark Office and the Department of Transportation leases were submitted for longer terms, 20 and 15 years, respectively. Third, GSA said that while options to renew a lease were advantageous, it did not generally seek them for leases with 10-year terms because options are scored as part of the 90-percent scoring criterion and could result in a capital lease. While GSA is correct that OMB guidance requires options to be considered in scoring leases, there is an exception to this rule. According to OMB's guidance, agencies do not have to include an option for budget scoring if exercising the option would require additional legislative action. Lastly, GSA raised the issue of short-term leases resulting in increased rental costs in some cases because they lead to shorter amortization periods and higher mortgage payments for lessors who use federal leases as collateral for financing. While the report shows that in certain cases shorter term leases are more expensive than long-term leases, we did not look at whether this increased cost was driven by shorter amortization periods and higher mortgage payments. GSA also made some technical comments, which we have reflected in the report where appropriate. We have included GSA's written comments in appendix I.
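The 90-percent scoring criterion GSA refers to can be sketched as a simple threshold test. This is a simplified illustration of one of several operating-lease criteria in OMB Circular A-11, Appendix B; the function name and example figures below are assumptions for illustration, not OMB's text.

```python
def scores_as_capital_lease(pv_lease_payments: float,
                            asset_fair_value: float,
                            threshold: float = 0.90) -> bool:
    """Simplified sketch of one OMB Circular A-11, Appendix B test:
    if the present value of the minimum lease payments (including
    priced options) equals or exceeds 90 percent of the asset's fair
    market value, the lease scores as a capital lease rather than an
    operating lease. A-11 applies several criteria; this checks one."""
    return pv_lease_payments >= threshold * asset_fair_value

# Hypothetical figures: a priced renewal option can push the present
# value of the payments past the 90-percent line.
print(scores_as_capital_lease(8_500_000, 10_000_000))  # without the option
print(scores_as_capital_lease(9_300_000, 10_000_000))  # with the option
```

This illustrates why, as GSA noted, adding a priced renewal option to a 10-year lease can flip the score from operating to capital, and why the legislative-action exception matters.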
We are sending copies of this report to the Chairmen and Ranking Minority Members of congressional committees with jurisdiction over GSA and SEC. We are also sending copies to the Administrator, GSA, and the Chairman, SEC. Copies will also be made available to others upon request. Key contributors to this report were Ronald L. King and Thomas G. Keightley. If you have any questions, please contact me or Ron King at (202) 512-2834.
This report responds to a concern that budget-scoring restrictions were forcing the General Services Administration (GSA) to rely on shorter term leases that increase the costs to the Federal Buildings Fund because their per-square-foot costs are greater than those of longer term leases. Budget-scorekeeping rules are used by the scorekeepers to ensure compliance with budget laws and to ensure that legislation is consistent with scorekeeping conventions and specific legal requirements. The rules are reviewed annually and revised as necessary to achieve those purposes. The way in which budget-scoring rules were implemented affected the lease or lease project term of at least 13 of the 39 federal agency leases GAO reviewed. Since GSA officials do not generally seek comparisons of long-term versus short-term leases in the solicitation process, GAO could not determine the overall monetary impact of budget scoring on the lease term. However, GAO identified three isolated cases that had comparisons of long-term versus short-term leases in the solicitation process, and, in each case, the price per net usable square foot was lower with the longer term lease. GSA officials said that while budget scoring affects the term of some leases, the term of most leases is determined by various factors, either individually or in combination, such as rental market conditions, location, and the term desired by the agency.
STARS is designed to replace FAA's automated radar terminal system, which is composed of 15- to 25-year-old controller workstations and supporting computer systems. According to FAA, this system is prone to failures, is maintenance intensive, and requires long repair times. The system also has capacity constraints that restrict the agency from making required safety and efficiency enhancements. Automated radar terminal systems are located at 180 Terminal Radar Approach Control facilities (TRACON) and allow FAA controllers to separate and sequence aircraft near airports. STARS equipment (see fig. 1) is also expected to provide the platform needed to make system enhancements that would increase the level of air traffic control automation and improve weather display, surveillance, and communications. In addition, STARS is expected to permit FAA to consolidate some TRACONs and replace all Digital Bright Radar Indicator Tower Equipment systems. In September 1996, FAA signed a contract with Raytheon Corporation and, as mentioned, currently plans to acquire as many as 171 STARSs. In producing STARS, Raytheon intends to rely fully on commercially available hardware and, to a large extent, on commercially available software. Some original software development will still be required. In August 1996, the contractor projected that 124,000 new lines of software code will need development to meet FAA's requirements. This estimate was revised in December 1996 to 140,000 new lines of code. STARS is an outgrowth of the troubled Advanced Automation System acquisition. As originally designed, the terminal segment of this system, known as the Terminal Advanced Automation System, would provide controllers in TRACONs with new workstations and supporting computer systems. However, in June 1994, the FAA Administrator ordered a major restructuring of the acquisition to solve long-standing schedule and cost problems.
The program had fallen up to 8 years behind the original schedule, and estimated costs had increased to $7.6 billion from the original $2.5 billion estimated in 1983. Specifically, regarding terminal modernization, the Administrator canceled the Terminal Advanced Automation System and expanded the STARS project to include all terminal facilities. In April 1996, FAA established a new acquisition management system, as directed by the Congress. Included in this system is the concept of life-cycle management, which is intended to be a more comprehensive, disciplined full-cost approach to managing the acquisition cycle, from analysis of mission needs and alternative investments through system development, implementation, operation, and, ultimately, disposal. Under this new system, decisions related to resource allocation (mission and investment) are made by FAA's Joint Resources Council, which is composed of associate administrators for operations and acquisition and other key executives. Decisions associated with program planning and implementation are made within Integrated Product Teams (IPT). IPTs are responsible for bringing together all essential elements of program implementation, including scheduling, allocation of funding, and the roles and responsibilities of stakeholders. To ensure successful program implementation, the acquisition management system dictates that these issues be resolved before contracts are awarded. IPTs also generate schedule and cost baselines, which the Joint Resources Council authorizes the teams to operate under. Team members include representatives from FAA units responsible for operating and maintaining air traffic control equipment and other stakeholders in the acquisition process.
To achieve the implementation schedule approved by the Joint Resources Council in January 1996, FAA will have to obtain commitment from key stakeholders, resolve scheduling conflicts between STARS and other terminal modernization efforts, and overcome difficulties in developing the system. FAA is aware that these issues pose a risk for STARS and has begun several risk mitigation initiatives. While such actions are encouraging, it is too early to tell how effective they will be. FAA's schedule for developing and implementing STARS by its January 1996 approved baseline is shown in table 1. Figure 2 shows FAA's plans for ordering, delivering, and operating STARS. FAA intends to begin operating STARS at only three TRACONs before fiscal year 2000. Operation increases after this time, with FAA expecting to operate 55 additional systems in fiscal year 2002. FAA has yet to obtain commitment from all key stakeholders responsible for ensuring that STARS equipment is properly installed. FAA's new acquisition management system stresses that IPTs need to reach agreement before contracts are awarded. Such agreement is necessary to ensure that all stakeholders' roles are defined and agreed upon, facilities are ready to receive STARS, and all other equipment necessary for the operation of STARS is in place. In the past, poor coordination among key stakeholders has caused schedule delays in other modernization projects at FAA. The IPT for STARS has yet to obtain commitment to the STARS schedule from the entire Airway Facilities Service--a key stakeholder. Located in headquarters and regions, maintenance technicians who work for the Airway Facilities Service are responsible for installing and maintaining air traffic control equipment. FAA's current schedule anticipates that STARS will be installed at most sites using a turnkey concept whereby the contractor, not FAA employees, will install the equipment. 
This concept presumes that a significant level of regional resources will still be required to support and oversee contractor installation. IPT officials told us that while Airway Facilities Service officials at headquarters have committed to the turnkey concept, regions' commitment is incomplete. IPT and Airway Facilities Service officials told us that a process has been established to ensure regions' understanding and obtain their commitment. As part of this process, the IPT has begun regional briefings and has formed implementation teams to gain regions' commitment on turnkey issues. In addition, the IPT has yet to obtain commitment to the STARS schedule from the Professional Airways Systems Specialists--the technicians' union. Top union officials told us that, as of late February 1997, they have not been briefed on the STARS turnkey concept and have not agreed as to how it will be implemented. The union is concerned that the turnkey installation may jeopardize the job security of its members. IPT officials said that while union representatives have been involved in reviewing vendors' proposals for STARS, the union has not been briefed on the specifics of STARS deployment. Although FAA's Acquisition Management System stresses that all key program implementation issues be resolved before contracts are awarded, the IPT believed that it could obtain the union's commitment at a later date. As required by the union's collective bargaining agreement, in January 1997, the IPT initiated actions to brief the union and obtain its commitment. FAA's schedule for STARS can be jeopardized by scheduling conflicts with other modernization efforts. For example, each year, various TRACONs are scheduled to be renovated or replaced. If STARS equipment is delivered during this time, installation could be delayed. Currently, the IPT is unsure of the number of these potential conflicts. 
In September 1996, the IPT identified 12 potential scheduling conflicts at the first 45 STARS sites. One month later, the number of conflicts was reduced to four, but the team did not provide us with an explanation for this decrease. We believe that the number of potential conflicts will not be known until the IPT ascertains the readiness of each facility to receive and install STARS equipment. The IPT plans to start conducting site reviews in 1997. Another potential scheduling conflict involves terminal surveillance radars, which track aircraft position and use analog or digital processing and communications to transmit the information to TRACONs. Many existing surveillance radars are not digital, but STARS requires digital processing and communications. FAA plans to replace nondigital Airport Surveillance Radar-7s (ASR-7) with new digital ASR-11s. The agency has not decided yet whether to replace other nondigital radar, ASR-8s, or to digitize them. In January 1997, FAA was concerned that 47 of 98 ASR-7s and -8s might not be upgraded in time to meet the STARS schedule. FAA officials told us that, as of late February, they had reduced the number of potential conflicts from 47 to 10 through efforts to coordinate the STARS and digital radar schedules. According to an IPT official, if digital radar does not provide coverage for a TRACON's entire airspace, FAA may have to delay STARS or reorder the sequence of TRACONs receiving STARS. FAA officials told us that they are taking actions to identify and resolve potential scheduling conflicts. The IPT has developed project guides for the FAA regions receiving STARS. These guides identify possible scheduling conflicts with other modernization efforts. Also, Airway Facilities Service officials told us that as a result of a recent reassessment in December 1996 of the schedule for the first 39 STARSs, FAA was able to avoid potential conflicts by repositioning the order in which TRACONs received STARS. 
Finally, the Airway Facilities Service is developing a database to assist the IPT in maintaining current planning information. Although STARS depends on the use of commercial off-the-shelf computer hardware and a significant amount of commercially available software, FAA and Raytheon have numerous tasks to accomplish before system development is completed. However, the nature and extent of these tasks are not completely known, and such development inevitably poses continual managerial and technical challenges. As noted in table 1, FAA's schedule calls for software development to proceed in two phases. For the initial phase, the agency expects to complete software testing in September 1998, about 2 years from the time when the contract was awarded. For the second phase, the agency expects to complete testing of the full STARS software in July 1999. As an example of the challenge that software development poses for FAA, as recently as December 1996, FAA and Raytheon were discussing (1) how the system would provide specific functions and (2) whether certain functions would be needed, and if so, whether the functions would be included in the equipment with initial- or full-system capability. According to Raytheon officials, these discussions ended with FAA and Raytheon coming to closure on all of the 28 issues needing resolution. As a result, some 16,000 lines of additional software code--beyond the planned 124,000 lines of new code--must be written. Of the 140,000 lines of code, about 138,000 are for flight data processing, training, and maintenance functions, and 2,000 are to fulfill safety requirements, such as warning controllers when aircraft are not maintaining proper separation or minimum safe altitudes. Raytheon officials believe the additional code development will not affect their ability to meet the original milestones. All new code will have to be tested in conjunction with the nearly 840,000 lines of existing STARS software code. 
If potential difficulties in developing and testing the system are realized, initial implementation of STARS--particularly at the three TRACONs targeted for operation before fiscal year 2000--will likely be delayed. FAA's life-cycle cost baseline has the potential to increase--from $2.23 billion, the level approved by the Joint Resources Council in January 1996, to as much as $2.76 billion. This possible increase is attributable to expected higher costs for operating and maintaining STARS equipment. FAA expects the estimate for facilities and equipment costs to remain stable for the immediate future. FAA's January 1996 facilities and equipment cost baseline is $940 million. During 1996, this baseline was reviewed by the IPT. Through September 1996, the IPT was estimating that the baseline could increase to $1.18 billion. At that time, the IPT (1) estimated higher expected costs for software development; (2) estimated higher expected implementation, technical support, and maintenance costs because of the addition of necessary equipment; and (3) included costs for communications because the baseline estimate overlooked them. In December 1996, the IPT assessed the STARS costs on the basis of the signed contract with Raytheon. As a result, the IPT determined that, while some cost elements will increase, other elements will decrease. Specifically, significantly lower costs for hardware--key components were $40,000 less per unit than what FAA had estimated--will enable the STARS project, for the present, to stay within the original baseline. Table 2 shows the differences in cost elements between the original cost baseline and the IPT's December 1996 assessment. FAA's January 1996 operations cost baseline is $1.29 billion. However, based on a September 1996 analysis, FAA staff identified a potential $529 million increase that could revise the baseline to $1.82 billion.
FAA officials told us that this increase occurred, in part, because the agency overlooked maintenance costs in the initial estimates. Also, the officials attributed the increase to FAA's deploying more STARS equipment than originally planned. IPT officials told us that on the basis of more current information from the contractor, operations and maintenance costs are expected to be significantly closer to the $1.29 billion baseline estimate than the $1.82 billion figure. The officials could not, however, provide us with an updated cost estimate or detailed support for their views. The IPT officials told us that they are reviewing the latest cost estimates and expect to brief the Joint Resources Council on any potential changes to the baseline in March 1997. Separate and distinct from STARS life-cycle costs are two additional costs that FAA will incur to make STARS operational. First, FAA will have to prepare the TRACONs for the delivery of STARS equipment. FAA officials estimate that the agency will incur at least $18 million in costs to get the first 46 TRACONs and related facilities ready to accept the STARS equipment. Roughly half of this amount is for asbestos removal; the balance is for power upgrades and building improvements. FAA has yet to develop estimates for readying the remaining sites. Second, FAA will incur costs for upgrading radars. FAA plans to modernize the existing analog ASR-8 radars that provide data to its TRACONs. Because the implementation of STARS is approaching, FAA is faced with an immediate decision between digitizing these existing analog radars or replacing them with new digital radars. FAA officials estimate that the 20-year life-cycle costs for modifying and digitizing all the ASR-8s will be $459 million and for replacing them will be $474 million. 
According to FAA officials, the estimated cost difference between digitizing existing radars and buying new radars is minimal because of the higher costs of maintaining older analog equipment. The agency is continuing to refine these cost estimates, and it expects to decide later this year on which option to select. We provided the Department of Transportation with a draft of this report for its review and comment. We met with FAA officials, including the IPT leader for Terminal Air Traffic Systems Development; the Program Director for National Airspace System Transition and Implementation; and representatives of FAA's Air Traffic and Airway Facilities Services. FAA was concerned about our use of the $1.82 billion estimate for operations and maintenance costs. The estimate came from a September 1996 study done by FAA's Program Analysis and Operations Research staff. FAA told us that this estimate was preliminary and should not be reported as a basis for evaluating the STARS project. While FAA acknowledged that there may be some cost growth in the STARS project, it did not anticipate growth as large as we reported. We continue to include the September 1996 estimate in this report. This estimate was developed by experienced cost analysts, including a member of the STARS IPT, and was the only documented estimate available since the official baseline was approved in January 1996. Furthermore, FAA could not provide us with a more current estimate or detailed support for its views on why the September 1996 analysis may have overstated the cost estimate for operations and maintenance. FAA also expressed concern about the way the draft report characterized the extent to which key stakeholders were committed to the implementation schedule, which relies heavily on the use of the turnkey concept. 
We revised the report to recognize that (1) while regions' commitment is incomplete, Airway Facilities Service officials at headquarters have committed to the turnkey concept and (2) FAA has established a process, including the formation of implementation teams to ensure regions' understanding and obtain their commitment on turnkey issues. However, because the turnkey concept will affect regional resources and employees' responsibilities, FAA agreed that the potential lack of regions' commitment is a risk that must be mitigated throughout the implementation of STARS. To obtain information for this report, we interviewed officials at FAA headquarters, its New England Regional Office in Burlington, Massachusetts, its New York Regional Office in Jamaica, New York, and its William J. Hughes Technical Center in Pomona, New Jersey. We reviewed agency documentation on current schedule and life-cycle costs for STARS. We reviewed guidelines pertaining to system acquisition, compared FAA's actions to the guidance, and identified key issues that could affect the success of the STARS project. To identify any labor issues that could affect the scheduled deployment, we interviewed union officials with the Professional Airways Systems Specialists. We conducted our review from July 1996 through January 1997 in accordance with generally accepted government auditing standards. However, we did not assess the reliability of the process used to generate cost information. We are sending copies of this report to the Secretary of Transportation, the Administrator of FAA, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3650 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix I.

John H. Anderson, Jr.
Gregory P. Carroll
Robert E. Levin
Peter G. Maristch
John T. Noto

The first copy of each GAO report and testimony is free. Additional copies are $2 each.
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Federal Aviation Administration's (FAA) acquisition planning to date, focusing on the extent to which: (1) the schedule estimate for the Standard Terminal Automation Replacement System (STARS) is attainable; and (2) cost estimates to make STARS operational are likely to change. GAO noted that: (1) the STARS schedule, which calls for implementation at 171 air traffic control facilities between December 1998 and February 2005, is attainable only if FAA is successful in its efforts to mitigate certain risks; (2) specifically, FAA will need to obtain commitment by key stakeholders to the STARS schedule, resolve schedule conflicts between STARS and other modernization efforts, and overcome difficulties in developing system software that could delay implementing STARS; (3) FAA is aware that these issues pose a risk for STARS and has begun several risk mitigation initiatives; (4) while such actions are encouraging, it is too early to tell how effective they will be; (5) FAA's cost estimate for STARS has the potential to increase; (6) FAA's total cost estimate for STARS is $2.23 billion; (7) FAA approved this estimate in January 1996, but a September 1996 analysis by agency officials pointed to potential cost increases that could drive the total cost estimate to as much as $2.76 billion; (8) this possible increase is attributable to expected higher costs for operating and maintaining STARS equipment; (9) FAA officials are continuing to revise the STARS cost estimate and now believe that cost increases may be significantly lower; and (10) at this time, however, FAA could not provide GAO with an updated estimate.
Opportunities for servicewomen have increased dramatically since the Women's Armed Services Integration Act of 1948 gave women a permanent place in the military services. However, the act excluded women from serving on Navy ships (except hospital ships and transports) and aircraft engaged in combat missions. Because the Marine Corps is a naval-oriented air and ground combat force, the exclusion of women from Navy ships essentially barred them from combat positions in the Marine Corps as well. The Women's Army Corps already excluded women from combat positions, eliminating the need for a separate statute for Army servicewomen. During the 1970s, Congress and the services created more opportunities for women in the military. In 1974, the age requirement for enlistment without parental consent became the same for men and women. Then, in 1976, women were admitted to the Air Force Academy, the Naval Academy, and the Military Academy. In 1977, the Army implemented a policy that essentially opened many previously closed occupations, including some aviation assignments, but formally closed combat positions to women. Finally, in 1978, Congress amended the 1948 Integration Act to allow women to serve on additional types of noncombat ships. The Navy and the Marine Corps subsequently assigned women to noncombat ships such as tenders, repair ships, and salvage and rescue ships. In February 1988, DOD adopted a Department-wide policy, called the Risk Rule, which set a single standard for evaluating positions and units from which the military services could exclude women. The rule excluded women from noncombat units or missions if the risks of exposure to direct combat, hostile fire, or capture were equal to or greater than the risk in the combat units they supported. Each service used its own mission requirements and the Risk Rule to evaluate whether a noncombat position should be open or closed to women.
The National Defense Authorization Act for Fiscal Years 1992 and 1993 repealed the prohibition on the assignment of women to combat aircraft in the Air Force, the Navy, and the Marine Corps. The act also established the Presidential Commission on the Assignment of Women in the Armed Forces to study the legal, military, and societal implications of amending the exclusionary laws. The Commission's November 1992 report recommended retaining the direct ground combat exclusion for women. In April 1993, the Secretary of Defense directed the services to open more specialties and assignments to women, including those in combat aircraft and on as many noncombatant ships as possible under current law. The Army and the Marine Corps were directed to study the possibility of opening more assignments to women, but direct ground combat positions were to remain closed. The Secretary of Defense also established the Implementation Committee, with representatives from the Office of the Secretary of Defense, the military services, and the Joint Chiefs, to review the appropriateness of the Risk Rule. In November 1993, Congress repealed the naval combat ship exclusions and required DOD to notify Congress prior to opening additional combat positions to women. In January 1994, the Secretary of Defense, in response to advice from the Implementation Committee, rescinded the Risk Rule. In DOD's view, the rule was no longer appropriate based on experiences during Operation Desert Storm, where everyone in the theater of operation was at risk. The Secretary also established a new DOD-wide direct ground combat assignment rule that allows all servicemembers to be assigned to all positions for which they qualify, but excludes women from assignments to units below the brigade level whose primary mission is direct ground combat. The purpose of this change was to expand opportunities for women in the services.
Additionally, the Secretary stipulated that no units or positions previously open to women would be closed. At that time, the Secretary issued a definition of direct ground combat to ensure a consistent application of the policy excluding women from direct ground combat units. As of September 1998, DOD had not revised its 1994 rule or changed its direct ground combat definition. In addition to establishing the direct ground combat assignment rule in 1994, the Secretary of Defense also permitted the services to close positions to women if (1) the units and positions are required to physically collocate and remain with direct ground combat units, (2) the service Secretary attests that the cost of providing appropriate living arrangements for women is prohibitive, (3) the units are engaged in special operations forces' missions or long-range reconnaissance, or (4) job related physical requirements would exclude the vast majority of women. The military services may propose additional exceptions, with justification to the Secretary of Defense. At the time of our review, about 221,000 positions, or about 15 percent of the approximately 1.4 million positions in DOD, were closed to servicewomen. About half of these are closed because of DOD's policy to exclude women from positions whose primary mission is to engage in direct ground combat. Figure 1 shows the percentage and numbers of positions closed based on exclusion policies. Appendixes I and II provide more details on the numbers and types of positions closed by each service. As figure 1 shows, about 46 percent of the positions closed to women in the military services are associated with the direct ground combat exclusion policy. These positions, according to DOD officials, are in units whose primary mission is to engage in direct ground combat and include occupations in infantry, armor, field artillery, and special forces.
The majority of these closures are in the Army, followed by the Marine Corps, and a small number in the Air Force. About 41 percent of the positions closed to women are attributed to the collocation exclusion policy. Units that collocate with direct ground combat units operate within and as part of those units during combat operations. For example, Army ground surveillance radar units, while not considered direct ground combat units, routinely operate with infantry and armor units on the battlefield. Because of the differences in roles, missions, and organization between the Army and the Marine Corps, however, some positions that are closed for collocation reasons in the Army may be closed for direct ground combat reasons in the Marine Corps, according to DOD officials. Cost-prohibitive living arrangements account for about 12 percent of the positions closed to women. These positions are exclusive to the Navy and are on submarines and small surface vessels like mine sweepers, mine hunters, and coastal patrol ships. The special operations forces and long-range reconnaissance missions exclusion policy accounts for almost 2 percent of all positions closed to women. These closures are in the Navy and the Air Force because the Army classifies most of its special operations forces as direct ground combat forces. During our review we found no additional exceptions or exclusions based on physical requirements. When DOD formalized its policy excluding women from direct ground combat positions in 1994, it adopted the primary elements of the Army's ground combat exclusion policy as the DOD-wide assignment rule. According to DOD officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the prohibition on direct ground combat was a long-standing Army policy, and for that reason, no consideration was given to repealing it when DOD adopted the current assignment policy in 1994. 
Other reasons for continuing the ground combat exclusion policy were presented in a 1994 DOD news briefing announcing the opening of 80,000 new positions to servicewomen. At the briefing, defense officials said they believed that "integrating women into ground combat units would not contribute to the readiness and effectiveness of those units" due to the nature of direct ground combat and the way individuals need to perform under those conditions. The DOD official providing the briefing said that physical strength and stamina, living conditions, and lack of public support for women in ground combat were some of the issues considered. According to DOD, its perception of the lack of public support was partly based on the results of a survey done in 1992 for the Presidential Commission on the Assignment of Women in the Armed Forces. DOD documents also cited the Department's lack of experience with women in direct ground combat and its observation of the experience of other countries as part of the rationale for continuing the exclusion of women from direct ground combat. As of September 1998, DOD had no plans to reconsider the ground combat exclusion policy because, in its view, there is no military need for women in ground combat positions because an adequate number of men are available. Additionally, DOD continues to believe that opening direct ground combat units to women lacks congressional and public support. Finally, DOD cited military women's lack of support for involuntary assignments to ground combat positions as another reason for continuing its exclusion policy. This lack of support has been documented in several studies of military women. For example, in a 1997 Rand Corporation study, done at the request of DOD, most servicewomen expressed the view that while ground combat positions should be opened to women, such positions should be voluntarily assigned. DOD provided the military services with a single definition of direct ground combat. 
The services use the definition to ensure a common application of the policy excluding women from direct ground combat units. To be considered a direct ground combat unit, the primary mission of the unit must include all the criteria of the direct ground combat definition. Specifically, DOD defines direct ground combat as engaging "an enemy on the ground with individual or crew-served weapons, while being exposed to hostile fire and to a high probability of direct physical contact with the hostile force's personnel." In addition, DOD's definition states that "direct ground combat takes place well forward on the battlefield while locating and closing with the enemy to defeat them by fire, maneuver, or shock effect." According to ground combat experts, "locating and closing with the enemy to defeat them by fire, maneuver, or shock effect" is an accurate description of the primary tasks associated with direct ground combat units and positions. However, DOD's definition of direct ground combat links these tasks to a particular location on the battlefield--"well forward." In making this link, the definition excludes battlefields that may lack a clearly defined forward area. According to current Army and Marine Corps ground combat doctrine, battlefields are generally conceptualized to include close, deep, and rear operational areas. Close operations areas involve friendly forces that are in immediate contact with enemy forces and are usually exposed to the greatest risk. Direct ground combat units, along with supporting collocated units, primarily operate in the close operations area. Deep operations are focused beyond the line of friendly forces and are generally directed against hostile supporting forces and functions, such as command and control, and supplies. Rear operations sustain close and deep operations by providing logistics and other supporting functions.
Several factors determine how the battlefield will develop during a military operation, including mission, available resources, terrain, and enemy forces. The phrase "well forward on the battlefield" in DOD's definition, according to ground combat experts, implies that military forces will be arrayed in a linear manner on the battlefield. On this battlefield, direct ground combat units operate in the close operational area where the forward line of troops comprises the main combat units of friendly and hostile forces. Land battles envisioned in Europe during the Cold War were planned in a linear manner. Figure 2 depicts an example of a linear battlefield. Battlefields can also be arrayed in a nonlinear manner, meaning that they may have a less precise structure, and the functions of close, deep, and rear operations may have no adjacent relationship. On a nonlinear battlefield, close operations can take place throughout the entire area of military operations, rather than just at the forward area as in the linear organization. Recent military operations like Operation Restore Hope in Somalia and Operation Joint Endeavor in Bosnia involved nonlinear situations that lacked well-defined forward areas, according to ground combat experts. Figure 3 depicts an example of a nonlinear battlefield. Ground combat experts in the Army and the Marine Corps note that, in the post-Cold War era, the nonlinear battlefield is becoming more common. Should this trend continue, defining direct ground combat as occurring "well forward on the battlefield" may become increasingly less descriptive of actual battlefield conditions. We provided a draft of this report to the Office of the Secretary of Defense, the Army, the Air Force, the Marine Corps, and the Navy. The Office of the Secretary of Defense and the military services orally concurred with information presented in the report. 
Additionally, the Army, the Navy, and the Marine Corps provided technical comments, which we incorporated as appropriate. To identify the military occupations and positions closed to women, we reviewed data from the Army, the Marine Corps, the Navy, and the Air Force on current positions closed to women, the numbers associated with each closed position, and the justification for each closed position. Based on the information provided, we compiled the closed occupations and positions to determine the total number of positions closed and the justification for each. We discussed the currency of this information with officials from the Department of the Army, Deputy Chief of Staff for Personnel; Headquarters Marine Corps, Deputy Chief of Staff for Manpower and Reserve Affairs; the Department of the Navy, Bureau of Naval Personnel; and the Department of the Air Force, Deputy Chief of Staff, Personnel. During this review, we did not evaluate the military services' decisions for closing certain positions or units to women. To identify DOD's rationale for the exclusion of women from direct ground combat positions, we reviewed documents, including policy memorandums, congressional correspondence, and press briefings from the Office of the Under Secretary of Defense for Personnel and Readiness. We also interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, who helped provide useful information regarding the historical origins of the prohibition of women in direct ground combat. To determine the relationship of DOD's definition of direct ground combat to current military operations, we reviewed Army and Marine Corps ground combat doctrine. Doctrine is developed from a variety of sources, including actual lessons learned from combat operations, and it provides a framework for military forces to plan and execute military operations. 
We also interviewed ground combat doctrine officials at the Army's Combined Arms Center, Fort Leavenworth, Kansas, and the Marine Corps' Combat Development Command, Quantico, Virginia, and an expert from the Naval War College, Newport, Rhode Island. We did not evaluate the rationale the military services used to classify closures based on the Secretary of Defense's approved justifications. To calculate the percentage of positions closed to women in the military services, we used the active duty authorized personnel end strength for fiscal year 1998. Authorized end strength is the maximum number of personnel authorized by Congress for a particular service. The Marine Corps, in some publications, may show a higher percentage of positions closed because it uses actual assignable positions to derive the percentage of positions closed to women. The actual strength, which is a measurement of personnel at a particular point in time, fluctuates throughout the year and can sometimes be lower than authorized personnel end strength. We conducted our review from March to September 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and Members of Congress; the Secretaries of Defense, the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and the Director, Office of Management and Budget. We will also make copies available to other interested parties upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. 
About 15 percent of all positions across the armed forces are closed to women because they (1) are in occupations that primarily engage in direct ground combat, (2) collocate and operate with direct ground combat units, (3) are located on ships where the cost of providing appropriate living arrangements is considered prohibitive, or (4) are in units that engage in special operations missions and long-range reconnaissance. Table I.1 shows the number of positions closed in each service and the exclusion justification. About 142,000 positions, or about 29 percent, of the Army's fiscal year 1998 active force authorized personnel end strength of 495,000 are closed to women. About half of these closures are associated with occupations involving direct ground combat. These closures include the occupational fields of infantry, armor, and special forces. The remaining closures are in occupational specialties or units that are required to collocate and remain with direct ground combat units, including combat engineering, field artillery, and air defense artillery. Also, some occupational specialties in the petroleum and water, maintenance, and transportation career fields, for example, are considered open to women but are closed at certain unit levels because they collocate with direct ground combat units. About 43,400 positions, or about 25 percent, of the Marine Corps' fiscal year 1998 active force authorized personnel end strength of 174,000, are closed to women. About two-thirds of the closures are in occupational fields involving direct ground combat, such as infantry, artillery and tank, and assault amphibious vehicles. The other third of the closures are in occupational specialties that are required to collocate and remain with direct ground combat units, such as counterintelligence specialists and low-altitude air defense gunners. 
In addition, some occupational specialties, such as landing support specialist and engineering officer, are generally open to women but are closed at certain unit levels because of collocation with direct ground combat units. About 2,300 positions, or less than 1 percent, of the Air Force's fiscal year 1998 active force authorized personnel end strength of 371,577 are closed to women. About 69 percent of these are in occupations such as tactical air command and control, combat controller, and pararescue, which are involved with direct ground combat, according to Air Force documents. About 18 percent are closed because the Air Force places restrictions on assignments to aircrew positions in its helicopters that conduct special operations forces missions. About 13 percent of the closures are in certain weather and radio communications occupations because they collocate with ground combat units or special operations forces. Appendix II shows the career fields and occupations that are closed to women. Other occupations, for example in transportation, maintenance, and aviation, are generally considered open, but women may be restricted from assignment to them at various unit levels because these units collocate with direct ground combat forces. Major contributors to this report: Carol R. Schuster, William E. Beusse, Colin L. Chambers, Carole F. Coffey, Julio A. Luna, and Andrea D. Romich.
Pursuant to a congressional request, GAO reviewed various issues pertaining to the treatment of men and women in the armed forces, focusing on: (1) the numbers and types of positions that are closed to women and the associated justifications for closure; (2) the Department of Defense's (DOD) current rationale for excluding women from direct ground combat; and (3) the relationship of DOD's definition of direct ground combat to current military operations. GAO noted that: (1) approximately 221,000 of DOD's 1.4 million positions are closed to women, who comprise about 14 percent of the armed services; (2) about 101,700 of these positions are closed based on DOD's policy of not assigning women to occupations that require engagement in direct ground combat; (3) the remaining 119,300 positions are closed because they are collocated and operate with direct ground combat units, are located on certain ships where the cost of providing appropriate living arrangements for women is considered prohibitive, or are in units that conduct special operations and long-range reconnaissance missions; (4) GAO found no positions closed to women because of job-related physical requirements; (5) DOD's current rationale for excluding women from direct ground combat units or occupations is similar to its rationale when it first formalized the combat exclusion policy in 1994; (6) at that time, DOD officials did not consider changing the long-standing policy because they believed that the integration of women into direct ground combat units lacked both congressional and public support; (7) furthermore, transcripts of a 1994 press briefing indicate that DOD officials believed that the assignment of women to direct ground combat units would not contribute to the readiness and effectiveness of those units because of physical strength, stamina, and privacy issues; (8) at the time of GAO's review, DOD had no plans to reconsider the ground combat exclusion because, in DOD's view: (a) there is no military need for women in ground combat positions because an adequate number of men are available; (b) the idea of women in direct ground combat continues to lack congressional and public support; and (c) most servicewomen do not support the involuntary assignment of women to direct ground combat units; (9) DOD's definition of direct ground combat includes a statement that ground combat forces are well forward on the battlefield; (10) this statement, however, does not reflect the less predictable nature of emerging post-Cold War military operations that may not have a well-defined forward area on the battlefield; and (11) if this trend continues, DOD's definition of direct ground combat may become increasingly less descriptive of actual battlefield conditions.
Because of such emergencies as natural disasters, hazardous material spills, and riots, all levels of government have had some experience in preparing for different types of disasters and emergencies. Preparing for all potential hazards is commonly referred to as the "all-hazards" approach. While terrorism is a component within an all-hazards approach, terrorist attacks potentially impose a new level of fiscal, economic, and social dislocation within this nation's boundaries. Given the specialized resources that are necessary to address a chemical or biological attack, the range of governmental services that could be affected, and the vital role played by private entities in preparing for and mitigating risks, state and local resources alone will likely be insufficient to meet the terrorist threat. Some of these specific challenges can be seen in the area of bioterrorism. For example, a biological agent released covertly might not be recognized for a week or more because symptoms may only appear several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These characteristics require responses that are unique to bioterrorism, including health surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics or vaccines to large segments of the population to prevent the spread of an infectious disease. The resources necessary to undertake these responses are generally beyond state and local capabilities and would require assistance from and close coordination with the federal government. National preparedness is a complex mission that involves a broad range of functions performed throughout government, including national defense, law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. 
While only the federal government is empowered to wage war and regulate interstate commerce, state and local governments have historically assumed primary responsibility for managing emergencies through police, firefighters, and emergency medical personnel. The federal government's role in responding to major disasters is generally defined in the Stafford Act, which requires a finding that the disaster is so severe as to be beyond the capacity of state and local governments to respond effectively before major disaster or emergency assistance from the federal government is warranted. Once a disaster is declared, the federal government--through the Federal Emergency Management Agency (FEMA)--may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities. There has been an increasing emphasis over the past decade on preparedness for terrorist events. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism. Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense Against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal agencies, including those in the Department of Justice, Department of Energy, FEMA, and Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events.
The attacks of September 11, 2001, as well as the subsequent attempts to contaminate Americans with anthrax, dramatically exposed the nation's vulnerabilities to domestic terrorism and prompted numerous legislative proposals to further strengthen our preparedness and response. During the first session of the 107th Congress, several bills were introduced with provisions relating to state and local preparedness. For instance, the Preparedness Against Domestic Terrorism Act of 2001, which you cosponsored, Mr. Chairman, proposes the establishment of a Council on Domestic Preparedness to enhance the capabilities of state and local emergency preparedness and response. The funding for homeland security increased substantially after the attacks. According to documents supporting the president's fiscal year 2003 budget request, about $19.5 billion in federal funding for homeland security was enacted in fiscal year 2002. The Congress added to this amount by passing an emergency supplemental appropriation of $40 billion. According to the budget request documents, about one-quarter of that amount, nearly $9.8 billion, was dedicated to strengthening our defenses at home, resulting in an increase in total federal funding on homeland security of about 50 percent, to $29.3 billion. Table 1 compares fiscal year 2002 funding for homeland security by major categories with the president's proposal for fiscal year 2003. We have tracked and analyzed federal programs to combat terrorism for many years and have repeatedly called for the development of a national strategy for preparedness. We have not been alone in this message; for instance, national commissions, such as the Gilmore Commission, and other national associations, such as the National Emergency Management Association and the National Governors Association, have advocated the establishment of a national preparedness strategy.
The attorney general's Five-Year Interagency Counterterrorism Crime and Technology Plan, issued in December 1998, represents one attempt to develop a national strategy on combating terrorism. This plan entailed a substantial interagency effort and could potentially serve as a basis for a national preparedness strategy. However, we found it lacking in two critical elements necessary for an effective strategy: (1) measurable outcomes and (2) identification of state and local government roles in responding to a terrorist attack. In October 2001, the president established the Office of Homeland Security as a focal point with a mission to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks. While this action represents a potentially significant step, the role and effectiveness of the Office of Homeland Security in setting priorities, interacting with agencies on program development and implementation, and developing and enforcing overall federal policy in terrorism-related activities are still in the formative stages. The emphasis needs to be on a national rather than a purely federal strategy. We have long advocated the involvement of state, local, and private-sector stakeholders in a collaborative effort to arrive at national goals. The success of a national preparedness strategy relies on the ability of all levels of government and the private sector to communicate and cooperate effectively with one another. To develop this essential national strategy, the federal role needs to be considered in relation to other levels of government, the goals and objectives for preparedness, and the most appropriate tools to assist and enable other levels of government and the private sector to achieve these goals. Although the federal government appears monolithic to many, in the area of terrorism prevention and response, it has been anything but.
More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 federal entities in bioterrorism alone. One of the areas that the Office of Homeland Security will be reviewing is the coordination among federal agencies and programs. Concerns about coordination and fragmentation in federal preparedness efforts are well founded. Our past work, conducted prior to the creation of the Office of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. There had been no single leader in charge of the many terrorism-related functions conducted by different federal departments and agencies. In fact, several agencies had been assigned leadership and coordination functions, including the Department of Justice, the Federal Bureau of Investigation, FEMA, and the Office of Management and Budget. We previously reported that officials from a number of agencies that combat terrorism believe that the coordination roles of these various agencies are not always clear. The recent Gilmore Commission report expressed similar concerns, concluding that the current coordination structure does not provide the discipline necessary among the federal agencies involved. In the past, the absence of a central focal point resulted in two major problems. The first of these is a lack of a cohesive effort from within the federal government. For example, the Department of Agriculture, the Food and Drug Administration, and the Department of Transportation have been overlooked in bioterrorism-related policy and planning, even though these organizations would play key roles in response to terrorist acts.
In this regard, the Department of Agriculture has been given key responsibilities to carry out in the event that terrorists were to target the nation's food supply, but the agency was not consulted in the development of the federal policy assigning it that role. Similarly, the Food and Drug Administration was involved with issues associated with the National Pharmaceutical Stockpile, but it was not involved in the selection of all items procured for the stockpile. Further, the Department of Transportation has responsibility for delivering supplies under the Federal Response Plan, but it was not brought into the planning process and consequently did not learn the extent of its responsibilities until its involvement in subsequent exercises. Second, the lack of leadership has resulted in the federal government's development of similar and potentially duplicative programs to assist state and local governments. After the terrorist attack on the federal building in Oklahoma City, the federal government created additional programs that were not well coordinated. For example, FEMA, the Department of Justice, the Centers for Disease Control and Prevention, and the Department of Health and Human Services all offer separate assistance to state and local governments in planning for emergencies. Additionally, a number of these agencies also condition receipt of funds on completion of distinct but overlapping plans. Although the many federal assistance programs vary somewhat in their target audiences, the potential redundancy of these federal efforts warrants scrutiny. In this regard, we recommended in September 2001 that the president work with the Congress to consolidate some of the activities of the Department of Justice's Office for State and Local Domestic Preparedness Support under FEMA. State and local response organizations believe that federal programs designed to improve preparedness are not well synchronized or organized. 
They have repeatedly asked for a one-stop "clearinghouse" for federal assistance. As state and local officials have noted, the multiplicity of programs can lead to confusion at the state and local levels and can expend precious federal resources unnecessarily or make it difficult for them to identify available federal preparedness resources. As the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds and have argued that the application process is burdensome and inconsistent among federal agencies. Although the federal government can assign roles to federal agencies under a national preparedness strategy, it will also need to reach consensus with other levels of government and with the private sector about their respective roles. Clearly defining the appropriate roles of government may be difficult because, depending upon the type of incident and the phase of a given event, the specific roles of local, state, and federal governments and of the private sector may not be separate and distinct. A new warning system, the Homeland Security Advisory System, is intended to tailor notification of the appropriate level of vigilance, preparedness, and readiness in a series of graduated threat conditions. The Office of Homeland Security announced the new warning system on March 12, 2002. The new warning system includes five levels of alert for assessing the threat of possible terrorist attacks: low, guarded, elevated, high, and severe. These levels are also represented by five corresponding colors: green, blue, yellow, orange, and red. When the announcement was made, the nation stood in the yellow condition, at elevated risk. The warning can be upgraded for the entire country or for specific regions and economic sectors, such as the nuclear industry. The system is intended to address a problem with the previous blanket warning system. 
After September 11th, the federal government issued four general warnings about possible terrorist attacks, directing federal and local law enforcement agencies to place themselves on the "highest alert." However, government and law enforcement officials, particularly at the state and local levels, complained that general warnings were too vague and a drain on resources. To obtain views on the new warning system from all levels of government, law enforcement, and the public, the United States Attorney General, who will be responsible for the system, provided a 45-day comment period beginning with the announcement of the new system on March 12th. This provides an opportunity for state and local governments as well as the private sector to comment on the usefulness of the new warning system and the appropriateness of the five threat conditions and their associated suggested protective measures. Numerous discussions have been held about the need to enhance the nation's preparedness, but national preparedness goals and measurable performance indicators have not yet been developed. These are critical components for assessing program results. In addition, the capability of state and local governments to respond to catastrophic terrorist attacks is uncertain. At the federal level, measuring results for federal programs has been a longstanding objective of the Congress. The Congress enacted the Government Performance and Results Act of 1993 (commonly referred to as the Results Act). The legislation was designed to have agencies focus on the performance and results of their programs rather than on program resources and activities, as they had done in the past. Thus, the Results Act became the primary legislative framework through which agencies are required to set strategic and annual goals, measure performance, and report on the degree to which goals are met. 
The outcome-oriented principles of the Results Act include (1) establishing general goals and quantifiable, measurable, outcome-oriented performance goals and related measures, (2) developing strategies for achieving the goals, including strategies for overcoming or mitigating major impediments, (3) ensuring that goals at lower organizational levels align with and support general goals, and (4) identifying the resources that will be required to achieve the goals. A former assistant professor of public policy at the Kennedy School of Government, now the senior director for policy and plans with the Office of Homeland Security, noted in a December 2000 paper that a preparedness program lacking broad but measurable objectives is unsustainable. This is because it deprives policymakers of the information they need to make rational resource allocations, and program managers are prevented from measuring progress. He recommended that the government develop a new statistical index of preparedness, incorporating a range of different variables, such as quantitative measures for special equipment, training programs, and medicines, as well as professional subjective assessments of the quality of local response capabilities, infrastructure, plans, readiness, and performance in exercises. Therefore, he advocated that the index should go well beyond the current rudimentary milestones of program implementation, such as the amount of training and equipment provided to individual cities. The index should strive to capture indicators of how well a particular city or region could actually respond to a serious terrorist event. This type of index, according to this expert, would then allow the government to measure the preparedness of different parts of the country in a consistent and comparable way, providing a reasonable baseline against which to measure progress. 
In October 2001, FEMA's director recognized that assessments of state and local capabilities have to be viewed in terms of the level of preparedness being sought and what measurement should be used for preparedness. The director noted that the federal government should not provide funding without assessing what the funds will accomplish. Moreover, the president's fiscal year 2003 budget request for $3.5 billion through FEMA for first responders--local police, firefighters, and emergency medical professionals--provides that these funds be accompanied by a process for evaluating the effort to build response capabilities, in order to validate that effort and direct future resources. FEMA has developed an assessment tool that could be used in developing performance and accountability measures for a national strategy. To ensure that states are adequately prepared for a terrorist attack, FEMA was directed by the Senate Committee on Appropriations to assess states' response capabilities. In response, FEMA developed a self-assessment tool--the Capability Assessment for Readiness (CAR)--that focuses on 13 key emergency management functions, including hazard identification and risk assessment, hazard mitigation, and resource management. However, these key emergency management functions do not specifically address public health issues. In its fiscal year 2001 CAR report, FEMA concluded that states were only marginally capable of responding to a terrorist event involving a weapon of mass destruction. Moreover, the president's fiscal year 2003 budget proposal acknowledges that our capabilities for responding to a terrorist attack vary widely across the country. Many areas have little or no capability to respond to a terrorist attack that uses weapons of mass destruction. The budget proposal further adds that even the best prepared states and localities do not possess adequate resources to respond to the full range of terrorist threats we face. 
Proposed standards have been developed for state and local emergency management programs by a consortium of emergency managers from all levels of government and are currently being pilot tested through the Emergency Management Accreditation Program at the state and local levels. The program's purpose is to establish minimum acceptable performance criteria by which emergency managers can assess and enhance current programs to mitigate, prepare for, respond to, and recover from disasters and emergencies. For example, one such standard requires that (1) the program develop the capability to direct, control, and coordinate response and recovery operations, (2) an incident management system be utilized, and (3) organizational roles and responsibilities be identified in the emergency operational plans. Although FEMA has experience in working with others in the development of assessment tools, it has had difficulty in measuring program performance. As the president's fiscal year 2003 budget request acknowledges, FEMA generally performs well in delivering resources to stricken communities and disaster victims quickly. The agency performs less well in its oversight role of ensuring the effective use of such assistance. Further, the agency has not been effective in linking resources to performance information. FEMA's Office of Inspector General has found that FEMA did not have an ability to measure state disaster risks and performance capability, and it concluded that the agency needed to determine how to measure state and local preparedness programs. Since September 11th, many state and local governments have faced declining revenues and increased security costs. A survey of about 400 cities conducted by the National League of Cities reported that since September 11th, one in three American cities saw their local economies, municipal revenues, and public confidence decline while public-safety spending is up. 
Further, the National Governors Association estimates fiscal year 2002 state budget shortfalls of between $40 billion and $50 billion, making it increasingly difficult for the states to take on expensive, new homeland security initiatives without federal assistance. State and local revenue shortfalls coupled with increasing demands on resources make it more critical that federal programs be designed carefully to match the priorities and needs of all partners--federal, state, local, and private. Our previous work on federal programs suggests that the choice and design of policy tools have important consequences for performance and accountability. Governments have at their disposal a variety of policy instruments, such as grants, regulations, tax incentives, and regional coordination and partnerships, that they can use to motivate or mandate other levels of government and private-sector entities to take actions to address security concerns. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. Key to the national effort will be determining the appropriate level of funding so that policies and tools can be designed and targeted to elicit a prompt, adequate, and sustainable response while also protecting against federal funds being used to substitute for spending that would have occurred anyway. The federal government often uses grants to state and local governments as a means of delivering federal programs. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad national purpose and to provide a great deal of discretion to state and local officials. 
Either type of grant can be designed to (1) target the funds to states and localities with the greatest need, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as "supplantation," with a maintenance-of-effort requirement that recipients maintain their level of previous funding, and (3) strike a balance between accountability and flexibility. More specifically: Targeting: The formula for the distribution of any new grant could be based on several considerations, including the state or local government's capacity to respond to a disaster. This capacity depends on several factors, the most important of which perhaps is the underlying strength of the state's tax base and whether that base is expanding or is in decline. In an August 2001 report on disaster assistance, we recommended that the director of FEMA consider replacing the per-capita measure of state capability with a more sensitive measure, such as the amount of a state's total taxable resources, to assess the capabilities of state and local governments to respond to a disaster. Other key considerations include the level of need and the costs of preparedness. Maintenance-of-effort: In our earlier work, we found that substitution is to be expected in any grant and, on average, every additional federal grant dollar results in about 60 cents of supplantation. We found that supplantation is particularly likely for block grants supporting areas with prior state and local involvement. Our recent work on the Temporary Assistance to Needy Families block grant found that a strong maintenance-of-effort provision limits states' ability to supplant. Recipients can be penalized for not meeting a maintenance-of-effort requirement. 
Balance accountability and flexibility: Experience with block grants shows that such programs are sustainable if they are accompanied by sufficient information and accountability for national outcomes to enable them to compete for funding in the congressional appropriations process. Accountability can be established for measured results and outcomes that permit greater flexibility in how funds are used while at the same time ensuring some national oversight. Grants previously have been used for enhancing preparedness and recent proposals direct new funding to local governments. In recent discussions, local officials expressed their view that federal grants would be more effective if local officials were allowed more flexibility in the use of funds. They have suggested that some funding should be allocated directly to local governments. They have expressed a preference for block grants, which would distribute funds directly to local governments for a variety of security-related expenses. Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president's fiscal year 2003 budget, have included some of these provisions. This matching grant would be administered by FEMA, with 25 percent being distributed to the states based on population. The remainder would go to states for pass-through to local jurisdictions, also on a population basis, but states would be given the discretion to determine the boundaries of substate areas for such a pass-through--that is, a state could pass through the funds to a metropolitan area or to individual local governments within such an area. 
Although the state and local jurisdictions would have discretion to tailor the assistance to meet local needs, it is anticipated that more than one-third of the funds would be used to improve communications; an additional one-third would be used to equip state and local first responders; and the remainder would be used for training, planning, technical assistance, and administration. Federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, transportation systems, water systems, public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors (for example, for chemical and nuclear facilities). In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Five models of shared regulatory authority are: fixed federal standards that preempt all state regulatory action in the subject area covered; federal minimum standards that preempt less stringent state laws but permit states to establish standards that are more stringent than the federal; inclusion of federal regulatory provisions not established through preemption in grants or other forms of assistance that states may choose to accept; cooperative programs in which voluntary national standards are formulated by federal and state officials working together; and widespread state adoption of voluntary standards formulated by quasi-official entities. Any one of these shared regulatory approaches could be used in designing standards for preparedness. The first two of these mechanisms involve federal preemption. The other three represent alternatives to preemption. 
Each mechanism offers different advantages and limitations that reflect some of the key considerations in the federal-state balance. To the extent that private entities will be called upon to improve security over dangerous materials or to protect vital assets, the federal government can use tax incentives to encourage such activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or tax rates in the federal tax laws. Unlike grants, tax incentives do not generally permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria. National preparedness is a complex mission that requires unusual interagency, interjurisdictional, and interorganizational cooperation. The responsibilities and resources for preparedness reside with different levels of government--federal, state, county, and local--as well as with various public, private, and non-governmental entities. An illustration of this complexity can be seen with ports. As a former Commissioner on the Interagency Commission on Crime and Security in U.S. Seaports recently noted, there is no central authority, as at least 15 federal agencies have jurisdiction at seaports--the primary ones are the Coast Guard, the Customs Service, and the Immigration and Naturalization Service. In addition, state and local law enforcement agencies and the private sector have responsibilities for port security. The security of ports is particularly relevant in this area given that the ports of Long Beach and Los Angeles together represent the third busiest container handler in the world after Hong Kong and Singapore. Promoting partnerships between critical actors (including different levels of government and the private sector) helps to maximize resources and also supports coordination on a regional level. 
Partnerships could encompass federal, state, and local governments working together to share information, develop communications technology, and provide mutual aid. The federal government may be able to offer state and local governments assistance in certain areas, such as risk management and intelligence sharing. In turn, state and local governments have much to offer in terms of knowledge of local vulnerabilities and resources, such as local law enforcement personnel, available to respond to threats and emergencies in their communities. Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI, when given the information needed to do so. As the United States Conference of Mayors noted, a close working partnership of local and federal law enforcement agencies, which includes the sharing of intelligence, will expand and strengthen the nation's overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of intelligence among federal agencies. An expansion of this act has been proposed (S.1615, H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored, Mr. Chairman, addresses a number of information-sharing needs. For instance, this proposed legislation provides that the United States Attorney General expeditiously grant security clearances to governors who apply for them, and to state and local officials who participate in federal counterterrorism working groups or regional terrorism task forces. Local officials have emphasized the importance of regional coordination. 
Regional resources, such as equipment and expertise, are essential because of proximity, which allows for quick deployment, and experience in working within the region. Large-scale or labor-intensive incidents quickly deplete a given locality's supply of trained responders. Some cities have spread training and equipment to neighboring municipal areas so that their mutual aid partners can help. These partnerships afford economies of scale across a region. In events that require a quick response, such as a chemical attack, regional agreements take on greater importance because many local officials do not think that federal and state resources can arrive in sufficient time to help. Mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in response to an emergency. Because individual jurisdictions may not have all the resources they need to respond to all types of emergencies, these agreements allow for resources to be deployed quickly within a region. The terms of mutual aid agreements vary for different services and different localities. These agreements may provide for the state to share services, personnel, supplies, and equipment with counties, towns, and municipalities within the state, with neighboring states, or, in the case of states bordering Canada, with jurisdictions in another country. Some of the agreements also provide for cooperative planning, training, and exercises in preparation for emergencies. Some of these agreements involve private companies and local military bases, as well as local government entities. Such agreements were in place for the three sites that were involved on September 11th--New York City, the Pentagon, and a rural area of Pennsylvania--and provide examples of some of the benefits of mutual aid agreements and of coordination within a region. With regard to regional planning and coordination, there may be federal programs that could provide models for funding proposals. 
In the 1962 Federal-Aid Highway Act, the federal government established a comprehensive cooperative process for transportation planning. This model of regional planning continues today under the Transportation Equity Act for the 21st Century (TEA-21, originally ISTEA) program. This model emphasizes the role of state and local officials in developing a plan to meet regional transportation needs. Metropolitan Planning Organizations (MPOs) coordinate the regional planning process and adopt a plan, which is then approved by the state. Mr. Chairman, in conclusion, as increasing demands are placed on budgets at all levels of government, it will be necessary to make sound choices to maintain fiscal stability. All levels of government and the private sector will have to communicate and cooperate effectively with each other across a broad range of issues to develop a national strategy to better target available resources to address the urgent national preparedness needs. Involving all levels of government and the private sector in developing key aspects of a national strategy that I have discussed today--a definition and clarification of the appropriate roles and responsibilities, an establishment of goals and performance measures, and a selection of appropriate tools--is essential to the successful formulation of the national preparedness strategy and ultimately to preparing and defending our nation from terrorist attacks. This completes my prepared statement. I would be pleased to respond to any questions you or other members of the subcommittee may have. For further information about this testimony, please contact me at (202) 512-6737, Paul Posner at (202) 512-9573, or JayEtta Hecker at (202) 512-2834. Other key contributors to this testimony include Jack Burriesci, Matthew Ebert, Colin J. Fallon, Thomas James, Kristen Sullivan Massey, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy. 
Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Homeland Security: Need to Consider VA's Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Homeland Security: A Framework for Addressing the Nation's Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD's Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001. Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001. Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000. Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000. 
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999. Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999. Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999. Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997. Bioterrorism: The Centers for Disease Control and Prevention's Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001. Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001. Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001. Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001. Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001. 
Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001. West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000. Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999. Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
Federal, state, and local governments share responsibility in preparing for catastrophic terrorist attacks. Because the national security threat is diffuse and the challenge is intergovernmental, national policymakers need a firm understanding of the interests, capacity, and challenges involved when formulating antiterrorism strategies. Key aspects of this strategy should include a definition and clarification of the appropriate roles and responsibilities of federal, state, and local entities. GAO has found fragmentation and overlap among federal assistance programs. More than 40 federal entities have roles in combating terrorism, and past federal efforts have resulted in a lack of accountability, a lack of cohesive effort, and program duplication. This situation has led to confusion, making it difficult to identify available federal preparedness resources and effectively partner with the federal government. Goals and performance measures should be established to guide the nation's preparedness efforts. However, outcomes for the nation's domestic preparedness programs have yet to be defined. Given the recent and proposed increases in preparedness funding, establishing clear goals and performance measures is critical to ensuring real and meaningful improvements in preparedness and a fiscally responsible effort. The strategy should include a careful choice of the most appropriate tools of government to best achieve national goals.
| 8,029 | 266 |
Program and policy decisions require a wide array of information that answers various questions. For example, descriptive information tells how a program operates--what activities are performed, who performs them, and who is reached. In contrast, evaluative information speaks to how well a program is working--such as whether activities are managed efficiently and effectively, whether they are carried out as intended, and to what extent the program is achieving its intended objectives or results. There are a variety of methods for obtaining information on program results, such as performance measurement and program evaluation, which reflect differences in how readily one can observe program results. Performance measurement, as defined by the Results Act, is the ongoing monitoring and reporting of program accomplishments, particularly progress towards preestablished goals. It tends to focus on regularly collected data on the type and level of program activities (process), the direct products and services delivered by the program (outputs), and the results of those activities (outcomes). While performance may be defined more broadly as program process, inputs, outputs, or outcomes, results usually refer only to the outcomes of program activities. For programs that have readily observable results, performance measurement may provide sufficient information to demonstrate program results. In some programs, however, results are not so readily defined or measured. In such cases, program evaluations may be needed, in addition to performance measurement, to examine the extent to which a program is achieving its objectives. Program evaluations are systematic studies conducted periodically to assess how well a program is working. While they may vary in their focus, these evaluations typically examine a broader range of information on program performance and its context than is feasible in ongoing performance measurement. 
Where programs aim to produce changes as a result of program activities, outcome (or effectiveness) evaluations assess the extent to which those outcomes or results were achieved, such as whether students increased their understanding of or skill in the material of instruction. In cases where the program's outcomes are influenced by complex systems or events outside the program's control, impact evaluations use scientific research methods to establish the causal connection between outcomes and program activities, estimate what would have happened in the absence of the program, and thus isolate the program's contribution to those changes. For example, although outcome measures might show a decline in a welfare program's caseload after the introduction of job placement activities, a systematic impact evaluation would be needed to assess how much of the observed change was due to an improved economy rather than the new program. In addition, a program evaluation that also systematically examines how a program was implemented can provide important information about why a program did or did not succeed and suggest ways to improve it. For the purposes of this report, we used the definition of program evaluation that is used in the Results Act: "an assessment, through objective measurement and systematic analysis, of the manner and extent to which federal programs achieve intended objectives." We asked about assessments of program results, which could include both the analysis of outcome-oriented program performance measures as well as specially conducted outcome or impact evaluations. Two government initiatives could influence the demand for and the availability and use of program evaluation information. The Results Act seeks to promote a focus on program results by requiring agencies to set program and agency performance goals and to report annually on their progress in achieving them (beginning with fiscal year 1999). 
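The counterfactual logic described above can be made concrete with a small simulation. This is a minimal illustrative sketch, not an analysis from the report: the job-placement scenario echoes the welfare example in the text, but every number and variable name here is invented. With random assignment, the control group's mean outcome estimates what would have happened without the program, so the treatment-control difference in means separates the program's contribution from external factors such as an improving economy.

```python
import random
import statistics

random.seed(42)

# Hypothetical figures for a job-placement program: an improving economy
# lifts everyone's outcomes (an external factor), and the program adds a
# further effect for participants only.
BASELINE = 0.50        # pre-program mean outcome
ECONOMY_EFFECT = 0.10  # gain everyone receives, program or not
PROGRAM_EFFECT = 0.05  # additional gain attributable to the program

def outcome(in_program: bool) -> float:
    """Simulated post-period outcome for one individual."""
    gain = ECONOMY_EFFECT + (PROGRAM_EFFECT if in_program else 0.0)
    return BASELINE + gain + random.gauss(0, 0.02)

# Random assignment: the control group estimates the counterfactual,
# i.e., what would have happened in the absence of the program.
treatment = [outcome(True) for _ in range(5000)]
control = [outcome(False) for _ in range(5000)]

# A simple before/after outcome measure mixes the economy with the program...
naive_change = statistics.mean(treatment) - BASELINE
# ...while the treatment-control difference isolates the program's contribution.
impact_estimate = statistics.mean(treatment) - statistics.mean(control)

print(f"naive before/after change: {naive_change:.3f}")    # ~0.15 (economy + program)
print(f"impact estimate (T - C):   {impact_estimate:.3f}") # ~0.05 (program only)
```

The before/after comparison overstates the program's effect by attributing the economy-driven gain to it; only the randomized comparison recovers the program's own contribution, which is why simple outcome monitoring cannot substitute for impact evaluation when external factors also move the outcome.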
In addition to encouraging the development of information on program results for activities across the government, the Results Act recognizes the complementary nature of program evaluation and performance measurement. It requires agencies to include a schedule for future program evaluations in their strategic plans, the first of which was to be submitted to Congress by September 30, 1997. The Results Act also requires agencies to review their success in achieving their annual performance goals (which are set forth in their annual performance plans) and to summarize the findings of program evaluations in their annual program performance reports (the first of which is due by March 31, 2000). The National Performance Review (NPR), led by the Vice President's office, has asked agencies to reexamine their policies, programs, and operations to find and implement ways to improve performance and service to their customers. Both of these initiatives--because of their focus on program results--could be expected to increase the demand for and the availability and use of program evaluation information. Other recent governmentwide initiatives could have potentially conflicting effects. In several program areas, devolution of program responsibility from the federal level and consolidation of individual federal programs into more comprehensive, multipurpose grant programs have shifted both program management and accountability responsibilities toward the states. These initiatives may thus make it more difficult for federal agencies to evaluate the results of those programs. In addition, efforts to reduce the growth of the federal budget have resulted in reductions in both federal staff and program resources in many agencies. The combination of these initiatives raises a question: In an environment of limited federal resources and responsibility, how can agencies meet the additional needs for program results information? 
To identify the roles and resources available for federal program evaluation, in 1996, we conducted a mail survey of offices identified by federal agency officials as conducting studies of program results or effectiveness in 13 cabinet-level departments and 10 independent executive agencies. Detailed information on program evaluation studies refers to those conducted during fiscal year 1995 (regardless of when they began or ended). To identify how recent reforms were expected to affect federal evaluation activities and what strategies were available for responding to those changes, we interviewed external evaluation experts and evaluation and other officials at selected federal and state agencies. In this report, we use the term "agency" to include both cabinet-level departments and independent agencies. We distributed surveys in 1996 regarding federal evaluation activities within 13 cabinet-level departments and 10 independent executive agencies in the federal government. We excluded the Department of Defense from our survey of evaluation offices because of the prohibitively large number of offices it identified as conducting assessments of effectiveness. Although we asked agency officials to be inclusive in their initial nominations of offices that conducted evaluations of program results, some offices that conducted evaluations may have been overlooked and excluded from our survey. However, many offices that were initially identified as having conducted evaluations later reported that they had not done so. In our survey, we asked each office about the range of its analytic and evaluation activities and about the length, cost, purpose, and other characteristics of the program evaluation studies they conducted during fiscal year 1995. (See appendix I for more details on the scope and methodology of the survey.) Between 1996 and 1997, we conducted interviews of program evaluation practitioners selected to represent divergent perspectives. 
We asked what had been or were expected to be the effects of various government changes and reforms on federally supported and related program evaluation activities and strategies for responding to those effects. We identified individuals with evaluation knowledge and expertise from a review of the survey responses, the evaluation literature, and our prior work; they were from an array of federal and state agencies and the academic and consulting communities. We then judgmentally selected 18 people to interview to reflect (1) a mix of different program types and diverse amounts of experience with program evaluation and (2) experience with some of the reforms at the state or federal level. Those selected included nine evaluation officials (six from offices in federal agencies and three from state legislative audit agencies) and seven external evaluation experts (four from private research organizations and three from universities). In addition, we interviewed an OMB official and one official from a state executive branch agency, and we also asked the officials from the state legislative audit agencies about their experiences with state performance reporting requirements. We conducted our review between May 1996 and July 1997 in accordance with generally accepted government auditing standards. However, we did not independently verify the types of studies conducted, other information reported by our respondents, or information gained from interviewees. The resources allocated to conducting systematic assessments of program results (or evaluation studies) were small and unevenly distributed across the 23 agencies (departments and independent agencies) we surveyed. We found 81 offices that reported expending resources--funds and staff time--on conducting program effectiveness studies in fiscal year 1995. Over half of those offices had 18 or fewer full-time equivalent staff (FTEs), while only a few offices had as many as 300 to 400 FTEs. (See figure 1.) 
Moreover, since program evaluation was only one of these offices' many responsibilities, only about one-third of the offices reported spending 50 percent or more of their time on evaluation activities (including development of performance measures and assessments of program effectiveness, compliance, or efficiency). (See survey question 9 in appendix I.) Two of the 3 largest offices (over 300 FTEs) spent about 10 percent of their staff time on program evaluation activities. Thus, the estimated staff and budget resources that the 81 offices actually devoted to evaluation activities totaled 669 FTEs and at least $194 million across the 23 agencies surveyed. In addition, most (61 of 81) offices reported also conducting management analysis activities; the most frequent activities were conducting management studies, developing strategic plans, and describing program implementation. Of those offices that could estimate their staff time, about half reported spending less than 25 percent of their time on management analysis. Similarly, many offices reported conducting policy planning and analysis, but most of them reported spending less than 25 percent of their time on it. Thus, a majority of the offices (45 of the 81 identified) conducted few evaluation studies (5 or fewer in fiscal year 1995), while 16 offices--representing 7 agencies--accounted for two-thirds of the 928 studies conducted. (See table 1.) Finally, 6 of the 23 agencies we surveyed did not report any offices conducting evaluation studies in fiscal year 1995. A few of these agencies indicated that they analyzed program accomplishments or outputs or conducted management reviews to assess their programs' performance but did not conduct an evaluation study per se. Some of the 6 agencies also reported conducting other forms of program reviews that focused on assessing program compliance or efficiency rather than program results. Offices conducting program evaluations were located at various levels of federal agencies. 
A few of the 81 offices were located in the central policy or administrative office at the highest level of the organization (5 percent) or with the Inspectors General (5 percent); many more were located in administrative offices at a major subdivision level (43 percent) or in program offices or technical or analytic offices supporting program offices (30 and 16 percent, respectively). (See table 2.) Four of the 23 agencies surveyed had offices at all 3 levels (agency, division, and program), and over half the agencies (14 of 23) had conducted evaluations at the program level. The 16 offices conducting 20 or more studies were more likely to be centralized at the agency or division level than at the program level. A diverse array of evaluation studies was described in the surveys. Just over half of the studies for which we have such information were conducted in-house (51 percent), and 27 percent lasted under 6 months. But the studies that were contracted out tended to be larger investments--almost two-thirds of them took over a year to complete, and over half cost between $100,000 and $500,000. Moreover, almost a third of all the studies lasted more than 2 years, reflecting some long-term evaluations. (See table 3.) For example, a study of the impact of a medical treatment program, which used an experimental design with a complex set of Medicare program and clinical data from thousands of patients on numerous outcomes (for both patients and program costs), took over 2 years and cost over $1 million to complete. Many of the 1995 studies reportedly used relatively simple designs or research methods, and many relied on existing program data. The two most commonly reported study designs were judgmental assessments (18 percent) and experimental designs employing random assignment (14 percent). (See table 4 for a list of designs ranging from the most to least amount of control over the study conditions.) 
Many of the studies (over 70 of the 129) that used experimental designs were evaluations of state demonstration programs, which were required by law to use such methods, and were conducted out of one office. Experimental designs and designs using statistical controls are used to identify a program's net impact on its objectives where external factors are also known to affect its outcome. However, without knowing the circumstances of many of the programs being evaluated, it is impossible for us to determine the adequacy of the designs used to assess program effectiveness. At least 40 percent of the studies employed existing program records in their evaluations, while about one-quarter employed special surveys or other ad hoc data-collection methods specially designed for the studies. Two-fifths (40 percent) of the studies used data from program administrative records that were produced and reported at the federal level; more than a quarter (28 percent) used data from routinely produced, but not typically reported, program records; 5 percent of the studies used data from administrative records of other federal agencies; and 14 percent used administrative records from state programs. Some studies may have used many types of data sources, which would suggest a heavy reliance on administrative and other program-related data. (See table 5.) The primary reported purpose of the studies was to evaluate ongoing programs, either on an office's own initiative or at the request of top agency officials. In the survey, most officials conducting evaluations reported having a formal and an ad hoc planning process for deciding what evaluation work they would do. Many criteria were indicated as being used to select which studies to do (such as a program office request, congressional interest, or continuation or follow-up of past work), but the criterion most often cited was the interest of high-level agency officials in the program or subject area. 
Moreover, about one-fourth of the studies were requested by top agency officials. About one-fourth of the studies were reported to be self-initiated. Most offices were not conducting studies for the Congress or as the result of legislative mandates; only 17 percent of the studies were reported to have been requested in those ways. (See table 6.) For those offices reporting that they conducted studies, about half of the 570 studies for which we have information evaluated ongoing programs. Ongoing programs of all sizes were evaluated, ranging in funding from less than $10 million to over $1 billion. About one-third of these studies evaluated demonstration programs and many of them cost less than $10 million. In contrast, few reported evaluations of new programs and many of these new programs reportedly were small (with funding under $10 million). Program evaluation was reported to be used more often for general program improvement than for direct congressional oversight. The studies' primary uses were most often said to be to improve program performance (88 percent), assess program effectiveness (86 percent), increase general knowledge about the program (62 percent), and guide resource allocation decisions within the program (56 percent). (See table 7.) Accordingly, these offices overwhelmingly (over three-fourths of respondents) reported program managers and higher-level agency officials as the primary audience of their studies. (See table 8.) About one-third of the offices reported support for budget requests as a primary use and one-third reported congressional audiences were primary users for their studies. Fewer respondents (20 percent) reported program reauthorization as a primary use of the study results. (See tables 7 and 8.) Program evaluation was not the primary responsibility for most of these offices and the offices often reported "seldom, if ever" performing the program evaluation roles we asked about. 
The role most likely to be characterized as "most often performed" was conducting studies of programs administered elsewhere in their agency. (See table 9.) About one-half of those who responded reported "sometimes" providing technical or design assistance to others or conducting joint studies, while a few offices saw their role as training others in research or evaluation methods. One office dealing with an evaluation mandate conducted work sessions with state and local program managers and evaluators as well as provided training to enhance state evaluation capabilities. Two-thirds of the offices seldom, if ever, designed evaluations conducted by other offices or agencies, trained others in research or evaluation methods, or approved plans for studies by others. Some of our interviewees thought that recent governmentwide reforms would increase interest in learning the results of federal programs and policies but would also complicate the task of obtaining that information. Devolution of federal program responsibility in the welfare and health care systems has increased interest in evaluation because the reforms are major and largely untested. However, in part because programs devolved to the states are expected to operate quite diversely across the country, some evaluation officials noted that evaluating the effects of these reforms was expected to be more difficult. In addition, federal budget reductions over the past few years were said by some not only to have reduced the level of federal evaluation activity but also to have diminished agency technical capacity through the loss of some of their most experienced staff. Because implementation of the Results Act's performance reporting requirements is not very far along (the first annual reports on program performance are not due until March 2000), several of our interviewees thought it was too early to estimate the effect of the Results Act. 
Some hoped the Act would increase the demand for results information and expand the role of data and analysis in decisionmaking. One interviewee thought it would improve the focus of the evaluations they now conduct. A few evaluation officials were concerned that a large investment would be required to produce valid and reliable outcome (rather than process) data. A few also noted that resources for obtaining data on a greatly expanded number of program areas would compete for funds used for more in-depth evaluations of program impact. Other evaluators noted that changes in the unit of analysis for performance reporting from the program level to budget account or organization might make classic program evaluation models obsolete. As we previously reported, the federal program officials who have already begun implementing performance measurement appeared to have an unusual degree of program evaluation support and found it quite helpful in addressing the analytic challenges of identifying program goals, developing measures, and collecting data. Many of these program officials said they could have used more of such assistance; but, when asked why they were not able to get the help they needed, the most common response was that it was hard to know in advance that evaluation expertise would be needed. In addition to using program evaluation techniques to clarify program goals and develop reliable measures, several of these program officials saw the need for impact evaluations to supplement their performance data. Their programs typically consisted of efforts to influence highly complex systems or events outside government control, where it is difficult to attribute a causal connection between their program and its desired outcomes. 
Thus, without an impact evaluation or similar effort to separate the effects of their programs from those of other external events or factors, program officials from the previous study recognized that simple examination of outcome measures may not accurately reflect their programs' performance. Some states' experiences suggested that performance measurement will take time to implement, and the federal experience suggests that it will not supplant the need for effectiveness evaluations. Two state officials described a multiyear process to develop valid and reliable measures of program performance across the state government. While performance measures were seen as useful for program management, some state agency and legislative staff also saw a continuing need for evaluations to assess policy impact or address problems of special interest or "big-picture" concerns, such as whether a government program should be continued or privatized. NPR was seen by several of those we interviewed as not having much of an effect on efforts to evaluate the results of their programs beyond increasing the use of customer surveys. This may have been because it was seen as primarily concerned with internal government operations, or because, as one agency official reported, its effect was most noticeable in only a few areas: regulatory programs and other intergovernmental partnerships. However, one agency official said that NPR had a big impact on reorienting their work toward facilitating program improvement, while two others felt that it reaffirmed changes they had already begun. Given constraints on federal budgets, some officials we interviewed generally did not expect federal evaluation resources to rise to meet demand, so they described efforts to leverage and prioritize available resources. 
While an evaluation official reported supplementing his evaluation teams with consultants, he also expressed concern that staff reductions in his unit had so weakened its technical expertise that it could not competently oversee consultants' work. Another evaluation official explained that they responded to the increasing demand for information by narrowing the focus and scope of their studies to include only issues with major budget implications or direct implications for agency action. A state official and two external evaluation experts felt that states grappling with new program responsibilities would have difficulty evaluating them as well, so that continued federal investment would be needed. A federal official, however, noted that private foundations could fund the complex rigorous studies needed to answer causal questions about program results. Some of the evaluators we interviewed expected that fewer impact studies would be done. Some expected that the range of their work might broaden to rely on less rigorous methods and include alternatives such as monitoring program performance and customer satisfaction. From our interviews, we learned that a few agencies have devolved responsibility for evaluations to the program offices, which may have more interest in program improvement. Another agency reported that it had built evaluation into its routine program review system, which provides continuous information on the success of the program and its outcomes, noting that it thereby reduced the need for special evaluation studies. One evaluation official reported that by having redefined evaluation as part of program management, program evaluation became more acceptable in his agency because it no longer appeared to be overhead. A few agencies reported that they were adapting the elements of their existing program information systems to yield information on program results. 
But in other agencies, evaluation officials and external experts thought that their systems were primarily focused on program process, rather than results. One evaluation official said that structural changes to, and a major investment in, their data systems would be required to provide valid and meaningful data on results. As program responsibility shifts to state and local entities, evaluation officials and others we interviewed described the need for study designs that can handle greater contextual complexity, new ways to measure outcomes, and the need to build partnerships with the programs' stakeholders. One of the officials saw classical experimental research designs as no longer feasible in programs that, because of increased state flexibility in how to deliver services, no longer represented a discrete national program and were unlikely to employ rigorous evaluation techniques entailing random assignment of particular program services to individuals. Others noted the need to develop evaluation designs that could reflect the multiple levels on which programs operate and the organizational partnerships involved. To address some of these complexities, federal offices with related program interests have formed task groups to attempt to integrate their research agendas on the effects of major changes in the health and welfare systems. Similarly, a few federal evaluation officials reported an interest in consulting with their colleagues in other federal offices to share approaches for tackling the common analytic problems they faced. In other strategies, federal evaluation officials described existing or planned efforts to change the roles they and other program stakeholders played in conducting evaluations. One agency has arranged for the National Academy of Sciences to work with state program officials and the professional communities involved to help build a prototype performance measurement system for federal assistance to state programs. 
One evaluation office expects to shift its role toward providing more technical assistance to local evaluators and synthesizing their studies' results. Another federal office has delegated some evaluation responsibility to the field while it synthesizes the results to answer higher-level policy questions, such as which types of approaches work best. The Results Act recognizes and encourages the complementary nature of program evaluations and performance measures by asking agencies to provide a summary of program evaluation findings along with performance measurement results in their annual performance reports. One federal evaluation official said his agency had efforts under way to "align" program evaluation and performance measurement through, for example, planning evaluations so that they will provide the performance data needed. But the official also expressed concern about how to integrate the two types of information. Officials from states that had already begun performance measurement and monitoring said they would like to see the federal government provide more leadership by (1) providing a catalog of performance measures available for use in various program areas and (2) funding and designing impact evaluations to supplement their performance information. Seeking to improve government performance and public confidence in government, the Results Act has instituted new requirements for federal agencies to report on their results at the same time that other management reforms may complicate the task of obtaining such information. Comparison of current federal program evaluation resources with the anticipated challenges leads us to several conclusions. First, federal agencies' evaluation resources have important roles to play in responding to increased demand for information on program results, but--as currently configured and deployed--they are likely to be challenged to meet these future roles. 
It is implausible to expect that, by simply conducting more program evaluation studies themselves, these offices can produce data on results across all activities of the federal government. Moreover, some agencies reported that they had reduced their evaluation resources to the point that the remaining staff feel unable to meet their current responsibilities. Lastly, the devolution of some program responsibilities to state and local governments has increased the complexity of the programs they are being asked to evaluate, creating new challenges. Second, in the future, carefully targeting and reshaping the use of federal evaluation resources and leveraging federal and nonfederal resources show promise for addressing the most important questions about program results. In particular, federal evaluators could assist program managers to develop valid and reliable performance reporting by sharing their expertise through consultation and training. Early agency efforts to meet the Results Act's requirements found program evaluation expertise helpful in managing the numerous analytical challenges involved, such as clarifying program goals and objectives, developing measures of program outcomes, and collecting and analyzing data. In addition, because performance measures will likely leave some gaps in needed information, strategic planning for future evaluations might strive to fill those gaps by focusing on those questions judged to have the most policy importance. In many programs, performance measures alone are not sufficient to establish program impact or the reasons for observed performance. Program evaluations can also serve as valuable supplements to program performance reporting by addressing policy questions that extend beyond or across program borders, such as the comparative advantage of one policy alternative to another. 
Finally, without coordination, it is unlikely that the increasingly diverse activities involved in evaluating an agency's programs will efficiently supplement each other to meet both program improvement and policymaking information needs. As some agencies devolve some of the evaluations they conducted in the past to program staff or state and local evaluators, they run the risk that, due to differences in evaluation resources and questions, data from several studies conducted independently may not be readily aggregated. Thus, in order for such devolution of evaluation responsibility to better provide an overall picture of a national program, those evaluations would have to be coordinated in advance. Similarly, as federal agencies increasingly face common analytic problems, they could probably benefit from cross-agency discussion and collaboration on approaches to those problems. The Director of OMB commented on a draft of this report and generally agreed with our conclusions. OMB noted that other countries are experiencing public sector reforms that include a focus on results and increasing interest in program evaluation. OMB also provided technical comments that we have incorporated as appropriate throughout the text. OMB's comments are reprinted in appendix II. We are sending copies of this report to the Chair and Ranking Minority Member of the House Committee on Government Reform and Oversight, the Director of OMB, and other interested parties. We will also make copies available to others on request. Please contact me or Stephanie Shipman, Assistant Director, at (202) 512-7997 if you or your staff have any questions. Major contributors to this report are listed in appendix III. The 23 federal executive agencies (13 cabinet-level departments and 10 independent agencies) that we surveyed are listed as follows. These represent 23 of the 24 executive agencies (we excluded the Department of Defense) covered by the Chief Financial Officers Act. 
The 24 represent about 97 percent of the executive branch's full-time staff and cover over 99 percent of the federal government's outlays for fiscal year 1996. To identify the roles and resources expended on federal program evaluation, we surveyed all offices (or units) in the 23 executive branch departments and independent agencies that we identified as conducting evaluation in fiscal year 1995. We defined evaluation as systematic analysis using objective measures to assess the results or the effects of federal programs, policies, or activities. To identify these evaluation offices, we (1) began with the list of evaluation offices that we surveyed in 1984, (2) added offices based on a review of office titles implying analytical responsibilities and discussions with experts knowledgeable about evaluation studies, and (3) talked with our liaison staff and other officials in the federal departments and agencies to ensure broad yet appropriate survey coverage. In some instances, the survey was distributed to offices throughout an agency by agency officials, while in other instances we sent the survey directly to named evaluation officials. We attempted to survey as many evaluation offices as possible; however, in some cases, we may not have been told about or directed to all such offices. Therefore, we cannot assume that we have identified all offices that conducted program evaluation studies in fiscal year 1995. Overall, we received about 160 responses, of which 81 were from offices that conducted such studies. The survey was directed toward results-oriented evaluation studies, such as formal impact studies, assessments of program results, and syntheses or reviews of evaluation studies. We sought to exclude studies that focused solely on assessing client needs, describing program operations or implementation, or assessing fraud, compliance, or efficiency.
However, we allowed the individual offices to (1) define "program," since a federal program could be tied to a single budget account, represent a combination of several programs, or involve several state programs, and (2) determine whether or not they did this type of study; if not, they could exempt themselves from completing the survey. We did not verify the accuracy of the responses provided by evaluation units. We also had some information on fiscal year 1996 activities but did not report those results since they were comparable to the fiscal year 1995 results. Some respondents were unable to complete different parts of the survey. About one-third of the respondents did not report either the office's budget, its number of full-time equivalent staff (FTE), cost information about studies, or the sources of data used in the studies. For some questions, respondents were asked to answer in terms of the number of studies conducted, and we used the total number of studies indicated by all respondents to the question as the denominator when computing percentages. However, when the level of nonresponse to individual survey questions was above 20 percent or was unclear due to incomplete information on how many studies had been reported on, we used the full complement of 928 studies to provide a conservative estimate. The questions for which we reported results are reproduced on the following pages.
GAO reviewed federal agencies' efforts to provide information on federal program results, focusing on: (1) the current resources and roles for program evaluation in federal agencies; (2) the anticipated effects of governmentwide reforms and other initiatives on evaluation of federal programs; and (3) potential strategies for agencies to respond to the anticipated effects and provide information on program results. GAO noted that: (1) existing federal evaluation resources--at least as currently configured and deployed--are likely to be challenged to meet increasing demands for program results information; (2) agencies reported devoting variable but relatively small amounts of resources to evaluating program results; (3) moreover, agencies reported that the primary role of program evaluation was internally focused on program improvement, rather than direct congressional or other external oversight; (4) interest in the program by high-level officials was most often cited as a criterion for initiating evaluation work; a small portion of studies were said to be conducted for a congressional committee or in response to a legislative mandate; (5) some of the evaluation officials and experts that GAO interviewed anticipated not only increased interest in learning the results of federal programs and policies but also additional complications in obtaining that information; (6) some evaluation officials from states with performance measurement experience noted that effectiveness evaluations would continue to be needed to assess policy impact and address problems of special interest or larger policy issues, such as the need for any government intervention at all in an area; (7) to meet the anticipated increase in demand for program results information as well as the associated technical challenges, some evaluation officials GAO interviewed described efforts to leverage both federal and nonfederal resources; (8) however, some agencies anticipated that major investments in their
data systems would be required to produce reliable data on program outcomes; and, in a prior study, program officials were concerned that reliance on less rigorous methods would not provide an accurate picture of program effectiveness; (9) moreover, while some federal evaluation officials envisioned providing increased technical assistance to state and local evaluators, a few state evaluation officials suggested an alternative strategy for the federal government; (10) GAO drew several conclusions from its comparison of current federal evaluation resources with the anticipated challenges to meeting increased demand for information on program results; (11) federal evaluation resources have important roles to play in responding to increased demand for information on program results, but--at least as currently configured and deployed--they are likely to be challenged to meet that demand; and (12) in the future, carefully targeting federal agencies' evaluation resources shows promise for addressing key questions about program results.
The Federal Reserve, as the United States' central bank, has primary responsibility for maintaining the nation's cash supply. In carrying out this responsibility, Federal Reserve Banks perform various cash-related functions to meet the needs of the depository institutions served by the Federal Reserve Banks. At the 37 Federal Reserve Banks and Branches which make up the Federal Reserve System, the cash operations function is responsible for shipping cash to meet the needs of depository institutions, receiving shipments of new currency from the Bureau of Engraving and Printing, new coin from the U.S. Mint, and incoming deposits of excess and unfit currency and coin from depository institutions. In addition to maintaining custodial controls over the cash in its possession, each Federal Reserve Bank and Branch processes currency received from circulation and records and summarizes the various accounting transactions associated with these activities. While the 37 Federal Reserve Banks and Branches perform the same cash-related functions, they may use different systems and processes to manage and account for the cash under their control. The Federal Reserve Banks and Branches in three of the System's 12 districts--Atlanta, Philadelphia, and San Francisco--use the Cash Automation System (CAS) to manage and account for cash under their control. CAS is an electronic inventory system which, among other features, tracks coin and currency activities and balances by denomination and identifies bank operating units with custodial responsibility for cash. Certain data maintained in CAS are used to provide daily updates to the bank's general ledger system. CAS data are also used by bank officials to prepare monthly currency activity reports. These reports, which track each Federal Reserve Bank's monthly currency activities and end-of-month vault balance, are used by the Federal Reserve Board to monitor currency activities across the Federal Reserve System. 
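The report describes CAS only at a high level: an electronic inventory system that tracks coin and currency activity and balances by denomination and feeds monthly currency activity reports. As a minimal sketch of that idea only (the class, method names, and report fields below are illustrative assumptions, not the actual CAS design):

```python
from collections import defaultdict

class VaultInventory:
    """Toy sketch of denomination-level cash tracking.

    Illustrative only -- not the actual CAS design. It keeps running
    balances by denomination and accumulates monthly receipt and
    shipment activity, the two things the report says CAS tracks.
    """

    def __init__(self):
        self.balances = defaultdict(int)   # denomination -> value held
        self.receipts = defaultdict(int)   # value received this month
        self.shipments = defaultdict(int)  # value shipped this month

    def receive(self, denomination, amount):
        """Record an incoming deposit or new-currency shipment."""
        self.balances[denomination] += amount
        self.receipts[denomination] += amount

    def ship(self, denomination, amount):
        """Record a shipment to a depository institution."""
        if amount > self.balances[denomination]:
            raise ValueError("cannot ship more than is held")
        self.balances[denomination] -= amount
        self.shipments[denomination] += amount

    def monthly_activity_report(self):
        """Summarize the month's activity and the end-of-month vault balance."""
        return {
            "receipts": dict(self.receipts),
            "shipments": dict(self.shipments),
            "end_of_month_balance": sum(self.balances.values()),
        }
```

A system structured this way can answer both of the questions the report attributes to CAS: the current balance by denomination, and the month's activity totals used in the currency activity reports.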
In September 1996, we reported on the results of a review of currency activity reports prepared by the Los Angeles Branch. The review responded to concerns about reported inaccuracies in certain of the bank's monthly currency activity reports. The review's objectives were to determine the nature of currency reporting inaccuracies and review actions intended to resolve them. Our review found that certain data needed for the October through December 1995 currency activity reports were forced to ensure that the reports agreed with the Los Angeles Branch's end-of-month balance sheet. As a result, analysis by a bank analyst showed that receipts from circulation were understated by $5.8 million in October, overstated by $61.8 million in November and understated by $111 million in December. Our review noted problems with the reporting of currency activities which raised concern about the quality of the Los Angeles Reserve Branch's internal control environment and potential CAS system limitations which could affect currency accounting and reporting. In response to the review's findings and recommendations, the Federal Reserve Board took a number of immediate actions specific to the Los Angeles Branch including (1) revising policies and procedures for preparing the monthly currency activity report, (2) conducting an unannounced 100-percent count of the Los Angeles Branch's currency and coin holdings and comparing the results to the bank's balance sheet, and (3) conducting an internal review of the bank's cash operations and related financial records. The Federal Reserve Board reported that (1) the results of the physical count confirmed that the Los Angeles Branch's balance sheet accurately reflected its currency and coin holdings and (2) its examiners found that the accounting for the cash handled by the bank was accurate and that proper safeguards and controls existed to ensure the integrity of the bank's financial records. 
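The underlying problem the review identified was that report figures were forced to agree with the balance sheet rather than reconciled to it. A sketch of the alternative, assuming a simple activity identity (the field names and the identity itself are illustrative, not the Board's actual report layout):

```python
def check_currency_activity(opening, receipts, shipments, destroyed, closing):
    """Flag, rather than force, any imbalance between reported activity
    and the end-of-month balance-sheet figure.

    Returns the discrepancy, which is zero when the identity holds:
        opening + receipts - shipments - destroyed == closing
    A nonzero result signals that the activity data, not the closing
    balance, should be investigated before the report is filed.
    """
    return closing - (opening + receipts - shipments - destroyed)
```

Under this approach, the October-December 1995 misstatements would have surfaced as nonzero discrepancies to be researched, instead of being absorbed into the receipts-from-circulation line.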
In addition to actions addressing the Los Angeles Branch's currency reporting and controls, the Federal Reserve Board arranged for an external examination of internal control over cash operations at certain banks that use CAS to manage and account for cash operations--the subject of this report. Our September 1996 report recommended that, given the problems in preparing the currency activity report using CAS data in Los Angeles, the Federal Reserve Board require an external review of internal controls. In response to our recommendation, the Federal Reserve Board hired Coopers & Lybrand L.L.P., an independent public accounting firm, to examine and report on managements' assertions about the effectiveness of the internal control structure over financial reporting and safeguarding for cash at three banks--the Federal Reserve Bank of Atlanta's Home Office, the Federal Reserve Bank of San Francisco's Los Angeles Branch, and the Federal Reserve Bank of Philadelphia. These banks represent 3 of the 12 cash operations located in the Reserve System which use CAS to provide inventory and management control and accounting for cash-related activities. Table 1 provides 1996 currency data on the relative size and volume of currency processing activities at the 3 locations covered by Coopers & Lybrand's external examinations, the 12 which use CAS, and the entire 37 banks and branches. The objective of Coopers & Lybrand's examinations was to opine on whether managements' assertions on the effectiveness of internal controls were fairly stated based on the internal control criteria used by management. In performing its examinations and concluding on the reliability of managements' assertions, Coopers & Lybrand performed an attest engagement which is governed by the AICPA's Attestation Standards. The attestation standards provide both general and specific guidance which is intended to enhance the consistency and quality of these engagements.
The attestation standards consist of general, fieldwork, and reporting standards which apply to all attestation engagements and individual standards which apply to specific types of attestation engagements. The attestation standards supplement existing auditing standards by reinforcing the need for technical competence, independence in attitude, due professional care, adequate planning and supervision, sufficient evidence, and appropriate reporting. In addition to the general, fieldwork, and reporting attestation standards, Coopers & Lybrand's examination at the three Reserve Banks was also subject to requirements of a specific attestation standard--Statement on Standards for Attestation Engagements No. 2, Reporting on an Entity's Internal Control Structure Over Financial Reporting. This standard provides guidance on planning, conducting, and reporting on the engagement, including evaluating the design and operating effectiveness of internal controls. A key provision of the standard is that management use reasonable control criteria which have been established by a recognized body in evaluating the internal control structure's effectiveness. This requirement ensures that management uses commonly understood and/or accepted control criteria in concluding on the internal control structure's effectiveness and that the practitioner uses the same criteria in forming an opinion on management's assertion. Management for each of the Federal Reserve Banks covered by Coopers & Lybrand's examinations based their assessments of internal control effectiveness on criteria contained in the Internal Control-Integrated Framework issued in September 1992 by the Committee on Sponsoring Organizations of the Treadway Commission (COSO).
To develop a broad understanding of internal control and establish standards for assessing its effectiveness, COSO developed a structured approach--the Integrated Framework--which defines internal control and describes how it relates to an entity's operations. Internal control represents the process, designed and operated by an entity's management and personnel, to provide reasonable assurance that fundamental organizational objectives are achieved. The Integrated Framework describes internal control in terms of objectives, essential components of internal control, and criteria for assessing internal control effectiveness. Internal control objectives--what internal controls are intended to achieve--fall into three distinct but overlapping categories: operations--relating to effective and efficient use of an entity's resources; financial reporting--relating to preparing reliable financial statements; and compliance--relating to an entity's compliance with laws and regulations. Safeguarding controls are a subcategory within each of these control objectives. Safeguarding controls--those designed to prevent or promptly detect unauthorized acquisition, use, or disposition of an entity's resources--are primarily operations controls. However, certain aspects of safeguarding controls can also be considered compliance and financial reporting controls. When legal or regulatory requirements apply to use of resources, operations controls designed to safeguard the efficient and effective use of resources also address compliance objectives. Similarly, objectives designed to ensure that losses associated with the use or disposition of resources are properly recognized and reflected in the entity's financial statements also address financial reporting objectives. In May 1994, COSO issued an addendum to its Integrated Framework to provide specific reporting guidance on controls concerning safeguarding of assets. 
COSO stated that there is a reasonable expectation that a management report will cover not only controls to help ensure that transactions involving an entity's assets are properly reflected in the financial statements, but also controls to help prevent or promptly detect unauthorized acquisition, use, or disposition of the underlying assets. COSO believes it is important that this expectation be met. The addendum provided suggested wording for management's report on internal control over financial reporting to also specifically state safeguarding of assets when covered by management's report. Internal control, as described in the Integrated Framework, consists of five essential and interrelated components: control environment, risk assessment, control activities, information and communication, and monitoring. The control environment represents the control consciousness of an entity, its management, and staff. Risk assessment refers to the awareness and management of relevant internal and external risk associated with achieving established objectives. Control activities represent the operating policies and procedures designed to help ensure that management's directives--desired actions intended to address risks--are carried out. Information and communication refers to the need for relevant and useful information to be communicated promptly to management and staff for use in carrying out their responsibilities. The monitoring component refers to the need to monitor and assess over time the effectiveness of internal control policies and procedures in achieving their intended objectives. The nature and extent to which an entity's internal control structure incorporates the five control components represent criteria that can be used in assessing the internal control effectiveness of operating, financial reporting, and compliance controls. Management can assess and report on the effectiveness of any of the three categories of control objectives. 
Internal controls can be judged effective if, for each category of control objective reported on, management has reasonable assurance that each of the five control components has been effectively incorporated into the entity's internal control structure. COSO recognized that determining effectiveness was a subjective judgment. Similarly, with respect to effectiveness of safeguarding controls, controls can be judged effective if management has reasonable assurance that unauthorized acquisition, use, or disposition of an entity's assets that could have a material effect on the financial statements are being prevented or detected promptly. For each examination, Coopers & Lybrand concluded that Federal Reserve Bank management fairly stated its assertion that the bank maintained an effective internal control structure over financial reporting and safeguarding for cash as of the date specified by management based on criteria established in the Internal Control--Integrated Framework issued by COSO. Coopers & Lybrand's examinations were conducted at different times during the late summer and fall of 1996 because management for each of the three Reserve Banks made their assertions about the effectiveness of internal controls as of different specified dates (Atlanta, September 30, 1996; Los Angeles, August 31, 1996; and Philadelphia, October 31, 1996). In making an assertion as of a point in time, the scope of management's assessment of internal controls is limited to the design and operating effectiveness of internal controls in place on the date of management's assertion. In addition to its positive conclusions on the reliability of management's assertion on the effectiveness of financial reporting and safeguarding controls, Coopers & Lybrand's report contains standard language related to the inherent limitations in any internal control structure and projections of results of any internal control structure evaluation to other periods. 
This language, required by the AICPA's Attestation Standards, is intended to remind readers that (1) internal controls, no matter how well designed and operated, can provide only reasonable assurance that internal control objectives are achieved, and (2) projections of the results of any internal control structure evaluation to any other period is subject to the risk that the internal control structure may be inadequate because of changes in conditions, or the degree of adherence to policies and procedures may deteriorate. To perform our work, we met with Federal Reserve officials and the Coopers & Lybrand partner and audit manager responsible for the examination and discussed the nature of the examination of internal controls over financial reporting and safeguarding for cash. We also discussed the applicable attestation standards and internal control criteria used by the firm in conducting the examination. We reviewed the applicable attestation standards and evaluation criteria (Internal Control--Integrated Framework issued by COSO) used by the bank's management and Coopers & Lybrand to assess the effectiveness of internal controls over financial reporting and safeguarding for cash. We also reviewed the Coopers & Lybrand working papers supporting its opinions on internal controls at the Atlanta, Los Angeles, and Philadelphia Federal Reserve Banks. We looked for evidence that the work had been planned and performed in accordance with applicable attestation standards. We also looked for evidence that Coopers & Lybrand's work addressed the applicable internal control criteria. Where necessary, we obtained additional understanding of the procedures performed through discussions with the partner and audit manager of Coopers & Lybrand. 
Where Coopers & Lybrand's working papers indicated that it used work performed by the Federal Reserve Banks' General Auditors with respect to electronic data processing controls, we conducted interviews with the General Auditor staff for the three banks and Federal Reserve Automation Services and reviewed their applicable internal audit working papers. We visited the Atlanta, Los Angeles, and Philadelphia banks to enhance our understanding of the respective internal control structures over financial reporting and safeguarding for cash. During our visits, which took place during April and May 1997, we observed the processes and internal controls in the respective bank's cash department that had been identified and documented by Coopers & Lybrand, and held discussions with management and staff of the cash department and the internal audit department. We performed our work from January 1997 through June 1997. Our review was performed in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Federal Reserve System Board of Governors. On August 1, 1997, the Board of Governors of the Federal Reserve System provided us with comments that are included in appendix II and discussed in the agency comments section of this report. In performing its examinations, Coopers & Lybrand (1) obtained an understanding of the procedures and internal controls, (2) evaluated the design effectiveness of the controls, (3) tested and evaluated the operating effectiveness of the controls, and (4) formed opinions about whether managements' assertions regarding the effectiveness of the internal controls were fairly stated, in all material respects, based on the COSO control criteria. Internal controls usually involve two elements: a policy establishing what should be done and procedures to effect the policy. 
The procedures include a range of activities such as approvals, authorizations, verifications, reconciliations, physical security, and separation of duties. Coopers & Lybrand found that the Federal Reserve has developed custody control standards and procedures that provide a framework for establishing systems of internal controls to protect cash processed and stored at the banks. Coopers & Lybrand's working papers described the cash operating process the banks followed in managing, controlling, and accounting for cash operations. This process is broken down into four major areas: (1) receiving/shipping of cash, (2) processing of currency to check the accuracy of deposits from depository institutions, identify counterfeit currency, and determine the currency's fitness for recirculation, (3) vault storage of cash, and (4) cash administration. The cash operations followed by the banks are discussed in more detail in appendix I. Coopers & Lybrand's work focused on the internal controls designed to properly record, process, and summarize transactions to permit the preparation of reliable financial statements and to maintain accountability for assets (financial reporting controls) and safeguard assets against loss from unauthorized acquisition, use, or disposition (safeguarding controls). These controls include two categories of information system control activities which serve to ensure completeness, accuracy, and validity of the financial information in the system. In order to determine whether the internal controls provided reasonable assurance that losses or misstatements material in relation to the financial statements would be prevented or detected as of the date of management's assertion, Coopers & Lybrand tested the operating effectiveness of the internal controls. The testing methods included observation, inquiry, and inspection. No one specific control test is necessary, applicable, or equally effective in every circumstance. 
Generally, a combination of these types of control tests is performed to provide the necessary level of assurance. The types of tests performed for each control activity are determined by the auditor using professional judgment and depend on the nature of the control to be tested and the timing of the control test. For example, documentation of some control activities may not be available or relevant and evidence about the effectiveness of operation is obtained through observation or inquiry. Also, some activities, such as those relating to the resolution of exception items, may not occur on the date that the auditor is conducting the tests. In those cases, the auditor needs to inquire about the procedures performed when exceptions occur. Observation tests are conducted by observing entity personnel actually performing control activities in the normal course of their duties. For example, Coopers & Lybrand observed the physical separation between the carriers and the receiving and shipping teams, the use of locks and seals on the containers used for storing currency, and the preparation of the end of day proof by each of the teams. In currency processing, Coopers & Lybrand observed preparation of the processing unit proof, transfer of currency to and from the processing teams, and processing team operations. Observation of processing operations documented in their working papers included the handling of currency rejected by the high speed machine and its processing on the slower speed machine, and the physical transfer of rejected currency from the processing team to the cancellation team. Inquiry tests are conducted by making either oral or written inquiries of entity personnel involved in the application of specific control activities to determine what they do or how they perform the specific control activity. 
For example, Coopers & Lybrand's inquiries of bank personnel included asking about procedures performed when containers stored in the vault are found to have broken seals and when discrepancies in shipments are reported by the depository institutions. Inspection tests are conducted by examining documents and records for evidence (such as the existence of initials, signatures, or other indications) that a control activity was applied to those documents and records. Coopers & Lybrand used inspection to test controls such as the daily reconciliation of CAS and the general ledger system, the end of day proofs prepared by each team, vault inventories, and monitoring logs prepared by cash department management personnel. Similarly, Coopers & Lybrand tested computer controls through observation, inquiry, and inspection. For example, they observed the enforcement of physical access controls such as logging of visitors and video surveillance. They asked management about the control procedures over changes to the CAS program code and corroborated the information they were given by interviewing system users and application developers. They inspected a system log to verify that backup tapes were being produced on schedule. For many of the computer controls tests in their work program, Coopers & Lybrand consulted with Federal Reserve Bank's General Auditors to gain an understanding of the computer controls and/or examined their working papers to further corroborate information that Coopers & Lybrand obtained through observation, inquiry, and inspection. In addition to other tests conducted by inspection, observation, and inquiry, the banks' internal audit working papers evidenced tests based upon independent verification of compliance with computer control procedures. 
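One control Coopers & Lybrand inspected was the daily reconciliation of CAS to the general ledger. The report does not detail the banks' actual procedure; as a minimal sketch of what such a reconciliation checks, assuming CAS carries cash values by custodial unit:

```python
def reconcile(cas_by_unit, gl_cash_balance):
    """Sketch of a daily CAS-to-general-ledger reconciliation.

    Illustrative structure only. cas_by_unit maps each custodial unit
    (vault, processing teams, receiving/shipping teams) to the cash
    value CAS shows it holding; gl_cash_balance is the cash balance
    per the general ledger. Returns (in_balance, difference).
    """
    cas_total = sum(cas_by_unit.values())
    difference = cas_total - gl_cash_balance
    return difference == 0, difference
```

An inspection test of this control would examine the signed-off reconciliation records for selected days and verify that any nonzero difference was researched and resolved.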
For example, the General Auditors for the Federal Reserve Bank in Philadelphia selected five days of work for each of five cash processing rooms and examined system reports and manual logs to verify that the high-speed currency processing machines were tested daily and that they returned acceptable results before being put into production. The results of our review disclosed no instances in which Coopers & Lybrand did not comply, in all material respects, with the AICPA's Attestation Standards in the work described above. We found that Coopers & Lybrand's working papers adequately documented that it had planned, performed and supervised the work. The working papers contained evidence that the auditor had an appropriate level of knowledge about the Federal Reserve Banks and had considered relevant work from prior years' audits, such as descriptions of the internal control structure. The scope of the examination was detailed in a written engagement letter. We found that the work was performed by staff who were independent with respect to the Federal Reserve Banks and had adequate experience. Also, the working papers evidenced that the staff had been properly supervised. For example, key working papers were reviewed by the Audit Manager and Partner. We found that Coopers & Lybrand used audit tools to assist it in documenting the internal controls for each of the processes included in cash operations. For example, its auditors prepared worksheets which identified internal control objectives, the related risks and the control activities designed to address the objectives. Also, they prepared work programs which described the procedures to be performed to test the control activities, and they documented the results of their tests in written working papers. They used similar audit tools for their review of computer controls, documenting in their working papers the control objectives to be tested, the procedures performed, and their conclusions. 
In accordance with the attestation standards, the working papers contained written assertions made by management about the effectiveness of the bank's internal controls and contained a written management representation letter. In commenting on a draft of this report, the Board of Governors of the Federal Reserve System fully concurred with our conclusion on Coopers & Lybrand's work. The Board of Governors indicated that our conclusions are consistent with those of the Board's Inspector General. Also, the Board of Governors noted that the financial controls in each Reserve Bank's operations, including cash, will be evaluated on an ongoing basis as part of Coopers & Lybrand's audit procedures in order to render an opinion on the financial statements. Further, the cash operations controls are reviewed regularly by the Banks' internal auditors, the Board's financial examiners, Board staff who conduct periodic operations reviews of Reserve Bank cash functions, and the Department of Treasury reviews of currency destruction activities. We are sending copies of this report to the Chairman of the Board of Governors of the Federal Reserve System; the Secretary of the Treasury; the Chairman of the House Committee on Banking and Financial Services; the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; and the Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-9406 if you or your staff have any questions. Major contributors to this report are listed in appendix III. As the United States' central bank, the Federal Reserve has primary responsibility for maintaining the nation's cash supply. In carrying out this responsibility, Federal Reserve Banks perform various cash-related operations. At the 37 Federal Reserve Banks and Branches, the cash operations function is responsible for receiving new coin from the U.S. 
Mint, new currency from the Bureau of Engraving and Printing, cash from depository institutions, currency processing, safeguarding cash held on deposit, and shipping cash to meet the needs of depository institutions. In addition, Federal Reserve Banks must record and summarize the various accounting transactions associated with their cash-related activities. While each Federal Reserve Bank performs the same basic cash-related functions, banks may use different systems and procedures to manage and account for the cash under their control. Federal Reserve Banks in Atlanta, Los Angeles, and Philadelphia use the Cash Automation System (CAS) to provide inventory, safeguarding, and accounting control over currency processing. CAS is an electronic inventory system which, among other features, tracks coin and currency activities and balances by denomination, and identifies bank operating units with custodial responsibility for cash. Certain data maintained in CAS are used to provide daily updates to the Federal Reserve's general ledger system. CAS data are also used by Federal Reserve officials to prepare monthly currency activity reports. In addition to CAS, the three Federal Reserve Banks use procedural controls to safeguard cash and account for processing-related activities. These controls include restricted access, joint custody, segregation of processing-related duties, video surveillance cameras, supervisory review, and monitoring. Presented below is a general description of the cash operations functions at the three Federal Reserve Banks examined by Coopers & Lybrand. While the description focuses on currency operations, the handling and control procedures over coin are similar to those for currency, with a few notable differences. 
For example, coins are handled in bags and their content is verified through a weighing process, while currency notes received from depository institutions are individually checked by high-speed equipment for accuracy, fitness, and authenticity. Also, coin is stored in a separate vault from currency. Each work day, depository institutions may notify Federal Reserve Banks electronically of currency that they are depositing with or ordering from each bank. The notification includes the dollar amount and denominational breakdown for the deposit or order. The cash is transported between the Federal Reserve Banks and depository institutions by armored carriers which enter the bank buildings through secured entrances. To ensure the integrity of the currency received from or transferred to the carriers, the Federal Reserve Banks use a minimum of two-person receiving or shipping teams. These teams are always physically separated from the carriers as shipments are unloaded or loaded by the carriers. For example, carriers unload or load the currency into a glass-walled room (sometimes called an anteroom) which is bordered on one end by the carriers' entrance and on the other end by the receiving or shipping room. Each anteroom has two sets of locking doors on either end. The receiving or shipping team cannot enter the anteroom when the carrier is unloading or loading currency. Currency transfers are accepted on a "said to contain" basis. Carriers verify currency transfers by checking the number and denomination of currency bags to see if they match the stated contents on the manifests. When currency is received by a Federal Reserve Bank, the receiving team counts the number of bags received from each depository institution and independently compares this to the carrier's manifest before accepting the currency from the carrier. Subsequently, the receiving team counts the bundles of currency to verify the total amount received. 
These counts of the number of bundles received for each denomination are performed independently by each team member. The team members then independently put their counts into CAS where they are compared to each other and to the deposit notification received from the depository institution. If the counts match, the depository institution automatically receives credit for the shipment. If the counts do not match, the difference is investigated and must be resolved before the end of day closeout or reconciliation process can be completed. After the counts are completed, the currency is transferred to a vault in a sealed container where it is safeguarded until it goes through currency processing. When currency is being shipped to fill an order, the currency is transferred from the vault to a shipping team. The shipping team inspects the integrity of the seals on the containers prior to accepting accountability for the currency. The shipping team prepares the order by placing the currency in sealed bags. The team members independently count the order and put their counts into CAS where they are compared to each other and to the order notification received electronically from the depository institution. Because the carrier accepts the shipment on a said-to-contain basis, any discrepancies subsequently identified by the depository institution in the amounts of currency in the bags must be resolved with the Federal Reserve Bank that filled the order. At the end of each shift, each receiving and shipping team prepares a daily proof to ensure that all of the currency transferred to the team from a carrier or the vault is accounted for either in the team's ending inventory or through transfers to the vault or carriers. Currency received from depository institutions is processed to check the accuracy of the deposit, identify counterfeit currency, and determine the currency's fitness. 
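The dual independent count described above amounts to a three-way comparison of each team member's count against the other and against the deposit notification. A minimal sketch of that check, with illustrative names (this report does not describe CAS's actual interface):

```python
def verify_deposit(count_a, count_b, notification):
    """Compare two independent bundle counts (by denomination) with
    each other and with the depository institution's electronic
    deposit notification.

    Returns (credited, discrepancies). Credit is granted only when all
    three records agree for every denomination; any difference must be
    investigated before the end-of-day closeout can be completed.
    """
    discrepancies = []
    for denom in sorted(set(count_a) | set(count_b) | set(notification)):
        a = count_a.get(denom, 0)       # first team member's count
        b = count_b.get(denom, 0)       # second team member's count
        n = notification.get(denom, 0)  # institution's stated deposit
        if not (a == b == n):
            discrepancies.append((denom, a, b, n))
    return (len(discrepancies) == 0, discrepancies)
```

When all three records match, the depository institution automatically receives credit; any mismatch blocks the closeout until resolved.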
The processing takes place in glass-walled rooms which have numerous surveillance cameras and locked doors that enable the processing team to control access to each room and its contents. Processing teams are composed of either two or three members who share joint custody and accountability for the team's currency holdings and processing activities. On a scheduled basis, the processing machines are tested to ensure they are performing within established tolerance levels. The tests consist of running currency test decks through the machines to determine whether they are correctly counting the notes, identifying and rejecting different denominations and counterfeit currency, and identifying and shredding soiled currency. Testing is performed by trained currency processing staff who are not directly involved in routine processing activities. The test results are tracked through automated output reports which are reviewed by the test operator and management. If the test results indicate the need for service, site engineers are available to service the machines. Test decks are only used for a specified number of tests after which the test decks are destroyed. Custody of the test decks is tracked in the CAS inventory and access is restricted through the use of locked storage containers. All currency received from circulation is processed initially on a high-speed machine which counts the notes and tests for denomination, soil level, and authenticity. One of three things can happen to individual currency notes as they are processed on the high-speed machine. Currency which passes the machine's various tests is considered fit for recirculation and is repackaged with a new currency strap which identifies the Federal Reserve Bank, the processing team, and the date the currency was processed. 
Currency failing only the soil test is shredded on-line by the high-speed machine which generates output reports that track the number and denomination of currency shredded during the shift. The high-speed machine also rejects currency for incorrect denomination, questionable soil levels, and/or potential counterfeit. This currency undergoes further processing to check denomination and authenticity on a slower speed machine. Differences in count are tracked by the automated output reports and recorded in CAS as adjustments to the depository institution's deposit. Depository institutions are notified--via a written adjustment advice--of changes to their previously recorded deposit amounts. Rejected currency is transferred to a slower machine for further processing and inspection along with the straps that identify the depository institution that packaged the currency. The operator enters the rejected currency into the slower speed machine where it is retested for denomination, soil level, and counterfeit. Currency which passes the retests is shredded on-line and tracked in automated output reports. Currency which fails any one of the retests is rejected by the slower speed machine. The rejects, along with the cause for the rejections, are tracked and separately reported in automated output reports. These reports are also used to adjust the depository institution's account with the Federal Reserve Bank for the amount of the difference. Currency rejected by the slower speed machine is sorted for off-line destruction or transfer. Counterfeit items are stamped "Counterfeit" and transferred daily from the processing team to independent clerks who examine, count, and collect counterfeit currency for shipment to the U.S. Secret Service for follow-up and analysis. Currency rejected for denomination and soil level is transferred daily to a separate team for cancellation and subsequent off-line destruction. 
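The three possible outcomes of high-speed processing described above can be sketched as a simple decision function. The field names and soil threshold here are assumptions for illustration, not the machines' actual parameters:

```python
def disposition(note):
    """Classify a note processed by the high-speed machine:
    'fit' notes are repackaged with a new strap for recirculation,
    notes failing only the soil test are shredded on-line, and all
    other failures are rejected to the slower-speed machine for
    further inspection."""
    fails_denom = not note["denomination_ok"]
    fails_auth = not note["authentic"]
    fails_soil = note["soil_level"] > note.get("soil_limit", 0.5)  # assumed limit
    if not (fails_denom or fails_auth or fails_soil):
        return "fit"      # passes all tests; restrapped for recirculation
    if fails_soil and not (fails_denom or fails_auth):
        return "shred"    # destroyed on-line, tracked in output reports
    return "reject"       # retested on the slower-speed machine
```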
In the presence of the processing team, a cancellation team counts and accepts the transfer of the rejected currency for cancellation. The transfer is recorded in the CAS system. The team takes the rejected currency in a locked container to a cancellation room where the currency is cancelled by punching bank-specific-shaped holes into the currency. The cancellation process is monitored by an independent observer who also monitors the transfer of the cancelled currency to a separate off-line destruction team. Upon verification and approval by the off-line destruction team, the transfer of cancelled currency is recorded in CAS. Off-line destruction occurs periodically throughout the week and is monitored by an independent observer who counts the number and denomination of the currency straps to be destroyed and matches it to the strap count performed by the off-line destruction team. In addition, the destruction team and independent observer follow prescribed policies which include sample counts of individual low value currency notes and a 100 percent count of higher value currency notes. Once this count is completed, the off-line destruction team, along with the independent observer, takes the cancelled currency to a special room where it is destroyed in a shredder. Once all currency has been destroyed, the destruction team and the independent observer inspect the shredder to ensure that all cancelled currency was destroyed. Following the off-line destruction, the team generates from CAS a certificate of destruction based on the earlier currency transferred to the off-line destruction team. The certificate of destruction is signed by the team and the observer and forwarded to Cash Administration for use in the end-of-day reconciliation. At the end of each shift, each processing team prepares its unit proof. 
The proof is designed to ensure that the processing team can account for the team's currency holdings and processing activities by tracking the value of its beginning and ending inventory, its currency transfers in and out, and any adjustments arising during processing. After the team completes and accepts the proof data, it is transmitted electronically to CAS where it is compared to related currency data entered into CAS during the shift. If the proof data balance and agree with related currency data in CAS, the unit proof is accepted. If the proof data do not agree with related currency data in CAS, the processing team must request management assistance to identify and resolve differences. The Federal Reserve Banks use vaults to safeguard the currency they hold. The vault is a separate room within the cash department and a record is maintained of all persons who enter and exit the vault each day. Access to the vault may also be restricted through the use of keys or swipe cards. When stored in the vault, currency of the same denomination is stacked in locked containers. Cash department employees have a set of locks with their own personal key or combination. The employees use these locks to secure the containers for which they are accountable. In addition to the locks, each two-person team secures the containers with two prenumbered seals. In some Federal Reserve Banks, the locks are removed while the containers are stored in the vault. When this occurs, the integrity of the seals is verified when accountability for the container is transferred to another team. In some Federal Reserve Banks, accountability for the currency is transferred to vault custodians when the currency is stored in the vault. In other Federal Reserve Banks, accountability for currency stored in the vault stays with either the receiving or shipping team, and the vault custodians serve more of an administrative function. 
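The unit proof described above reduces to a single balancing identity: beginning inventory plus transfers in, less transfers out, plus processing adjustments must equal ending inventory. A hedged sketch, with illustrative names:

```python
def unit_proof_balances(beginning, transfers_in, transfers_out,
                        adjustments, ending):
    """Return True when the team's currency holdings balance; a False
    result corresponds to a proof the processing team cannot accept,
    requiring management assistance to identify and resolve the
    difference."""
    return beginning + transfers_in - transfers_out + adjustments == ending
```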
In both cases, the vault custodians periodically conduct a rack count of the currency in the vault (i.e., daily in Atlanta and Los Angeles, weekly in Philadelphia) and reconcile the count to CAS. The custodians also prepare a daily proof at the end of each day to ensure that all transfers of currency in and out of the vault match shipping, receiving, and high-speed processing records. The Cash Administration independent proof clerk is responsible for producing the department proof, the daily reconciliation of CAS and the Integrated Accounting System (IAS), and submitting manual entries to the IAS. All manual IAS entries must balance and be reviewed and approved by management. Before the department proof can be produced, CAS is used to verify that all teams have produced their final unit proofs, and the cash department inventory and transaction totals agree. The department proof lists all of the transactions and current inventory balances for each of the department's teams (receiving, shipping, processing, and vault). The independent proof clerk then compares the department inventory total to the calculated balance from CAS. The calculated balance is determined by taking the ending inventory from the previous day and adding/subtracting for the current day's transactions. The two totals must be equal. Throughout the day, transactions from CAS are automatically uploaded and posted to IAS. The daily reconciliation of CAS and IAS involves the comparison of the end-of-day department inventory totals from CAS to the total reflected in IAS. The two totals must be equal. Periodically, the independent proof clerk performs a blind confirmation of the reconciliation in which the clerk is "locked out" of IAS and submits the CAS balances to the accounting department for reconciliation. The daily reconciliations of CAS and IAS are reviewed and approved by cash administration management.
Sharon S. Kittrell, Auditor
Pursuant to a congressional request, GAO reviewed the work of the Federal Reserve's external auditor, Coopers & Lybrand L.L.P., in reporting on the effectiveness of the internal control structure over financial reporting for cash at the Atlanta and Philadelphia Federal Reserve Banks, and the Los Angeles Branch, focusing on whether: (1) the work was conducted in accordance with applicable professional standards; and (2) the work supported the auditor's opinion on managements' assertions on the effectiveness of the internal controls over cash operations. GAO noted that: (1) GAO's review disclosed no instances in which Coopers & Lybrand's work to support its opinions on the effectiveness of the internal control structures over financial reporting and safeguarding for coin and currency at the Atlanta and Philadelphia Federal Reserve Banks, and the Los Angeles Branch did not comply, in all material respects, with the American Institute of Certified Public Accountants' Attestation Standards; (2) Coopers & Lybrand obtained and documented an understanding of the internal control policies and procedures, developed by the Federal Reserve Banks, to manage and account for each of the four main cash operating functions: receiving/shipping, currency processing, vault, and cash administration; (3) Coopers & Lybrand also performed tests and other procedures in support of its evaluation of the design and operating effectiveness of the internal controls in order to form an opinion about the reliability of management's assertion; and (4) for each examination, Coopers & Lybrand concluded that the Federal Reserve Bank management fairly stated its assertion that the bank maintained an effective internal control structure over financial reporting and safeguarding for cash as of the date specified by management based on criteria established in the Internal Control--Integrated Framework issued by the Committee of Sponsoring Organizations of the Treadway Commission.
As part of our audit of the fiscal years 2014 and 2013 CFS, we considered the federal government's financial reporting procedures and related internal control. Also, we determined the status of corrective actions Treasury and OMB have taken to address open recommendations relating to their processes to prepare the CFS that were detailed in our previous reports. A full discussion of our scope and methodology is included in our February 2015 report on our audit of the fiscal years 2014 and 2013 CFS. We have communicated each of the control deficiencies discussed in this report to your staff. We performed our audit of the fiscal years 2014 and 2013 CFS in accordance with U.S. generally accepted government auditing standards. We believe that our audit provided a reasonable basis for our conclusions in this report. During our audit of the fiscal year 2014 CFS, we identified three new internal control deficiencies in Treasury's processes used to prepare the CFS. Specifically, we found that Treasury did not have (1) a sufficient process to work with key federal entities prior to the end of the fiscal year to reasonably assure that new or substantially revised federal accounting standards were consistently implemented by the entities to allow appropriate consolidation at the government-wide level, (2) procedures for determining whether entities and transactions for which it does not have audit assurance are significant in the aggregate to the CFS, and (3) sufficient procedures for (a) identifying significant increases or decreases in all CFS line items and disclosures from prior fiscal year reported amounts and (b) understanding the reasons for such changes. Treasury did not have a sufficient process to work with key federal entities prior to the end of the fiscal year to reasonably assure that new or substantially revised federal accounting standards were consistently implemented by the entities to allow appropriate consolidation at the government-wide level.
For example, for the Financial Report of the United States Government (Financial Report), the Federal Accounting Standards Advisory Board's (FASAB) Technical Bulletin 2011-1, Accounting for Federal Natural Resources Other Than Oil and Gas, requires a concise statement as part of required supplementary information (RSI) explaining the nature and valuation of federal natural resources. The statement is to encompass significant federal natural resources other than oil and gas under management by the federal government. For fiscal year 2014, only one federal entity (the Department of the Interior) provided to Treasury a discussion of significant federal natural resources under entity management, specifically related to coal leases. As a result, the RSI for federal natural resources other than oil and gas included in the fiscal year 2014 Financial Report reported only coal leases from the Department of the Interior. The RSI did not describe coal resources that are not currently under lease or certain other natural resources owned by the federal government. We communicated this matter to Treasury and OMB officials who revised the Financial Report before issuance, as appropriate. We found that Treasury has a process to work with federal entities when implementing new and revised federal accounting standards. Specifically, Treasury presents the standards for discussion at regularly scheduled monthly meetings--called Central Reporting Team meetings--that include financial reporting representatives from federal entities. Treasury also updates the Treasury Financial Manual, its financial reporting guidance for federal entities, to include new reporting requirements. This process was followed in implementing Technical Bulletin 2011-1 in fiscal year 2014. However, this process is not sufficient to reasonably assure that new or substantially revised federal accounting standards are consistently implemented by the entities. 
In a prior year, after we identified inconsistencies in the information reported to Treasury related to the implementation of FASAB Statement of Federal Financial Accounting Standards (SFFAS) No. 33, Treasury established a working group involving the key federal entities affected by the standard. The group met several times to discuss the standard and through such discussions was able to identify and resolve inconsistencies in the reporting of information for consolidation at the government-wide level. However, Treasury has not adopted a similar process for implementing subsequent standards. Federal financial statements are to be presented in accordance with applicable generally accepted accounting principles (GAAP). FASAB, the body designated as the source of GAAP for federal reporting entities, regularly issues new and revised standards, including two new federal accounting standards that are to be implemented in fiscal year 2015. Without a sufficient process to work with key federal entities to reasonably assure that new or substantially revised federal accounting standards are consistently implemented by the entities, there is an increased risk of misstatements in the financial statements or incomplete and inaccurate disclosure of information within the Financial Report. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement a sufficient process to work with key federal entities prior to the end of the fiscal year to reasonably assure that new or substantially revised federal accounting standards are consistently implemented by the responsible entities to allow appropriate consolidation at the government-wide level. Treasury did not have procedures for determining whether entities and transactions for which it does not have audit assurance are significant in the aggregate to the CFS. 
Treasury's standard operating procedure (SOP) entitled "Significant Entities" includes procedures for identifying federal entities with activity that is material to at least one financial statement line item or note disclosure. Each federal entity identified as significant is to submit an audited closing package to Treasury. The closing package methodology is intended to link federal entities' audited consolidated department-level financial statements to certain statements of the CFS. Chief financial officers of significant federal entities are required to verify the consistency of the closing package data with their respective entities' audited financial statements. In addition, entity auditors are required to separately audit and report on the financial information in the closing packages. However, the SOP did not include, and therefore Treasury did not perform, procedures for determining whether entities and amounts that were not included in a closing package, and thus for which Treasury did not have audit assurance, were significant to the CFS in the aggregate before the CFS was finalized. Specifically, we found that Treasury's SOP did not include procedures for assessing the significance of aggregate amounts for which it does not have audit assurance, including amounts related to the following:

- non-significant entities, which are not required to submit audited closing packages;
- significant entities that did not submit audited closing packages;
- nonmaterial line items for significant calendar year-end entities;
- material line items for calendar year-end entities that did not submit audited closing packages;
- journal vouchers processed by Treasury that were not based on the closing packages; and
- uncorrected misstatements identified at the consolidated level, including uncorrected misstatements submitted by the significant entities with their closing packages.
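The procedure Treasury lacked could, in principle, be as simple as aggregating the amounts without audit assurance and comparing the total to a materiality threshold. The category names and threshold below are illustrative assumptions, not Treasury's actual methodology:

```python
def unaudited_significant(amounts_by_source, materiality):
    """Return True when the aggregate of amounts lacking audit
    assurance (e.g., non-significant entities, missing closing
    packages, journal vouchers, uncorrected misstatements) exceeds
    the materiality threshold for the CFS.

    Absolute values are summed so that offsetting over- and
    understatements do not mask the total exposure.
    """
    total = sum(abs(amount) for amount in amounts_by_source.values())
    return total > materiality
```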
Standards for Internal Control in the Federal Government provides that control activities are the policies, procedures, techniques, and mechanisms that enforce management's directives. The standards also provide that an entity should accurately record transactions and events--from initiation to summary records--and that control activities include procedures to achieve accurate recording of transactions and events. Without procedures for determining whether aggregate amounts for which Treasury does not have audit assurance are significant to the CFS, there is an increased risk of material misstatements in the financial statements. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement procedures for determining whether entities and transactions for which Treasury does not have audit assurance are significant in the aggregate to the CFS. Treasury did not have sufficient procedures for (1) identifying significant increases or decreases in all CFS line items and disclosures from prior fiscal year reported amounts and (2) understanding the reasons for such changes. Treasury's SOP entitled "Preparation of the Financial Report" includes procedures for performing an overall variance analysis at the consolidated line item level for the Balance Sheet, Statement of Net Cost, Statement of Operations and Changes in Net Position, and related notes to the Balance Sheet. The variance analysis compares the amounts for the current and prior years and provides an explanation for material changes. Such variance analysis helps identify unusual trends or anomalies in the data that, if unexplained, could indicate misstatements in the data. However, the SOP did not include, and therefore Treasury did not perform, an overall variance analysis on the remaining CFS line items and disclosures where comparable amounts were presented. This includes the CFS budget statements, which consist of the Reconciliation of Net Operating Cost and Unified Budget Deficit and the Statement of Changes in Cash Balance from Unified Budget and Other Activities; non-Balance Sheet notes; RSI; Required Supplementary Stewardship Information; and Other Information. Standards for Internal Control in the Federal Government provides that control activities are the policies, procedures, techniques, and mechanisms that enforce management's directives. The standards also provide that an entity should accurately record transactions and events--from initiation to summary records--and that control activities include procedures to achieve accurate recording of transactions and events. Without sufficient procedures for identifying and understanding significant changes from prior year reported amounts, there is an increased risk of misstatements in the financial statements or incomplete and inaccurate disclosure of information within the Financial Report. We recommend that the Secretary of the Treasury direct the Fiscal Assistant Secretary to develop and implement procedures for identifying significant increases or decreases in all CFS line items and disclosures from prior fiscal year reported amounts and for understanding the reasons for such changes. At the end of the fiscal year 2013 audit, 31 recommendations from our prior reports regarding control deficiencies in the processes used to prepare the CFS were open. Treasury implemented corrective actions during fiscal year 2014 that resulted in significant progress in resolving certain of the control deficiencies addressed by our recommendations. For 7 recommendations, the corrective actions resolved the related control deficiencies, and we closed the recommendations.
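The kind of overall variance analysis discussed in the finding above, extended to all comparable line items and disclosures, can be sketched as follows. The 10 percent threshold is an illustrative assumption, not Treasury's materiality criterion:

```python
def flag_significant_changes(current, prior, pct_threshold=0.10):
    """Compare current- and prior-year amounts by line item and return
    those whose change exceeds the percentage threshold, so the
    reasons for the change can be investigated.

    New or discontinued line items (present in only one year) are
    always flagged, as is a zero prior-year base.
    """
    flagged = []
    for item in sorted(set(current) | set(prior)):
        cur, pri = current.get(item), prior.get(item)
        if cur is None or pri is None or pri == 0:
            flagged.append((item, pri, cur))       # no comparable base
        elif abs(cur - pri) / abs(pri) > pct_threshold:
            flagged.append((item, pri, cur))       # significant change
    return flagged
```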
While progress was made, 24 recommendations from our prior reports remained open as of February 19, 2015, the date of our report on the audit of the fiscal year 2014 CFS. Consequently, a total of 27 recommendations need to be addressed--24 remaining from prior reports and the 3 new recommendations we are making in this report. Appendix I summarizes the status as of February 19, 2015, of the 31 open recommendations from our prior reports, including the status according to Treasury and OMB, as well as our own assessment and additional comments where appropriate. Various efforts are under way to address these recommendations. We will continue to monitor Treasury's and OMB's progress in addressing our recommendations as part of our fiscal year 2015 CFS audit. In oral comments on a draft of this report, OMB generally concurred with the findings and recommendations of this report. In written comments on a draft of this report, which are reprinted in appendix II, Treasury concurred with our three new recommendations. Treasury also provided details on its ongoing efforts to address the material weaknesses that relate to the federal government's processes used to prepare the CFS. To address the material weakness related to intragovernmental transactions, Treasury stated that it will continue to devote significant resources toward resolving the material weakness and, in order to resolve systemic intragovernmental differences, Treasury has requested that federal agencies provide it with root cause information and corrective action plans. Regarding the material weakness related to the compilation process, Treasury stated that it continued to implement software and processes to automate and streamline the compilation of the Financial Report, and that in fiscal year 2015, it will focus on collecting critical information from reporting entities identified as significant to the Financial Report, including entities in the legislative and judicial branches. 
In addition, Treasury noted that it is continuing its efforts to validate material completeness of budgetary information included in the Financial Report, as well as the consistency of such information with agency reports. This report contains recommendations to the Secretary of the Treasury. The head of a federal agency is required by 31 U.S.C. § 720 to submit a written statement on actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days after the date of this report. A written statement must also be sent to the Senate and House Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of this report. Please provide me with a copy of your responses. We are sending copies of this report to interested congressional committees, the Fiscal Assistant Secretary of the Treasury, and the Controller of the Office of Management and Budget's Office of Federal Financial Management. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. We acknowledge and appreciate the cooperation and assistance provided by Treasury and OMB during our audit. If you or your staff have any questions or wish to discuss this report, please contact me at (202) 512-3406 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. related material weakness No. Count GAO-04-45 (results of the fiscal year 2002 audit) 1 02-22 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to perform an assessment to define the reporting entity, including its specific components, in conformity with the criteria issued by the Federal Accounting Standards Advisory Board (FASAB).
Key decisions made in this assessment should be documented, including the reason for including or excluding components and the basis for concluding on any issue. Particular emphasis should be placed on demonstrating that any financial information that should be included but is not included is immaterial. (Preparation material weakness) Treasury developed a process to identify all reporting entities for inclusion in the Financial Report of the U.S. Government (Financial Report). The reporting entities can be found in appendix A of the Financial Report. Closed. Open. Fiscal Assistant Secretary, in coordination with the Controller of OMB, to provide in the financial statements all the financial information relevant to the defined reporting entity, in all material respects. Such information would include, for example, the reporting entity's assets, liabilities, and revenues. (Preparation material weakness) Treasury was able to collect data from all reporting entities in the executive branch for fiscal year-end 2014. Partial data were also collected from the judicial and legislative branches, despite those entities not being statutorily required to report to Treasury. Treasury and OMB will continue to work with non-executive branch entities to collect all necessary information. 02-24 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to disclose in the financial statements all information that is necessary to inform users adequately about the reporting entity. Such disclosures should clearly describe the reporting entity and explain the reason for excluding any components that are not included in the defined reporting entity. (Preparation material weakness) Treasury revamped the wording in appendix A of the Financial Report to clearly demonstrate the reporting entities included in the Financial Report. In addition, appendix A clearly discloses the reasons for excluding certain reporting entities. 
Closed. Open. Fiscal Assistant Secretary, in coordination with the Controller of OMB, to help ensure that federal agencies provide adequate information in their legal representation letters regarding the expected outcomes of the cases. (Preparation material weakness) Treasury and OMB will work with the agencies to encourage the proper usage of the "unable to determine" category for the legal cases. In addition, Treasury will work to limit agency use of this category, require that additional information be reported when this category is selected, or both. No. 02-37 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies develop a detailed schedule of all major treaties and other international agreements that obligate the U.S. government to provide cash, goods, or services, or that create other financial arrangements that are contingent on the occurrence or nonoccurrence of future events (a starting point for compiling these data could be the State Department's Treaties in Force). (Preparation material weakness) Per Treasury and OMB Treasury and OMB will develop a process to leverage the existing information from agencies, where possible, and assess methods and approaches for obtaining additional information from agencies. 02-38 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. 
Specifically, these policies and procedures should require that federal agencies classify all such scheduled major treaties and other international agreements as commitments or contingencies. (Preparation material weakness) See the status of recommendation No. 02-37. Per GAO Open. Until a comprehensive analysis of major treaty and other international agreement information has been performed, Treasury and OMB are precluded from determining if additional disclosure is required by generally accepted accounting principles (GAAP) in the CFS, and we are precluded from determining whether the omitted information is material. Open. See the status of recommendation No. 02-37. 02-39 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that have a reasonably possible chance of resulting in a loss or claim as a contingency. (Preparation material weakness) See the status of recommendation No. 02-37. Open. See the status of recommendation No. 02-37. Per Treasury and OMB See the status of recommendation No. 02-37. Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies disclose in the notes to the CFS amounts for major treaties and other international agreements that are classified as commitments and that may require measurable future financial obligations. (Preparation material weakness) Per GAO Open. 
See the status of recommendation No. 02-37. 02-41 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB, to establish written policies and procedures to help ensure that major treaty and other international agreement information is properly identified and reported in the CFS. Specifically, these policies and procedures should require that federal agencies take steps to prevent major treaties and other international agreements that are classified as remote from being recorded or disclosed as probable or reasonably possible in the CFS. (Preparation material weakness) See the status of recommendation No. 02-37. Open. See the status of recommendation No. 02-37. 02-129 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to ensure that the note disclosure for stewardship responsibilities related to the risk assumed for federal insurance and guarantee programs meets the requirements of Statement of Federal Financial Accounting Standards No. 5, Accounting for Liabilities of the Federal Government, paragraph 106, which requires that when financial information pursuant to Financial Accounting Standards Board standards on federal insurance and guarantee programs conducted by government corporations is incorporated in general purpose financial reports of a larger federal reporting entity, the entity should report as required supplementary information what amounts and periodic change in those amounts would be reported under the "risk assumed" approach. (Preparation material weakness) Treasury will continue to request this information from the agencies at interim and through year-end reporting requirements in the Treasury Financial Manual (TFM) 2-4700. In addition, Treasury will continue to participate on the FASAB Risk Assumed task force and implement any related changes corresponding to the issuance of revised or new federal accounting standards. Open. 
Treasury's reporting in this area is not complete. The CFS should include all major federal insurance programs in the risk assumed reporting and analysis. Also, since future events are uncertain, risk assumed information should include indicators of the range of uncertainty around expected estimates, including indicators of the sensitivity of the estimate to changes in major assumptions. No. Count GAO-04-866 (results of the fiscal year 2003 audit) 11 03-8 The Director of OMB should direct the Controller of OMB, in coordination with Treasury's Fiscal Assistant Secretary, to work with the Department of Justice and certain other executive branch federal agencies to ensure that these federal agencies report or disclose relevant criminal debt information in conformity with GAAP in their financial statements and have such information subjected to audit. (Preparation material weakness) Treasury and OMB will assess options as to what methodologies or approaches to use for obtaining the additional information needed from the agencies. Open. 03-9 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to include relevant criminal debt information in the CFS or document the specific rationale for excluding such information. (Preparation material weakness) See the status of recommendation No. 03-8. Open. GAO-05-407 (results of the fiscal year 2004 audit) 13 04-3 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to require that Treasury employees contact and document communications with federal agencies before recording journal vouchers to change agency audited closing package data. (Preparation material weakness) Treasury will continue to strengthen internal control procedures to ensure accurate and supported journal vouchers. Open. 
04-6 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to assess the infrastructure associated with the compilation process and modify it as necessary to achieve a sound internal control environment. (Preparation material weakness) Treasury continues to make improvements to its internal control infrastructure by updating and revising its standard operating procedures (SOP) and working to help ensure that key controls are in place at all critical areas. Open. GAO-07-805 (results of the fiscal year 2006 audit) 15 06-6 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB's Office of Federal Financial Management, to establish effective processes and procedures to ensure that appropriate information regarding litigation and claims is included in the government-wide legal representation letter. (Preparation material weakness) Treasury and OMB will develop a process to leverage the existing information from agencies, where possible, and assess options for approaches to use for obtaining additional information from agencies. Open. No. Count GAO-08-748 (results of the fiscal year 2007 audit) 16 07-9 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, in coordination with the Controller of OMB's Office of Federal Financial Management, to develop and implement effective processes for monitoring and assessing the effectiveness of internal control over the processes used to prepare the CFS. (Preparation material weakness) Treasury will start to implement an internal review, based on OMB Circular No. A-123 guidance, on the Financial Report in fiscal year 2016. Open. 
07-10 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB's Office of Federal Financial Management, to develop and implement alternative solutions to performing almost all of the compilation effort at the end of the year, including obtaining and utilizing interim financial information from federal agencies. (Preparation material weakness) Treasury obtained and utilized interim financial information from federal agencies starting in fiscal year 2012. In fiscal year 2014, Treasury increased the number of key topics and completed subject matters to the extent possible before fiscal year-end. Closed. Open. Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to develop and implement procedures to provide for the active involvement of key federal entity personnel with technical expertise in relatively new areas and more complex areas in the preparation and review process of the Financial Report. (Preparation material weakness) 11-09 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled "Significant Federal Entities Identification" to include procedures for identifying any entities that become significant to the Financial Report during the fiscal year but were not identified as significant in the prior fiscal year. (Preparation material weakness) Treasury has involved key federal entity personnel with technical expertise during the interim and year-end preparation and review of the Financial Report. In fiscal year 2015, Treasury will continue to work collaboratively with the agencies to increase their involvement in preparing certain key disclosures and other information. Treasury implemented a process in fiscal year 2014 to identify all significant entities in the Financial Report. 
This process monitors the significant entity reporting throughout the fiscal year and is finalized before the publication date. Closed. Open. Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled "Significant Federal Entities Identification" to include procedures for obtaining audited closing packages from newly identified significant entities in the year they become significant, including timely written notification to newly identified significant entities. (Preparation material weakness) Treasury has enhanced the SOP to include notifying the significant entities in a timely manner and explaining the required significant entity reporting. Treasury and OMB will continue to work with the identified significant entities to obtain audit coverage over the required reporting. Treasury implemented a process in fiscal year 2014 to identify all significant entities in the Financial Report. This process monitors the significant entity reporting throughout the fiscal year and is finalized before the publication date. Closed. Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled "Significant Federal Entities Identification" to include procedures for identifying any material line items for significant calendar year-end entities that become material to the CFS during the current fiscal year but were not identified as material in the analysis using prior year financial information. (Preparation material weakness) 12-02 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to enhance the SOP entitled "Significant Federal Entities Identification" Treasury has enhanced the SOP to notify the significant entities in a timely manner and to explain the required significant entity reporting.
Treasury and OMB will continue to work with the identified significant entities to obtain audit coverage over the required reporting. Open. Open. Fiscal Assistant Secretary, working in coordination with the Controller of OMB's Office of Federal Financial Management, to establish and implement effective procedures for reporting amounts in the CFS budget statements that are fully consistent with the underlying information in significant federal entities' audited financial statements and other financial data. (Budget statements material weakness) Treasury has established and begun implementing procedures to request agency submission of closing package data as a means of validating budget deficit data against agencies' audited financial information. In addition, Treasury is examining the benefits of performing a complete audit on the general ledger data used for the budget statements. 12-05 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB's Office of Federal Financial Management, to establish and implement effective procedures for identifying and reporting all items needed to prepare the CFS budget statements. (Budget statements material weakness) Treasury will strengthen procedures to demonstrate that all material reconciling items are included on the budget statements. Open. Count GAO-14-543 (results from the fiscal year 2013 audit) 25 No. 13-01 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to include all key elements recommended by the Implementation Guide for OMB Circular A-123, Management's Responsibility for Internal Control - Appendix A, Internal Control over Financial Reporting and fully consider the interrelationships between deficiencies in the corrective action plans. 
(Preparation material weakness) Treasury and OMB are working to develop a remediation plan to address issues that are impediments to auditability. Open. Closed. Fiscal Assistant Secretary to develop and implement procedures to sufficiently document management's conclusions and the basis of such conclusions regarding the accounting policies for the CFS. (Preparation material weakness) Treasury implemented procedures in fiscal year 2014 related to documenting accounting policies. Treasury documented several accounting policies for the Financial Report in fiscal year 2014. 13-03 The Secretary of the Treasury should direct the Fiscal Assistant Secretary to improve and implement Treasury's procedures for verifying that staff's preparation of the narrative within the notes to the CFS is accurate and supported by the underlying financial information of the significant component entities. (Preparation material weakness) Treasury implemented a cross-reference guide in fiscal year 2014 to verify each line in the narrative of the Financial Report. Closed. Starting in fiscal year 2015, Treasury will require agencies to provide detailed root cause analysis documentation and corrective action plans (including target completion dates) through TFM 2-4700. Open. Treasury will complete the analysis of agency intragovernmental differences on a quarterly basis to determine if additional agencies should be integrated into the quarterly reconciliation process. Open. Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to expand the scorecard process to include intragovernmental activity and balances that are currently not covered by the process or demonstrate that such information is immaterial to the CFS. (Intragovernmental material weakness) No.
13-06 The Secretary of the Treasury should direct the Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to establish and implement policies and procedures for accounting for and reporting all significant General Fund activity and balances, obtaining assurance on the reliability of the amounts, and reconciling the activity and balances between the General Fund and federal entities. (Intragovernmental material weakness) Per Treasury and OMB Treasury is working to complete the General Fund general ledger and will continue to implement processes and controls in preparation for its auditability. Treasury has expanded the reciprocal categories for General Fund activity and balances to assist with reconciling intragovernmental differences with federal agency trading partners. Per GAO Open. Open. Fiscal Assistant Secretary, working in coordination with the Controller of OMB, to establish a formalized process to require the performance of additional audit procedures specifically focused on intragovernmental activity and balances between federal entities to provide increased audit assurance over the reliability of such information. (Intragovernmental material weakness) Treasury and OMB will assess options to increase audit assurance over the reliability of agency intragovernmental activity and balances. Legend: CFS= consolidated financial statements of the U.S. government; OMB = Office of Management and Budget; Treasury = Department of the Treasury. The status of the recommendations listed in app. I is as of February 19, 2015, the date of our report on the audit of the fiscal year 2014 CFS. 
The recommendations in our prior reports related to material weaknesses in the following areas: Preparation: The material weakness related to the federal government's inability to reasonably assure that the consolidated financial statements are (1) consistent with the underlying audited entities' financial statements, (2) properly balanced, and (3) in accordance with U.S. GAAP. Budget statements: The material weakness related to the federal government's inability to reasonably assure that the information in the Reconciliation of Net Operating Cost and Unified Budget Deficit and the Statement of Changes in Cash Balance from Unified Budget and Other Activities is complete and consistent with the underlying information in the audited entities' financial statements and other financial data. Intragovernmental: The material weakness related to the federal government's inability to adequately account for and reconcile intragovernmental activity and balances between federal entities. The title of this SOP changed to "Significant Entities" in fiscal year 2013.
Treasury, in coordination with OMB, prepares the Financial Report of the United States Government, which contains the CFS. Since GAO's first audit of the fiscal year 1997 CFS, certain material weaknesses and other limitations on the scope of its work have prevented GAO from expressing an opinion on the accrual-based CFS. As part of the fiscal year 2014 CFS audit, GAO identified material weaknesses and other control deficiencies in the processes used to prepare the CFS. The purpose of this report is to provide (1) details on the control deficiencies GAO identified related to the processes used to prepare the CFS, along with related recommendations, and (2) the status of corrective actions Treasury and OMB have taken to address GAO's prior recommendations relating to the processes used to prepare the CFS that remained open at the end of the fiscal year 2013 audit. During its audit of the fiscal year 2014 consolidated financial statements of the U.S. government (CFS), GAO identified control deficiencies in the Department of the Treasury's (Treasury) and the Office of Management and Budget's (OMB) processes used to prepare the CFS. These control deficiencies contributed to material weaknesses in internal control over the federal government's ability to adequately account for and reconcile intragovernmental activity and balances between federal entities; reasonably assure that the consolidated financial statements are (1) consistent with the underlying audited entities' financial statements, (2) properly balanced, and (3) in accordance with U.S. generally accepted accounting principles; and reasonably assure that the information in the Reconciliation of Net Operating Cost and Unified Budget Deficit and the Statement of Changes in Cash Balance from Unified Budget and Other Activities is complete and consistent with the underlying information in the audited entities' financial statements and other financial data.
During its audit of the fiscal year 2014 CFS, GAO identified three new internal control deficiencies. Specifically, GAO found that Treasury did not have a sufficient process to work with key federal entities prior to the end of the fiscal year to reasonably assure that new or substantially revised federal accounting standards were consistently implemented by the entities to allow appropriate consolidation at the government-wide level, procedures for determining whether entities and transactions for which it does not have audit assurance are significant in the aggregate to the CFS, and sufficient procedures for (1) identifying significant increases or decreases in all CFS line items and disclosures from prior fiscal year reported amounts and (2) understanding the reasons for such changes. In addition, GAO found that various other control deficiencies identified in previous years' audits with respect to the processes used to prepare the CFS continued to exist. Specifically, 24 of the 31 recommendations from GAO's prior reports regarding control deficiencies in the processes used to prepare the CFS remained open as of February 19, 2015, the date of GAO's report on its audit of the fiscal year 2014 CFS. GAO will continue to monitor the status of corrective actions taken to address the 3 new recommendations made in this report as well as the 24 open recommendations from prior years as part of its fiscal year 2015 CFS audit. GAO is making three new recommendations to Treasury to address the control deficiencies identified during the fiscal year 2014 CFS audit. In commenting on GAO's draft report, Treasury and OMB generally concurred with GAO's recommendations.
Various factors challenge U.S. efforts to ensure proper management and oversight of U.S. development efforts in Afghanistan. Among the most noteworthy have been the "high-threat" working environment U.S. personnel and others face in Afghanistan, the difficulties in preserving institutional knowledge due in part to a high rate of staff turnover, and the Afghan government's lack of capacity and corruption challenges. As we have previously reported, Afghanistan has experienced annual increases in the level of enemy-initiated attacks. Although the pattern of enemy-initiated attacks remains seasonal, generally peaking from June through September each year and then declining during the winter months, the annual "peak" (high point) and "trough" (low point) for each year have surpassed the peak and trough, respectively, for the preceding year since September 2005. This includes a rise in attacks against coalition forces and civilians, as well as Afghan National Security Forces. The high-threat security environment has challenged USAID's and others' ability to implement assistance programs in Afghanistan, increasing implementation times and costs for projects in nonsecure areas. For example, during our review of U.S. road reconstruction efforts, we found that a key road project to the Kajaki dam was terminated--after USAID had spent about $5 million--because attacks prevented contractors from working on the project. In addition, U.S. officials cited poor security as having caused delays, disruptions, and even abandonment of certain reconstruction projects. For example, a project to provide Afghan women jobs in a tailoring business in southwest Afghanistan failed, in part, because of the threat against the female employees. The high-threat security environment has also limited the movement and ability of U.S. personnel to directly monitor projects.
USAID has specifically cited the security environment in Afghanistan as a severe impediment to its ability to directly monitor projects, noting that USAID officials are generally required to travel with armored vehicles and armed escorts to visit projects in much of the country. USAID officials stated that their ability to arrange project visits can become restricted if military forces cannot provide the necessary vehicles or escorts because of other priorities. In 2009, USAID documented site visits for two of the eight programs included in our review (see fig. 1). We have experienced similar restrictions to travel beyond the embassy compound during our visits to Afghanistan. In the Mission's 2008 and 2009 Federal Managers' Financial Integrity Act of 1982 Annual Certifications, the Mission reported its efforts to monitor project implementation in Afghanistan as a significant deficiency. These reports raised concerns that designated USAID staff are "prevented from monitoring project implementation in an adequate manner with the frequency required" and noted that there is a high degree of potential for fraud, waste, and mismanagement of Mission resources. USAID further noted that the deficiency in USAID's efforts to monitor projects will remain unresolved until the security situation in Afghanistan improves and stabilizes. The reports identified several actions to address the limitations to monitor project implementation, including, among others: placement of more staff in the field; use of Afghan staff--who have greater mobility than expatriate staff--to monitor projects; hiring of a contractor to monitor the implementation of construction projects and conduct regular site visits; and collection of implementing partner video or photographs--including aerial photographs. Preserving institutional knowledge is vital to ensuring that new Mission personnel are able to effectively manage and build on USAID assistance efforts.
We found, however, during our review of USAID's road reconstruction efforts in 2008 and, most recently, our review of USAID's agricultural development program that USAID had not taken steps to mitigate challenges to maintaining institutional knowledge. USAID did not consistently document decisions made. For example, staff working in Afghanistan had no documented assessments for modifications to the largest USAID-funded United Nations Office for Project Services (UNOPS) project in Afghanistan--Rehabilitation of Secondary Roads--even though these modifications increased the scope and budget of the program to more than ten times its original amount. Furthermore, USAID and other U.S. agencies in Afghanistan lack a sufficient number of acquisition and oversight personnel with experience working in contingency operations. This problem is exacerbated by the lack of mechanisms for retaining and sharing institutional knowledge during transitions of USAID personnel and the rate at which USAID staff turn over, which USAID acknowledged as hampering program design and implementation. In addition, the State Department Office of Inspector General noted in its February 2010 inspection of the U.S. Embassy to Afghanistan and its staff that 1-year assignments, coupled with multiple rest-and-recuperation breaks, limited the development of expertise, contributed to a lack of continuity, and required a higher number of personnel to achieve strategic goals. The USAID monitoring officials for the eight agricultural programs we focused on during our review of USAID's agricultural development efforts in Afghanistan were in place, on average, 7.5 months (see table 1). Moreover, the length of time that a monitoring official was in place has declined. The two most recently initiated agricultural programs have had monitoring officials in place for an average of only 3 months each.
USAID officials noted that the effectiveness of passing information from one monitoring official to another is dependent on how well the current official has maintained his or her files and what guidance, if any, is left for the successor. They also noted that a lack of documentation and knowledge transfer may have contributed to the loss of institutional knowledge. We reported in April 2010 that USAID used contractors to help administer its contracts and grants in Afghanistan, in part to address frequent rotations of government personnel and security and logistical concerns. Functions performed by these contractors included on-site monitoring of other contractors' activities and awarding and administering grants. While relying on contractors to perform such functions can provide benefits, we found that USAID did not always fully address related risks. For example, USAID did not always include a contract clause required by agency policy to address potential conflicts of interest, and USAID contracting officials generally did not ensure enhanced oversight in accordance with federal regulations for situations in which contractors provided services that closely supported inherently governmental functions. USAID has increasingly included and emphasized capacity building among its programs to address the government of Afghanistan's lack of capacity to sustain and maintain many of the programs and projects put in place by donors. In 2009, USAID rated the capability of 14 of 19 Afghan ministries and institutions it works with as 1 or 2 on a scale of 5, with 1 representing the need for substantial assistance across all areas and 5 representing the ability to perform without assistance. The Ministry of Agriculture, Irrigation, and Livestock was given a rating of 2--needing technical assistance to perform all but routine functions--while the Ministry for Rural Rehabilitation and Development was given a rating of 4--needing little technical assistance.
Although USAID has noted overall improvement among the ministries and institutions in recent years, none was given a rating of 5. USAID has undertaken some steps to address the Afghan ministries' limited capacity and corruption in Afghanistan by including a capacity-building component in its more recent contracts. In 2009, the U.S. government further emphasized capacity building by pursuing a policy of Afghan-led development, or "Afghanization," to ensure that Afghans lead efforts to secure and develop their country. At the national level, the United States plans to channel more of its assistance through the Afghan government's core budget. At the field level, the United States plans to shift assistance to smaller, more flexible, and faster contract and grant mechanisms to increase decentralized decision making in the field. For example, the U.S. government agricultural strategy stresses the importance of increasing the Ministry of Agriculture, Irrigation, and Livestock's capacity to deliver services through direct budget and technical assistance. USAID also recognized that, with a move toward direct budget assistance to government ministries, USAID's vulnerability to waste and corruption is anticipated to increase. According to USAID officials, direct budget assistance to the Ministry of Agriculture, Irrigation, and Livestock is dependent on the ability of the ministry to demonstrate the capacity to handle the assistance. These officials noted that an assessment of the Ministry's ability to manage direct budget assistance was being completed. The U.S. Embassy has plans under way to establish a unit at the embassy to receive and program funds on behalf of the Ministry while building the Ministry's capacity to manage the direct budget assistance on its own. According to Afghanistan's National Development Strategy, Afghanistan's capacity problems are exacerbated by government corruption, which the strategy describes as a significant and growing problem in the country.
The causes of corruption in Afghan government ministries, according to the Afghanistan National Development Strategy, can be attributed to, among other things, a lack of institutional capacity in public administration, weak legislative and regulatory frameworks, limited enforcement of laws and regulations, poor and nonmerit-based qualifications of public officials, low salaries of public servants, and a dysfunctional justice sector. Furthermore, the sudden influx of donor money into a system already suffering from poorly regulated procurement practices increases the risk of corruption. In April 2009, USAID published an independent Assessment of Corruption in Afghanistan that found that corruption was a significant and growing problem across Afghanistan that undermined security, development, and democracy-building objectives. According to the assessment, pervasive, entrenched, and systemic corruption is at an unprecedented scope. The USAID-sponsored assessment added that Afghanistan has or is developing most of the institutions needed to combat corruption, but these institutions, like the rest of the government, are limited by a lack of capacity, rivalries, and poor integration. The assessment also noted that the Afghan government's apparent unwillingness to pursue and prosecute high-level corruption, an area of particular interest to this Subcommittee, was particularly problematic. The assessment noted that "substantial USAID assistance is already designed to strengthen transparency, accountability, and effectiveness--prime routes to combat corruption." Additionally, we reported in 2009 that USAID's failure to adhere to its existing policies severely limited its ability to require expenditure documentation for Afghanistan-related grants that were associated with findings of alleged criminal actions and mismanaged funds.
Specifically, in 2008, a United Nations procurement taskforce found instances of fraud, embezzlement, conversion of public funds, conflict of interest, and severe mismanagement of USAID-funded UNOPS projects in Afghanistan, including the $365.8 million Rehabilitation of Secondary Roads project. The USAID Office of Inspector General also reported in 2008 that UNOPS did not complete projects as claimed and that projects had defects and warranty issues, as well as numerous design errors, neglected repairs, and uninstalled equipment and materials--all of which were billed as complete. USAID's Mission to Afghanistan manages and oversees most U.S. development assistance programs in Afghanistan and relies on implementing partners to carry out its programs. USAID's Automated Directives System (ADS) establishes performance management and evaluation procedures for managing and overseeing its assistance programs. These procedures, among other things, require (1) the development of a Mission Performance Management Plan (PMP); (2) the establishment of performance indicators and targets; and (3) analyses and use of program performance data. USAID had generally required the same performance management and evaluation procedures in Afghanistan as it does in other countries in which it operates. However, in October 2008, USAID approved new guidance that proposed several alternative monitoring methods for USAID projects in high-threat environments. This guidance was disseminated in December 2009, but the Afghanistan Mission agricultural office staff did not become aware of the guidance until June 2010. The ADS requires USAID officials to complete a Mission PMP for each of its high-level objectives as a tool to manage its performance management and evaluation procedures. While the Afghanistan Mission had developed a PMP in 2006, covering the years 2006, 2007, and 2008, the Afghanistan Mission has operated without a PMP to guide development assistance efforts after 2008.
According to USAID, the Mission is in the process of developing a new Mission PMP that will reflect the current Administration's priorities and strategic shift to counterinsurgency. USAID expects the new PMP to be completed by the end of fiscal year 2010. The Mission attributed the delay in creating the new PMP to the process of developing new strategies in different sectors and gaining approval from the Embassy in Afghanistan and from agency headquarters in Washington. Overall, we found that the 2006-2008 Mission PMP incorporated key planning activities. For example, the PMP identified indicators and established baselines and targets for the high-level objectives for all USAID programs in Afghanistan, including its agricultural programs, which are needed to assess program performance. In addition, the PMP described regular site visits, random data checks, and data quality assessments as the means to be used to verify and validate information collected. The Mission PMP noted that it should enable staff to systematically assess contributions to the Mission's program results and take corrective action when necessary. Further, it noted that indicators, when analyzed in combination with other information, provide data for program decision making. The 2006-2008 Mission PMP, however, did not include plans for evaluations of the high-level objective that the agricultural programs in our review supported. Under USAID's current policies, implementing partners working on USAID development assistance projects in Afghanistan are required to develop and submit monitoring and evaluation plans that include performance indicators and targets to USAID for approval. However, during our most recent review of USAID's agricultural development programs, we found that USAID did not always approve implementing partner performance indicators and targets. 
While the implementing partners for the eight agricultural programs we reviewed did submit monitoring and evaluation plans, which generally contained performance indicators and targets, we found that USAID had not always approved these plans and did not consistently require targets to be set for all indicators, as required. For example, only 2 of 7 active agricultural programs included in our review had set targets for all of their indicators for fiscal year 2009. Figure 2 shows the number of performance indicators with targets by fiscal year for the eight agricultural programs we reviewed that the implementing partner developed and submitted to USAID for approval. In addition to collecting performance data and assessing the data's quality, the ADS also includes the monitoring activities of analyzing and interpreting performance data in order to make program adjustments, inform higher-level decision making, and allocate resources. We found that while USAID collects implementing partner performance data, or information on targets and results, the agency did not fully analyze and interpret this performance data for the eight agricultural programs we reviewed. Some USAID officials in Afghanistan told us that they reviewed the information reported in implementing partners' quarterly reports in efforts to analyze and interpret program performance for the eight programs, although they could not provide any documentation of these efforts. Some USAID officials also said that they did not have time to fully review the reports. In addition, in our 2008 report on road reconstruction in Afghanistan, we reported that USAID officials did not collect data for two completed road projects or for any active road reconstruction projects in a manner to allow them to accurately measure impact. As a result, the extent to which USAID uses performance data is unclear.
USAID is also required to report results to advance organizational learning and demonstrate USAID's contribution to overall U.S. government foreign assistance goals. While USAID did not fully analyze and interpret program data, the Mission did meet semiannually to examine and document strategic issues and determine whether the results of USAID-supported agricultural activities are contributing to progress toward high-level objectives. The Mission also reported aggregate results in the Foreign Assistance Coordination and Tracking System. ADS also requires USAID to undertake at least one evaluation for each of its high-level objectives, to disseminate the findings of evaluations, and to use evaluation findings to further institutional learning, inform current programs, and shape future planning. In May 2007, USAID initiated an evaluation covering three of the eight agricultural programs in our review--ADP-Northeast, ADP-East, and ADP-South. This evaluation was intended to assess the progress toward achieving program objectives and offer recommendations for the coming years. The evaluators found insufficient data to evaluate whether the programs were meeting objectives and targets, and, thus, shifted their methodology to a qualitative review based on interviews and discussions with key individuals. As required, USAID posted the evaluation to its Internet site for dissemination. However, we are uncertain of the extent to which USAID used the 2007 evaluation to adapt current programs and plan future programs. Few staff were able to discuss the evaluation's findings and recommendations, and most noted that they were not present when the evaluation of the three programs was completed and, therefore, were not aware of the extent to which changes were made to the programs. With regard to using lessons learned to plan future programs, USAID officials could not provide examples of how programs were modified as a result of the discussion.
USAID has planned evaluations for seven of the eight agricultural programs included in our review during fiscal year 2010. Madam Chairwoman and members of the subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. To address our objectives, we reviewed past GAO reports and testimonies examining U.S. efforts in Afghanistan, including reviews of USAID's agricultural and road reconstruction projects. We reviewed U.S. government performance management and evaluation, funding, and reporting documents related to USAID programs in Afghanistan. Our reports and testimonies include analysis of documents and other information from USAID and other U.S. agencies, as well as private contractors and other implementing partners working on U.S.-funded programs in Washington, D.C., and Afghanistan. In Afghanistan, we also met with officials from the United Nations and the governments of Afghanistan and the United Kingdom. We traveled to Afghanistan to meet with U.S. and Afghan officials, implementing partners, and aid recipients to discuss several U.S.-funded projects. We analyzed program budget data provided by USAID to report on program funding, as well as changes in USAID's program monitoring officials over time. We analyzed program data provided by USAID and its implementing partners to track performance against targets over time. We took steps to assess the reliability of the budget and performance data and determined they were sufficiently reliable for the purposes of this report. Our work was conducted in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
A more detailed description of our scope and methodologies can be found in the reports cited throughout this statement. For questions regarding this testimony, please contact Charles Michael Johnson Jr., at (202) 512-7331 or [email protected]. Individuals making key contributions to this statement include: Jeffrey Baldwin-Bott, Thomas Costa, Aniruddha Dasgupta, David Hancock, John Hutton, Hynek Kalkus, Farahnaaz Khakoo, Bruce Kutnick, Anne McDonough-Hughes, and Jim Michels. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses oversight of U.S. assistance programs in Afghanistan. Strengthening the Afghan economy through development assistance efforts is critical to the counterinsurgency strategy and a key part of the U.S. Integrated Civilian-Military Campaign Plan for Afghanistan. Since fiscal year 2002, the U.S. Agency for International Development (USAID) has awarded over $11.5 billion in support of development assistance programs in Afghanistan. Since 2003, GAO has issued several reports and testimonies related to U.S. security, governance, and development efforts in Afghanistan. In addition to reviewing program planning and implementation, we have focused on efforts to ensure proper management and oversight of the U.S. investment, which are essential to reducing waste, fraud, and abuse. Over the course of this work, we have identified improvements that were needed, as well as many obstacles that have affected success and should be considered in program management and oversight. While drawing on past work relating to U.S. development efforts in Afghanistan, this testimony focuses on findings in our most recent report released yesterday on USAID's management and oversight of its agricultural programs in Afghanistan. It will address (1) the challenges the United States faces in managing and overseeing development programs in Afghanistan; and (2) the extent to which USAID has followed its established performance management and evaluation procedures. Various factors challenge U.S. efforts to ensure proper management and oversight of U.S. development efforts in Afghanistan. Among the most significant have been the "high-threat" working environment, the difficulties in preserving institutional knowledge due to the lack of a formal mechanism for retaining and sharing information during staff turnover, and the Afghan government ministries' lack of capacity and corruption challenges.
USAID has taken some steps to assess and begin addressing the limited capacity and corruption challenges associated with Afghan ministries. In addition, USAID has established performance management and evaluation procedures for managing and overseeing its assistance programs. These procedures, among other things, require (1) the development of a Mission Performance Management Plan (PMP); (2) the establishment and approval of implementing partner performance indicators and targets; and (3) analyses and use of performance data. Although USAID disseminated alternative monitoring methods for projects in high-threat environments such as Afghanistan, USAID has generally required the same performance management and evaluation procedures in Afghanistan as it does in other countries in which it operates. USAID has not consistently followed its established performance management and evaluation procedures. We identified various areas in which the USAID Mission to Afghanistan (Mission) needed to improve. In particular, we found that the Mission had been operating without an approved PMP to guide its management and oversight efforts after 2008. In addition, while implementing partners have routinely reported on the progress of USAID's programs, we found that USAID did not always approve the performance indicators these partners were using, and that USAID did not ensure, as its procedures require, that its implementing partners establish targets for each performance indicator. For example, only 2 of 7 USAID-funded agricultural programs active during fiscal year 2009, included in our review, had targets for all of their indicators. We also found that USAID could improve its assessment and use of performance data submitted by implementing partners or program evaluations to, among other things, help identify strengths or weaknesses of ongoing or completed programs.
Moreover, USAID needs to improve documentation of its programmatic decisions and put mechanisms in place for program managers to transfer knowledge to their successors. Finally, USAID has not fully addressed the risks of relying on contractor staff to perform inherently governmental tasks, such as awarding and administering grants. In the absence of consistent application of its existing performance management and evaluation procedures, USAID programs are more vulnerable to corruption, waste, fraud, and abuse. We reported in 2009 that USAID's failure to adhere to its existing policies severely limited its ability to require expenditure documentation for Afghanistan-related grants that were associated with findings of alleged criminal actions and mismanaged funds. To enhance the performance management of USAID's development assistance programs in Afghanistan, we have recommended, among other things, that the Administrator of USAID take steps to: (1) ensure programs have performance indicators and targets; (2) fully assess and use program data and evaluations to shape current programs and inform future programs; (3) address preservation of institutional knowledge; and (4) improve guidance for the use and management of USAID contractors. USAID concurred with these recommendations, and identified steps the agency is taking to address them. We will continue to monitor and follow up on the implementation of our recommendations.
As of February 2003, over 27,000 contractors registered in DOD's Central Contractor Registration (CCR) system--almost 14 percent of the registered contractors--owed nearly $3 billion in unpaid federal taxes. In addition, DOD contractors receiving fiscal year 2002 payments from five of the largest DFAS contract and vendor payment systems represented at least $1.7 billion of the nearly $3 billion in unpaid federal taxes shown on IRS records. Data reliability issues with respect to DOD and IRS records prevented us from identifying an exact amount of unpaid federal taxes. Consequently, the total amount of unpaid federal taxes owed by DOD contractors is not known. The type of unpaid taxes owed by these DOD contractors varied and consisted of payroll, corporate income, excise, unemployment, individual income, and other types of taxes. Unpaid payroll taxes include amounts that a business withholds from an employee's wages for federal income taxes, Social Security, Medicare, and the related matching contributions of the employer for Social Security and Medicare. As shown in figure 1, about 42 percent of the total tax amount owed by DOD contractors was for unpaid payroll taxes. Employers are subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee's wages, the employer is deemed to have a responsibility to hold these amounts "in trust" for the federal government until the employer makes a federal tax deposit in that amount. To the extent these withheld amounts are not forwarded to the federal government, the employer is liable for these amounts, as well as the employer's matching Federal Insurance Contribution Act contributions for Social Security and Medicare.
Individuals within the business (e.g., corporate officers) may be held personally liable for the withheld amounts not forwarded and assessed a civil monetary penalty known as a trust fund recovery penalty (TFRP). Failure to remit payroll taxes can also be a criminal felony offense punishable by imprisonment of more than a year, while the failure to properly segregate payroll taxes can be a criminal misdemeanor offense punishable by imprisonment of up to a year. The law imposes no penalties upon an employee for the employer's failure to remit payroll taxes since the employer is responsible for submitting the amounts withheld. The Social Security and Medicare trust funds are subsidized or made whole for unpaid payroll taxes by the general fund, as we discussed in a previous report. Over time, the amount of this subsidy is significant. As of September 1998, the estimated cumulative amount of unpaid taxes and associated interest for which the Social Security and Medicare trust funds were subsidized by the general fund was approximately $38 billion. A substantial amount of the unpaid federal taxes shown in IRS records as owed by DOD contractors had been outstanding for several years. As reflected in figure 2, 78 percent of the nearly $3 billion in unpaid taxes was over a year old as of September 30, 2002, and 52 percent of the unpaid taxes was for tax periods prior to September 30, 1999. Our previous work has shown that as unpaid taxes age, the likelihood of collecting all or a portion of the amount owed decreases. This is due, in part, to the continued accrual of interest and penalties on the outstanding tax debt, which, over time, can dwarf the original tax obligation. Until DOD establishes processes to provide information from all payment systems to TOP, the federal government will continue missing opportunities to collect hundreds of millions of dollars in tax debt owed by DOD contractors. 
Additionally, IRS's current implementation strategy appears to make the levy program one of the last collection tools IRS uses. Changing the IRS collection program to (1) remove the policies that work to unnecessarily exclude cases from entering the levy program and (2) promote the use of the levy program to make it one of the first collection tools could allow IRS--and the government--to reap the advantages of the program earlier in the collection process. We estimate that DOD, which functions as its own disbursing agent, could have offset payments and collected at least $100 million in unpaid taxes in fiscal year 2002 if it and IRS had worked together to effectively levy contractor payments. However, in the 6 years since the passage of the Taxpayer Relief Act of 1997, DOD has collected only about $687,000. DOD collections to date relate to DFAS payment reporting associated with implementation of the TOP process in December 2002 for its contract payment system, which disbursed over $86 billion to DOD contractors in fiscal year 2002. Although it has been more than 7 years since the passage of DCIA, DOD has not fully assisted IRS in using its continuous levy authority for the collection of unpaid taxes by providing Treasury's Financial Management Service (FMS) with all DFAS payment information. IRS's continuous levy authority authorizes the agency to collect federal tax debts of businesses and individuals that receive federal payments by levying up to 15 percent of each payment until the debt is paid. Under TOP, FMS matches a database of debtors (including those with federal tax debt) to certain federal payments (including payments to DOD contractors). When a match occurs, the payment is intercepted, the levied amount is sent to IRS, and the balance of the payment is sent to the debtor. All disbursing agencies are to compare their payment records with the TOP database. 
Since DOD has its own disbursing authority, once DFAS is notified by FMS of the amount to be levied, DOD should deduct this amount from the contractor payment before it is made to the payee and forward the levied amount to the Department of the Treasury as described in figure 3. The TOP database includes federal tax and nontax debt, state tax debt, and child support debt. By fully participating in the TOP process, DOD will also aid in the collection of other debts, such as child support and federal nontax debt (e.g., student loans). At the completion of our work, DOD had no formal plans or schedule to begin providing payment information from any of its 15 vendor payment systems to FMS for comparison with the TOP database. These 15 decentralized payment systems disbursed almost $97 billion to DOD contractors from 22 different payment locations in fiscal year 2002. In response to our draft report, DOD developed a schedule to provide payment information to TOP for all of its additional payment systems by March 2005. As we have previously reported, DOD's business systems environment is stovepiped and not well integrated. DOD recently reported that its current business operations were supported by approximately 2,300 systems in operation or under development, and requested approximately $18 billion in fiscal year 2003 for the operation, maintenance, and modernization of DOD business systems. In addition, DFAS did not have an organizational structure in place to implement the TOP payment reporting process. DOD recently communicated a timetable for implementing TOP reporting for its vendor payment systems with completion targeted for March 2005. IRS's continuing challenges in pursuing and collecting unpaid taxes also hinder the government's ability to take full advantage of the levy program. For example, due to resource constraints, IRS has established policies that either exclude or delay referral of a significant number of cases to the program. 
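The continuous-levy arithmetic described above--up to 15 percent of each contractor payment withheld until the tax debt is paid--can be sketched as follows. This is an illustrative calculation only; the function names, per-payment rounding, and cap against the remaining debt are assumptions for the sketch, not actual FMS or DFAS processing logic.

```python
# Illustrative sketch of the TOP continuous-levy split: up to 15 percent
# of each federal payment is intercepted for IRS, and the balance goes to
# the contractor, until the tax debt is collected. Hypothetical names.

LEVY_RATE = 0.15

def levy_payment(payment, remaining_debt):
    """Split one contractor payment into (amount levied for IRS, amount to payee)."""
    # Levy is capped both at 15 percent of the payment and at the remaining debt.
    levied = min(round(payment * LEVY_RATE, 2), remaining_debt)
    return levied, round(payment - levied, 2)

def apply_continuous_levy(payments, tax_debt):
    """Levy successive payments until the debt is collected; return total collected."""
    collected = 0.0
    for payment in payments:
        levied, _to_payee = levy_payment(payment, round(tax_debt - collected, 2))
        collected = round(collected + levied, 2)
        if collected >= tax_debt:
            break
    return collected

# Example: a $10,000 tax debt against three $30,000 contract payments.
# Each levy is capped at $4,500 (15 percent of $30,000), so collection
# completes on the third payment with a final levy of $1,000.
```

Under these assumptions, the cap against the remaining debt is what makes the final levy smaller than 15 percent, and the 15 percent ceiling is why a large debt can take many payments to collect--which is also why excluding cases from the program for long periods forgoes so much potential collection.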
Also, the IRS review process for taxpayer requests, such as installment agreements or certain offers in compromise, which IRS is legally required to consider, often takes many months, during which time IRS excludes these cases from the levy program. In addition, inaccurate or outdated information in IRS systems prevents cases from entering the levy program. Our audit and investigation of 47 case studies also showed IRS continuing to work with businesses and individuals to achieve voluntary compliance, and taking enforcement actions such as levies of federal contractor payments later in the collection process. We recently recommended that IRS study the feasibility of submitting all eligible unpaid federal tax accounts to FMS on an ongoing basis for matching against federal payment records under the levy program, and use information from any matches to assist IRS in determining the most efficient method of collecting unpaid taxes, including whether to use the levy program. The study was not completed at the time of our audit. In earlier reviews, we estimated IRS could use the levy program to potentially recover hundreds of millions of dollars in tax debt. Although the levy program could provide a highly effective and efficient method of collecting unpaid taxes from contractors that receive federal payments, IRS policies restrict the number of cases that enter the program and the point in the collection process at which they enter. For each of the collection phases listed below, IRS policy either excludes or severely delays putting cases into the levy program.

Phase 1: Notify the taxpayer of unpaid taxes, including a demand for payment letter.

Phase 2: Place the case into the Automated Collection System (ACS) process, which consists primarily of telephone calls to the taxpayer to arrange for payment.

Phase 3: Move the case into a queue of cases awaiting assignment to a field collection revenue officer.
Phase 4: Assign the case to field collections where a revenue officer attempts face-to-face contact and collection. As of September 30, 2002, IRS listed $81 billion of cases in these four phases: 17 percent were in notice status, 17 percent were in ACS, 26 percent were in field collection, and 40 percent were in the queue awaiting assignment to the field. At the same time these four phases take place, sometimes over the course of years, DOD contractors with unpaid taxes continue to receive billions of dollars in contract payments. IRS excludes cases in the notification phase from the levy program to ensure proper notification rules are followed. However, as we previously reported, once proper notification has been completed, IRS continues to delay or exclude from the levy program those accounts placed in the other three phases. IRS policy is to exclude accounts in the ACS phase primarily because officials believed they lack the resources to issue levy notices and respond to the potential increase in telephone calls from taxpayers responding to the notices. Additionally, IRS excludes the majority of cases in the queue phase (awaiting assignment to field collection) from the levy program for 1 year. Only after cases await assignment for over a year does IRS allow them to enter the levy program. Finally, IRS excludes most accounts from the levy program once they are assigned to field collection because revenue officers said that the levy action could interfere with their successfully contacting taxpayers and resolving the unpaid taxes. These policy decisions, which may be justified in some cases, result in IRS excluding millions of cases from potential levy. IRS officials who work on ACS and field collection inventories can manually unblock individual cases they are working on in order to put them in the levy program.
However, by excluding cases in the ACS and field collection phases, IRS records indicate it excluded as much as $34 billion of cases from the levy program as of September 30, 2002. In January 2003, IRS unblocked and made available for levy those accounts identified as receiving federal salary or annuity payments. However, other accounts remain blocked from the levy program. IRS stated that it intended to unblock a portion of the remaining accounts sometime in 2005. Additionally, $32 billion of cases are in the queue, and thus under existing policy would be excluded from the levy program for the first year each case is in that phase. IRS policies, along with its inability to more actively pursue collections, both of which IRS has in the past attributed to resource constraints, combine to prevent many cases from entering the levy program. Since IRS has a statutory limitation on the length of time it can pursue unpaid taxes, generally limited to 10 years from the date of the assessment, these long delays greatly decrease the potential for IRS to collect the unpaid taxes. We identified specific examples of IRS not actively pursuing collection in our review of 47 selected cases involving DOD contractors. In one case, IRS cited resource and workload management considerations. IRS is not currently seeking collection of about $14.9 billion of unpaid taxes citing these considerations-about 5 percent of its overall inventory of unpaid assessments as of September 30, 2002. In another case, IRS cited financial hardship where the taxpayer was unable to pay. This puts collection activities on hold until the taxpayer's adjusted gross income (per subsequent tax return filings) exceeds a certain threshold. Some cases repeatedly entered the queue awaiting assignment to a field collection revenue officer and remained there for long periods. 
In addition to excluding cases for various operational and policy reasons as described above, IRS excludes cases from the levy program for particular taxpayer events such as bankruptcy, litigation, or financial hardship, as well as when taxpayers apply for an installment agreement or an offer in compromise. When one of these events takes place, IRS enters a code in its automated system that excludes the case from entering the levy program. Although these actions are appropriate, IRS may lose opportunities to collect through the levy program if the processing of agreements is not timely or prompt action is not taken to cancel the exclusion when the event, such as a dismissed bankruptcy petition, is concluded. Delays in processing taxpayer documents and errors in taxpayer records are long-standing problems at IRS and can harm both government interests and the taxpayer. Our review of cases involving DOD contractors with unpaid federal taxes indicates that problems persist in the timeliness of processing taxpayer applications and in the accuracy of IRS records. For example, we identified a number of cases in which the processing of DOD contractor applications for an offer in compromise or an installment agreement was delayed for long periods, thus blocking the cases from the levy program and potentially reducing government collections. We also found that inaccurate coding at times prevented both IRS collection action and cases from entering the levy program. For example, if these blocking codes remain in the system for long periods, either because IRS delays processing taxpayer agreements or because IRS fails to input or reverse codes after processing is complete, cases may be needlessly excluded from the levy program. Although the nation's tax system is built upon voluntary compliance, when businesses and individuals fail to pay voluntarily, the government has a number of enforcement tools to compel compliance or elicit payment.
Our review of DOD contractors with unpaid federal taxes indicates that although the levy program could be an effective, reliable collection tool, IRS is not using the program as a primary tool for collecting unpaid taxes from federal contractors. For the cases we audited and investigated, IRS subordinated the use of the levy program in favor of negotiating voluntary tax compliance with the DOD contractors, which often resulted in minimal or no actual collections. We selected for case study 47 businesses and individuals that had unpaid taxes and were receiving DOD contractor payments in fiscal year 2002. For all 47 cases that we audited and investigated, we found abusive or potentially criminal activity related to the federal tax system. Thirty-four of these case studies involved businesses with employees that had unpaid payroll taxes dating as far back as the early 1990s, some for as many as 62 tax periods. However, rather than fulfill their role as "trustees" of this money and forward it to IRS, these DOD contractors diverted the money for other purposes. The other 13 case studies involved individuals that had unpaid income taxes dating as far back as the 1980s. We are referring the 47 cases detailed in our related report to IRS for evaluation and additional collection action or criminal investigation. Our audit and investigation of the 34 case study business contractors showed substantial abuse or potential criminal activity as all had unpaid payroll taxes and all diverted funds for personal or business use. In table 1, and on the following pages, we highlight 13 of these businesses and estimate the amounts that could have been collected through the levy program based on fiscal year 2002 DOD payments. For these 13 cases, the businesses owed unpaid taxes for a range of 6 to 30 quarters (tax periods). Eleven of these cases involved businesses that had unpaid taxes in excess of 10 tax periods, and 5 of these were in excess of 20 tax periods. 
The amount of unpaid taxes associated with these 13 cases ranged from about $150,000 to nearly $10 million; 7 businesses owed in excess of $1 million. Among these 13 cases, IRS filed tax liens on the property and bank accounts of some of the businesses and, in a few cases, collected minor amounts by levying non-DOD federal payments. We also saw 1 case in which the business applied for an offer in compromise, which IRS rejected on the grounds that the business had the financial resources to pay the outstanding taxes in their entirety, and 2 cases in which the businesses entered into, and subsequently defaulted on, installment agreements to pay the outstanding taxes. In 5 of the 13 cases, IRS assessed the owners or business officers with TFRPs, yet no collections were received from these penalty assessments. The following provides illustrative detailed information on several of these cases. Case # 1 - This base support contractor provided services such as trash removal, building cleaning, and security at U.S. military bases. The business had revenues of over $40 million in 1 year, with over 25 percent of this coming from federal agencies. This business's outstanding tax obligations consisted of unpaid payroll taxes. In addition, the contractor defaulted on an IRS installment agreement. IRS assessed a TFRP against the owner. The business reported that it paid the owner a six-figure income and that the owner had borrowed nearly $1 million from the business. The business also made a down payment for the owner's boat and bought several cars and a home outside the country. The owner allegedly has now relocated his cars and boat outside the United States. This contractor went out of business in 2003 after state tax authorities seized its bank account.
The business transferred its employees to a relative's business, which also had unpaid federal taxes, and submitted invoices and received payments from DOD on a previous contract through August 2003. Case # 2 - This engineering research contractor received nearly $400,000 from DOD during 2002. At the time of our audit, the contractor had not remitted its payroll tax withholdings to the federal government since the late 1990s. In 1996, the owner bought a home and furnishings worth approximately $1 million and borrowed nearly $1 million from the business. The owner told our investigators that the payroll tax funds were used for other business purposes. Case # 3 - This aircraft parts manufacturer did not pay payroll withholding and unemployment taxes for 19 of 20 periods through the mid- to late 1990s. IRS assessed a TFRP against several corporate officers, and placed the business in the FPLP in 2000. This business claims that its payroll taxes were not paid because the business had not received DOD contract payments; however, DOD records show that the business received over $300,000 from DOD during 2002. Case # 5 - This janitorial services contractor reported revenues of over $3 million and had received over $700,000 from DOD in a recent year. The tax problems of this business date back to the mid-1990s. At the time of our audit, the business had both unpaid payroll and unemployment taxes of nearly $3 million. In addition, the business did not file its corporate tax returns for 8 years. IRS assessed a TFRP against the principal officer of the business in early 2002. This contractor employed two officers who had been previously assessed TFRPs related to another business. Case # 7 - This furniture business reported gross revenues of over $200,000 and was paid nearly $40,000 by DOD in a recent year. The business had accumulated unpaid federal taxes of over $100,000 at the time of our audit, primarily from unpaid employee payroll taxes. 
The business also did not file tax returns for several years, even after repeated notices from IRS. The owners made an offer to pay IRS a portion of the unpaid taxes through an offer in compromise, but IRS rejected the offer because it concluded that the business and its owners had the resources to pay the entire amount. At the time of our audit, IRS was considering assessing a TFRP against the owners to make them personally liable for the taxes the business owed. The owners used the business to pay their personal expenses, such as their home mortgage, utilities, and credit cards. The owners said they considered these payments a loan from the business. Under this arrangement, the owners were not reporting this company benefit as income so they were not paying income taxes, and the business was reporting inflated expenses. Case # 9 - This family-owned and operated building contractor provided a variety of products and services to DOD, and DOD provided a substantial portion of the contractor's revenues. At the time of our review, the business had unpaid payroll taxes dating back several years. In addition to failing to remit the payroll taxes it withheld from employees, the business had a history of filing tax returns late, sometimes only after repeated IRS contact. Additionally, DOD made an overpayment to the contractor for tens of thousands of dollars. Subsequently, DOD paid the contractor over $2 million without offsetting the earlier overpayment. Case # 10 - This base support services contractor has close to $1 million in unpaid payroll and unemployment taxes dating back to the early 1990s, and the business has paid less than 50 percent of the taxes it owed. IRS assessed a TFRP against one of the corporate officers. This contractor received over $200,000 from DOD during 2002. 
Individuals are responsible for the payment of income taxes, and our audit and investigation of 13 individuals showed significant abuse of the federal tax system similar to what we found with our DOD business case studies. In table 2, and on the following pages, we highlight four of the individual case studies. In all four cases, the individuals had unpaid income taxes. In one of the four cases, the individual operated a business as a sole proprietorship with employees and had unpaid payroll taxes. The individuals owed taxes for four to nine tax periods; for income taxes, each tax period equates to a year. Each individual owed in excess of $100,000 in unpaid income taxes, with one owing in excess of $200,000. In two of the four cases, the individuals had entered into, and subsequently defaulted on, at least one installment agreement to pay off the tax debt. The following provides illustrative detailed information on these four cases. Case # 14 - This individual's business repaired and painted military vehicles. The owner failed to pay personal income taxes and did not send employee payroll tax withholdings to IRS. The owner owed over $500,000 in unpaid federal business and individual taxes. Additionally, the TOP database showed the owner had unpaid child support. IRS levied the owner's bank accounts and placed liens against the owner's real property and business assets. The business received over $100,000 in payments from DOD in a recent year, and the contractor's current DOD contracts are valued at over $60 million. In addition, the business was investigated for paying employee wages in cash. Despite the large tax liability, the owner purchased a home valued at over $1 million and a luxury sports car. Case # 15 - This individual, who is an independent contractor and works as a dentist at a military installation, had a long history of not paying income taxes. The individual did not file several tax returns and did not pay taxes in other periods when a return was filed.
The individual entered into an installment agreement with IRS but defaulted on the agreement. This individual received $78,000 from DOD during a recent year, and DOD recently increased the individual's contract by over $80,000. Case # 16 - This individual is another independent contractor who also works as a dentist on a military installation. DOD paid this individual over $200,000 in recent years, and recently signed a multiyear contract worth over $400,000. At the time of our review, this individual had paid income taxes for only 1 year since the early 1990s and had accumulated unpaid taxes of several hundred thousand dollars. In addition, the individual's prior business practice owes over $100,000 in payroll and unemployment taxes for multiple periods going back to the early 1990s. Case # 17 - DOD paid this individual nearly $90,000 for presenting motivational speeches on management and leadership. This individual has failed to file tax returns since the late 1990s and had unpaid income taxes for a 5-year period from the early to mid-1990s. The total amount of unpaid taxes owed by this individual is not known because of the individual's failure to file income tax returns for a number of years. IRS placed this individual in the levy program in late 2000; however, DOD payments to this individual were not levied because DFAS payment information was not reported to TOP as required. See our related report for details on the other 30 DOD contractor case studies. Federal law does not prohibit a contractor with unpaid federal taxes from receiving contracts from the federal government. Existing mechanisms for doing business only with responsible contractors do not prevent businesses and individuals with unpaid federal taxes from receiving contracts. 
Further, the government has no coordinated process for identifying and determining the businesses and individuals with unpaid taxes that should be prevented from receiving contracts and for conveying that information to contracting officers before awarding contracts. In previous work, we supported the concept of barring delinquent taxpayers from receiving federal contracts, loans and loan guarantees, and insurance. In March 1992, we testified on the difficulties involved in using tax compliance as a prerequisite for awarding federal contracts. In May 2000, we testified in support of H.R. 4181 (106th Congress), which would have amended DCIA to prohibit delinquent federal debtors, including delinquent taxpayers, from being eligible to contract with federal agencies. Safeguards in the bill would have enabled the federal government to procure goods or services it needed from delinquent taxpayers for designated disaster relief or national security. Our testimony also pointed out implementation issues, such as the need to first ensure that IRS systems provide timely and accurate data on the status of taxpayer accounts. However, this legislative proposal was not adopted and there is no existing statutory bar on delinquent taxpayers receiving federal contracts. Federal agencies are required by law to award contracts to responsible sources. This statutory requirement is implemented in the FAR, which requires that government purchases be made from, and government contracts awarded to, responsible contractors only. To effectuate this policy, the government has established a debarment and suspension process and established certain criteria for contracting officers to consider in determining a prospective contractor's responsibility. 
Contractors debarred, suspended, or proposed for debarment are excluded from receiving contracts and agencies are prohibited from soliciting offers from, awarding contracts to, or consenting to subcontracts with these contractors, unless compelling reasons exist. Prior to award, contracting officers are required to check a governmentwide list of parties that have been debarred, suspended, or declared ineligible for government contracts, as well as to review a prospective contractor's certification on debarment, suspension, and other responsibility matters. Among the causes for debarment and suspension is tax evasion. In determining whether a prospective contractor is responsible, contracting officers are also required to determine that the contractor meets several specified standards, including "a satisfactory record of integrity and business ethics." Except for a brief period during 2000 through 2001, contracting officers have not been required to consider compliance with federal tax laws in making responsibility determinations. Neither the current debarment and suspension process nor the requirements for considering contractor responsibility effectively prevent the award of government contracts to businesses and individuals that abuse the tax system. Since most businesses and individuals with unpaid taxes are not charged with tax evasion, and fewer still convicted, these contractors would not necessarily be subject to the debarment and suspension process. None of the contractors described in this report were charged with tax evasion for the abuses of the tax system we identified. A prospective contractor's tax noncompliance, other than tax evasion, is not considered by the federal government before deciding whether to award a contract to a business or individual. Further, no coordinated and independent mechanism exists for contracting officers to obtain accurate information on contractors that abuse the tax system. 
Such information is not obtainable from IRS because of a statutory restriction on disclosure of taxpayer information. As we found in November 2002, unless reported by prospective contractors themselves, contracting officers face significant difficulties obtaining or verifying tax compliance information on prospective contractors. Moreover, even if a contracting officer could obtain tax compliance information on prospective contractors, a determination of a prospective contractor's responsibility under the FAR when a contractor abused the tax system is still subject to a contracting officer's individual judgment. Thus, a business or individual with unpaid taxes could be determined to be responsible depending on the facts and circumstances of the case. Since the responsibility determination is largely committed to the contracting officer's discretion and depends on the contracting situation involved, there is the risk that different determinations could be reached on the basis of the same tax compliance information. On the other hand, if a prospective contractor's tax noncompliance results in mechanical determinations of nonresponsibility, de facto debarment could result. Further, a determination that a prospective contractor is not responsible under the FAR could be challenged. Because individual responsibility determinations can be affected by a number of variables, any implementation of a policy designed to consider tax compliance in the contract award process may be more suitably addressed on a governmentwide basis. The formulation and implementation of such a policy may most appropriately be the role of OMB's Office of Federal Procurement Policy. The Administrator of Federal Procurement Policy provides overall direction for governmentwide procurement policies, regulations, and procedures. 
In this regard, OMB's Office of Federal Procurement Policy is in the best position to develop and pursue policy options for prohibiting federal contract awards to businesses and individuals that abuse the tax system. Thousands of DOD contractors that failed in their responsibility to pay taxes continue to get federal contracts. Allowing these contractors to do substantial business with the federal government while not paying their federal taxes creates an unfair competitive advantage for these businesses and individuals at the expense of the vast majority of DOD contractors that do pay their taxes. DOD's failure to fully comply with DCIA and IRS's continuing challenges in collecting unpaid taxes have contributed to this unacceptable situation, and have resulted in the federal government missing the opportunity to collect hundreds of millions of dollars in unpaid taxes from DOD contractors. Working closely with IRS and Treasury, DOD needs to take immediate action to comply with DCIA and thus assist in effectively implementing IRS's legislative authority to levy contract payments for unpaid federal taxes. Also, IRS needs to better leverage its ability to levy DOD contractor payments, moving quickly to use this important collection tool. Beyond DOD, the federal government needs a coordinated process for dealing with contractors that abuse the federal tax system, including taking actions to prevent these businesses and individuals from receiving federal contracts. Our related report on these issues released today includes nine recommendations to DOD, IRS, and OMB. Our DOD recommendations address the need to comply with the DCIA by supporting IRS efforts under the Taxpayer Relief Act of 1997 to collect unpaid federal taxes. 
Our IRS recommendations address improving the effectiveness of IRS collection activities through earlier use of the Federal Payment Levy Program and changing or eliminating policies that prevent businesses and individuals with federal contracts from entering the levy program. Our OMB recommendation addresses developing and pursuing policy options for prohibiting federal contract awards to businesses and individuals that abuse the federal tax system. In written comments on a draft of our report, DOD and IRS officials partially agreed with our recommendations. OMB officials did not agree with our recommendation to develop policy options for prohibiting federal contract awards to businesses and individuals that abuse the federal tax system. Our report also suggests that Congress consider requiring DOD to periodically report to Congress on progress in providing its payment information to TOP for each of its contract and vendor payment systems, including details of the resulting collections by system and in total for all contract and vendor payment systems during the reporting period. In addition, our report suggests that Congress consider requiring that OMB report to Congress on progress in developing and pursuing options for prohibiting federal government contract awards to businesses and individuals that abuse the federal tax system, including periodic reporting of actions taken. DOD and OMB did not agree with our matters for congressional consideration. We continue to believe all of our recommendations and matters for congressional consideration constitute valid and necessary courses of action, especially in light of the identified weaknesses and the slow progress of DOD to fully implement the offset provisions of the DCIA since its passage more than 7 years ago. Mr. Chairman, Members of the Subcommittee, and Ms. Schakowsky, this concludes our prepared statement. We would be pleased to answer any questions you may have. 
For future contacts regarding this testimony, please contact Gregory D. Kutz at (202) 512-9095 or [email protected], Steven J. Sebastian at (202) 512- 3406 or [email protected], or John J. Ryan at (202) 512-9587 or [email protected]. Individuals making key contributions to this testimony included Tida Barakat, Gary Bianchi, Art Brouk, Ray Bush, William Cordrey, Francine DelVecchio, K. Eric Essig, Kenneth Hill, Jeff Jacobson, Shirley Jones, Jason Kelly, Rich Larsen, Tram Le, Malissa Livingston, Christie Mackie, Julie Matta, Larry Malenich, Dave Shoemaker, Wayne Turowski, Jim Ungvarsky, and Adam Vodraska. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO addressed issues related to three high-risk areas: Department of Defense (DOD) financial management, Internal Revenue Service (IRS) financial management, and IRS collection of unpaid taxes. This testimony provides a perspective on (1) the magnitude of unpaid federal taxes owed by DOD contractors, (2) whether indications exist of abuse or criminal activity by DOD contractors related to the federal tax system, (3) whether DOD and IRS have effective processes and controls in place to use the Treasury Offset Program (TOP) in collecting unpaid federal taxes from DOD contractors, and (4) whether DOD contractors with unpaid taxes are prohibited by law from receiving federal contracts. As detailed in a companion report issued today, DOD and IRS records showed that over 27,000 contractors owed about $3 billion in unpaid taxes as of September 30, 2002. DOD has not fully implemented provisions of the Debt Collection Improvement Act of 1996 that would assist IRS in levying up to 15 percent of each contract payment to offset a DOD contractor's federal tax debt. We estimate that DOD could have collected at least $100 million in fiscal year 2002 had it and IRS fully utilized the levy process authorized by the Taxpayer Relief Act of 1997. As of September 2003, DOD had collected only about $687,000, in part because DOD provides contractor payment information from only 1 of its 16 payment systems to TOP. In response to our draft report, DOD developed a schedule to provide payment information to TOP for all of its additional payment systems by March 2005. Furthermore, we found abusive or potentially criminal activity related to the federal tax system through our audit and investigation of 47 DOD contractor case studies. The 47 contractors provided a variety of goods and services, including building maintenance, catering, dentistry, funeral services, and parts or support for weapons and other sensitive military programs.
The businesses in these case studies owed primarily payroll taxes with some dating back to the early 1990s. These payroll taxes included amounts withheld from employee wages for Social Security, Medicare, and individual income taxes. However, rather than fulfill their role as "trustees" and forward these amounts to IRS, these DOD contractors diverted the money for personal gain or to fund the business. For example, owners of two businesses each borrowed nearly $1 million from their companies and, at about the same time, did not remit millions of dollars in payroll taxes. One owner bought a boat, several cars, and a home outside the United States. The other paid over $1 million for a furnished home. Both contractors received DOD payments during fiscal year 2002, but one went out of business in 2003. The business, however, transferred its employees to a relative's company (also with unpaid taxes) and recently received payments on a previous contract. IRS's continuing challenges in collecting unpaid federal taxes also contributed to the problem. In several case studies, IRS was not pursuing DOD contractors due to resource and workload management constraints. For other cases, control breakdowns resulted in IRS freezing collection activity for reasons that were no longer applicable. Federal law does not prohibit contractors with unpaid federal taxes from receiving federal contracts. OMB is responsible for providing overall direction to governmentwide procurement policies, regulations, and procedures, and is in the best position to develop policy options for prohibiting federal contracts to contractors that abuse the tax system.
The Low-Level Radioactive Waste Policy Act of 1980, as amended in 1985, made states responsible for disposing of commercially generated low-level radioactive waste. Consequently, in 1987 Arizona, California, North Dakota, and South Dakota entered into a compact in which California agreed to develop a disposal facility that would serve the needs of waste generators in the four states. The Congress ratified the compact in 1988. California, the only state since 1980 to have authorized the construction and operation of a disposal facility, is responsible for licensing and regulating its disposal facility. As authorized by the Atomic Energy Act of 1954, as amended, the Atomic Energy Commission (a predecessor to the Nuclear Regulatory Commission [NRC]) relinquished to the state in 1962 a significant portion of the Commission's authority to regulate radioactive materials within the state, including the disposal of low-level radioactive waste. The state incorporated NRC's criteria for siting and regulating low-level waste disposal facilities into the state's regulations. In 1985, California named US Ecology its "license designee" and authorized the company to screen and select a potential site for a disposal facility, to investigate its suitability, and to construct and operate the facility as licensed and regulated by the state. After evaluating potential sites, a 1,000-acre site in Ward Valley in the Mojave Desert was selected. (See fig. 1.) About 70 of the 1,000 acres would be used for the trenches containing the disposed waste. Almost all of the remaining area would constitute a buffer zone. In April 1991 Interior's Bureau of Land Management, which manages the land, and the state jointly issued an environmental impact statement concluding that the proposed facility would not cause significant adverse environmental effects. The statement is required as part of the record for the Secretary of the Interior's land-transfer decision. 
In July 1992, California asked Interior to sell the Ward Valley site to the state under authority granted to the Secretary by the Federal Land Policy and Management Act of 1976 (FLPMA). Among other things, this act authorizes the Secretary to transfer public land by direct sale upon finding that the transfer would serve important public objectives that cannot be achieved elsewhere and that outweigh other public objectives and values served by retaining federal ownership of the land. After making such a finding, the land transfer must be made on terms that the Secretary deems are necessary to ensure proper land use and the protection of the public interest. After considering the environmental impacts of a licensed disposal facility at the site, the outgoing Secretary decided in January 1993 to sell the land as requested. Acting for the state, US Ecology then paid Interior $500,000 for the land. The outgoing Secretary's decision was immediately challenged in federal court on the basis of Interior's alleged noncompliance with FLPMA and the National Environmental Policy Act (NEPA) and alleged failure to protect native desert tortoises under the Endangered Species Act. To settle the lawsuits and to assure himself that the proposed land transfer would comply with applicable federal laws, the incoming Secretary rescinded the earlier land-transfer decision and returned US Ecology's payment. Meanwhile, in September 1993, California issued a license to US Ecology, contingent on transfer of the land to the state, to construct and operate the disposal facility. Legal challenges to the state's licensing action were denied by the state's courts. 
From 1993 until 1996 the Secretary deferred the land-transfer decision while (1) the Bureau completed a first supplement to the April 1991 environmental impact statement, (2) the National Academy of Sciences reviewed seven technical issues related to the Ward Valley site, and (3) Interior negotiated, with the state, the terms of a public hearing on the proposed facility and the land-transfer agreement. The land-transfer negotiations reached an impasse in late 1995 over the issue of Interior's authority to enforce the state's compliance with the Academy's recommendations in court. Then, in February 1996 Interior announced that it would prepare a second supplement to the environmental impact statement and conduct tests that the Academy had recommended. Interior expected these activities to take about a year to complete; however, Interior has not begun preparing the supplement or conducting the tests. When Interior announced in February 1996 that it would prepare the second supplement, it cited the Academy's May 1995 report and new information about the migration of radioactive elements in the soil from the former disposal facility at Beatty as its basis for preparing the supplement. Although Interior also said it would address "nearby Indian sacred sites" in the supplement, it did not identify any such sites or sources of information on this issue. Thereafter, Interior relied on information obtained from the public, including environmental groups, Native Americans, and others, to select 10 more issues to address in the supplement and to expand the issue of sacred Indian sites to include a variety of issues pertaining to Native Americans. In March 1994, the Secretary asked the Academy to study seven radiological safety and environmental concerns about the proposed Ward Valley facility that were raised by three scientists employed by the Geological Survey. 
The scientists were particularly concerned about the potential for (1) water to flow into the trenches containing the waste, (2) radioactive materials to move down through the unsaturated soil to the water table, and (3) a connection between the local groundwater and the Colorado River. In a May 1995 report, a 17-member committee of the Academy concluded that the occurrence of any of these three situations is unlikely. Two committee members, however, disagreed with the majority's conclusion that the movement of radioactive elements to the water table is "highly unlikely." The Academy added that the potential effect on water quality of any contaminants that might reach the Colorado River would be insignificant. Among other things, however, the Academy recommended that additional measurements at the site be made to explain why tritium had apparently been detected about 100 feet beneath the surface of Ward Valley during the site's investigation. The unexpected measurement of tritium at this depth raised questions about how quickly radioactive elements might migrate from the disposal facility to the groundwater. The Academy concluded that inappropriate sampling procedures probably introduced atmospheric tritium into the soil samples. Fifteen committee members concluded that the tritium tests could be done during the facility's construction because the purpose of the tests was to improve baseline information for the long-term monitoring of the site rather than to resolve questions about the site's suitability for a disposal facility. Two members concluded that the tests should be completed in time to use the results in a final decision on the site's suitability as a disposal facility. In 1994 and 1995, the Geological Survey detected tritium and another radioactive element in the soil adjacent to a disposal facility for low-level radioactive waste located at Beatty, Nevada. This facility had operated from 1962 until Nevada closed it at the end of 1992.
US Ecology began operating the facility in 1981. While conducting research next to the Beatty facility, the Survey detected radioactive elements in concentrations well above natural background levels. The Survey attributed this situation to disposal practices at Beatty, such as disposing of liquid radioactive waste, that are now prohibited. The Survey added that it is doubtful that the distribution of the radioactive elements leaking from the site and their movement through the ground over time will ever be understood because of incomplete records of the disposal of liquid radioactive wastes. Therefore, the Survey concluded, extrapolations of the information from Beatty to the proposed Ward Valley facility are too tenuous to have much scientific value because of the uncertainties about how radioactive elements at Beatty are transported and because liquid wastes cannot be buried at Ward Valley. The Survey concluded that the findings of tritium near Beatty do not help explain the measurements of tritium at Ward Valley. Interior relied on the views of the public to add 10 more issues to address in the second supplement and to expand another issue--"nearby Indian sacred sites"--into a broader review of Native American issues. For example, before Interior announced that it would prepare a second supplement, an environmental group--the Committee to Bridge the Gap--had already requested that Interior prepare a supplement addressing the Academy's report, the Beatty facility, and four other issues that Interior eventually selected: (1) the potential pathways of waste to the groundwater and then to the Colorado River; (2) the types, quantities, and sources of waste to be disposed of; (3) the recent financial troubles of US Ecology; and (4) protection of the desert tortoise. 
After Interior's February 1996 announcement that it would prepare a second supplement, the Bureau obtained and summarized public comments and recommended to Interior's Deputy Secretary that 10 issues be addressed in the supplement. Four of the 10 issues were similar to those that the Committee to Bridge the Gap had already raised. Subsequently, the Deputy Secretary approved 13 issues to be addressed in the second supplement. In addition to the Academy's report, the new information on the Beatty facility, and the four other issues that the Committee to Bridge the Gap had recommended, Interior expanded the scope of the Indian sacred sites issue and added (1) the movement of radioactive elements in the soil, (2) alternative methods of disposal, (3) the potential introduction of nonnative plants, (4) waste transportation, (5) the state's long-term obligations, and (6) the public health impacts of operating the disposal facility. Except for the Academy's report and the new information about the Beatty facility, all of the issues that Interior will address in the second supplement had been considered earlier in the state's licensing proceeding; in the state's and the Bureau's joint environmental impact statement; and in the Bureau's first supplement of September 1993. According to the Council on Environmental Quality's regulations for implementing NEPA, however, when a federal agency has already addressed issues in an environmental impact statement, it must prepare a supplement to the statement when significant new circumstances or information relevant to environmental concerns has become available. An agency may also prepare a supplement when it determines that doing so will further the purposes of NEPA. Interior's announcement that it would prepare the second supplement did not state that the Academy's report and the new information on the Beatty facility constituted significant new circumstances or information that would require Interior to prepare a supplement. 
According to Interior, its decision to prepare the statement had been prompted by (1) the state's rejection of Interior's proposed land-transfer conditions and (2) the passage of 5 years since the initial environmental impact statement had been prepared. Other evidence indicates that Interior did not initially consider the Academy's report and the new information on Beatty significant enough to require a supplement. For example, the Secretary's public statement on the Academy's report said that the report "provides a qualified clean bill of health in relation to concerns about the site." According to the Secretary, with appropriate land-transfer conditions based on the recommendations of that report, the Secretary was "now confident that the transfer of the land is in the public interest." Also, when Interior announced that it would prepare the second supplement, it stated that the Survey's new information on the Beatty site indicated "little similarity with Ward Valley" but underscored the need for continued scientific monitoring at both locations. Interior also did not compare the public comments it received with the state's licensing record or the previous environmental statements to provide a basis for identifying "significant" new circumstances or information. According to the Bureau's Sacramento officials who are preparing the second supplement, whether or not there was any "new" information was not important to the Bureau's deliberations about what issues should or should not be addressed in the supplement. For many of the issues, they said, what was "new" was the public's concerns about the issues. The effect of the Ward Valley facility on Native Americans in the region is one example of an issue that had been addressed earlier by the state and the Bureau. 
In part, however, Interior plans to address Native American issues in the second supplement because of two recent Executive orders. One order requires federal agencies to accommodate access to and the ceremonial use of Indian sacred sites and avoid adversely affecting the integrity of such sites. The second order requires federal agencies to make "environmental justice" for low-income and minority populations (including Indian tribes) a part of their missions by identifying and addressing, as appropriate, the relatively high and adverse human health or environmental effects of their activities on these groups. To a significant degree, the state and the Bureau had addressed Native American issues in the site selection process, the state's licensing proceeding, and the 1991 environmental impact statement. The specific consultation steps, according to the 1991 statement, included an archaeological survey of the site with Native American participation. This survey found that no significant cultural resources were present at the site. In addition, US Ecology contacted the Indian tribes in the region to evaluate the potential cultural impacts of a regional nature. A site-specific walkabout by tribal representatives did not identify any unique cultural resources. According to US Ecology's license application, that part of Ward Valley where the proposed disposal site is located had once been disturbed by military tank maneuvers. Also, electric-power transmission lines cross the site, and a pit used to supply rock for highway construction is nearby. As recently as February 1997, the Director of the Bureau's Sacramento office stated in a letter to the Environmental Protection Agency that the affected tribes were fully represented and consulted in the scoping and descriptive phases of the 1991 environmental impact statement.
Interior plans to assess compliance with the two Executive orders in the second supplement by addressing the effects that a disposal facility at Ward Valley could have on Native Americans' religious and cultural values, tourism, agricultural cultivation, and future economic developments, such as hotels and gambling casinos, along the Colorado River. The river is about 20 miles east of the Ward Valley site at its closest point. In commenting on a draft of our report, Interior also said that it will address the environmental justice implications, for low-income and minority populations that may live near where waste is stored, of not transferring the Ward Valley site to the state. The reasons Interior gave for its decision to prepare a second supplement were the impasse over land-transfer conditions and the age of the original environmental impact statement. Two other reasons for the second supplement, however, have shaped Interior's actions on the Ward Valley issue for several years; specifically, Interior believes that it should provide a forum for resolving public concerns and independently determine if the site is suitable for a disposal facility. In contrast, California and US Ecology believe that (1) the state--not Interior--has the authority, implementing criteria, and expertise for determining if the site is suitable and (2) Interior had completed all essential requirements for deciding on the land transfer in January 1993. Consequently, California and US Ecology have sued Interior over, among other things, whether Interior has exceeded its authority with respect to radiological safety issues. The lawsuits are pending. Interior's regulations for transferring federal land under FLPMA do not encourage, require, or prohibit public hearings on proposed transfers. Nevertheless, Interior wanted the state to conduct a formal public hearing on the Ward Valley facility because of the controversy over the facility.
According to Interior, the second supplement and tritium tests will fulfill its responsibility to assure the public that health and safety concerns are adequately addressed. California conducted a public hearing as a part of its licensing procedures for the Ward Valley facility. The applicable state laws and regulations required the state to conduct a hearing in which the public could make brief oral statements and provide written comments. All comments were to be considered by the state and included in the written licensing record. Several individuals and groups unsuccessfully urged the state to conduct a public hearing on the license application using formal, trial-type procedures. However, a state appellate court found that the state had met the requirements of state law and regulations, and an appeal of the court's decision was denied. California issued a license to US Ecology to build and operate a disposal facility for low-level radioactive waste at Ward Valley in accordance with the state's authority under the Atomic Energy Act of 1954 and related state laws and regulations. Interior, however, has not accepted the results of the state's licensing proceeding as an adequate basis for Interior to make a land-transfer decision. For example, in an August 11, 1993, letter to the governor of California, Interior's Secretary asked the state to conduct a formal public hearing as part of a credible process for determining if the site is appropriate so the Secretary can make a land-transfer decision. FLPMA requires the Secretary of the Interior to ensure that federal lands transferred to other parties are properly used and that the public interest is protected. California, on the other hand, is responsible for licensing and regulating the Ward Valley facility according to the state's laws and regulations, which are intended to adequately protect public health and safety. Where the respective responsibilities of Interior and the state overlap, if at all, has been an uncertain matter.
The former Secretary, in his January 1993 decision (subsequently rescinded) to transfer the land, accepted the state's and US Ecology's technical findings supporting the state's licensing decision and accepted that the proposed facility would be licensed by the state according to all applicable federal and state laws and regulations. In contrast, the current Secretary has asserted more overlap between Interior's and the state's respective responsibilities. For example, when the Secretary requested the state to conduct a formal public hearing, he said the hearing should focus on the issue of the migration of radionuclides from the site because that issue directly relates to his ". . . responsibility under federal law regarding the suitability of the site. . . ." Setting aside the issue of authority, Interior has neither the criteria nor the technical expertise to independently assess the suitability of the site from a radiological safety perspective. Moreover, Interior had not sought advice or assistance on the suitability of the site from NRC or, until recently, the Department of Energy (DOE), which have such expertise. Interior has not sought NRC's assistance in addressing issues about the suitability of the Ward Valley site for a disposal facility. In 1993, the Bureau verbally requested NRC's views on the adequacy of California's program for regulating radioactive materials, including the Ward Valley facility. NRC responded that it periodically reviews California's regulatory program to determine, as required by the Atomic Energy Act, if the state's program is compatible with NRC's program for regulating radioactive materials in states that have not agreed to assume this responsibility. 
On the basis of these periodic reviews, NRC said that it had concluded that the state has a highly effective regulatory program for low-level radioactive waste and is capable of conducting an effective and thorough review of US Ecology's license application for the Ward Valley facility. DOE had no role on the Ward Valley facility until February 1996, when Interior decided to perform the tritium tests at the site. Thereafter, DOE and Interior negotiated conditions under which Interior would use facilities at DOE's Lawrence Livermore National Laboratory to conduct one technical part of the tests. Interior officials subsequently told us that DOE's role in the testing has evolved into a partnership with Interior in setting up the test arrangements. The Interior officials also pointed out that federal agencies such as NRC and the Environmental Protection Agency are expected to comment on the second supplement. California and US Ecology do not agree that Interior is authorized to independently determine if the Ward Valley site is suitable for a disposal facility. Their position is that the regulation of radiological safety issues, such as migration of radionuclides, is the state's responsibility because of the state's agreement with NRC under the Atomic Energy Act. Therefore, they argue, radiological safety matters are outside of Interior's authority and expertise. As discussed earlier, the state and US Ecology have sued Interior. They have asked the court to order Interior to complete the sale of the land and declare that Interior had exceeded its authority with respect to protecting the public against radiation hazards. Thus, the courts ultimately will decide the legality of, among other issues raised by the litigation, Interior's position that it must independently determine if the site is suitable for a disposal facility. 
In conclusion, the task of developing new facilities for disposing of commercially generated low-level radioactive waste has proven more difficult than imagined when the Congress gave states this responsibility 17 years ago. Because no state has yet developed a new facility, the actions in California are viewed as an indicator of whether the current national disposal policy can be successful. In the case of Ward Valley, however, Interior has not accepted the state's findings in the area of radiological safety as adequate to permit Interior to decide on the land transfer. Instead, Interior has decided that it must independently determine if the site is suitable for a disposal facility. Whether an independent determination is within Interior's discretion will be decided in the courts. Setting this legal question aside, most of the substantive issues that the public has raised to Interior for its consideration have already been addressed by the state and by the Bureau. Moreover, subsequent new information, such as the Academy's report, generally favors the proposed facility. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Committee may have.
GAO discussed the proposed transfer of federal land in Ward Valley, California to the state for use as a low-level radioactive waste disposal site, focusing on: (1) what sources of information the Department of the Interior relied on in deciding to prepare a second supplemental environmental impact statement and in selecting issues to address in the supplement; (2) whether the selected issues had been considered in earlier state or federal proceedings and, if so, whether they are being reconsidered on the basis of significant new information; and (3) what Interior's underlying reasons were for preparing the supplement. GAO noted that: (1) Interior cited a May 1995 report on the Ward Valley site by the National Academy of Sciences and information developed by its U.S. Geological Survey in 1994 and 1995 about the migration of radioactive elements in the soil from a former disposal facility at Beatty, Nevada, as its basis for preparing the second supplement; (2) it also stated that it would address nearby Indian sacred sites in the supplement; (3) after obtaining and analyzing information from the public, including environmental groups, Native Americans, and others, Interior decided to address 10 more issues in the supplement and to expand the issue of sacred Indian sites to include a variety of issues pertaining to Native Americans; (4) eleven of the 13 issues that Interior is addressing in the second supplement had been considered in California's licensing process and in previous environmental impact statements prepared by the state and Interior's Bureau of Land Management; (5) the other two issues, the findings and recommendations of the Academy and the information on the Beatty facility, are new; (6) the reasons cited by Interior for preparing a second supplement were an impasse with California over land-transfer conditions and the 5 years that had passed since the original environmental impact statement was issued in April 1991; (7) two other reasons, however, 
have shaped Interior's action on the Ward Valley issue over the last several years; (8) specifically, Interior believes that it should provide a forum for resolving public concerns and independently determine if the site is suitable for a disposal facility; (9) it should be noted that California has met all of the state's procedural and substantive requirements for licensing the proposed facility; (10) consequently, the state and US Ecology, the company licensed by the state to construct and operate the disposal facility, have sued Interior to determine, among other things, if Interior exceeded its authority regarding radiological safety matters, such as independently deciding on the site's suitability; and (11) thus, whether or not an independent determination of the site's suitability is within Interior's discretion will be decided in the courts.
The H-2A program was preceded by several other temporary worker programs designed to address farm labor shortages in the United States. During World War I, the Congress authorized the issuance of rules providing for the temporary admission of otherwise inadmissible aliens, and this led to the establishment of a temporary farm labor program designed to replace U.S. workers directly involved in the war effort. Similarly, initially through an agreement with Mexico, a guest worker program was authorized during World War II that brought in over 4 million Mexican workers, called "braceros," from 1942 to 1964 to work on farms on a seasonal basis. Although the Bracero program expanded the farm labor supply, the program also affected domestic farm workers through reduced wages and employment, according to a 2009 Congressional Research Service report. The Bracero program has been criticized by labor groups, which identified issues such as mistreatment of workers and lax enforcement of work contracts. While the Bracero program was still in effect, the Immigration and Nationality Act of 1952 (INA) established the statutory authority for a guest worker program that included workers performing temporary services or labor, known as "H-2" after the specific provision of the law. The Immigration Reform and Control Act of 1986 amended the INA and effectively divided the H-2 program into two programs: the H-2A program expressly for agricultural employers and the H-2B program expressly for nonagricultural employers. The H-2A program was created to help agricultural employers obtain an adequate labor supply while also protecting the jobs, wages, and working conditions of U.S. farm workers. The H-2A law and regulations contain several requirements to protect U.S. workers from adverse effects associated with the hiring of temporary foreign workers and to protect foreign workers from exploitation.
Under the program, employers must provide H-2A workers a minimum level of wages, benefits, and working conditions. For example, employers must pay a prescribed wage rate, provide the workers housing that meets minimum standards for health and safety, pay for workers' travel costs to and from their home country, and guarantee workers will be paid for three-quarters of the work contract even if less work is needed (see table 1 for more information about the conditions of employment that employers are expected to provide workers). In fiscal year 2011, Labor received about 4,900 employer applications requesting permission to hire H-2A workers. State issued about 55,000 H-2A visas in fiscal year 2011, and about 94 percent of these visas were processed by Mexican posts, according to data reported by State. Employers requested H-2A workers to help support the production of various commodities, such as fruit, vegetables, tobacco, and grain. While many of these employers requested help with general farm work, others sought workers with special skills, such as sheepherders or combine operators. Employers in some states rely more heavily on H-2A workers to meet their labor needs. In fiscal year 2011, Labor reported that over half of the H-2A positions it certified were located in five southeastern states--North Carolina, Florida, Georgia, Louisiana, and Kentucky. Although California is the largest producer of agricultural products in the country, the state is ranked thirteenth in its employment of H-2A workers, according to a recent Labor report. H-2A workers are expected to work temporarily and must leave the country once the temporary work contract is complete, but may return in future years to meet employers' seasonal needs under specific circumstances. In fiscal year 2011, about 27 percent of H-2A employers requested H-2A workers for 6 months or less and about 73 percent of employers requested workers for 7 to 12 months.
H-2A workers represent a small proportion of the approximately 1 million hired agricultural workers that the U.S. Department of Agriculture estimates are in the United States, many of whom are not legally authorized to work in the country (referred to as undocumented workers). Research suggests that about half of all U.S. agricultural workers are undocumented. An employer may inadvertently hire undocumented workers if the workers give the employer fraudulent documents. Employers may also choose to violate the law and knowingly hire undocumented workers rather than employing U.S. workers or participating in the H-2A program and meeting its associated requirements. However, employers knowingly hiring undocumented workers rather than using the legal H-2A process risk penalties or workforce disruption through DHS's enforcement of immigration law or from state actions that may affect the availability of undocumented workers. To request H-2A workers, employers apply consecutively to their state workforce agency, Labor, and DHS; and prospective workers apply to State for H-2A visas. Under the law and Labor's H-2A regulations, state workforce agencies, Labor, and employers are subject to specific deadlines for processing H-2A applications (see fig. 1). DHS and State, however, are not subject to processing deadlines under relevant statutes and regulations, according to agency officials. In fiscal year 2011, most employers' applications for H-2A workers were approved, but some employers experienced delays in having their applications for H-2A workers processed.
Labor approved 94 percent of the H-2A applications for foreign agricultural workers and processed 63 percent of approved applications by the statutory deadline of at least 30 days prior to the date workers were to begin work. Labor did not process 37 percent of applications by the deadline, including 7 percent of applications approved less than 15 days before workers were needed, leaving little time for employers to petition DHS and for workers to obtain visas from State. According to Labor officials, employers' failure to provide required documentation, such as an approved housing inspection, contributes to processing delays. DHS approved 98 percent of the employer petitions for H-2A workers in fiscal year 2011, and about 72 percent of these petitions were processed within 7 days. However, 28 percent took longer, and DHS took a month or longer to process 6 percent of the petitions (see fig. 2). An official at DHS told us that employers have up to 84 days plus the applicable mailing time to provide additional documentation requested by the agency, which can significantly affect how long it takes the agency to process a petition. To process applications more efficiently and provide better customer service, Labor and DHS have taken steps to create new electronic applications that will allow employers to file for H-2A workers online, but development and implementation of both applications has been delayed (see table 2). Federal law provides that federal agencies are to be customer service-focused, and executive orders provide that federal agencies use technology to improve the customer experience. Accordingly, in fiscal year 2009, Labor implemented a web-based system for two of its other labor certification programs that allows employers to file applications online and for the agency to process them electronically. Labor is currently in the process of developing an online H-2A application to add to its existing web-based filing system, but it has been delayed.
Specifically, in October 2010, Labor began designing an online H-2A application for employers that it planned to deploy in August 2011. However, Labor officials told us the online application was delayed because the agency could not award the contract to develop it while operating under a provisional budget based on a continuing resolution. Since then, Labor completed the design of the online H-2A application and in June 2012 awarded the contract to develop, test, and implement it. Labor officials told us they anticipate the online H-2A application will be available for use by employers by the end of 2012 and, according to the development contract, the online application should be available to employers on November 15, 2012. According to Labor officials, the online application will allow employers to create account profiles and check the status of their H-2A applications. In addition, Labor officials said the online H-2A application would also result in faster application processing, reduced costs, better customer service, and improved data quality. DHS also plans to implement an online petition for H-2A workers, but the agency has experienced delays and is in the process of developing a schedule for completing this work. The agency planned to deploy an online H-2A petition in October 2012 as part of its Transformation Program, which aims to replace the paper-based systems currently used to process petitions with an electronic system. However, the Transformation Program itself has been delayed several times since its inception in 2005, as we have previously reported, and officials told us they have not started work on the online H-2A petition and do not know when it will be completed. In prior work on the Transformation Program, we found DHS was managing the program without specific acquisition management controls, such as reliable schedules, which contributed to missed milestones. 
DHS officials said they were addressing this report's recommendations and are in the process of developing an integrated master schedule for all Transformation activities, including the online H-2A petition, in accordance with GAO best practices outlined in the report. Once the online petition for H-2A workers is available, employers will be able to file all required documents electronically to petition for H-2A workers, create account profiles, and check the status of their applications. In addition, the agency could streamline benefits processing by eliminating redundant data entry and reducing the number of required forms. Recently and over the course of our review, in addition to taking steps to modernize the H-2A application process, federal agencies have taken a number of other steps to improve employers' experience with the application process. Specifically, Labor made changes to its review process to informally resolve issues with employers and reduce unnecessary delays and appeals. Labor officials told us that, in 2011, they piloted using e-mail to communicate with employers in 10 states about their H-2A applications. In March 2012, Labor began using e-mail to communicate with employers in all states about their applications. Labor also changed its procedures so that it can make corrections to minor errors on an employer's H-2A application--such as adding a missing phone number--after obtaining the employer's permission via e-mail to correct the error. In February 2011, Labor instituted a policy that gives employers up to 5 additional days to submit required documentation on their H-2A applications rather than automatically denying them because all of the required documentation was not submitted by the deadline. In addition to the changes outlined above, since implementing its new regulations in March 2010, Labor provided employers with more guidance about the requirements of the H-2A program in a variety of formats (see table 3).
Labor officials said these efforts resulted in improved timeliness and fewer appeals in recent months. Our analysis of Labor's data showed that the agency's timeliness remained relatively unchanged, although the percentage of applications for which deficiency notices were issued and the number of appealed decisions declined substantially over that period. For the first half of fiscal year 2012, Labor processed 61 percent of certified applications at least 30 days prior to the employer's date of need and issued deficiency notices for 38 percent of employer applications. Sixty employer appeals were filed during the first half of fiscal year 2012. Several employers we interviewed reported that they did not understand the H-2A program requirements because Labor's decisions seemed inconsistent. A number of the inconsistencies employers cited concerned job order terms and conditions, the acceptability of which varies by state. Labor officials told us they strive for consistency and have many checks in place to ensure consistent decisions. Specifically, they said analysts in Labor's processing center follow detailed standard operating procedures and the center has multiple quality assurance methods to ensure consistency, including supervisory review, peer review, and a quarterly quality assurance process. In addition, according to Labor officials, processing center analysts are given an overview of the H-2A program, study the regulations and standard operating procedures, and shadow a more seasoned employee before receiving their own cases to adjudicate. There are also periodic training classes that address adjudication issues that have arisen during the last calendar year. Our internal control standards state that agency managers should identify the knowledge and skills needed for various jobs and provide necessary training, among other things. GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1999).
State workforce agencies are directed to apply a prevailing practice standard to determine whether the frequency with which an employer intends to pay H-2A workers is acceptable, while states can use a more subjective normal and common practice standard to determine whether job qualifications, such as how much experience is required, are acceptable (see table 4). In 1988, Labor provided states with an H-2A Program Handbook that included guidance on how to make these decisions and encouraged states to administer formal surveys to determine acceptable practices. If the state workforce agency cannot use a formal survey, Labor's guidance suggests states make these determinations using other information sources, such as staff knowledge and experience, informal surveys, reviews of job orders used by non-H-2A employers, or consultation with experts in agriculture or farm worker advocates. In 2011, Labor began posting results from states' prevailing practice surveys online to help employers write job orders that are consistent with prevailing, normal, and accepted practices. Labor's guidance to states for determining acceptable practices, however, is broad and not prescriptive, leading states to apply varied methods, some of which may be insufficient. For example, the Administrative Law Judge who ruled on the Massachusetts apple and vegetable growers' appeal of Labor's initial decision to prohibit experience requirements did not consider the Massachusetts state workforce agency's prevailing practice survey in his ruling because of its design flaws. Further, two employer representatives told us they considered state prevailing practice surveys to be unreliable and inconsistent in their coverage.
In addition, officials in the three states we visited said they did not include questions about certain terms and conditions in formal surveys and used different methods to determine whether a particular practice was acceptable: two states reviewed job orders filed by non-H-2A employers; the other state informally surveyed non-H-2A employers in-person. One employer representative expressed frustration that neighboring states used different methods to determine acceptable practices for the same crop and that the results differed. DHS also has taken several steps to improve employers' experience with the H-2A application process. Specifically, the agency took steps to expedite petitions for H-2A workers and provide more guidance to employers. In October 2007, DHS directed its employees to expedite the handling and adjudication of H-2A petitions. According to our analysis, the agency's processing times have improved in recent years. From fiscal year 2006 to fiscal year 2011, the percentage of petitions approved within 1 week increased from about 34 percent to 72 percent. At the same time, the percentage of petitions that took 1 month or longer to approve declined from about 11 percent to about 6 percent. In July 2010 and June 2011, DHS invited employers to participate in teleconferences to discuss employers' difficulties with some of its new systems and procedures. In addition, the agency posted summaries of the teleconferences and answers to employers' frequently asked questions on its Web site. State has addressed employer concerns with the H-2A visa application process by hosting face-to-face meetings with employers and other key stakeholders, making improvements to its worker processing procedures, and taking steps to increase the capacity of its Monterrey consulate to process H-2A visas. In 2012, State officials said they reached out to Labor to discuss H-2A related issues. 
They also said the two agencies are formalizing working groups in part to improve information sharing. State also meets with employers and other stakeholders at annual meetings that bring together representatives from Labor, DHS, and State. Officials from Labor and DHS, and State's contractor attended the most recent of these meetings, held in Texas in January 2012. A representative of an employer association who attended this meeting told us it was helpful to have representatives from all three agencies there to answer questions. After hearing at the January 2012 meeting that some employers had difficulties with getting their Mexican H-2A workers processed by their date of need, State directed employers with approaching dates of need to request emergency appointments for their workers to be processed at posts other than the Monterrey consulate. State officials noted that all Mexican posts have the capacity to process H-2A visa applications and suggested that applicants can visit other posts if it is difficult to get appointments at the Monterrey consulate. State also developed new procedures to better enable its posts to handle large groups of workers. In addition, State is expanding its Monterrey consulate, which currently handles most of the H-2A visas processed. Officials said the new facility is scheduled to open in 2014, although they were uncertain whether future staffing levels at the facility would increase. The H-2A program is a means through which agricultural employers can legally hire temporary foreign workers when there is a shortage of U.S. workers. The H-2A application process consists of a series of sequential steps conducted by varied agencies, none of which bears responsibility for monitoring or assessing the performance of the process as a whole. Negotiating this largely paper-based process can be time consuming, complex, and challenging for employers.
The associated difficulties can impose a burden on H-2A employers that is not borne by employers who break the law and hire undocumented workers. Although Labor and DHS have taken some steps to incorporate new technologies, delays in the development of electronic application filing systems continue burdening employers with paperwork and may be consuming more resources from federal agencies than necessary. In addition, the absence of systems to collect data on the reasons for processing delays makes it difficult for these agencies to identify why employer applications are initially rejected, to target their efforts to address the most important issues that challenge employers, and to improve performance. Meanwhile, employers who require workers at different points of the season must bear the additional costs of submitting paperwork to multiple agencies for each set of workers. In addition, employers continue to express confusion about how state workforce agencies and Labor are applying Labor's new regulations. Without additional clarification and transparency, employers may continue to submit unacceptable paperwork that requires extra resources from all parties to process. As immigration rules are tightened and the economy improves for U.S. workers, more employers may need to use the H-2A program to obtain foreign workers. This potential influx of new users could exacerbate existing problems if changes are not made to improve the application process. To improve the timeliness of application processing, as part of creating new online applications, we recommend that the Secretaries of Labor and Homeland Security: develop a method of automatically collecting data on the reasons for deficiency notices, requests for additional evidence, and denials, and use this information to develop strategies to improve the timeliness of H-2A application processing. 
Such information could help the agencies determine whether, for example, employers may need more guidance or staff may need more training. To reduce the burden on agricultural employers and improve customer service, we recommend that the Secretary of Labor: permit the use of a single application with staggered dates of need for employers who need workers to arrive at different points of a harvest season. Employers could still be required to submit evidence of their recruitment efforts, but would not be required to resubmit a full application for each set of workers needed during the season. To promote consistency and transparency of decisions made about the acceptability of employer applications and clarify program rules, we recommend that the Secretary of Labor: review and revise, as appropriate, guidance provided to state workforce agencies on methods to determine the acceptability of employment practices. This guidance should be made available to employers and published on Labor's Web site. We provided a draft of this report to Labor, DHS, and State for review and comment. State had no comments. Labor and DHS provided written comments which are reproduced in appendices I and II. Labor and DHS also provided technical comments, which we incorporated as appropriate. DHS concurred with our recommendation that the agency develop a method to automatically collect additional data through its forthcoming electronic application system to improve the timeliness of application processing. Similarly, Labor agreed with our recommendation that the agency develop a method of automatically collecting data on the reasons for deficiency notices and use this information to develop strategies to improve the timeliness of H-2A application processing and noted that it would explore the resources required to collect such information as part of its online application system. 
Labor also agreed with our recommendation that it update the guidance it provides to state workforce agencies on methods to determine the acceptability of employment practices. Labor did not agree with our recommendation that it allow employers to file a single application per season for workers arriving on different start dates, stating that the department's regulations define the date of need as the first date the employer requires the services of all H-2A workers that are the subject of the application, not an indication of the first date of need for only some of the workers. Labor stated that having each employer file a single application with staggered dates of need would result in one recruitment for job opportunities that could begin many weeks or months after the original date of need, which could nullify the validity of the required labor market test. We are not recommending that employers conduct a single labor market test corresponding with their earliest date of need. Employers should still be required to submit evidence of their recruitment efforts for every start date listed on each application, but we believe they should not be required to resubmit a full application package for each set of workers needed during a season. Labor also expressed concern that our report points to the experiences of some employers or those of a single employer to support our conclusions. As noted earlier in this report, information obtained from our interviews cannot be generalized to all states or all agricultural employers. In addition, the illustrations used in this report highlight challenges expressed by numerous employers with whom we spoke, even when we used one employer's experience as an example. Further, as we noted previously, agency data are not available to document the extent of some employer challenges, such as whether workers arrive by the date they are needed by employers. 
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Homeland Security, Labor, State, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the individual named above, Betty Ward-Zukerman, Assistant Director; Hedieh Rahmanou Fusfield, Jeffrey G. Miller, and Cathy Roark made key contributions to this report. Also contributing were Hiwotte Amare, James Bennett, Kathy Leslie, Jonathan McMurray, Jean McSween, Kathleen van Gelder, and Craig Winslow.
The H-2A visa program allows U.S. employers anticipating a shortage of domestic agricultural workers to hire foreign workers on a temporary basis. State workforce agencies and three federal agencies--the Departments of Labor, Homeland Security, and State--review applications for such workers. GAO was asked to examine (1) any aspects of the application process that present challenges to agricultural employers, and (2) how federal agencies have addressed any employer challenges with the application process. GAO analyzed Labor and DHS data; interviewed agency officials and employer representatives; and conducted site visits in New York, North Carolina, and Washington. Over 90 percent of employer applications for H-2A workers were approved in fiscal year (FY) 2011, but some employers experienced processing delays. For example, the Department of Labor (Labor) processed 63 percent of applications in a timely manner in FY 2011, but 37 percent were processed after the deadline, including 7 percent that were approved less than 15 days before workers were needed. This left some employers little time for the second phase of the application process, which is managed by the Department of Homeland Security (DHS), and for workers to obtain visas from the Department of State (State). Although workers can apply for visas online, most of the H-2A process involves paper handling, which contributes to processing delays. In addition, employers who need workers at different times of the season must repeat the entire process for each group of workers. Although the agencies lack data on the reasons for processing delays, employers reported delays due to increased scrutiny by Labor and DHS when these agencies implemented new rules and procedures intended to improve program integrity and protect workers.
For example, in FY 2011, Labor notified 63 percent of employers that their applications required changes or additional documentation to comply with its new rules, up sharply from previous years. Federal agencies are taking steps to improve the H-2A application process. Labor and DHS are developing new electronic application systems, but both agencies' systems have been delayed. Labor also recently began using e-mail to resolve issues with employers, and all three agencies provided more information to employers to clarify program requirements. Even with these efforts, some employers view Labor's decisions as inconsistent. For example, some employers received different decisions about issues such as whether they can require workers to have experience in farm work and questioned the methods states used to decide whether the job qualifications in their applications were acceptable. We found states used different methods to determine acceptable qualifications, which is allowed under Labor's guidance. GAO recommends that (1) Labor and DHS use their new electronic application systems to collect data on reasons applications are delayed and use this information to improve the timeliness of application processing; (2) Labor allow employers to submit one application for groups of similar workers needed in a single season; and (3) Labor review and revise, as appropriate, its guidance to states regarding methods for determining the acceptability of employment practices in employers' applications. DHS and Labor agreed with the recommendation to collect additional data and Labor agreed with the recommendation to update its guidance. Labor disagreed with the recommendation it allow employers to apply once per season. GAO believes the recommendation is still valid and that a single application does not preclude timely testing of the labor market as workers are needed.
An improper payment is any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. This definition includes any payment to an ineligible recipient, any payment for an ineligible good or service, any duplicate payment, any payment for a good or service not received (except where authorized by law), and any payment that does not account for credit for applicable discounts. Improper Payments Elimination and Recovery Act of 2010, Pub. L. No. 111-204, § 2(e), 124 Stat. 2224, 2227 (2010) (codified at 31 U.S.C. § 3321 note). Office of Management and Budget guidance also instructs agencies to report as improper payments any payments for which insufficient or no documentation was found. The claim review contractors generally focus on the claims that pose the greatest financial risk to Medicare (see table 1). However, the contractors have varying roles and levels of CMS direction and oversight in identifying claims for review. MACs process and pay claims and conduct prepayment and postpayment reviews for their established geographic regions. As of January 2016, 12 MACs--referred to as A/B MACs--processed and reviewed Medicare Part A and Part B claims, and 4 MACs--referred to as DME MACs--processed and reviewed DME claims. MACs are responsible for identifying both high-risk providers and services for claim reviews, and CMS has generally given the MACs broad discretion to identify claims for review. Each individual MAC is responsible for developing a claim review strategy to target high-risk claims. In their role of processing and paying claims, the MACs also take action based on claim review findings. The MACs deny payment on claims when they or other contractors identify payment errors during prepayment claim reviews.
When MACs or other claim review contractors identify overpayments using postpayment reviews, the MACs seek to recover the overpayment by sending providers what is referred to as a demand letter. In the event of underpayments, the MACs return the balance to the provider in a future reimbursement. For additional information on the MAC roles and responsibilities, see GAO, Medicare Administrative Contractors: CMS Should Consider Whether Alternative Approaches Could Enhance Contractor Performance, GAO-15-372 (Washington, D.C.: Apr. 2015). Congress established per beneficiary Medicare limits for therapy services, which took effect in 1999. However, Congress imposed temporary moratoria on the limits several times until 2006, when it required CMS to implement an exceptions process in which exceptions to the limits are allowed for reasonable and necessary therapy services. Starting in 2012, the exceptions process has applied a claim review requirement on claims after a beneficiary's annual incurred expenses reach certain thresholds. For additional information on the therapy service limits, see GAO, Medicare Outpatient Therapy: Implementation of the 2012 Manual Medical Review Process, GAO-13-613 (Washington, D.C.: July 2013). As required by law, the RAs are paid on a contingent basis from recovered overpayments. The contingency fees generally range from 9.0 percent to 17.5 percent, and vary by RA region, the type of service reviewed, and the way in which the provider remits the overpayment. Because the RAs are paid from recovered funds rather than appropriated funds, the use of RAs expands CMS's capacity for claim reviews without placing additional demands on the agency's budget. The RAs are allowed to target high-dollar claims that they believe have a high risk of improper payments, though they are not allowed to identify claims for review solely because they are high-dollar claims.
The RAs are also subject to limits that only allow them to review a certain percentage or number of a given provider's claims. The RAs initially identified high rates of error for short inpatient hospital stays and targeted those claims for review. Certain hospital services, particularly services that require short hospital stays, can be provided in both an inpatient and outpatient setting, though inpatient services generally have higher Medicare reimbursement amounts. The RAs found that many inpatient services should have been provided on an outpatient basis and denied many claims for having been rendered in a medically unnecessary setting. Medicare has a process that allows for the appeal of claim denials, and hospitals appealed many of the short inpatient stay claims denied by RAs. Hospital appeals of RA claim denials helped contribute to a significant backlog in the Medicare appeals system. CMS subsequently obtained waiver authority aimed at determining whether RA prepayment reviews could prevent fraud and the resulting improper payments and, in turn, lower the FFS improper payment rate. From 2012 through 2014, operating under this waiver authority, CMS conducted the RA Prepayment Review Demonstration in 11 states. In these states, CMS directed the RAs to conduct prepayment claim reviews for specific inpatient hospital services. Additionally, the RAs conducted prepayment reviews of therapy claims that exceeded the annual per beneficiary limit in the 11 demonstration states. Under the demonstration, instead of being paid a contingency fee based on recovered overpayments, the RAs were paid contingency fees based on claim denial amounts. In anticipation of awarding new RA contracts, CMS began limiting the number of RA claim reviews and discontinued the RA Prepayment Review Demonstration in 2014. CMS required the RAs to stop sending requests for medical documentation to providers in February 2014, so that the RAs could complete all outstanding claim reviews by the end of their contracts.
However, in June 2015, CMS cancelled the procurement for the next round of RA contracts, which had been delayed because of bid protests. Instead, CMS modified the existing RA contracts to allow the RAs to continue claim review activities through July 31, 2016. In November 2015, CMS issued new requests for proposals for the next round of RA contracts and, according to CMS officials, plans to award them in 2016. The SMRC conducts nationwide postpayment claim reviews as part of CMS-directed studies aimed at lowering improper payment rates. The SMRC studies often focus on issues related to specific services at high risk for improper payments, and provide CMS with information on the prevalence of the issues and recommendations on how to address them. Although CMS directs the types of services and improper payment issues that the SMRC examines, the SMRC identifies the specific claims that are reviewed as part of the studies. CMS's CERT program annually estimates the amount and rate of improper payments in the Medicare FFS program, and CMS uses the CERT results, in part, to direct and oversee the work of claim review contractors, including the MACs, RAs, and SMRC. CMS's CERT program develops its estimates by using a contractor to conduct postpayment claim reviews on a statistically valid random sample of claims. The CERT program develops the estimates as part of CMS's efforts to comply with the Improper Payments Information Act, which requires agencies to annually identify programs susceptible to significant improper payments, estimate amounts improperly paid, and report these estimates and actions taken to reduce them. In addition, the CERT program estimates improper payment rates specific to Medicare service and provider types and identifies services that may be particularly at risk for improper payments. See Improper Payments Information Act of 2002 (IPIA), Pub. L. No. 107-300, 116 Stat. 2350 (2002) (codified, as amended, at 31 U.S.C. § 3321 note).
The IPIA was subsequently amended by the Improper Payments Elimination and Recovery Act of 2010, Pub. L. No. 111-204, 124 Stat. 2224 (2010), and the Improper Payments Elimination and Recovery Improvement Act of 2012, Pub. L. No. 112-248, 126 Stat. 2390 (2013). We have also reported that prepayment controls are generally more cost-effective than postpayment controls and help avoid costs associated with the "pay and chase" process. See GAO, A Framework for Managing Fraud Risks in Federal Programs, GAO-15-593SP (Washington, D.C.: July 28, 2015). CMS is not always able to collect overpayments identified through postpayment reviews. A 2013 HHS OIG study found that each year over the period from fiscal year 2007 to fiscal year 2010, approximately 6 to 9 percent of all overpayments identified by claim review contractors were deemed not collectible. Postpayment reviews require more administrative resources compared to prepayment reviews. Once overpayments are identified on a postpayment basis, CMS requires contractors to take timely efforts to collect the overpayments. HHS OIG reported that the process for recovering overpayments can involve creating and managing accounts receivables for the overpayments, tracking provider invoices and payments, and managing extended repayment plans for certain providers. In contrast, contractors do not need to take these steps, and expend the associated resources, for prepayment reviews, which deny claims before overpayments are made. Key stakeholders we interviewed identified few significant differences in conducting and responding to prepayment and postpayment reviews. Specifically, CMS, MAC, and RA officials stated that prepayment and postpayment review activities are generally conducted by claim review contractors in similar ways.
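The CERT approach described above — reviewing a statistically valid random sample of claims and projecting the results program-wide — can be sketched as a small simulation. All figures here (claim amounts, the 10 percent error probability, the population and sample sizes) are invented for illustration; this is not CERT's actual methodology or data.

```python
import random

random.seed(42)

# Hypothetical claim universe: (payment amount, was the payment improper?).
# All figures are invented; this is not CERT data.
population = [(random.uniform(50, 5000), random.random() < 0.10)
              for _ in range(100_000)]

# Draw a simple random sample of claims for postpayment review.
sample = random.sample(population, 2_000)

sampled_dollars = sum(amount for amount, _ in sample)
improper_dollars = sum(amount for amount, improper in sample if improper)

# Dollar-weighted improper payment rate, as estimated from the sample.
rate = improper_dollars / sampled_dollars

# Project the sampled improper dollars to the full universe of claims.
projection = improper_dollars * (len(population) / len(sample))

print(f"estimated improper payment rate: {rate:.1%}")
print(f"projected improper payments: ${projection:,.0f}")
```

A production estimator would also stratify the sample by service and provider type (as the CERT program does when reporting service-specific rates) and attach confidence intervals to the projection.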
Officials we interviewed from health care provider organizations told us that providers generally respond to prepayment and postpayment reviews similarly, as both types of review occur after a service has been rendered, and involve similar medical documentation requirements and appeal rights. These statistics are based on CMS summary financial data, and the currently not collectible classification for overpayments can vary based on when overpayments are identified and demanded, and if overpayments are under appeal. See Department of Health and Human Services, Office of Inspector General, Medicare's Currently Not Collectible Overpayments, OEI-03-11-00670 (Washington, D.C.: June 2013). First, providers may hold discussions with the RAs regarding postpayment review findings, and CMS recently implemented the option for SMRC findings as well. The discussions offer providers the opportunity to give additional information before payment determinations are made and before providers potentially enter the Medicare claims appeals process. Several of the provider organizations we interviewed found the RA discussions helpful, stating that some providers have been able to get RA overpayment determinations reversed. Such discussions are not available for RA prepayment claim reviews or for MAC reviews. CMS officials stated that the discussions are not feasible for prepayment claim reviews due to timing difficulties, as the MACs and RAs are required to make payment determinations within 30 days after receiving providers' medical records. Second, providers stated that they may face certain cash flow burdens with prepayment claim reviews that they do not face with postpayment reviews due to how the claims are treated in the Medicare appeals process. When appealing postpayment review overpayment determinations, providers keep their Medicare payment through the first two levels of appeal before CMS recovers the identified overpayment.
If the overpayment determinations are overturned at a higher appeal level, CMS must pay back the recovered amount with interest accrued for the period in which the amount was recouped. In contrast, providers do not receive payment for claims denied on a prepayment basis and, if prepayment denials are overturned on appeal, providers do not receive interest on the payments for the duration the payments were held by CMS. The Medicare FFS appeals process consists of five levels of review that include CMS contractors, staff divisions within HHS, and ultimately, the federal judicial system, allowing appellants who are dissatisfied with the decision at one level to appeal to the next level. Each MAC also submits to CMS an improper payment reduction strategy that includes the types of claims deemed most critical for that MAC to address and a description of plans to address them. In 2013 and 2014, the MACs conducted approximately 76,000 postpayment claim reviews, though some MACs did not conduct any postpayment claim reviews. Prior to the establishment of the national RA program, the MACs conducted a greater proportion of postpayment reviews. However, the MACs have shifted nearly all of their focus to conducting prepayment reviews, as responsibility for conducting postpayment reviews has generally shifted to the RAs. According to CMS officials, the MACs currently use postpayment reviews to analyze billing patterns to inform other review activities, including future prepayment reviews, and to help determine where to conduct educational outreach for specific providers. CMS has also encouraged the MACs to use postpayment reviews to perform extrapolation, a process in which the MACs estimate an overpayment amount for a large number of claims based on a sample of claim reviews. According to CMS officials, extrapolation is not used often but is an effective strategy for providers that submit large volumes of low-dollar claims with high improper payment rates.
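The extrapolation process described above can be sketched simply: estimate the average overpayment from a random sample of reviewed claims, then project it across the provider's full claim universe. The figures below are hypothetical, and actual CMS extrapolation (governed by the Medicare Program Integrity Manual) applies more conservative statistical methods, such as lower confidence bounds, that are omitted from this minimal illustration.

```python
import statistics

def extrapolate_overpayment(sample_overpayments, total_claims):
    """Project a total overpayment from a sample of claim reviews.

    sample_overpayments: overpayment found on each sampled claim, in dollars.
    total_claims: number of claims in the provider's full claim universe.
    """
    mean_overpayment = statistics.mean(sample_overpayments)
    return mean_overpayment * total_claims

# Hypothetical example: 100 low-dollar claims sampled from a universe of 5,000,
# averaging $19 in overpayments per sampled claim.
sample = [0, 25, 0, 40, 30] * 20
estimated_total = extrapolate_overpayment(sample, total_claims=5000)
print(round(estimated_total))  # 95000
```

This is why CMS officials describe extrapolation as effective for providers submitting large volumes of low-dollar claims: reviewing every claim would be impractical, but a modest sample supports an estimate over the whole universe.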
The SMRC is focused on examining Medicare billing and payment issues at the direction of CMS, and all of its approximately 178,000 reviews in 2013 and 2014 were postpayment reviews. The SMRC uses postpayment reviews because its studies involve developing sampling methodologies to examine issues with specific services or specific providers. For example, in 2013, CMS directed the SMRC to complete a national review of home health agencies, which involved reviewing five claims from every home health agency in the country. CMS had the SMRC conduct this study to examine issues arising from a new coverage requirement that raised the improper payment rate for home health services. Additionally, a number of SMRC studies used postpayment sampling to perform extrapolation to determine overpayment amounts for certain providers. The RAs generally conducted postpayment reviews, though they conducted prepayment reviews under the Prepayment Review Demonstration. The RAs conducted approximately 85 percent of their claim reviews on a postpayment basis in 2013 and 2014--accounting for approximately 1.7 million postpayment claim reviews--with the other 15 percent being prepayment reviews conducted under the demonstration. CMS is no longer using the RAs to conduct prepayment reviews because the demonstration ended. Outside of a demonstration, CMS must pay the RAs from recovered overpayments, which effectively limits the RAs to postpayment reviews. CMS and RA officials we interviewed generally considered the demonstration a success, and CMS officials told us that they included prepayment reviews as a potential work activity in the requests for proposals for the next round of RA contracts, in the event that the agency is given the authority to pay RAs on a different basis. However, the President's budget proposals for fiscal years 2015 through 2017 did not contain any legislative proposal to provide CMS such authority.
Obtaining the authority to allow the RAs to conduct prepayment reviews would align with CMS's strategy to pay claims properly the first time. In not seeking the authority, CMS may be missing an opportunity to reduce the amount of uncollectible overpayments from RA reviews and save administrative resources associated with recovering overpayments. The rate of improper payments for home health services rose from 6.1 percent in fiscal year 2012 to 17.3 percent in fiscal year 2013 and to 51.4 percent in fiscal year 2014. According to CMS, the increase in improper payments occurred primarily because of CMS's implementation of a requirement that home health agencies have documentation showing that referring providers conducted a face-to-face examination of beneficiaries before certifying them as eligible for home health services. Our analysis of RA claim review data shows that the RAs focused on reviewing inpatient claims in 2013 and 2014, though this focus was not consistent with the degree to which inpatient services constituted improper payments, or with CMS's expectation that the RAs review all claim types. In 2013, a significant majority--78 percent--of all RA claim reviews were for inpatient claims, and in 2014, nearly half--47 percent--of all RA claim reviews were for inpatient claims (see Table 3). For RA postpayment reviews specifically, which exclude reviews conducted as part of the RA Prepayment Review Demonstration, 87 percent of RA reviews were for inpatient claims in 2013, and 64 percent were for inpatient claims in 2014. Inpatient services had high amounts of improper payments relative to other types of services--with over $8 billion in improper payments in fiscal year 2012 and over $10 billion in fiscal year 2013--which reflect the costs of providing these services. However, inpatient services did not have a high improper payment rate relative to other services and constituted about 30 percent of overall Medicare FFS improper payments in both years.
As will be discussed, the proportion of inpatient reviews in 2014 would likely have been higher if CMS--first under its own authority and then as required by law--had not prohibited the RAs from conducting reviews of claims for short inpatient hospital stays at the beginning of fiscal year 2014. The RAs conducted about 1 million fewer claim reviews in 2014 compared to 2013, and nearly all of the decrease can be attributed to fewer reviews of inpatient claims. In general, the RAs have discretion to select the claims they review, and their focus on reviewing inpatient claims is consistent with the financial incentives associated with the contingency fees they receive, as inpatient claims generally have higher payment amounts compared to other claim types. By law, RAs receive a portion of the recovered overpayments they identify, and RA officials told us that they generally focus their claim reviews on audit issues that have the greatest potential returns. Our analysis found that RA claim reviews for inpatient services had higher average identified improper payment amounts per postpayment claim review relative to other claim types in 2013 and 2014 (see Table 4). For example, in 2013, the RAs identified about 10 times the amount per postpayment claim review for inpatient claims compared to claim reviews for physicians. Although CMS expects the RAs to review all claim types, CMS's oversight of the RAs did not ensure that the RAs distributed their reviews across claim types in 2013 and 2014. According to CMS officials, the agency's approval of RA audit issues is the primary way in which CMS controls the type of claims that the RAs review. However, the officials said they generally focus on the appropriateness of the review methodology when determining whether to approve the audit issues, instead of on whether the RA's claim review strategy encompasses all claim types. 
The RAs generally determine the types of audit issues that they present to CMS for approval, and based on our analysis of RA audit issues data, we found that from the inception of the RA program to May 2015, 80 percent of the audit issues approved by CMS were for inpatient claims. Additionally, CMS generally gives RAs discretion regarding the claims that they select for review among approved audit issues. Effective October 1, 2013, CMS changed the coverage requirements for short inpatient hospital stays. As a result, CMS prohibited RA claim reviews related to the appropriateness of inpatient admissions for claims with dates of admission between October 1, 2013 and September 30, 2014. In April 2014 and April 2015, Congress enacted legislation directing CMS to continue the prohibition of RA claim reviews related to the appropriateness of inpatient admissions for claims with dates of admission through September 30, 2015, unless there was evidence of fraud and abuse. Protecting Access to Medicare Act of 2014, Pub. L. No. 113-93, § 111, 128 Stat. 1040, 1044 (2014); Medicare Access and CHIP Reauthorization Act of 2015, Pub. L. No. 114-10, § 521, 129 Stat. 87, 176 (2015). In July 2015, CMS announced that it would not allow such RA claim reviews for claims with dates of admission of October 1, 2015 through December 31, 2015. The RAs were allowed to continue reviews of short stay inpatient claims for reasons other than reviewing inpatient status, such as reviews related to coding requirements. Beginning on October 1, 2015, Quality Improvement Organizations assumed responsibility for conducting initial claim reviews related to the appropriateness of inpatient hospital admissions. Starting January 1, 2016, the Quality Improvement Organizations will refer providers exhibiting persistent noncompliance with Medicare policies to the RAs for potential further review.
CMS stated that it will monitor the extent to which the RAs are reviewing all claim types, may impose a minimum percentage of reviews by claim type, and may take corrective action against RAs that do not review all claim types. CMS has also taken steps to provide incentives for the RAs to review other types of claims. To encourage the RAs to review DME claims--which had the highest rates of improper payments in fiscal years 2012 and 2013--CMS officials stated that they increased the contingency fee percentage paid to the RAs for DME claims. Further, in the requests for proposals for the next round of RA contracts, CMS included a request for a national RA that will specifically review DME, home health agency, and hospice claims. CMS officials told us that they are procuring this new RA because the existing four regional RAs reviewed a relatively small number of these types of claims. Although DME, home health agency, and hospice claims combined represented more than 25 percent of improper payments in both 2013 and 2014, they constituted 5 percent of RA reviews in 2013 and 6 percent of reviews in 2014. In 2013 and 2014, the MACs focused their claim reviews on physician and DME claims. Physician claims accounted for 49 percent of MAC claim reviews in 2013 and 55 percent of reviews in 2014, while representing 30 percent of improper payments in fiscal year 2012 and 26 percent in fiscal year 2013 (see Table 5). DME claims accounted for 29 percent of their reviews in 2013 and 26 percent in 2014, while representing 22 percent of total improper payments in fiscal year 2013 and 16 percent of improper payments in fiscal year 2014. DME claims also had the highest rates of improper payments in both years.
According to CMS officials, the MACs focused their claim reviews on physician claims--a category which encompasses a large variety of provider types, including labs, ambulances, and individual physician offices--because they constitute a significant majority of all Medicare claims. CMS officials also told us that they direct MAC claim review resources to DME claims in particular because of their high improper payment rate. Further, CMS officials told us that the MACs' focus on reviewing physician and DME claims was in part due to how CMS structures the MAC claim review workload. CMS officials noted that each A/B MAC is responsible for addressing improper payments for both Medicare Part A and Part B, and MAC Part B claim reviews largely focus on physician claims. Additionally, 4 of the 16 MACs are DME MACs that focus their reviews solely on DME claims. CMS officials also noted that MAC reviews of inpatient claims likely declined during this period because of CMS's implementation of new coverage policies for inpatient admissions. Similar to the RAs, the MACs were limited in conducting reviews for short inpatient hospital stays after October 1, 2013. The focus of the SMRC's claim reviews depended on the studies that CMS directed the contractor to conduct in 2013 and 2014. In 2013, the SMRC focused its claim reviews on outpatient and physician claims, with physician claims accounting for half of all SMRC reviews (see Table 6). Physician claims accounted for 30 percent--the largest percentage--of the total amount of estimated improper payments in fiscal year 2012. In 2014, the SMRC focused 46 percent of its reviews on home health agency claims and 44 percent of its claim reviews on DME claims, which had the two highest improper payment rates in fiscal year 2013. CMS generally directs the SMRC to conduct studies examining specific services, and the number of claims reviewed by claim type is highly dependent on the methodologies of the studies.
For example, one SMRC study involved reviewing nearly 50,000 DME claims for suppliers deemed high risk for having improperly billed for diabetic test strips. In 2014, the claim reviews for this study accounted for all of the SMRC's DME claim reviews and nearly half of all the SMRC claim reviews. Additionally, in 2014, the SMRC reviewed more than 50,000 claims as part of its study that examined five claims from every home health agency. The study followed a significant increase in the improper payment rate for home health agencies from 2012 to 2013, from 6 percent to 17 percent. In some cases, SMRC studies focused on specific providers. For example, a 2013 SMRC study reviewed claims for a single hospital to follow up on billing issues previously identified by the HHS OIG. The RAs were paid an average of $158 per claim review conducted in 2013 and 2014 and identified $14 in improper payments, on average, per dollar paid by CMS in contingency fees (see Table 7). The cost to CMS in RA contingency fees per review decreased from $178 in 2013 to $101 in 2014 because the average identified improper payment amount per review decreased from $2,549 to $1,509. The decrease in the average identified improper payment amount per review likely resulted from the RAs conducting proportionately fewer reviews of inpatient claims in 2014 compared to 2013. The SMRC was paid an average of $256 per claim review conducted in studies initiated in fiscal years 2013 and 2014, though the amount paid per claim review varied by study and varied between years (see Table 8). In particular, the amount paid to the SMRC is significantly higher for studies that involve extrapolation for providers who had their claims reviewed as part of the studies and were found to have a high error rate. Based on our analysis, the higher average amount paid per review in 2014--$346 compared to $110 in 2013--can in part be attributed to the SMRC conducting proportionally more studies involving extrapolation in 2014. 
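The per-review figures above imply two simple ratios: improper payments identified per dollar of contingency fees paid, and the effective contingency-fee rate. A short sketch using the RA averages cited above; the derived rates are my arithmetic from the report's figures, not rates stated in the report.

```python
def review_metrics(fee_per_review, identified_per_review):
    """Derive ratios from average per-review figures.

    Returns improper payments identified per dollar of contingency fees,
    and the implied fee rate (fees as a share of identified amounts).
    """
    return {
        "identified_per_fee_dollar": identified_per_review / fee_per_review,
        "implied_fee_rate": fee_per_review / identified_per_review,
    }

# Average RA contingency fee and identified improper payment per review,
# as reported for 2013 and 2014.
for year, fee, identified in [(2013, 178, 2549), (2014, 101, 1509)]:
    m = review_metrics(fee, identified)
    print(year, round(m["identified_per_fee_dollar"]), f"{m['implied_fee_rate']:.1%}")
# 2013 14 7.0%
# 2014 15 6.7%
```

The roughly $14-$15 identified per fee dollar in both years is consistent with the report's statement that the RAs averaged $14 in improper payments per dollar paid by CMS.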
As well as increasing study costs, the use of extrapolation can significantly increase the associated amounts of identified improper payments per study. For example, the SMRC study on diabetic test strips involved extrapolation and included reviews of nearly 50,000 claims from 500 providers. It cost CMS more than $23 million to complete, but the SMRC identified more than $63 million in extrapolated improper payments. According to CMS officials, the agency has the SMRC perform extrapolation as part of its studies when it is cost effective--that is, when anticipated extrapolated overpayment amounts are greater than the costs associated with having the SMRC conduct the extrapolations. The amount the SMRC was paid per review also varied based on the type of service being reviewed and the number of reviews conducted. CMS pays the SMRC more for claim reviews for Part A services, such as inpatient and home health claims, than for claim reviews for Part B services, such as physician and DME claims, because CMS officials said that claim reviews of Part A services are generally more resource-intensive. Additionally, CMS gets a volume discount on SMRC claim reviews, with the cost per review decreasing once the SMRC reaches certain thresholds for the number of claim reviews in a given year. The SMRC identified $7 in improper payments per dollar paid by the agency, on average, in 2013 and 2014, though the average amount varied considerably by study and varied for 2013 and 2014. In 2013, the SMRC averaged $25 in improper payments per dollar paid, while in 2014, it averaged $4. The larger figure for 2013 is primarily attributed to two SMRC studies that involved claim reviews of inpatient claims that identified more than $160 million in improper payments but cost CMS less than $1 million in total to conduct.
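CMS's stated criterion for when extrapolation is worthwhile, anticipated extrapolated overpayments exceeding the cost of the study, reduces to a simple comparison. A sketch using the diabetic test strip study figures cited above; the function names are illustrative, not CMS terminology.

```python
def extrapolation_worthwhile(anticipated_overpayments, study_cost):
    """CMS's stated rule: extrapolate when anticipated recoveries exceed costs."""
    return anticipated_overpayments > study_cost

def return_per_dollar(identified, cost):
    """Improper payments identified per dollar paid to the contractor."""
    return identified / cost

# Diabetic test strip study: roughly $63 million identified for roughly
# $23 million in costs.
print(extrapolation_worthwhile(63e6, 23e6))     # True
print(round(return_per_dollar(63e6, 23e6), 1))  # 2.7
```

The same return-per-dollar ratio underlies the report's SMRC averages of $25 per dollar in 2013 and $4 per dollar in 2014.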
We were unable to determine the cost per review and the amount of improper payments identified by the MACs per dollar paid by CMS because the agency does not have reliable data on funding of MAC claim reviews for 2013 and 2014, and the agency collects inconsistent data on the savings from prepayment claim denials. For an agency to achieve its objectives, federal internal control standards provide that an agency must obtain relevant data to evaluate performance towards achieving agency goals. See GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: November 1999). By not collecting reliable data on claim review funding and by not having consistent data on identified improper payments, CMS does not have the information it needs to evaluate MAC cost effectiveness and performance in protecting Medicare funds. The MACs report cost data to CMS at the level of higher-level, broader contractual work activities, rather than for specific claim review functions, and CMS officials told us that they have not required the MACs to report data on specific funds spent to conduct prepayment and postpayment claim reviews. However, as of February 2016, CMS officials told us that all MACs are either currently reporting specific data on prepayment and postpayment claim review costs or planning to do so soon. We also found that data on savings from MAC prepayment reviews were not consistent across the MACs. In particular, the MACs use different methods to calculate and report savings associated with prepayment claim denials, which represented about 98 percent of MAC claim review activity in 2013 and 2014. According to CMS and MAC officials, claims that are denied on a prepayment basis are never fully processed, and the Medicare payment amounts associated with the claims are never calculated. In the absence of processed payment amounts, the MACs use different methods for calculating prepayment savings. According to the MACs: Two MACs use the amount that providers bill to Medicare to calculate savings from prepayment claim denials.
However, the amount that providers bill to Medicare is often significantly higher than and not necessarily related to how much Medicare pays for particular services. One MAC estimated that billed amounts can be, on average, three to four times higher than allowable amounts. Accordingly, calculated savings based on provider billed amounts can greatly inflate the estimated amount that Medicare saves from claim denials. Nine MACs calculate prepayment savings by using the Medicare "allowed amount." The allowed amount is the total amount that providers are paid for claims for particular services, though it is generally marginally higher than the amount that Medicare pays, as it includes the amount Medicare pays, cost sharing that beneficiaries are responsible for paying, and amounts that third parties are responsible for paying. Additionally, the allowed amounts may not account for Medicare payment policies that may reduce provider payments, such as bundled payments. Five MACs compare denied claims with similar claims that were paid to estimate what Medicare would have paid. CMS has not provided the MACs with documented guidance or other instructions for how to calculate savings from prepayment reviews. Federal internal control standards provide that an agency must document guidance that has a significant impact on the agency's ability to achieve its goals. In reviewing MAC claim review program documentation, including the Medicare Program Integrity Manual and MAC contract statements of work, we were unable to identify any instructions on how the MACs should calculate savings from prepayment claim denials. Further, several MACs we interviewed indicated that they have not been provided guidance for calculating savings from prepayment denials.
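The three calculation methods just described can yield very different savings figures for the same denied claim. A minimal sketch with hypothetical dollar amounts; the function names and figures are illustrative, not CMS terminology, and real allowed-amount calculations involve payment policies (such as bundling) omitted here.

```python
def savings_from_billed(billed_amount):
    """Method 1: provider's billed amount (often 3-4x the allowable amount)."""
    return billed_amount

def savings_from_allowed(allowed_amount):
    """Method 2: Medicare allowed amount (includes beneficiary cost sharing
    and third-party amounts, so it exceeds what Medicare itself pays)."""
    return allowed_amount

def savings_from_comparable(similar_paid_amounts):
    """Method 3: average payment on similar claims that were actually paid."""
    return sum(similar_paid_amounts) / len(similar_paid_amounts)

# Hypothetical denied claim: billed $400, allowed $120, and similar
# claims that were paid averaged about $95.
print(savings_from_billed(400))                # 400
print(savings_from_allowed(120))               # 120
print(savings_from_comparable([90, 95, 100]))  # 95.0
```

The spread between the three results illustrates why, without uniform guidance, aggregated MAC savings figures are not comparable across contractors.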
CMS officials told us that they were under the impression that all of the MACs were reporting prepayment savings data based on the amount that providers bill to Medicare, which can significantly overestimate the amount that Medicare saves from prepayment claim denials. Because CMS has not provided documented guidance on how to calculate savings from prepayment claim review, the agency lacks consistent and reliable information on the performance of MAC claim reviews. In particular, CMS does not have reliable information on the extent to which MAC claim reviews protect Medicare funds or on how the MACs' performance compares to other contractors conducting similar activities. CMS contracts with claim review contractors that use varying degrees of prepayment and postpayment reviews to identify improper payments and protect the integrity of the Medicare program. Though we found few differences in how contractors conduct and how providers respond to the two review types, prepayment reviews are generally more cost-effective because they prevent improper payments and limit the need to recover overpayments through the "pay and chase" process, which requires administrative resources and is not always successful. Although CMS considered the Prepayment Review Demonstration a success, and having the RAs conduct prepayment reviews would align with CMS's strategy to pay claims properly the first time, the agency has not requested legislative authority to allow the RAs to do so. Accordingly, CMS may be missing an opportunity to better protect Medicare funds and agency resources. Inconsistent with federal internal control standards, CMS has not provided the MACs with documented guidance or other instructions for how to calculate savings from prepayment reviews. As a result, CMS does not have reliable data on the amount of improper payments identified by the MACs, which limits CMS's ability to evaluate MAC performance in preventing improper payments. 
CMS uses claim review contractors that have different roles and take different approaches to preventing improper payments. However, the essential task of reviewing claims is similar across the different contractors and, without better data, CMS is not in a position to evaluate the performance and cost effectiveness of these different approaches. We recommend that the Secretary of HHS direct the Acting Administrator of CMS to take the following two actions: In order to better ensure proper Medicare payments and protect Medicare funds, CMS should seek legislative authority to allow the RAs to conduct prepayment claim reviews. In order to ensure that CMS has the information it needs to evaluate MAC effectiveness in preventing improper payments and to evaluate and compare contractor performance across its Medicare claim review program, CMS should provide the MACs with written guidance on how to accurately calculate and report savings from prepayment claim reviews. We provided a copy of a draft of this report to HHS for review and comment. HHS provided written comments, which are reprinted in appendix I. In its comments, HHS disagreed with our first recommendation, but it concurred with our second recommendation. HHS also provided us with technical comments, which we incorporated in the report as appropriate. HHS disagreed with our first recommendation that CMS seek legislative authority to allow the RAs to conduct prepayment claim reviews. HHS noted that other claim review contractors conduct prepayment reviews and CMS has implemented other programs as part of its strategy to move away from the "pay and chase" process of recovering overpayments, such as prior authorization initiatives and enhanced provider enrollment screening. However, we found that prepayment reviews better protect agency funds compared with postpayment reviews, and believe that seeking the authority to allow the RAs to conduct prepayment reviews is consistent with CMS's strategy. 
HHS concurred with our second recommendation that CMS provide the MACs with written guidance on how to accurately calculate and report savings from prepayment claim reviews. HHS stated that it will develop a uniform method to calculate savings from prepayment claim reviews and issue guidance to the MACs. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Acting Administrator of CMS, appropriate congressional requesters, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix II. Kathleen M. King, (202) 512-7114, [email protected]. In addition to the contact named above, Lori Achman, Assistant Director; Michael Erhardt; Krister Friday; Richard Lipinski; Kate Tussey; and Jennifer Whitworth made key contributions to this report.
CMS uses several types of claim review contractors to help reduce improper payments and protect the integrity of the Medicare program. CMS pays its contractors differently--the agency is required by law to pay RAs contingency fees from recovered overpayments, while other contractors are paid based on cost. Questions have been raised about the focus of RA reviews because of the incentives associated with the contingency fees. GAO was asked to examine the review activities of the different Medicare claim review contractors. This report examines (1) differences between prepayment and postpayment reviews and the extent to which contractors use them; (2) the extent to which the claim review contractors focus their reviews on different types of claims; and (3) CMS's cost per review and amount of improper payments identified by the claim review contractors per dollar paid by CMS. GAO reviewed CMS documents; analyzed CMS and contractor claim review and funding data for 2013 and 2014; interviewed CMS officials, claim review contractors, and health care provider organizations; and assessed CMS's oversight against federal internal control standards. The Centers for Medicare & Medicaid Services (CMS) uses different types of contractors to conduct prepayment and postpayment reviews of Medicare fee-for-service claims at high risk for improper payments. Medicare Administrative Contractors (MAC) conduct prepayment and postpayment reviews; Recovery Auditors (RA) generally conduct postpayment reviews; and the Supplemental Medical Review Contractor (SMRC) conducts postpayment reviews as part of studies directed by CMS. CMS, its contractors, and provider organizations identified few significant differences between conducting and responding to prepayment and postpayment reviews. 
Using prepayment reviews to deny improper claims and prevent overpayments is consistent with CMS's goal to pay claims correctly the first time and can better protect Medicare funds because not all overpayments can be collected. In 2013 and 2014, 98 percent of MAC claim reviews were prepayment, and 85 percent of RA claim reviews and 100 percent of SMRC reviews were postpayment. Because CMS is required by law to pay RAs contingency fees from recovered overpayments, the RAs can only conduct prepayment reviews under a demonstration. From 2012 through 2014, CMS conducted a demonstration in which the RAs conducted prepayment reviews and were paid contingency fees based on claim denial amounts. CMS officials considered the demonstration a success. However, CMS has not requested legislation that would allow for RA prepayment reviews by amending existing payment requirements and thus may be missing an opportunity to better protect Medicare funds. The contractors focused their reviews on different types of claims. In 2013 and 2014, the RAs focused their reviews on inpatient claims, which represented about 30 percent of Medicare improper payments. In 2013 and 2014, inpatient claim reviews accounted for 78 and 47 percent, respectively, of all RA claim reviews. Inpatient claims had high average identified improper payment amounts, reflecting the costs of the services. The RAs' focus on inpatient claims was consistent with the financial incentives from their contingency fees, which are based on the amount of identified overpayments, but the focus was not consistent with CMS's expectations that RAs review all claim types. CMS has since taken steps to limit the RAs' focus on inpatient claims and broaden the types of claims being reviewed. The MACs focused their reviews on physician and durable medical equipment claims, the latter of which had the highest rate of improper payments. The focus of the SMRC's claim reviews varied. 
In 2013 and 2014, the RAs had an average cost per review to CMS of $158 and identified $14 in improper payments per dollar paid by CMS to the RAs. The SMRC had an average cost per review of $256 and identified $7 in improper payments per dollar paid by CMS. GAO was unable to determine the cost per review and amount of improper payments identified by the MACs per dollar paid by CMS because of unreliable data on costs and claim review savings. Inconsistent with federal internal control standards, CMS has not provided written guidance on how the MACs should calculate savings from prepayment reviews. Without reliable savings data, CMS does not have the information it needs to evaluate the MACs' performance and cost effectiveness in preventing improper payments, and CMS cannot compare performance across contractors. GAO recommends that CMS (1) request legislation to allow the RAs to conduct prepayment claim reviews, and (2) provide written guidance on calculating savings from prepayment reviews. The Department of Health and Human Services disagreed with the first recommendation, but concurred with the second. GAO continues to believe the first recommendation is valid as discussed in the report.
In the 1990s, a number of influential studies sponsored by NIH, IOM, and AAMC and the American Medical Association (AMA) identified some major problems in clinical research and highlighted NIH's role in addressing some of these problems. First, there was a general concern that clinical research was receiving substantially less support than basic research at NIH, yet there was little systematic data to document how much, in fact, NIH was spending on clinical research. In an analysis of NIH investigator-initiated extramural grants active in 1991, an IOM committee found that 16 percent involved human research. A few years later, a panel appointed by the NIH director, known as the "Nathan Panel," developed a broad definition of clinical research (the definition NIH now uses) and, applying this definition to all NIH competing extramural research grants in fiscal year 1996, found that 27 percent of grants and 38 percent of dollars were devoted to clinical research. The Nathan Panel believed that this fraction of the extramural budget devoted to clinical research was reasonable and should remain about the same, as efforts to increase the NIH budget as a whole were pursued. The studies sponsored by NIH, IOM, and AAMC/AMA recommended that NIH monitor and track its expenditures on clinical research. A second concern was that clinical research proposals, especially those from individual investigators, did not fare as well as basic research proposals in peer review at NIH. Grant applications for clinical trials, clinical research centers, and clinical research training are typically reviewed by the sponsoring institute; however, the peer review of individual investigator grant applications usually takes place centrally, within CSR. CSR has approximately 65 study sections that review research.
A study section is a panel of experts established according to scientific disciplines or research areas for the purpose of evaluating the scientific and technical merit of grant applications. In 1994, an NIH-commissioned study reported that patient-oriented research applications were less likely to receive favorable reviews in CSR than laboratory-oriented research applications when reviewed in study sections with less than 30 percent patient-oriented research applications. However, when patient-oriented research applications were grouped in study sections with greater than 50 percent patient-oriented research, they fared as well as laboratory-oriented research applications. Consequently, this report recommended that study sections reviewing patient-oriented research should have at least 50 percent of such applications and that a means should be developed and implemented to collect and track data prospectively on research applications that are predominantly patient-oriented, laboratory-oriented, mixed, or clinical epidemiology and behavioral research. Similarly, the Nathan Panel recommended that panels that review clinical research must include experienced clinical investigators and that at least 30 to 50 percent of the applications reviewed by these panels must be for clinical research. The IOM committee also recommended more oversight of study section composition, functions, and outcomes pertaining to human research. A third problem identified in these studies was the adequacy of support for the infrastructure (that is, facilities, equipment, data systems, and research personnel) for the conduct of clinical research. Since the late 1950s, NIH has funded General Clinical Research Centers (GCRC) across the United States to provide clinical research infrastructure--facilities, equipment, and personnel--for NIH-funded investigators as well as non-federally funded investigators conducting patient-oriented research. Interdisciplinary and collaborative research is encouraged at these centers.
The Nathan Panel, the IOM committee, and others recommended increasing financial support for GCRCs and broadening their leadership role in clinical research and research training. A fourth concern was the decline in the number of physicians conducting clinical research. According to data collected by the AMA, the number of physicians reporting research as their primary career activity fell by 6 percent from 1980 to 1997 (from 15,377 to 14,434), while the number reporting patient care as their primary career activity almost doubled (from 376,512 to 620,472). Observers identified a variety of challenges in pursuing a career as a clinical investigator, including the indebtedness of medical students, the length of time a clinical scientist must train, the culture of academic medicine, as well as the competition from other career options. For many years NIH has supported the training of investigators through extramural and intramural predoctoral and postdoctoral training and career development awards. However, there was concern that these awards were being directed toward basic research and were not sufficiently supporting the training and development of clinical investigators. The IOM committee, the Nathan Panel, and the AAMC/AMA reports recommended that NIH provide substantial new support for clinical research training, career development, and debt relief. NIH reports that it increased its funding of clinical research and expanded its clinical research activities in response to the Clinical Research Enhancement Act (CREA). NIH estimates that it spent about one-third of its budget, or approximately $6.4 billion, on clinical research in fiscal year 2001. Based on these estimates, the proportion of the NIH budget spent on clinical research has remained fairly constant since fiscal year 1997.
NIH's estimates of clinical research expenditures represent the best available indications of financial trends over time, but they are not precise figures because the process of counting clinical research dollars varies widely across NIH's institutes and centers (ICs). Finally, in response to CREA, some NIH ICs have developed specific clinical research initiatives. In fiscal year 2001, NIH estimated that it spent approximately $6.4 billion on clinical research, which represented about 32 percent of total research spending (see table 1). The institutes that spent the most on clinical research in fiscal year 2001 were the National Cancer Institute (NCI); the National Heart, Lung, and Blood Institute (NHLBI); and the National Institute of Mental Health (NIMH) (see app. I). NIH's estimated expenditures on clinical research have kept pace with the overall growth in NIH's budget. As NIH's reported clinical research expenditures increased by 44 percent (adjusted for inflation) from fiscal year 1997 to fiscal year 2001, the proportion of research dollars spent on clinical research remained constant, at 32 percent, each year. NIH estimates that in fiscal year 2001, it spent approximately $5.9 billion on extramural clinical research, about 35 percent of its total extramural research expenditures. NIH's extramural clinical research dollars were spent through a variety of funding mechanisms in fiscal year 2001. About 40 percent of the awarded dollars were grants to individual investigators, followed by other funding mechanisms, center grants, cooperative agreements, research program projects, and research and development contracts (see fig. 1). Of NIH's total extramural research expenditures for cooperative agreements and center grants, the majority of dollars were spent on clinical research in fiscal year 2001. In fiscal year 2001, NIH estimated that it spent about $529 million, or 27 percent of its intramural research expenditures, on clinical research.
NIH's intramural clinical research activities include research at the Clinical Center on NIH's Bethesda, Maryland, campus, as well as research by individual institutes. The Clinical Center's budget represents more than half of the intramural clinical research expenditures. The budget of the Clinical Center increased from approximately $204 million in fiscal year 1997 to an estimated $303 million in fiscal year 2002. This budget increase supported an increase in admissions, inpatient days, and outpatient visits. NIH's reports of clinical research expenditures represent the best available indications of financial trends, but are not precise figures. The methods NIH uses to count clinical research dollars are inconsistent across ICs, potentially underestimating or overestimating its actual clinical research expenditures. Since fiscal year 1997, the Office of Budget, within the Office of the Director, has collected information from each IC on its extramural and intramural clinical research expenditures. The ICs use the NIH definition of clinical research (described earlier), but they count the dollars in very different ways. The 20 ICs that fund clinical research reported three different ways of counting clinical research dollars. First, 12 ICs count 100 percent of the grant dollars of research projects that include any clinical research. Second, one institute, NCI, codes a research project's "percent relevance" to clinical research. Projects are coded as 100 percent, major, minor, or 0 percent clinical research. If they are classified as "major," they are assigned a percentage relevancy of 50 percent, and 50 percent of the dollars are counted. If they are classified as "minor," they are assigned a percentage relevancy of 5 percent, and 5 percent of the dollars are counted. 
Third, 7 ICs either attempt to estimate the dollars of a research project spent on clinical research or the percentage of a project that is clinical research and apply that percentage to the total grant dollars. These different methods of counting clinical research dollars can produce very different results. For example, given a hypothetical grant to an investigator of $300,000 for which an IC has estimated that $50,000 of the budget would be spent on clinical research, some ICs would report that $300,000 was spent on clinical research; NCI could conclude that this grant has only minor relevance to clinical research and therefore would count 5 percent, or $15,000, as clinical research dollars; the rest of the ICs would estimate that this project is about 17 percent clinical research and therefore count $50,000 of the grant as clinical research dollars. The Office of Budget said that the reason the ICs count clinical research dollars differently is that each developed its own methods over time, and for historical consistency, they are reluctant to change. One IC director, who heads an NIH Director's committee concerned with clinical research spending told us that NIH is working on ways to make its process of tracking and reporting clinical research dollars more consistent and accurate. In response to CREA, some institutes have developed new clinical research initiatives. For example, since the passage of CREA, NCI has funded two new clinical cancer centers and funded 22 new Specialized Programs of Research Excellence for different types of cancer, all of which involved early phase clinical trials. NHLBI is establishing new clinical research centers to study ways to reduce racial and economic disparities in asthma prevalence, treatment, and mortality and is funding trials to assess innovative strategies to improve the implementation of clinical practice guidelines for heart, lung, and blood diseases. 
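The divergence among the three counting methods, using the report's hypothetical $300,000 grant with an estimated $50,000 of clinical work, can be sketched as below. The function names are illustrative; the 100-percent, percent-relevance (50 percent for "major," 5 percent for "minor"), and estimated-share rules are taken from the report:

```python
def count_full(total, clinical_estimate):
    # Method 1 (12 ICs): count 100 percent of a grant's dollars
    # if the project includes any clinical research at all.
    return total if clinical_estimate > 0 else 0

def count_nci(total, relevance):
    # Method 2 (NCI): percent-relevance coding.
    # "major" counts 50 percent of dollars, "minor" counts 5 percent.
    factors = {"100 percent": 1.0, "major": 0.5, "minor": 0.05, "0 percent": 0.0}
    return total * factors[relevance]

def count_estimated(total, clinical_estimate):
    # Method 3 (7 ICs): count only the estimated clinical share.
    return clinical_estimate

grant, clinical = 300_000, 50_000
print(count_full(grant, clinical))       # $300,000 reported
print(count_nci(grant, "minor"))         # $15,000 reported
print(count_estimated(grant, clinical))  # $50,000 reported
```

The same grant thus yields reported clinical research spending anywhere from $15,000 to $300,000 depending on which IC holds it, which is the inconsistency the report's recommendation targets.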
The National Institute of Arthritis and Musculoskeletal and Skin Diseases has a new osteoarthritis initiative; funds multidisciplinary clinical research centers in arthritis, musculoskeletal, and skin diseases; and plans to enhance its translational research projects in children's diseases. The National Institute of Allergy and Infectious Diseases (NIAID) has continued to fund large clinical trial networks such as the AIDS Clinical Trials Group, a $120 million per year initiative that involves research on pediatric and adult AIDS. Since passage of CREA, NIH has acted to strengthen its peer review of clinical research applications. CSR established two new study sections in the areas of clinical oncology and clinical cardiovascular sciences. In study sections with a mix of clinical and basic proposals, CSR tries to group clinical research applications and reviewers, but officials could not provide data to determine how successful it has been in achieving this goal. NIH has established peer review mechanisms at the institutes for the review of career development and training awards established under CREA. In response to concerns that clinical research proposals are not fairly reviewed in its study sections, CSR has established two new clinically oriented study sections, Clinical Oncology and Clinical Cardiovascular Sciences. In these scientific areas, CSR found that there were a sufficient number of clinical research applications to justify separate study sections. Although the two new clinical research study sections have been welcomed by the research community, some concerns remain among clinical investigators about the fairness of the review of clinical research by other study sections that have a mix of clinical and basic research. In these study sections, CSR officials told us they try to group clinical research applications and clinical research reviewers.
CSR officials told us that it is their general goal to review clinical research applications in study sections in which at least 30 percent of the applications involve clinical research and in which at least 30 percent of the reviewers are themselves clinical investigators. CSR officials also explained that this goal cannot always be achieved because if the number of clinical research applications in a specific scientific area is small, it may not be possible to group the applications to 30 percent and still review them in a study section that provides the appropriate scientific context for review. They emphasized that reviewing applications in the appropriate scientific context is given priority over quantitative targets for grouping. CSR officials could not provide data on the extent to which they have been able to group clinical research applications and have very limited data on which reviewers are clinical investigators. The officials told us that, to date, they do not have reliable and accurate methods for identifying and tracking clinical applications or clinical reviewers. CSR officials told us they are in the process of a broader review and restructuring of their peer review system, with input from the scientific community, to account for new developments in science. According to CSR, one of the goals of this reorganization is grouping applications and reviewers at 30 percent so that there is a "density of expertise" in review sections. In addition, CSR has recently appointed a special advisor on clinical research review to serve as a liaison with the clinical research communities. To determine NIH's response to CREA's requirement that NIH establish appropriate mechanisms for the peer review of clinical research career development and training applications, we surveyed nine ICs that sponsored the highest number of clinical research career development awards in fiscal year 2001. 
We found that three ICs used a Special Emphasis Panel, while the six others used established committees or subcommittees to review clinical research career development and training applications. In addition, the ICs reported that most of the reviewers of these applications have clinical research experience, and some are involved in clinical research training. One institute brings in temporary reviewers to augment its committee if special expertise is needed. The National Center for Research Resources (NCRR) uses CSR for peer review of some career development applications that require very specific scientific expertise and therefore require review by the discipline-specific study sections of CSR. NIH has increased its support of GCRCs and expanded their scope of work, as required by CREA. The GCRC budget has grown over time, although more slowly than NIH's estimates of clinical research spending. Adjusted for inflation, the funding for GCRCs increased by 24 percent from fiscal year 1997 to fiscal year 2001, compared to a 44 percent estimated increase in clinical research spending at NIH during that same period. Although NIH has stopped funding some GCRCs, there has been a gradual increase in the number of GCRCs over time, from 74 in fiscal year 1997 to 79 in fiscal year 2001. There has also been an increase in the activities of GCRCs and some expansion in their scope since passage of CREA. NIH has increased funding for the GCRC program, although funding for the GCRCs has grown more slowly than NIH's estimate of overall expenditures on clinical research. From fiscal year 1997 through fiscal year 2001, funding for the GCRCs increased from $153,521,000 to $220,824,000 (see table 2). Adjusted for inflation, this represents an increase of 24 percent, compared to the 44 percent estimated growth in total clinical research expenditures during this period. The number of GCRCs gradually increased during this period, from 74 to 79.
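The relationship between the nominal GCRC figures and the report's inflation-adjusted 24 percent growth rate can be reconstructed as below. This is a rough sketch: the cumulative price deflator is an assumption backed out so that the nominal figures ($153,521,000 to $220,824,000) yield the report's stated real increase; it is not a published index value:

```python
start, end = 153_521_000, 220_824_000  # GCRC funding, FY1997 and FY2001

# Nominal (unadjusted) growth over the period: about 44 percent.
nominal_growth = end / start - 1

# Assumed cumulative FY1997->FY2001 price deflator (roughly 16 percent
# cumulative inflation), chosen to reproduce the report's 24 percent
# real growth figure.
deflator = 1.16
real_growth = (end / deflator) / start - 1

print(f"nominal: {nominal_growth:.0%}, real: {real_growth:.0%}")
```

This also illustrates why the GCRC program "grew more slowly" than clinical research overall: its 24 percent real growth trails the 44 percent real growth NIH estimated for total clinical research spending over the same years.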
Funding levels for individual GCRCs in fiscal year 2001 ranged from $712,339 to $6.2 million, with an average funding level of about $2.8 million. NIH officials told us that in fiscal year 2002, they are opening two new GCRCs, one at the University of Maryland and one at the University of Miami. Establishing a new GCRC costs about $2.5 million and requires a certain threshold of investigators. Once a GCRC is set up, attracting additional investigators and research activities is easier, according to NIH officials. Also shown in table 2, some activities of GCRCs have increased in recent years. For example, the number of research protocols and investigators supported by GCRCs increased from fiscal year 1997 through fiscal year 2001. While the number of inpatient days funded by GCRCs declined from 70,814 in fiscal year 1997 to 62,769 in fiscal year 2001, the number of outpatient visits increased from 282,125 to 334,828 during the same period. Since passage of CREA, NIH officials told us there has not been a change in the mission of GCRCs, but there has been an increase in the scope of GCRC activities. For example, in fiscal year 2002, 27 GCRCs have funded Clinical Research Feasibility pilot projects to support the research of beginning investigators. In addition, 76 GCRCs now each have a Research Subject Advocate who helps ensure that GCRC research is conducted safely and protects human research subjects. CREA required that NIH expand the activities of the GCRCs through increased use of telecommunications and telemedicine initiatives. In response, NIH officials told us they increased their support of specialized bioinformatics networks that electronically link research data across GCRCs. Specifically, NCRR established a Biomedical Informatics Research Network, a computerized network that allows investigators affiliated with GCRCs to share high-resolution images of human brains and large volumes of complex data and conduct remote analysis of the data. 
In fiscal year 2001, NCRR funded five bioinformatics centers at $2.1 million and a coordinating center at $1.6 million, spending a total of $3.7 million on this initiative. In fiscal year 2002, $6 million has been set aside to extend this network. NCRR also funded a collaborative pilot project between the Cystic Fibrosis Foundation and several GCRCs, called CFnet, to assess whether clinical trials could be facilitated across GCRC sites with Web-based data handling. Based on the success of this pilot, NCRR plans to extend CFnet to 20 GCRCs and also establish a comparable network among the eight U.S. medical schools that have a high proportion of minority students to facilitate the schools' participation in clinical trials that relate to health disparities. NIH has established the four new career development award programs required by CREA. Three of these have been implemented, and the fourth is just beginning. NIH has also established intramural and extramural clinical research training programs for medical and dental students and clinical research continuing education programs as required by CREA. NIH recently established three new clinical research career development award programs for individuals and institutions outside government that are designed to increase the supply and expertise of clinical investigators (see table 3). NIH used its K award mechanism, its usual method for providing support for career development of investigators, to establish these programs. In fiscal year 1999, NIH implemented the Mentored Patient-Oriented Research Career Development Award (K23) to support investigators who are committed to conducting patient-oriented research for 3 to 5 years.
In the same year, NIH implemented the Mid-Career Investigator Award in Patient-Oriented Research (K24) to provide support for more senior clinicians to relieve them of patient-care duties and administrative responsibilities so that they can conduct patient-oriented research and serve as mentors for beginning clinical investigators. The Clinical Research Curriculum Award (K30), also implemented in fiscal year 1999, supports the development and expansion of clinical research teaching programs at institutions. About half of the K30 programs offer graduate degrees in clinical research (for example, a master's or doctorate). The response to these new award programs was substantial, and NIH funded more awards than originally planned. NCRR and the largest institutes (for example, NCI, NHLBI, and NIMH) sponsored the highest number of the new K23 and K24 awards. NHLBI is administering the majority of the K30 awards. Although NIH has received applications for K23 and K24 awards from a variety of clinical investigators, most applicants and awardees are physicians. The K30 awards have primarily gone to academic medical centers. The new awards combined represent 25 percent of expenditures NIH allotted for all K awards under its Career Development Program in fiscal year 2001 (see fig. 2). NIH officials told us that they are initiating plans to evaluate the new clinical research career development awards and track career outcomes. The design of this assessment will be based on previous studies of training award recipients, specifically NIH's study of the outcomes of the National Research Service Awards (NRSA), and will rely on NIH's new electronic grant application. In 2001 NIH announced a fourth new clinical research career development award, the Mentored Clinical Research Scholar Program (K12). This award program, sponsored by NCRR and linked to the GCRCs, is NIH's response to CREA's directive to support graduate training in clinical research.
NCRR decided to start the K12 program as a small pilot project and then expand it later if successful. In fiscal year 2002, NCRR received 43 applications for this award and expects to fund 10 of these. In the first year of the program, each funded award may enroll three clinical research scholars, for a total of 30 scholars. NIH projects that the number of scholars could grow to 120 in 5 years. We interviewed several K30 program directors who indicated that obtaining graduate tuition and stipend support for their students and prospective students was a major constraint. The K30 award, which has been well received in the research community, funds curriculum and staff, as well as tuition and other costs in special circumstances, but generally does not directly support students. Instead, students must seek funding from other NIH, federal, or private sources. An NIH official estimated that the number of formal trainees in individual K30 programs ranges from several to three dozen. This official was not able to provide data on whether these students had tuition support and what kind of support. However, the K30 program directors we talked to said some of their students had tuition support from other NIH funding mechanisms; others had support from their university. Although the new K12 program is consistent with the requirements of CREA, some K30 program directors and other experts believe the size and scope of the program will be too small to meet the need for graduate training support for clinical investigators. In terms of fellowships for clinical research training, in fiscal year 2001, NCRR announced a new mentored medical student clinical research program that will support a small number of medical and dental students at GCRCs. This program provides supplemental grants to GCRCs to offer 1 year of support for medical and dental students, usually from their third through fourth year of school, in the form of salary, supplies, and tuition assistance.
A total of five students may eventually be supported at each GCRC site annually, although NCRR plans to provide support for only one medical student per GCRC in fiscal year 2002. Since 1997, NIH has also trained medical and dental students at its campus in the area of clinical research. In this program, partially supported by a pharmaceutical company, 15 to 20 students are selected each year and are each paired with a mentor for a year of academic study and clinical research experience. NIH has launched an extramural loan repayment program for clinical investigators as required by CREA, and most of NIH's ICs participate in the program. In the first year of implementation, eligibility for the loan repayment program was tied to receipt of NIH funding. However, in fiscal year 2003, NIH plans to extend eligibility to allow clinical investigators who receive funding from other sources, such as other federal agencies and nonprofit foundations, to apply. In response to CREA, NIH established an extramural Clinical Research Loan Repayment Program. This new loan repayment program joins four other extramural loan repayment programs and four intramural loan repayment programs that are administered by NIH's Office of Loan Repayment and Scholarship. The new extramural Clinical Research Loan Repayment Program was implemented on December 28, 2001, and a total of 456 applications were received by February 28, 2002. NIH plans to fund 396 loan repayment contracts for a total of $20.2 million by the end of fiscal year 2002. The program provides for the repayment of up to $35,000 per year of the principal and interest of an individual's educational loans for each year of obligated service. These individuals are obligated to engage in clinical research for at least 2 years. The clinical research loan repayment program represents a sizeable proportion (almost two-thirds) of the total extramural loan repayment program budget. 
To be eligible for the clinical research loan repayment program, a clinical investigator must have received an NIH research service award, training grant, career development award, or other NIH grant as a first-time principal investigator or a first-time director of a subproject on a grant or cooperative agreement. The Director of the Office of Loan Repayment and Scholarship told us that in fiscal year 2003, NIH plans to remove the NIH-funding restriction and allow clinical investigators who receive funding from other sources, such as other federal agencies and nonprofit foundations, to apply for the loan repayment program. In addition, NIH expects to almost double the size of the extramural Clinical Research Loan Repayment Program in fiscal year 2003. Although NIH has a central office that administers all the loan repayment programs, funding for the clinical research loan repayment program was distributed to the ICs, based on reported clinical research expenditures in fiscal year 1999. Thus, 21 of NIH's 27 ICs plan to participate in the program by reviewing applications and awarding loan repayment contracts (see app. II). The ICs sponsoring the highest number of contracts are NCI, NHLBI, and NIMH. NCRR also plans to sponsor a significant number of loan repayment contracts. As with most of the training and career development awards, an NIH official told us that the ICs were in the best position to assess applications and the clinical research career potential of awardees. In general, NIH has complied with the key provisions in CREA. It has increased its financial support of clinical research, expanded its clinical research activities, made improvements in its review of clinical research proposals, expanded its support of GCRCs, established new clinical research career development and training programs, and begun to implement a new extramural clinical research loan repayment program.
Some of NIH's actions were taken prior to CREA's passage and some are still being implemented. However, we identified some inconsistencies with the way that NIH counts clinical research expenditures. These inconsistencies limit the precision of NIH's reports of clinical research expenditures and its ability to monitor the support of clinical research. To strengthen the tracking and reporting of intramural and extramural expenditures for clinical research, we recommend that the Director of NIH develop and implement a consistent, accurate, and practical way for all ICs to count intramural and extramural clinical research expenditures. NIH reviewed a draft of this report and provided comments, which are included as appendix III. NIH concurred with our recommendation and reported that it is taking steps to implement a better, more unified system for tracking and reporting clinical research expenditures across the ICs. According to NIH, this new system will be implemented in fiscal year 2003. NIH also provided technical comments, which we incorporated as appropriate. In particular, NIH clarified its response to our questions about the peer review of clinical research. NIH emphasized that it recognizes the importance of collecting data on the grouping of clinical research applications and reviewers. Toward that end, NIH stated that one of the responsibilities of CSR's newly appointed Special Advisor on Clinical Research Review will be to investigate new methods to reliably identify and track clinical research applications and clinical research reviewers. We will send copies to the Secretary of Health and Human Services, the Director of NIH, appropriate congressional committees, and others who are interested. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7119 or Martin T. 
Gahart at (202) 512-3596. Key contributors to this assignment were Anne Dievler, Cedric Burton, and Elizabeth Morrison.
Clinical research is critical for the development of strategies for the prevention, diagnosis, prognosis, treatment, and cure of diseases. Clinical research has been defined as patient-oriented research, epidemiologic and behavioral studies, and outcomes research and health services research. The National Institutes of Health (NIH) is the principal federal agency that funds clinical research, supporting individual clinical investigators, clinical trials, general and specialized clinical research centers, and clinical research training. For many years, there have been concerns that clinical research proposals are viewed less favorably than basic research during the peer review process at NIH and that clinical research has not received its fair share of NIH funding. In November 2000, the Clinical Research Enhancement Act was enacted to address some of these concerns. NIH reports that it has increased its financial support of clinical research and that spending on clinical research has kept pace with total NIH research spending. NIH has taken some steps to improve its peer review of clinical research applications. The Center for Scientific Review recently added two new peer review study sections for the review of clinical research applications--one for clinical cardiovascular science and the other for clinical oncology. NIH has increased its support of general clinical research centers, as required by the act, although the program has grown more slowly than NIH's overall estimated expenditures on clinical research. NIH has established the four clinical research career enhancement award programs mandated by the act. Three of these programs have been implemented, and they support new and midcareer clinical investigators and institutional clinical research teaching programs. The fourth program is designed to support graduate training in clinical investigation.
NIH has initiated a new extramural loan repayment program specifically for clinical investigators as required by the act. This program was launched in December 2001. NIH received 456 applications by the February 2002 deadline. Twenty-one of NIH's institutes plan to fund 396 loan repayment contracts, for a total of $20.2 million, by the end of fiscal year 2002.
The Lobbying Disclosure Act (LDA), as amended by the Honest Leadership and Open Government Act of 2007 (HLOGA), requires lobbyists to register with the Secretary of the Senate and the Clerk of the House and file quarterly reports disclosing their lobbying activity. No specific requirements exist for lobbyists to create or maintain documentation in support of the reports they file. However, LDA guidance issued by the Secretary of the Senate and the Clerk of the House recommends lobbyists retain copies of their filings and supporting documentation for at least 6 years after their reports are filed. Lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point (as opposed to separately with the Secretary of the Senate and the Clerk of the House as was done prior to HLOGA). Registrations and reports must be publicly available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. The LDA requires that the Secretary of the Senate and the Clerk of the House of Representatives provide guidance and assistance on the registration and reporting requirements of the LDA and develop common standards, rules, and procedures for compliance with the LDA. The Secretary and the Clerk are to review the guidance semiannually, with the latest revision having occurred in June 2010 and the latest review having occurred in December 2010. The guidance provides definitions of terms in the act, Secretary and Clerk interpretations of the LDA as amended by HLOGA, specific examples of different scenarios, and an explanation of why the scenarios prompt or do not prompt disclosure under the LDA. In meetings with the Secretary and Clerk, they stated that they consider information we report on lobbying disclosure compliance when they periodically update the guidance.
The LDA defines a "lobbyist" as an individual who is employed or retained by a client for compensation; who has made more than one lobbying contact (a written or oral communication to a covered executive or legislative branch official made on behalf of a client); and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who are lobbyists on behalf of a client other than that person or entity (2 U.S.C. § 1602(9)). Lobbyists are also required to submit a quarterly report, also known as an LD-2 report, for each registration filed. The registration and subsequent LD-2 reports must disclose:

* the name of the organization, lobbying firm, or self-employed individual that is lobbying on the client's behalf;
* a list of individuals who acted as lobbyists on behalf of the client during the quarter;
* whether any lobbyists served as covered executive branch or legislative branch officials in the previous 20 years, known as a "covered official" position;
* the name of and further information about the client, including a general description of its business or activities;
* information on the general issue areas and corresponding issue codes used to describe lobbying activities;
* any foreign entities that have an interest in the client;
* whether the client is a state or local government;
* information on which federal agencies and house(s) of Congress the lobbyist contacted on behalf of the client during the reporting period;
* the amount of income related to lobbying activities received from the client (or expenses for organizations with in-house lobbyists) during the quarter, rounded to the nearest $10,000; and
* a list of constituent organizations that contribute more than $5,000 for lobbying in a quarter and actively participate in planning, supervising, or controlling lobbying activities, if the client is a coalition or association.

The LDA, as amended, also requires lobbyists to report certain contributions semiannually in the contributions report, also known as the LD-203 report. These reports must be filed 30 days after the end of a semiannual period by each organization registered to lobby and by each individual listed as a lobbyist on an organization's lobbying reports. The lobbyists or organizations must:

* list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which they made contributions equal to or exceeding $200 in the aggregate during the semiannual period;
* report contributions made to presidential library foundations and presidential inaugural committees;
* report funds contributed to pay the cost of an event to honor or recognize a covered official, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official or to pay the costs of a meeting or other event held by or in the name of a covered official; and
* certify that they have read and are familiar with the gift and travel rules of the Senate and House and that they have not provided, requested, or directed a gift or travel to a member, officer, or employee of Congress that would violate those rules.

The Secretary of the Senate and the Clerk of the House of Representatives, along with the U.S. Attorney's Office for the District of Columbia (the Office), are responsible for the enforcement of the LDA. The Secretary and the Clerk notify lobbyists or lobbying firms in writing that they may be in noncompliance with the LDA, and subsequently refer those lobbyists who fail to provide an appropriate response to the Office. The Office researches these referrals and sends additional noncompliance notices to the lobbyists, requesting that the lobbyists file reports or correct reported information.
If no response is received after 60 days, the Office decides whether to pursue a civil case against referred lobbyists, which could result in penalties of up to $200,000, or a criminal case against lobbyists who knowingly and corruptly fail to comply with the act, which could lead to a maximum of 5 years in prison. While no specific requirements exist for lobbyists to create or maintain documentation in support of the reports they file, LDA guidance issued by the Secretary of the Senate and Clerk of the House recommends lobbyists retain copies of their filings and supporting documentation for at least 6 years after their reports are filed. As in our prior reviews, most lobbyists reporting $5,000 or more in income or expenses were able to provide documentation, to varying degrees, for the reporting elements in their disclosure reports. Lobbyists for an estimated 97 percent of LD-2 reports were able to provide documentation for income and expenses for the fourth quarter of 2009 and the first three quarters of 2010. The most common forms of documentation provided were invoices for income and payroll records for expenses. According to the documentation lobbyists provided for income and expenses, we estimate that the amount disclosed was supported for 68 percent (65 of 96) of the LD-2 reports; differed by at least $10,000 from the reported amount in 13 percent (13 of 96) of LD-2 reports; and had rounding errors in 19 percent (18 of 96) of LD-2 reports. For an estimated 90 percent of LD-2 reports, year-end 2009 or midyear 2010 LD-203 contribution reports were filed, as required, for all of the lobbyists and the lobbying firm listed on the report. All individual lobbyists and lobbying firms reporting specific lobbying activity are required to file LD-203 reports each period even if they have no contributions to report, because they must certify compliance with the gift and travel rules.
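The three documentation outcomes above turn on the LDA's nearest-$10,000 rounding rule for income and expenses. A minimal sketch of how such a check might work, assuming a half-rounds-up convention; the classification logic is our own simplification for illustration, not GAO's published methodology:

```python
import math

def nearest_10k(amount):
    """Round a dollar amount to the nearest $10,000, as the LDA requires
    for reported lobbying income and expenses (half rounds up here)."""
    return int(math.floor(amount / 10_000 + 0.5)) * 10_000

def classify(reported, documented):
    """Bucket a reported figure against its supporting documentation,
    mirroring the three outcomes described above (illustrative logic)."""
    if reported == nearest_10k(documented):
        return "supported"
    if abs(reported - documented) >= 10_000:
        return "differed by at least $10,000"
    return "rounding error"
```

For example, documented income of $68,420 would be supported by a report of $70,000, counted as a rounding error if reported as $60,000, and flagged as a material difference if reported as $50,000.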
Figure 1 illustrates the extent to which lobbyists were able to provide documentation to support selected elements on the LD-2 reports. Of the 100 LD-2 reports in our sample, 52 disclosed lobbying activities at executive branch agencies, with lobbyists for 28 of these reports providing documentation to support lobbying activities at all agencies listed. These results are consistent with our findings from last year's lobbying disclosure report. Based on this, we estimate that approximately 54 percent of all reports disclosing executive branch activities could be supported by documentation. The LDA requires lobbyists to disclose previously held covered positions when first registering as a lobbyist for a new client, either on the lobbying registration (LD-1) or on the first LD-2 quarterly filing when added as new. Of the 100 reports in our sample, 15 listed lobbyists who did not disclose covered positions when they first lobbied on behalf of the client, as required, or on subsequent disclosure reports. We therefore estimate that a minimum of 9 percent of all LD-2 reports list lobbyists who never properly disclosed one or more previously held covered positions. Table 1 lists the common reasons lobbyists we interviewed gave for not having documentation for some of the elements of their LD-2 reports. For 21 of the LD-2 reports in our sample, lobbyists indicated they planned to file an amendment as a direct result of our review. As of March 2011, 12 of those 21 lobbyists had filed amended LD-2 reports. Reasons for filing amendments varied, but included reporting lobbyists' covered positions, changing the income or expense amounts previously reported, and removing lobbyists who did not lobby on behalf of the client during the quarter under review.
In addition to the 21 reports for which lobbyists stated they were going to file an amendment following our review, lobbyists filed amendments for 8 of the reports in our sample after being notified that their report was selected as part of our random sample but prior to our review. Specific reasons lobbyists filed amendments to change the original filing were to:

* report no lobbying activity, reduce the amount of lobbying income from $21,000 to less than $5,000, and remove the previously reported lobbying contact with the Senate and the House, which the lobbyists stated did not occur during the quarter;
* add a lobbying contact with the Senate and lower the income reported from $10,000 to less than $5,000;
* add the client's interest in a foreign entity;
* change the client's name, remove and add the names of the federal agencies lobbied, remove the earlier reported lobbying contact with the Senate, and remove a lobbyist;
* add a lobbying contact with the House and add a lobbyist;
* add lobbyists;
* add a federal agency and remove a bill number; and
* change the point of contact.

We estimate that a minimum of 5 percent of all LD-203 reports with contributions omit one or more FEC-reportable contributions. The sample of LD-203 reports we reviewed contained 80 reports with political contributions and 80 reports without political contributions. We compared those reports against the contribution reports in the FEC database to identify any instance in which the FEC database listed political contributions made by the lobbyists that were not disclosed on the lobbyist's LD-203 report. Of the 80 LD-203 reports sampled with contributions reported, 7 failed to disclose one or more political contributions that were documented in the FEC database. Of the 80 LD-203 reports sampled with no contributions reported, 1 failed to disclose political contributions that were documented in the FEC database.
We estimate that among all reports a minimum of 2 percent failed to disclose one or more political contributions. Of the 4,553 new registrations we identified from fiscal year 2010, we were able to match 4,132 to reports filed in the first quarter in which they were registered, a match rate of more than 90 percent of registrations, similar to our prior reviews. To determine whether new registrants were meeting the requirement to file, we matched newly filed registrations from the last quarter of 2009 and the first three quarters of 2010 in the Senate and House lobbying disclosure databases to their corresponding quarterly disclosure reports using an electronic matching algorithm that allowed for misspellings and other minor inconsistencies between the registrations and reports. While most lobbyists we interviewed told us they thought that the reporting requirements were clear, a few lobbyists highlighted areas of potential inconsistency and confusion in applying some aspects of the LDA reporting requirements. Several of the lobbyists said that the Secretary of the Senate and Clerk of the House staff were helpful in providing clarifications when needed. As part of our review, lobbyists present during reviews were asked to rate various terms associated with LD-2 reporting as clear and understandable, not clear and understandable, or somewhat clear and understandable. Figure 2 shows the terms associated with LD-2 reporting that the lobbyists we interviewed were asked to rate and how they responded to each term. Table 2 summarizes the feedback we obtained from the lobbyists in our sample who rated the lobbying terms as either not clear and understandable or somewhat clear and understandable. Sixty-nine lobbyists in our sample of LD-2 reports said that they found the reporting requirements easy to meet.
However, 10 lobbyists we interviewed told us that they found meeting the deadline for filing disclosure reports difficult because of the short time frame between the end of the reporting period and the deadline for filing. For example, one lobbyist mentioned having to estimate the income for the final month of the reporting period because bills are prepared after the filing deadline. The deadline for filing disclosure reports is 20 days after each reporting period, or the first business day after the 20th day if the 20th day is not a business day. Prior to enactment of HLOGA, the deadline was 45 days after the end of each reporting period. While the electronic filing system used for lobbying reports may reduce the amount of time filers must spend on data entry, a few lobbyists stated that they misreported on their LD-2 reports because they carried information from old reports to new reports without properly updating it. As a result, some lobbyists now have to amend their LD-2 reports to accurately reflect the lobbying activity for the quarter under review. Since the enactment of HLOGA, quarterly referrals for noncompliance with the LD-2 requirements have been received from both the Secretary of the Senate and the Clerk of the House. From June 2009 to July 2010, the Office received referrals from both the Secretary and the Clerk for noncompliance with reports filed for the 2008 and 2009 reporting periods. The Office received a total of 418 referrals of lobbying firms for the 2008 filing period and 457 referrals of lobbying firms for the 2009 filing period for noncompliance with the quarterly LD-2 reporting requirements. The Office has not yet received all referrals from the Secretary of the Senate and the Clerk of the House for the 2010 reporting period. In addition, the Office has received referrals from the Secretary of the Senate and the Clerk of the House for noncompliance with LD-203 contribution reports.
For noncompliance in the 2008 calendar year, the Office has to date received LD-203 referrals from the Secretary of the Senate for 1,324 lobbying firms, and 126 LD-203 referrals from the Clerk of the House. The Office mailed 962 noncompliance letters to the registered lobbying firms and included the names of the individual lobbyists who were not in compliance with the requirement to report federal campaign and political contributions and certify that they understand the gift rules. However, the Office stated that there is confusion among the lobbying community as to whether the individual or the organization is responsible for responding to letters of noncompliance with LD-203 requirements. To date, the Office has received 765 lobbying firm LD-203 referrals from the Secretary of the Senate, and 195 referrals from the Clerk of the House for the 2009 calendar year. The Office has not yet sent letters of noncompliance for the LD-203 referrals for the 2009 calendar year. To enforce LDA compliance, the Office has primarily focused on sending letters to lobbyists who have potentially violated the LDA by not filing disclosure reports as required. Not all referred lobbyists are sent noncompliance letters, because some have terminated their registrations or may have complied by filing the report before the Office sends the letters. The letters request that the lobbyists comply with the law and promptly file the appropriate disclosure documents. Resolution typically involves the lobbyists coming into compliance by filing the reports or terminating their lobbying status. As of January 25, 2011, about 47 percent (758 of 1,597) of the lobbyists sent letters for noncompliance with 2008 and 2009 referrals are now considered compliant because the lobbyists in question have either filed reports or terminated their registrations as lobbyists.
Additionally, about 49 percent (776 of 1,597) are pending action because the Office did not receive a response from the lobbyist; the Office plans to conduct additional research to locate these lobbyists, or to close the referrals if the lobbyists cannot be located. The remaining 4 percent (63 of 1,597) of the referrals did not require action because the lobbyists were found to be compliant when the referrals were received. This may occur when lobbyists respond to the contact letters from the Secretary of the Senate and Clerk of the House after the referrals have been received by the Office. Other referrals did not require action because the lobbyist or client was no longer in business or the lobbyist was deceased. Figure 3 shows the status of enforcement actions resulting from noncompliance letters sent to registrant organizations for 2008 and 2009 referrals. Since the LDA was passed in 1995, the Office has settled with three lobbyists and collected civil penalties totaling about $47,000. All of the settled cases involved a failure to file. The settlements occurred before the enactment of HLOGA, which increased the penalties for offenses committed after January 1, 2008, to a civil fine of not more than $200,000 and criminal penalties of not more than 5 years in prison. Criminal penalties may be imposed against lobbyists who knowingly and corruptly fail to comply with the act. Officials from the Office stated that they have sufficient civil and criminal statutory authorities to enforce the LDA. As we reported previously, the Office identified six lobbyists whose names appeared frequently in the referrals and sent them letters more targeted toward repeat nonfilers. However, the Office decided not to pursue action against any of them because it determined the lobbyists were unaware of the need to file and therefore did not intentionally avoid compliance with the requirements of the LDA.
In all of those cases, the lobbyists terminated or filed once they were made aware of the requirements. In addition, in the summer of 2010, six additional lobbyists were identified as repeat nonfilers, and to date no action has been taken against any of them. Three of these cases have been resolved because the Office decided not to pursue further action due to lobbyists' illness, inability to pay, or statements that the failure to file was the result of an inadvertent oversight. In another case, the Office determined the level of noncompliance was not sufficiently significant for further action. The Office continues to consider further enforcement actions for the remaining two, and has forwarded these matters to the Assistant United States Attorney for Civil Enforcement for further review. In addition, the Office plans to identify additional cases for civil enforcement review in the coming months. In a prior report, we raised issues regarding the tracking, analysis, and reporting of enforcement activities for lobbyists whom the Secretary of the Senate and the Clerk of the House identify as failing to comply with LDA requirements. That report recommended that the Office complete efforts to develop a structured approach for tracking referrals when they are made, recording reasons for referrals, recording the actions taken to resolve them, and assessing the results of actions taken. The Office has developed a system to address that recommendation. The current system provides a foundation that allows the Office to better focus its lobbying compliance efforts by tracking and recording the status and disposition of enforcement activities. In addition, the system allows the Office to monitor lobbyists who continually fail to file the required disclosure reports.
Under HLOGA, the Attorney General is required to file an enforcement report with Congress after each semiannual period beginning on January 1 and July 1, detailing the aggregate number of enforcement actions DOJ took under the act during the semiannual period and, by case, any sentences imposed. On September 6, 2009, the Attorney General filed his report for the semiannual period ending June 30, 2009. We found information provided in the enforcement report generally matched information the system provided to GAO. In cases where we identified inconsistencies, they were very minor. For example, the differences in the number of noncompliance letters sent were less than 10 out of several hundred letters sent. In addition, there were small inconsistencies regarding the dates referrals were received from the Secretary of the Senate and Clerk of the House. There were also inconsistencies in the number of referrals received regarding individual lobbyists and registrant organizations. These inconsistencies totaled less than 10 out of more than a thousand referrals received. We brought these minor errors to the attention of the Office and asked them about their processes for ensuring data accuracy. Officials from the Office stated that they do not have formal procedures for ensuring that data are entered into the system in a timely fashion. In addition, they stated that there are no formal processes in place to review, validate, or edit the system data after they are entered to help ensure that accurate data are entered into the system and to help ensure that erroneous data are identified, reported, and corrected. The Office stated that they plan to formalize data review, refine summary data, and institute procedures to ensure data are accurate and reliable in the next few months. As part of this effort, they plan to establish periodic quality checks and verification of data as we suggested when we met with them in January 2011. 
Officials from the Office stated that they have sufficient civil and criminal statutory authorities to enforce the LDA. The Office has increased the number of staff assigned to assist with lobbying compliance issues from 6 to 17. All of the staff continue to work on lobbying disclosure enforcement part-time and primarily in an administrative capacity. Some of their administrative activities include researching the Senate and House databases to determine if referrals have been resolved, or mailing noncompliance letters. In addition to those 17 part-time staff members, one contractor was hired in September 2010 to work on lobbying compliance issues on a full-time basis. We provided a draft of this report to the Attorney General for review and comment. We met with the Assistant U.S. Attorney for the District of Columbia, who on behalf of the Attorney General responded that DOJ had no comments. We are sending copies of this report to the Attorney General, the Secretary of the Senate, the Clerk of the House of Representatives, and interested congressional committees and members. This report also is available at no charge on the GAO Web site at http://www.gao.gov. Please contact J. Christopher Mihm at (202) 512-6806 or [email protected] if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
Consistent with the audit requirements in the Honest Leadership and Open Government Act of 2007, our objectives were to: determine the extent to which lobbyists are able to demonstrate compliance with the Lobbying Disclosure Act of 1995 (LDA), as amended, by providing documentation to support information contained on reports filed under the LDA; identify any challenges that lobbyists report to compliance and potential improvements; and describe the resources and authorities available to the U.S. Attorney's Office for the District of Columbia (the Office), and the efforts the Office has made to improve enforcement of the LDA, including identifying trends in past lobbying disclosure compliance. To respond to our mandate, we used information in the lobbying disclosure database maintained by the Clerk of the House of Representatives. To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and spoke to officials responsible for maintaining the data. Although registrations and reports are filed through a single Web portal, each chamber subsequently receives copies of the data and follows different data cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases that result from chamber differences in data processing.
For example, Senate staff told us during previous reviews that they set aside a greater proportion of registration and report submissions than the House for manual review before entering the information into the database, and as a result the Senate database would be slightly less current than the House database on any given day, pending review and clearance. House staff told us during previous reviews that they rely heavily on automated processing and that, while they manually review reports that do not perfectly match information on file for a given registrant or client, they will approve and upload such reports as originally filed by each lobbyist even if the reports contain errors or discrepancies (such as a variant on how a name is spelled). Nevertheless, we do not have reason to believe that the content of the Senate and House systems would vary substantially. While we determined that both the Senate and House disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure reports (LD-2 reports) and for assessing whether newly filed registrants also filed required reports, we chose to use data from the Clerk of the House for sampling LD-2 reports from the last quarter of 2009 and the first three quarters of 2010, for sampling year-end 2009 and midyear 2010 contributions reports (LD-203 reports), and for matching quarterly registrations with filed reports. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House, both of which have key roles in the lobbying disclosure process, although we consulted with officials from each office, and they provided us with general background information at our request and detailed information on data processing procedures.
To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a stratified random sample of 100 LD-2 reports from the fourth quarter of calendar year 2009 and the first, second, and third quarters of calendar year 2010, with 25 reports selected from each quarter. We excluded reports with no lobbying activity or with income less than $5,000 from our sampling frame and drew our sample from the 55,282 activity reports filed for the last quarter of 2009 and the first three quarters of 2010 available in the public House database, as of our final download date for each quarter. One LD-2 report in the sample was amended after the filer was notified of its selection for the sample but prior to our review. The amendment decreased lobbying activity income for that quarter from $21,000 to less than $5,000 and changed the report to show no lobbying contact, whereas the original LD-2 activity report showed lobbying contact with the Senate and House. We reviewed this report because it was amended to show no activity, with lobbying income of less than $5,000, following notification of its inclusion in the sample. Since "no lobbying activity" was indicated on the amended LD-2 activity report, the lobbyists were not required to provide information for all reporting elements on the LD-2; therefore, in certain calculations this report is excluded from the sample. Our sample is based on a stratified random selection, and it is only one of a large number of samples that we could have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples that we could have drawn.
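The confidence-interval reasoning above can be sketched as a normal-approximation interval for a sample proportion with a finite-population correction. The figures below reuse numbers from this report, but the formula is a textbook simplification, not GAO's exact stratified estimator:

```python
import math

def proportion_ci(successes, n, population, z=1.96):
    """Two-sided ~95% CI for a proportion estimated from a simple random
    sample of size n drawn from a finite population."""
    p = successes / n
    # Finite-population correction for sampling without replacement.
    fpc = math.sqrt((population - n) / (population - 1))
    half_width = z * math.sqrt(p * (1 - p) / n) * fpc
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# 65 of 96 sampled LD-2 reports had fully supported income/expense
# amounts, out of roughly 55,282 reports in the sampling frame.
p, low, high = proportion_ci(65, 96, 55_282)
```

Here the half-width comes out under 10 percentage points, consistent with the precision statement in this report; a one-sided bound (as used for the minimum-percentage estimates) would instead place the full 5 percent error probability on one tail, with z = 1.645.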
All percentage estimates in this report have 95 percent confidence intervals of within plus or minus 10.0 percentage points of the estimate itself, unless otherwise noted. When estimating compliance with certain of the elements we examined, we based our estimate on a one-sided 95 percent confidence interval to generate a conservative estimate of either the minimum or maximum percentage of reports in the population exhibiting the characteristic. We contacted all the lobbyists and lobbying firms in our sample and asked them to provide support for key elements in their reports, including: the amount of income reported for lobbying activities, the amount of expenses reported for lobbying activities, the names of the lobbyists listed in the report, the houses of Congress and federal agencies that they lobbied, and the issue codes under which they reported lobbying activity. In addition, we determined whether each individual lobbyist listed on the LD-2 report had filed a semiannual LD-203 report. Prior to interviewing lobbyists about each LD-2 report in our sample, we conducted an open-source search to determine whether each lobbyist listed on the report appeared to have held a covered official position required to be disclosed. For lobbyists registered prior to January 1, 2008, covered official positions held within 2 years of the date of the report must be disclosed; this period was extended to 20 years for lobbyists who registered on or after January 1, 2008. Lobbyists are required to disclose covered official positions on either the client registration (LD-1) or on the first LD-2 report for a specific client, and consequently those who had held covered official positions may have disclosed the information on an LD-2 report filed prior to the report we examined as part of our random sample. To identify likely covered official positions, we examined lobbying firms' Web sites and conducted extensive open-source searches of Leadership Directories, Who's Who in American Politics, and U.S.
newspapers through Nexis for lobbyists' names and variations on their names. We then examined the current LD-2 report under review, prior LD-2 reports, and the client registration to determine if the identified covered positions were disclosed properly. Finally, we asked lobbying firms and organizations about each lobbyist listed on the LD-2 report whom we had identified as having a previous covered official position for which we found no disclosure, to determine whether covered official positions had been appropriately disclosed or whether there was some other acceptable reason for the omission (such as having been disclosed on an earlier registration or LD-2 report). Despite our rigorous search protocol, it is possible that our search failed to identify omitted reports of covered official positions. Thus, our estimate of the proportion of reports with lobbyists who failed to appropriately disclose covered official positions is a lower-bound estimate of the minimum proportion of reports that failed to report such positions. In addition to examining the content of LD-2 reports, we confirmed whether year-end 2009 and midyear 2010 LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample. Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports. To determine if the LDA's requirement for registrants to file a report in the quarter of registration was met for the fourth quarter of 2009 and the first, second, and third quarters of 2010, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports.
Using direct matching and text and pattern matching procedures, we were able to identify matching disclosure reports for 4,132 of the 4,553, or 90.8 percent, of newly filed registrations. We began by standardizing client and registrant names in both the report and registration files (including removing punctuation and standardizing words and abbreviations, such as "Company" and "Co."). We then matched reports and registrations using the House identification number (which is linked to a unique registrant-client pair), as well as the names of the registrant and client. For reports we could not match by identification number and standardized name, we also attempted to match reports and registrations by client and registrant name, allowing for variations in the names to accommodate minor misspellings or typos. We could not readily identify matches in the report database for the remaining registrations using electronic means. To assess the accuracy of the LD-203 reports, we analyzed two stratified random samples of LD-203 reports from the 32,893 total LD-203 reports. The first sample contains 80 of the 10,956 reports with political contributions, and the second contains 80 of the 21,937 reports listing no contributions. Each sample contains 40 reports from the year-end 2009 filing period and 40 reports from the midyear 2010 filing period. The samples allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the reports without contributions to within a 95 percent confidence interval of plus or minus 7.1 percentage points or less, and to within 3.5 percentage points of the estimate when analyzing both samples together. We analyzed the contents of the LD-203 reports and compared them to contribution data found in the publicly available Federal Election Commission's (FEC) political contribution database.
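The standardization and fuzzy-matching steps described above can be sketched as follows; the abbreviation table and similarity threshold are our own illustrative choices, not the actual rules used in the audit:

```python
import re
from difflib import SequenceMatcher

# Illustrative substitutions only; the audit's actual standardization
# rules are more extensive.
ABBREV = {"company": "co", "corporation": "corp",
          "incorporated": "inc", "limited": "ltd"}

def standardize(name):
    """Lowercase, strip punctuation, and collapse common abbreviations
    so that minor formatting differences do not block a match."""
    tokens = re.sub(r"[^\w\s]", " ", name.lower()).split()
    return " ".join(ABBREV.get(t, t) for t in tokens)

def names_match(a, b, threshold=0.9):
    """Treat two registrant or client names as the same filer when their
    standardized forms are nearly identical (tolerates small typos)."""
    return SequenceMatcher(None, standardize(a), standardize(b)).ratio() >= threshold
```

In the audit, reports were first matched on the House identification number and exact standardized names; this kind of looser similarity comparison would apply only to the residual unmatched registrations.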
For our fiscal year 2009 report, we interviewed staff at the FEC responsible for administering the database and determined that the data reliability is suitable for the purpose of confirming whether an FEC-reportable disclosure listed in the FEC database had been reported on an LD-203. We compared the FEC-reportable contributions reported on the LD-203 reports with information in the FEC database. The verification process required text and pattern matching procedures, and we used professional judgment when assessing whether an individual listed is the same individual filing an LD-203. For contributions reported in the FEC database and not on the LD-203, we asked the lobbyists or organizations to provide an explanation of why the contribution was not listed on the LD-203 report or to provide documentation of those contributions. As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist's LD-203 report. We did not estimate the percent of other non-FEC political contributions that were omitted (such as honoraria or gifts to presidential libraries). We obtained views from lobbyists included in our sample of reports on any challenges to compliance.
To describe the processes used by the Office in following up on referrals from the Secretary of the Senate and the Clerk of the House, data reliability in the Office's tracking system for referrals, and the resources and authorities used by the Office in its role in enforcing compliance with the LDA, we interviewed officials from the Office. We obtained information on the capabilities of the system they established to track and report compliance trends and referrals and other practices they have established to focus resources on enforcement of the LDA; the extent to which they have implemented data reliability checks into their tracking system; and the level of staffing and resources dedicated to lobbying disclosure enforcement. The Office provided us with reports from the tracking system on the number and status of cases referred, pending, and resolved. The mandate does not include identifying lobbyists who failed to register and report in accordance with LDA requirements or determining, for those lobbyists who did register and report, whether all lobbying activity or contributions were disclosed. We conducted this performance audit from April 2010 through March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The random sample of lobbying disclosure reports we selected was based on unique combinations of registrant lobbyists and client names (see table 3). See table 4 for a list of lobbyists and lobbying firms from our random sample of lobbying contribution reports with contributions.
See table 5 for a list of lobbyists and lobbying firms from our random sample of lobbying contribution reports without contributions. In addition to the contacts named above, Robert Cramer, Associate General Counsel; Bill Reinsberg, Assistant Director; Shirley Jones, Assistant General Counsel; Crystal Bernard; Amy Bowser; Anna Maria Ortiz; Melanie Papasian; Katrina Taylor; Megan Taylor; and Greg Wilmoth made key contributions to this report. Assisting with lobbyist file reviews and interviews were Sarah Arnett, Sandra Beattie, Colleen Candrl, Irina Carnevale, Jeffrey DeMarco, Nicole Dery, Shannon Finnegan, Robert Gebhart, Meredith Graves, Lauren Grossman, Amanda Harris, Lois Hanshaw, Angela Leventis, Blake Luna, Patricia MacWilliams, Stacy Ann Spence, Jonathan Stehle, and Daniel Webb.
The Honest Leadership and Open Government Act of 2007 requires that GAO annually (1) determine the extent to which lobbyists can demonstrate compliance with disclosure requirements, (2) identify any challenges that lobbyists report to compliance, and (3) describe the resources and authorities available to the U.S. Attorney's Office for the District of Columbia (the Office), and the efforts the Office has made to improve its enforcement of the Lobbying Disclosure Act of 1995 as amended (LDA). This is GAO's fourth report under the mandate. GAO reviewed a stratified random sample of 100 lobbying disclosure reports filed from the fourth quarter of calendar year 2009 through the third quarter of calendar year 2010. GAO also selected two random samples totaling 160 reports of federal political campaign contributions from year-end 2009 and midyear 2010. This methodology allowed GAO to generalize to the population of 55,282 disclosure reports with $5,000 or more in lobbying activity. GAO also met with officials from the Office regarding efforts to focus resources on lobbyists who fail to comply. GAO provided a draft of this report to the Attorney General for review and comment. The Assistant U.S. Attorney for the District of Columbia responded on behalf of the Attorney General that the Department of Justice had no comments on the draft of this report. Lobbyists were generally able to provide documentation to support the amount of income and expenses reported; however, less documentation was provided to support other items in their disclosure reports. This finding is similar to GAO's results from prior reviews. There are no specific requirements for lobbyists to create or maintain documentation related to disclosure reports they file under the LDA. 
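As a rough illustration of how such a sampling margin of error relates to sample size, the sketch below applies the textbook simple-random-sample formula for a proportion, with a finite-population correction. It deliberately ignores the stratification GAO actually used, so it will not reproduce GAO's exact confidence bounds; it only shows the mechanics.

```python
import math

def moe_proportion(n, p=0.5, population=None, z=1.96):
    """95% margin of error for a sample proportion under simple random
    sampling. Applies a finite-population correction when the population
    size is given. Illustrative only: GAO's stratified design yields
    different (typically tighter) bounds than this formula."""
    se = math.sqrt(p * (1 - p) / n)
    if population:
        se *= math.sqrt((population - n) / (population - 1))
    return z * se

# Worst-case (p = 0.5) margin, in percentage points, for a 100-report
# sample drawn from the 55,282 disclosure reports:
print(round(moe_proportion(100, population=55_282) * 100, 1))  # prints 9.8
```

The same function shows why combining the two 80-report contribution samples (n = 160) tightens the worst-case margin to roughly 7.7 percentage points under this simplified model.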
For income and expenses, two key elements of the reports, GAO estimates that lobbyists could provide documentation for approximately 97 percent of the disclosure reports for the fourth quarter 2009 and the first three quarters of 2010. According to the documentation lobbyists provided for income and expenses, GAO estimates the amount disclosed was supported for 68 percent of disclosure reports. After GAO's review, 21 lobbyists stated that they planned to amend their disclosure reports to make corrections on one or more data elements. As of March 2011, 12 of the 21 amended their disclosure reports. For political contributions reports, GAO estimates that a minimum of 2 percent of reports failed to disclose political contributions that were documented in the Federal Election Commission database. The majority of lobbyists who newly registered with the Secretary of the Senate and Clerk of the House of Representatives in the last quarter of 2009 and first three quarters of 2010 filed required disclosure reports for that period. GAO could identify corresponding reports on file for lobbying activity for 90 percent of registrants. The majority of lobbyists felt that the terms associated with disclosure reporting were clear and understandable. The few lobbyists who stated that disclosure reporting terminology remained a challenge highlighted areas of potential inconsistency and confusion in applying the terms associated with the reporting requirements. Some lobbyists reported a lack of clarity in determining lobbying activities versus non-lobbying activities. A few lobbyists stated that they misreported on their disclosure reports because they carried information from old reports to new reports without properly updating information. The Office is responsible for enforcement of the LDA and has the authority to pursue a civil or criminal case for noncompliance.
To enforce LDA compliance, the Office has primarily focused on sending letters to lobbyists who have potentially violated the LDA by not filing disclosure reports. For calendar years 2008 and 2009, the Office sent 1,597 noncompliance letters for disclosure reports and political contributions reports. About half of the lobbyists who received noncompliance letters are now compliant. In response to an earlier GAO recommendation, the Office has developed a system to better focus enforcement efforts by tracking and recording the status of enforcement activities. The system allows the Office to monitor lobbyists who continually fail to file the required disclosure reports. The Office stated that it plans to institute procedures to formalize data review, refine summary data, and ensure data are accurate and reliable in the next few months.
We interviewed staff from USAID and its implementing partners in Nairobi who had responsibility for oversight of the EFSP-funded operations in both Kenya and Somalia, including areas considered high security risk. Obligations for cash-based EFSP projects grew from $75.8 million in fiscal year 2010 to $409.5 million in fiscal year 2014--an increase of 440 percent over the 5-year period, the majority of which was in response to a large and sustained humanitarian crisis in Syria, including cash-based food assistance to Syrian refugees in the Syria region. Of the $991 million in total grant funding obligated in fiscal years 2010 to 2014, $330.6 million was for cash interventions and $660.3 million was for voucher interventions. The majority of the funding--$621.7 million (or 63 percent)--was awarded to WFP, and $369.3 million (or 37 percent) was awarded to other implementing partners. To deliver cash-based food assistance, USAID's implementing partners employ a variety of mechanisms ranging from direct distribution of cash in envelopes to the use of information technologies such as cell phones and smart cards to redeem electronic vouchers or access accounts established at banks or other financial institutions (see fig. 1). The value of cash and voucher transfers is generally based on a formula that attempts to bridge the gap between people's food needs and their capacity to cover them. Federal internal control standards are based on an internal control framework that, according to COSO, has gained broad acceptance and is widely used around the world. Both frameworks include the five components of internal control: control environment, risk assessment, control activities, information and communication, and monitoring. Internal control generally serves as a first line of defense in safeguarding assets, such as cash and vouchers.
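The gap-bridging formula described above might be sketched as follows. The variable names and the floor at zero are assumptions for illustration; USAID's partners set their actual transfer values program by program, and this is not any partner's published formula.

```python
def transfer_value(food_basket_cost: float, household_capacity: float) -> float:
    """Transfer sized to bridge the gap between the cost of meeting a
    household's food needs and what the household can cover itself.
    Floored at zero (illustrative assumption): households that can fully
    cover the basket receive no transfer."""
    return max(0.0, food_basket_cost - household_capacity)
```

For example, a $100 food basket against $40 of household capacity would imply a $60 transfer, while a household that can already cover the basket receives nothing.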
In implementing internal control standards, management is responsible for developing the detailed policies, procedures, and practices to fit the entity's operations and to ensure they are built into and are an integral part of operations. In our March 2015 report, we found that USAID had developed processes for awarding cash-based food assistance grants; however, it lacked formal internal guidance for its process to approve award modifications and provided no guidance for partners on responding to changing market conditions that might warrant an award modification. USAID's process for awarding EFSP funds. USAID outlined its process for reviewing and deciding to fund proposals for cash-based food assistance projects in the Annual Program Statement (APS) for International Emergency Food Assistance. According to USAID, the APS functions as guidance on cash-based programming by describing design and evaluation criteria for selecting project proposals and explaining the basic steps in the proposal review process. The APS also serves as a primary source of information for prospective applicants that apply for emergency food assistance awards using EFSP resources. Under the terms of the APS, USAID awards new cash-based food assistance grants through either a competitive proposal review or an expedited noncompetitive process. For our March 2015 report, we reviewed 22 proposals for new cash-based food assistance projects that were awarded and active as of June 1, 2014; we found that USAID made 13 of these awards through its competitive process, 7 through an abbreviated noncompetitive review, and 2 under authorities allowing an expedited emergency response. USAID lacked guidance for staff on modifying awards. 
In our March 2015 report, we found that although the APS outlined the review process for new award proposals, neither the current 2013 APS nor the two previous versions provide clear guidance on the process for submission, review, and approval of modifications to existing awards. According to USAID officials, USAID follows a similar process in reviewing requests to modify ongoing awards, which implementing partners may propose for a variety of reasons, such as an increase in the number of beneficiaries within areas covered by an award or a delay in completing cash distributions. Two main types of modifications may be made to a grant agreement--no-cost modifications and cost modifications. For the four case study countries, in our March 2015 report, we reviewed 13 grant agreements made from January 2012 to June 2014 that had 41 modifications during that period. Twenty of these cost modifications resulted in an increase in total funding for the 13 grants from about $91 million to about $626 million, a 591 percent increase. Ten of these cost modifications were made to 1 award, the Syria regional award, whose funding increased from $8 million to $449 million (see fig. 2). The Syria regional award modifications amounted to about 82 percent of the total increase in funding for the cost modifications we reviewed. We concluded that without formal guidance, USAID cannot hold its staff and its partners accountable for taking all necessary steps to justify and document the modification of awards. At the time of our study, USAID noted that its draft internal guidance for modifying awards was under review. In our March 2015 report, we recommended that USAID expedite its efforts to establish formal guidance for staff reviewing modifications of cash-based food assistance grant awards. USAID concurred with our recommendation. In June 2015, USAID reported that it issued written guidance that addresses the review and approval of grant modifications. 
We have yet to verify this information to determine whether it addresses the issues we identified. USAID lacked guidance for implementing partners. Additionally, in our March 2015 report we found that, although USAID required partners implementing cash-based food assistance to monitor market conditions, USAID did not provide clear guidance about how to respond when market conditions change--for example, when and how partners might adjust levels of assistance that beneficiaries receive. We analyzed data on the prices of key staple commodities in selected markets for our case study countries from fiscal years 2010 through 2014. We found that the prices of key cereal commodities in Niger and Somalia changed significantly without corresponding adjustments to all implementing partners' cash-based projects. We did not find similar food price changes in Jordan and Kenya. According to USAID officials, USAID does not have a standard for identifying significant price changes, since the definition of significance is specific to each country and region. In addition, we did not find guidance addressing modifications in response to changing market conditions in the APS. We found that this lack of guidance had resulted in inconsistent responses to changing market conditions among different cash and voucher projects funded by USAID. For example, an implementing partner, whose project we reviewed in Kenya, predetermined, as part of its project design, when adjustments to cash transfer amounts would be triggered by food price changes, while an implementing partner whose project we reviewed in Niger relied on an ad hoc response. The implementing partner in Kenya established the cash and voucher transfer rate based on the value of the standard food basket; it reviewed prices every month but would change cash and voucher transfer amounts only in response to price fluctuations, in either direction, of more than 10 percent.
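The Kenya partner's predetermined trigger rule might be sketched as follows. The rescaling rule (proportional to the price ratio) is an assumption for illustration; the report only states that the partner reviewed prices monthly and adjusted transfers when food-basket prices moved more than 10 percent in either direction.

```python
def adjust_transfer(current_transfer: float, baseline_price: float,
                    latest_price: float, threshold: float = 0.10) -> float:
    """Return the transfer amount after a monthly price review.
    The transfer changes only when the food-basket price has moved more
    than the threshold (10 percent) in either direction since the last
    adjustment; the proportional rescaling shown here is a simplifying
    assumption, not the partner's documented method."""
    change = (latest_price - baseline_price) / baseline_price
    if abs(change) > threshold:
        return current_transfer * (latest_price / baseline_price)
    return current_transfer
```

Under this sketch, a 4 percent price rise leaves the transfer untouched, while a 20 percent rise (or a 12 percent fall) rescales it, which is the kind of predetermined response the Niger project lacked.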
We concluded that without clear guidance about when and how implementing partners should modify cash-based food assistance projects in response to changing market conditions, USAID ran the risk of beneficiaries' benefits eroding through price increases or inefficient use of scarce project funding when prices decrease. We recommended in our March 2015 report that USAID develop formal guidance to implementing partners for modifying cash-based food assistance projects in response to changes in market conditions. USAID concurred with this recommendation. In June 2015, USAID reported entering into an agreement with the Cash Learning Partnership (CaLP), an organization that is working to improve the use of cash and vouchers, to help develop guidance to implementing partners on adapting programs to changing market conditions. USAID plans to complete this guidance by April 2016. We have yet to verify this information to determine whether it addresses the issues we identified. In our March 2015 report, we found that USAID relied on its implementing partners to implement financial oversight of EFSP projects, but it did not require them to conduct comprehensive risk assessments to plan financial oversight activities--two key components of an internal control framework. In addition, we found that USAID provided little or no guidance to partners and its own staff on carrying out these components. Risk assessments were lacking. Our March 2015 report found that for case study projects we reviewed in four countries, neither USAID nor its implementing partners conducted comprehensive risk assessments that address financial vulnerabilities that may affect cash-based food assistance projects, such as counterfeiting, diversion, and losses. USAID officials told us that they conduct a risk assessment for all USAID's programs within a country rather than separate risk assessments for cash-based food assistance projects. 
According to USAID, its country-based risk assessments focus primarily on the risks that U.S. government funds may be used for terrorist activities and on the security threat levels that could affect aid workers and beneficiaries; these risk assessments do not address financial vulnerabilities that may affect cash-based food assistance projects, such as counterfeiting, diversion, and losses. A USAID official provided us with internal EFSP guidance to staff on the grant proposal and award process stating that an award would not be delayed if a risk-based assessment has not been conducted. According to USAID officials, its partners have established records of effective performance in implementing cash and voucher projects and they understand the context of operating in these high-risk environments. As a result, USAID expects that its partners will conduct comprehensive risk assessments, including financial risk assessments, and develop appropriate risk mitigation measures for their cash-based food assistance projects. However, none of the partners implementing EFSP-funded projects in our four case study countries had conducted a comprehensive risk assessment based on their guidance or widely accepted standards during the period covered by our March 2015 review. We found that USAID did not require its implementing partners to develop and submit comprehensive risk assessments with mitigation plans as part of the initial grant proposals and award process or as periodic updates, including when grants are modified. USAID officials stated that most EFSP grant proposals and agreements do not contain risk assessments and mitigation plans. In addition, the implementing partners we reviewed had not consistently prioritized the identification or mitigation of financial risks that address vulnerabilities such as counterfeiting, diversion, and losses.
We concluded that without comprehensive risk assessments of its projects, USAID staff would be hampered in developing financial oversight plans to help ensure that partners are implementing the appropriate controls, including financial controls over cash and vouchers to mitigate fraud and misuse of EFSP funds. In our March 2015 report, we recommended that USAID require implementing partners of cash-based food assistance projects to conduct comprehensive risk assessments and submit the results to USAID along with mitigation plans that address financial vulnerabilities such as counterfeiting, diversion, and losses. USAID concurred with our recommendation. In June 2015, USAID noted that the Fiscal Year 2015 APS includes a requirement for applicants to provide an assessment of risk of fraud or diversion and controls in place to prevent any diversion or counterfeiting. We have yet to verify this information to determine whether it addresses the issues we identified. Control activities had weaknesses. In our March 2015 report, we found that USAID's partners had generally implemented financial controls over cash and voucher distributions but the partners' financial oversight guidance had weaknesses. We reviewed selected distribution documents for three implementing partners with projects that began around 2012 in our four case study countries (Jordan, Kenya, Niger, and Somalia). Our review found that the three implementing partners had generally implemented financial controls over their cash and voucher distribution processes. For example, in Niger, we verified that there were completed and signed beneficiary payment distribution lists with thumb prints; field cash payment reconciliation reports that were signed by the partner, the financial service provider, and the village chief; and payment reconciliation reports prepared, signed, and stamped by the financial service provider.
Additionally, we determined that these three implementing partners generally had proper segregation of financial activities between their finance and program teams. Nonetheless, in Kenya, our review showed that in some instances, significant events affecting the cash distribution process were not explained in the supporting documentation. Our review also found that in most instances the implementing partners had submitted reports required by their grant awards, and generally within the required time frames; in addition, we found that these reports contained the key reporting elements required by the grant award. However, in some instances, we were unable to determine whether quarterly reports were submitted on time because USAID was unable to provide us with the dates when it received these reports from the implementing partner. According to USAID officials, USAID does not have a uniform system for recording the date of receipt for quarterly progress reports and relies on FFP officers to provide this information; however, individual FFP officers have different methods for keeping track of the reports and the dates on which they were received. Financial oversight guidance had gaps. In our March 2015 report, we found that implementing partners in the four case study countries we reviewed had developed some financial oversight guidance for their cash and voucher projects, but we found gaps in the guidance that could hinder effective implementation of financial control activities. For example, one implementing partner developed a financial procedures directive in 2013 that requires, among other things, risk assessments, reconciliations, and disbursement controls. However, the directive lacked guidance on how to estimate and report losses. Another implementing partner had developed field financial guidance in 2013 that provides standardized policies and procedures for financial management and accounting in the partner's field offices. 
However, the implementing partner acknowledged that the field manual does not address financial procedures specifically for voucher projects. In addition, we found that USAID's guidance to partners on financial control activities is limited. For example, USAID lacked guidance to aid implementing partners in estimating and reporting losses. We concluded that when implementing partners for EFSP projects have gaps in financial guidance and limitations with regard to oversight of cash-based food assistance projects, the partners may not put in place appropriate controls for areas that are most vulnerable to fraud, diversion, and misuse of EFSP funding. In our March 2015 report, we recommended that USAID develop a policy and comprehensive guidance for USAID staff and implementing partners for financial oversight of cash-based food assistance projects. USAID concurred with our recommendation and in June 2015 reported that CaLP is expected, as part of its award, to work on the development and dissemination of policy and guidance related to cash-based food assistance. USAID plans to complete this effort by April 2016. We have not yet verified this information to determine whether it addresses the issues we identified. Limitations in USAID's field financial oversight. As we reported in March 2015, according to USAID officials, Washington-based country backstop officers (CBO) perform desk reviews of implementing partners' financial reports and quarterly and final program reports and share this information with FFP officers in the field; in addition, both the Washington-based CBOs and FFP officers in-country conduct field visits. However, we found that the ability of the CBOs and FFP officers to consistently perform financial oversight in the field may be constrained by limited staff resources, security-related travel restrictions and requirements, and a lack of specific guidance on conducting oversight of cash transfer and food voucher programs.
Field visits are an integral part of financial oversight and a key control to help ensure management's objectives are carried out. They allow CBOs and FFP officers to physically verify the project's implementation, observe cash disbursements, and conduct meetings with beneficiaries and implementing partners to determine whether the project is being implemented in accordance with the grant award. According to the CBOs and FFP officers, the frequency of field visits for financial oversight depends on staff availability and security access. In our four case study countries, the FFP officers told us that because of their large portfolios and conflicting priorities, they performed limited site visits for the projects that we reviewed. In Kenya, the FFP officer told us that her portfolio covered 14 counties, and the cash-based food assistance project we reviewed was just one component. Owing to the demands of all her projects, she had been able to perform limited site visits for the projects we reviewed. We also found that USAID had two staff members in the field to oversee its Syria regional cash-based projects spread over five countries that had received approximately $450 million in EFSP funding from July 2012 through December 2014. Because of staff limitations, FFP officers primarily rely on implementing partners' reports from the field and regular meetings with them to determine whether a project is being executed as intended. However, USAID's guidance to its FFP officers and its implementing partners on financial oversight and reporting is limited. For example, FFP staff in Niger stated that they have had insufficient guidance and training on financial oversight of cash-based food assistance projects. Furthermore, the FFP officers told us that USAID is not prescriptive in the financial oversight procedures it expects from its implementing partners. Additionally, they noted that USAID has not set a quantitative target for site visits by FFP officers. 
FFP officers in our four case study countries told us that they use a risk-based approach to select which sites to visit. We concluded that without systematic financial oversight of the distribution of cash and voucher activities in the field, USAID is hampered in providing reasonable assurance that EFSP funds are being used for their intended purposes. In our March 2015 report, we recommended that USAID require its staff to conduct systematic financial oversight of USAID's cash-based food assistance projects in the field. USAID concurred with this recommendation. As of June 2015, USAID reported that it is working to develop training for its staff and will continue to explore using third-party monitors where security constraints may be an issue. USAID plans to complete these actions by April 2016. We have not yet verified this information to determine whether it addresses the issues we identified. Chairman Rouzer, Ranking Member Costa, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have questions about this testimony, please contact Thomas Melito, Director, International Affairs and Trade at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Joy Labez (Assistant Director), Rathi Bose, Ming Chen, Beryl H. Davis, David Dayton, Martin De Alteriis, Fang He, Teresa Abruzzo Heger, Dainia Lawes, Kimberly McGatlin, Diane Morris, Shannon Roe, Barbara Shields, Sushmita Srikanth, and Dan Will.
For over 60 years, the United States has provided assistance to food-insecure countries primarily in the form of food commodities procured in the United States and transported overseas. In recent years, the United States has joined other major donors in increasingly providing food assistance in the form of cash or vouchers. In fiscal year 2014, U.S.-funded cash and voucher projects in 28 countries totaled about $410 million, the majority of which was for the Syria crisis, making the United States the largest single donor of cash-based food assistance. This testimony summarizes GAO's March 2015 report (GAO-15-328) that (1) reviewed USAID's processes for awarding and modifying cash-based food assistance projects and (2) assessed the extent to which USAID and its implementing partners have implemented financial controls to help ensure appropriate oversight of such projects. GAO analyzed program data and documents for selected projects in Jordan, Kenya, Niger, and Somalia; interviewed relevant officials; and conducted fieldwork in Jordan, Kenya, and Niger. The U.S. Agency for International Development (USAID) awards new cash-based food assistance grants under its Emergency Food Security Program (EFSP) through a competitive proposal review or an expedited noncompetitive process; however, USAID lacks formal internal guidance for modifying awards. In its March 2015 review of 22 grant awards, GAO found that USAID made 13 through its competitive process, 7 through an abbreviated noncompetitive review, and 2 under authorities allowing an expedited emergency response. According to USAID, the agency follows a similar process for modification requests. Partners may propose cost or no-cost modifications for a variety of reasons, such as an increase in the number of beneficiaries or changing market conditions affecting food prices. 
In its review of 13 grant awards that had been modified, GAO found that cost modifications for 8 awards resulted in an increase in funding for the 13 awards from about $91 million to $626 million. According to USAID, procedures for modifying awards have been updated but GAO has yet to verify this information. GAO also found that though USAID requires partners to monitor market conditions--a key factor that may trigger an award modification--it did not provide guidance on when and how to respond to changing market conditions. GAO concluded that, until USAID institutes formal guidance, it cannot hold its staff and implementing partners accountable for taking all necessary steps to justify and document the modification of awards. USAID relies on implementing partners for financial oversight of EFSP projects but did not require them to conduct comprehensive risk assessments to plan financial oversight activities, and it provided little related procedural guidance to partners and its own staff. For projects in four case study countries reviewed in its March 2015 report, GAO found that neither USAID nor its implementing partners conducted comprehensive risk assessments to identify and mitigate financial vulnerabilities. Additionally, although USAID's partners had generally implemented financial controls over cash and voucher distributions that GAO reviewed, some partners' guidance for financial oversight had weaknesses, such as a lack of information on how to estimate and report losses. In addition, GAO found that USAID had limited guidance on financial control activities and provided no information to aid partners in estimating and reporting losses. As a result, partners may neglect to implement appropriate financial controls in areas that are most vulnerable to fraud, diversion, and misuse of EFSP funding.
GAO's March 2015 report included recommendations to strengthen USAID's guidance for staff on approving award modifications and guidance for partners on responding to changing market conditions. GAO also made recommendations to strengthen financial oversight of cash-based food assistance projects by addressing gaps in USAID's guidance on risk assessments and mitigation plans and on financial control activities. USAID concurred with the recommendations.
unable to purchase services. However, over 30 other programs exist. (See appendix for an overview of some of these programs.) These other programs, which collectively spent more than $1 billion a year as of 1996, use one of three strategies aimed at ensuring that all populations have access to care.

Providing incentives to health professionals practicing in underserved areas. Under the Rural Health Clinic and Medicare Incentive Payment programs, providers are given additional Medicare and/or Medicaid reimbursement to practice in underserved areas. In 1996, these reimbursements amounted to over $400 million. In addition, over $112 million was spent on the National Health Service Corps program, which supports scholarships and repays education loans for health care professionals who agree to practice in designated shortage areas. Under another program, called the J-1 Visa Waiver, U.S.-trained foreign physicians are allowed to remain in the United States if they agree to practice in underserved areas.

Paying clinics and other providers caring for people who cannot afford to pay. More than $758 million funded programs that provide grants to help underwrite the cost of medical care at community health centers and other federally qualified health centers. These centers also receive higher Medicare and Medicaid payments. Similar providers also receive higher Medicare and Medicaid payments as "look-alikes" under the Federally Qualified Health Center program.

Paying institutions to support the education and training of health professionals. Medical schools and other teaching institutions received over $238 million in 1996 to help increase the national supply, distribution, and minority representation of health professionals through various education and training programs under Titles VII and VIII of the Public Health Service Act.

Some shortage areas received more providers than the number needed to remove federal designation as a shortage area, while 785 shortage areas requesting providers did not receive any providers at all. 
Of these latter locations, 143 had unsuccessfully requested a National Health Service Corps provider for 3 years or more. Taking other provider placement programs into account shows an even greater problem in effectively distributing scarce provider resources. For example, HHS identified a need for 54 physicians in West Virginia in 1994, but more than twice that number--116 physicians--were placed there using the National Health Service Corps and J-1 Visa Waiver programs. We identified eight states where this occurred in 1995. While almost $2 billion has been spent in the last decade on Title VII and VIII education and training programs, HHS has not gathered the information necessary to evaluate whether these programs had a significant effect on changes that occurred in the national supply, distribution, or minority representation of health professionals or their impact on access to care. Evaluations often did not address these issues, and those that did address them had difficulty establishing a cause-and-effect relationship between federal funding under the programs and any changes that occurred. Such a relationship is difficult to establish because the programs have other objectives besides improving supply, distribution, and minority representation and because no common goals or performance measures for improving access had been established. Despite 3 decades of federal efforts, the number of areas HHS has classified as underserved using its shortage designation systems has not decreased. HHS uses two systems to identify and measure underservice: the Health Professional Shortage Area (HPSA) system and the Medically Underserved Area (MUA) system. First used in 1978 to place National Health Service Corps providers, the HPSA system is based primarily on provider-to-population ratios. In general, HPSAs are self-defined locations with fewer than one primary care physician for every 3,500 persons. 
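The HPSA criterion just described reduces to a simple ratio test. The sketch below illustrates it; the 1:3,500 threshold comes from the testimony, while the function name and sample figures are illustrative only:

```python
def is_primary_care_hpsa(population: int, physicians: float) -> bool:
    """Flag an area as a primary care shortage area when it has fewer
    than one primary care physician per 3,500 residents (the general
    HPSA threshold cited in the testimony). This is a simplified
    illustration, not HHS's actual designation procedure."""
    if physicians == 0:
        return population > 0
    return population / physicians > 3500

# A community of 10,000 residents with 2 physicians has a ratio of
# 5,000:1, which exceeds the 3,500:1 threshold.
print(is_primary_care_hpsa(10_000, 2))
```

As the testimony notes, ratio tests like this overstate need when they omit nurse practitioners, physician assistants, and other providers already serving the area.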
Developed at about the same time, the MUA system more broadly identifies areas and populations considered to have inadequate health services, using the additional factors of poverty and infant mortality rates and percentage of population aged 65 or over. We previously reported on the long-standing weaknesses in the HPSA and MUA systems in identifying the types of access problems in communities and in measuring how well programs focus services on the people who need them, including the following: The systems have relied on data that are old and inaccurate. About half of the U.S. counties designated as medically underserved areas since the 1970s would no longer qualify as such if updated using 1990 data. Formulas used by the systems, such as physician-to-population ratios, do not count all primary care providers available in communities, overstating the need for additional physicians in shortage areas by 50 percent or more. The systems fail to count the availability of those providers historically used by the nation to improve access to care, such as National Health Service Corps physicians and U.S.-trained foreign physicians, as well as nurse practitioners, physician assistants, and nurse midwives. The systems also do not measure actual use of, or demand for, services. As a result, the systems do not accurately identify whether access problems are common for everyone living in the area, or whether only specific subpopulations, such as the uninsured poor, have difficulty accessing primary care resources that are already there but underutilized. Without additional criteria to identify the type of access barriers existing in a community, programs may not benefit the specific subpopulation with insufficient access to care. The Rural Health Clinic program, established to improve access in remote rural areas, illustrates this problem. 
Under the program, all providers located in rural HPSAs, MUAs, and HHS-approved state-designated shortage areas can request rural health clinic certification to receive greater Medicare and Medicaid reimbursement. However, if the underserved group is the uninsured poor, such reimbursement does little or nothing to address the access problem. Most of the 76 clinics we surveyed said the uninsured poor made up the majority of underserved people in their community, yet only 16 said they offered health services on a sliding-fee scale based on the individual's ability to pay for care. Even if rural health clinics do not treat the group that is actually underserved, they receive the higher Medicare and Medicaid reimbursement, without maximum payment limits if operated by a hospital or other qualifying facility. These payment benefits continue indefinitely, even if the clinic is no longer in an area that is rural and underserved. Last February, we testified before this Subcommittee that improved cost controls and additional program criteria were needed for the Rural Health Clinic program. In August of this year, the Balanced Budget Act of 1997 made changes to the program that were consistent with our recommendations. Specifically, the act placed limits, beginning next January, on the amount of Medicare and Medicaid payments made to clinics owned by hospitals with more than 50 beds. The act also made changes to the program's eligibility criteria in the following three key areas: In addition to being located in a rural HPSA, MUA, or HHS-approved state-designated shortage area, the clinic must also be in an area in which the HHS Secretary determines there is an insufficient number of health care practitioners. Clinics are allowed only in shortage areas designated within the past 3 years. 
Existing clinics that are no longer located in rural shortage areas can remain in the program only if they are essential for the delivery of primary care that would otherwise be unavailable in the area, according to criteria that the HHS Secretary must establish in regulations by 1999. Limiting payments will help control program costs. But until, and depending on how, the Secretary defines the types of areas needing rural health clinics, HHS will continue to rely on flawed HPSA and MUA systems that assume providing services to anyone living in a designated shortage area will improve access to care. HHS has been studying changes needed to improve the HPSA and MUA systems for most of this decade, but no formal proposals have been published. In the meantime, new legislation continues to require the use of these systems, thereby compounding the problem. For example, the newly enacted Balanced Budget Act authorizes Medicare to pay for telehealth services--consultative health services through telecommunications with a physician or qualifying provider--for beneficiaries living in rural HPSAs. However, since HPSA qualification standards do not distinguish rural communities that are located near a wide range of specialty providers and facilities from truly remote frontier areas, there is little assurance that the provision will benefit those rural residents most in need of telehealth services. To make the Rural Health Clinic program and other federal programs more accountable for improving access to primary care, HHS will have to devise a better management approach to measure need and evaluate individual program success in meeting this need. If effectively implemented, the management approach called for under the Results Act offers such an opportunity. Under the Results Act, HHS would ask some basic questions about its access programs: What are our goals and how can we achieve them? How can we measure our performance? 
How will we use that information to improve program management and accountability? These questions would be addressed in annual performance plans that define each year's goals, link these goals to agency programs, and contain indicators for measuring progress in achieving these goals. Using information on how well programs are working to improve access in communities, program managers can decide whether federal intervention has been successful and can be discontinued, or if other strategies for addressing access barriers that still exist in communities would provide a more effective solution. The Results Act provides an opportunity for HHS to make sure its access programs are on track and to identify how efforts under each program will fit within the broader access goals. The Results Act requires that agencies complete multi-year strategic plans by September 30, 1997, that describe the agency's overall mission, long-term goals, and strategies for achieving these goals. Once these strategic plans are in place, the Results Act requires that for each fiscal year, beginning fiscal year 1999, agencies prepare annual performance plans that expand on the strategic plans by establishing specific performance goals and measures for program activities set forth in the agencies' budgets. These goals are to be stated in a way that identifies the results--or outcomes--that are expected, and agencies are to measure these outcomes in evaluating program success. Establishing performance goals and measures such as the following could go far to improve accountability in HHS' primary access programs. The Rural Health Clinic program currently tracks the number of clinics established, while the Medicare Incentive Payment program tracks the number of physicians receiving bonuses and dollars spent. To focus on access outcomes, HHS will need to track how these programs have improved access to care for Medicare and Medicaid populations or other underserved populations. 
Success of the National Health Service Corps and health center programs has been based on the number of providers placed or how many people they served. To focus on access outcomes, HHS will need to gather the information necessary to report the number of people who received care from National Health Service Corps providers or at the health centers who were otherwise unable to access primary care services available in the community. HHS can also use a national survey to measure progress toward its national access goals by counting the number of people across the nation who do and do not have a usual source of primary care. For those people without a usual source of primary care, the survey categorizes the reasons for this problem that individual programs may need to address, such as people's inability to pay for services, their perception that they do not need a physician, or the lack of provider availability. Although HHS officials have started to look at how individual programs fit under these national goals, they have not yet established links between the programs and national goals and measures. Such links are important so resources can be clearly focused and directed to achieve the national goals. For example, HHS' program description, as published in the Federal Register, states that the health center programs directly address the Healthy People 2000 objectives by improving access to preventive and primary care services for underserved populations. While HHS' fiscal year 1998 budget documents contain some access-related goals for health center programs, they also contain other goals, such as creating 3,500 jobs in medically underserved communities. Although creating jobs may be a desirable by-product of supporting health center operations, it is unclear how this employment goal ties to national objectives to ensure access to care. 
Under the Results Act, HHS has an opportunity to clarify the relationships between its various program goals and define their relative importance at the program and national levels. Viewing program performance in light of program costs--such as establishing a unit cost per output or outcome achieved--can help HHS and the Congress make informed decisions on the comparative advantage of continuing current programs. For example, HHS and the Congress could better determine whether the effects gained through the program were worth their costs--financial and otherwise--and whether the current program was superior to alternative strategies for achieving the same goals. Unfortunately, in the past, information needed to answer these questions has been lacking or incomplete, making it difficult to determine how to get the "biggest bang for the buck." For example, HHS has not used cost information to allocate resources between its scholarship and loan repayment programs. While both of these programs pay education expenses for health professionals who agree to work in underserved areas, by law, at least 40 percent of amounts appropriated each year must fund the scholarship program and the rest may be allocated at the HHS Secretary's discretion. However, our analysis found that the loan repayment program costs the federal government at least one-fourth less than the scholarship program for a year of promised service and was more successful in retaining providers in these communities. Changing the law to allow greater use of the loan repayment program would provide greater opportunity to stretch program dollars and improve provider retention. Comparisons between different types of programs may also indicate areas of greater opportunity to improve access to care. However, the per-person cost of improving access to care under each program is unknown. 
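The scholarship-versus-loan-repayment comparison above is a unit-cost calculation: dollars spent per year of promised service. A minimal sketch follows; the dollar amounts and service terms are placeholders, since the testimony reports only the relative finding that loan repayment costs at least one-fourth less per service year:

```python
def cost_per_service_year(total_program_cost: float, service_years: float) -> float:
    """Unit cost of a placement program: federal dollars spent per
    year of promised service in an underserved area."""
    return total_program_cost / service_years

# Placeholder figures for illustration, not actual program costs.
scholarship = cost_per_service_year(160_000, 4)    # $40,000 per service year
loan_repayment = cost_per_service_year(60_000, 2)  # $30,000 per service year
savings = 1 - loan_repayment / scholarship
print(f"loan repayment is {savings:.0%} cheaper per service year")
```

A unit cost like this, combined with retention data, is the kind of comparative measure the Results Act framework would let HHS and the Congress use to weigh one strategy against another.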
Collecting and reporting reliable information on the cost-effectiveness of HHS programs is critical for HHS and the Congress to decide how to best spend scarce federal resources. Although the Rural Health Clinic program and other federal programs help to provide health care services to many people, the magnitude of federal investment creates a need to hold these programs accountable for improving access to primary care. The current HPSA and MUA systems are not a valid substitute for developing the program criteria necessary to manage program performance along these lines. The management discipline provided under the Results Act offers direction in improving individual program accountability. Once it finalizes its strategic plan, HHS can develop in its annual performance plans individual program goals for the Rural Health Clinic program and other programs that are consistent with the agency's overall access goals, as well as outcome measures that can be used to track each program's progress in addressing access barriers. HHS could then assess whether alternative strategies, such as targeting services to those unable to pay for them, would have greater effect in achieving HHS' national primary care access goals. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or members of the Subcommittee may have.

Appendix: Program (federal funding, dollars in millions)
Rural Health Clinic ($295)
Medicare Incentive Payment ($107)
National Health Service Corps ($112)
J-1 Visa Waiver ($0)
Health Centers Grants ($758)
Title VII/VIII Health Education and Training Programs ($238)
GAO discussed the Rural Health Clinic Program in the broader context of GAO's past reviews of federal efforts to improve access to primary health care, focusing on: (1) the common problems GAO found and some recent initiatives to address them; and (2) how the type of management changes called for under the Government Performance and Results Act of 1993 can help the Rural Health Clinic and related programs improve accountability. GAO noted that: (1) GAO's work has identified many instances in which the Rural Health Clinic program and other federal programs have provided aid to communities without ensuring that this aid has been used to improve access to primary care; (2) in some cases, programs have provided more than enough assistance to eliminate the defined shortage, while needs in other communities remain unaddressed; (3) GAO's work has identified a pervasive cause for this problem: a reliance on flawed systems for measuring health care shortages; (4) these systems often do not work effectively to identify which programs would work best in a given setting or how well a program is working to meet the needs of the underserved once it is in place; (5) for several years, the Department of Health and Human Services has tried unsuccessfully to revise these systems to address these problems; and (6) the goal-setting and performance measurement discipline available under the Results Act, however, appears to offer a suitable framework for ensuring that programs are held accountable for improving access to primary care.
PPACA directed each state to establish a state-based health insurance marketplace for individuals to enroll in private health insurance plans, apply for income-based financial assistance, and, as applicable, obtain a determination of their eligibility for other health coverage programs, such as Medicaid or the State Children's Health Insurance Program (CHIP). For states that did not establish a marketplace, PPACA required the federal government to establish and operate a marketplace for that state, referred to as the federally facilitated marketplace. For plan year 2014, 17 states elected to establish their own marketplace, and CMS operated a federally facilitated marketplace or partnership marketplace for 34 states. The act required the marketplaces to be operational on or before January 1, 2014, and Healthcare.gov began facilitating enrollments on October 1, 2013, at the beginning of the first annual open enrollment period established by CMS. The initial open enrollment period ended on April 15, 2014. Requirements for ensuring the security and privacy of individuals' personally identifiable information (PII), such as that collected and processed by Healthcare.gov and related systems, have been established by a number of federal laws and guidance. These include the following: The Federal Information Security Management Act of 2002 (FISMA), which requires each federal agency to develop, document, and implement an agency-wide information security program. National Institute of Standards and Technology (NIST) guidance and standards, which are to be used by agencies to, among other things, categorize their information systems and establish minimum security requirements. The Privacy Act of 1974, which places limitations on agencies' collection, access, use, and disclosure of personal information maintained in systems of records. 
The Computer Matching Act, which is a set of amendments to the Privacy Act requiring agencies to follow specific procedures before engaging in computerized comparisons of records for establishing or verifying eligibility or recouping payments for federal benefit programs. The E-Government Act of 2002, which requires agencies to analyze how personal information is collected, stored, shared, and managed before developing or procuring information technology that collects, maintains, or disseminates information in an identifiable form. The Health Insurance Portability and Accountability Act of 1996, which requires the adoption of standards for the electronic exchange, privacy, and security of health information. The Internal Revenue Code, which provides for the confidentiality of tax returns and return information. IRS Publication 1075, which establishes security guidelines for safeguarding federal tax return information used by federal, state, and local agencies. Under FISMA, the Secretary of HHS has overall responsibility for the department's agency-wide information security program; this responsibility has been delegated to the department's Chief Information Officer (CIO). The HHS CIO is also responsible for the department's response to information security incidents and the development of privacy impact assessments for the department's systems. The CMS Center for Consumer Information and Insurance Oversight has overall responsibilities for federal systems supporting the federally facilitated marketplace and for overseeing state marketplaces. Further, security and privacy responsibilities for Healthcare.gov and supporting systems are shared among several offices and individuals within CMS, including the CIO, the Chief Information Security Officer, component-level information systems security officers, the CMS Senior Official for Privacy, and the CMS Office of e-Health Standards Privacy Policy and Compliance. 
In particular, the CMS CIO is responsible for implementing and administering the CMS information security program, which covers the systems developed by CMS to satisfy PPACA requirements. The Chief Information Security Officer is responsible for, among other things, ensuring the assessment and authorization of all systems and the completion of periodic risk assessments, including annual security testing and security self-assessments. The process of enrolling for insurance through Healthcare.gov is facilitated by a number of major systems managed by CMS. Figure 1 shows the major entities that exchange data in support of marketplace enrollment in qualified health plans and how they are connected. The major systems that facilitate enrollment include the following: The Healthcare.gov website: This serves as the user interface for individuals to obtain coverage through a federally facilitated marketplace. It has two major functions: (1) providing information about PPACA health insurance reforms and health insurance options and (2) facilitating enrollment in coverage. Enterprise Identity Management System: This system allows CMS to verify the identity of an individual applying for coverage and establish a login account for that user. Once an account is created using a name and e-mail address, the person's identity is confirmed using additional information, which can include a Social Security number, address, phone number, and date of birth. Federally Facilitated Marketplace System (FFM): This system consists of three major modules to facilitate (1) eligibility and enrollment, (2) plan management, and (3) financial management. For eligibility, an applicant's information is collected to determine whether they are eligible for insurance coverage and financial assistance. Once eligibility is determined, the system allows the applicant to view, compare, select, and enroll in a qualified health plan. 
The plan management module is to provide state agencies and issuers of qualified health plans with the ability to submit, certify, monitor, and renew qualifying health plans. The financial management module is to facilitate payments to health insurers, among other things. From a technical perspective, the FFM system relies on "cloud-based" data processing and storage services from private- sector vendors. Federal Data Services Hub: This system acts as a single portal for exchanging information between the FFM system and other systems or external partners, which include other federal agencies, state-based marketplaces, other state agencies, other CMS systems, and issuers of qualified health plans. The data hub supports, among other things, real- time eligibility queries, transfer of applicant and taxpayer information, exchange of enrollment information with plan issuers, monitoring of enrollment information, and submission of health plan applications. Healthcare.gov-related activities are also supported by other CMS systems, including a data warehouse system to provide reporting and performance metrics; the Health Insurance Oversight System, which provides an interface for issuers of qualified health plans to submit information about qualifying health plans; and a general accounting system that handles payments associated with advance premium tax credits and cost-sharing reductions. In addition, CMS relies on a variety of federal, state, and private-sector entities to support Healthcare.gov-related activities, and these entities exchange information with CMS's systems: Federal agencies such as the Social Security Administration (SSA), Department of Homeland Security (DHS), and Internal Revenue Service (IRS), along with Equifax, Inc. (a private-sector credit agency under contract with CMS) provide or verify information used in making determinations of a person's eligibility for coverage and financial assistance. 
The Department of Defense (DOD), Office of Personnel Management (OPM), Peace Corps, and Department of Veterans Affairs (VA) assist in determining whether a potential applicant has alternate means for obtaining minimum essential coverage. State-based marketplaces may rely on the FFM system for certain functions, and state Medicaid and CHIP agencies may connect to the FFM to exchange enrollment data, which are typically routed through CMS's data hub. In addition to accessing the plan management and financial management modules of the FFM, issuers of qualified health plans receive information from the system when an individual completes the application process. Agents and brokers may access the Healthcare.gov website on behalf of applicants. To facilitate offline, paper-based applications, CMS contracted with a private-sector company for intake, routing, review, and troubleshooting of paper applications for enrollment into health plans and insurance affordability programs. While CMS has security and privacy-related protections in place for Healthcare.gov and related systems, weaknesses exist that put the personal information these systems collect, process, and maintain at risk of inappropriate modification, loss, or disclosure. The agency needs to take a number of actions to address these deficiencies in order to better protect individuals' personally identifiable information. CMS established security-related policies and procedures for Healthcare.gov. 
Specifically, it assigned overall responsibility for securing the agency's information and systems to appropriate officials, including the agency CIO and Chief Information Security Officer, and designated information system security officers to assist in certifying particular CMS systems; documented information security policies and procedures to safeguard the agency's information and systems; developed a process for planning, implementing, evaluating, and documenting remedial actions to address identified information security deficiencies; and established interconnection security agreements with the federal agencies with which it exchanges information, including DOD, DHS, IRS, SSA, and VA; these agreements identify the requirements for the connection, the roles and responsibilities of each party, the security controls protecting the connection, the sensitivity of the data to be exchanged, and the required training and background checks for personnel with access to the connection. In addition, CMS took steps to protect the privacy of applicants' information. For example, it published and updated a system-of-records notice for Healthcare.gov that addressed required information such as the types of information that will be maintained in the system and the external entities that may receive such information without affected individuals' explicit consent; developed basic privacy training for all staff and role-based training for staff who have access to PII while executing their routine duties; and established an incident-handling and breach response plan and an incident response team to manage responses to privacy incidents, identify trends, and make recommendations to HHS to reduce risks to PII. However, when Healthcare.gov was deployed in October 2013, CMS accepted increased security risks because of the following: CMS allowed four states to connect to the data hub even though they had not completed all CMS security requirements. 
These states were given a 60-day interim authorization to connect, because CMS officials regarded this as a mission-critical need. Subsequently, all four states addressed the weaknesses in their security assessments and were granted 3-year authorizations. CMS authorized the FFM system to operate even though all the security controls had not been tested for a fully integrated version of the system. This authority to operate was granted for 6 months, on the condition that a full security assessment was conducted within 60 to 90 days of October 1, 2013. In December 2013, an assessment of the eligibility and enrollment module was conducted. However, the plan management and financial management modules, which had not yet been fully developed, were not tested. Although CMS developed and documented security policies and procedures, it did not fully implement required actions before Healthcare.gov began collecting and maintaining PII from individual applicants: System security plans were not complete. While system security plans for the FFM and data hub incorporated most of the elements specified by NIST, each was missing or had not completed one or more relevant elements. For example, the FFM security plan did not define the system's accreditation boundary, or explain why five of the security controls called for by NIST guidance were determined not to be applicable. Without complete system security plans, agency officials will be hindered in making fully informed judgments about the risks involved in operating those systems. Interconnection agreements were not all complete. CMS had not completed security documentation governing its interconnection with Equifax, Inc., but instead was relying on a draft data use agreement that had not been fully approved within CMS. This makes it more difficult for agency officials to ensure that adequate security controls are in place to protect the connection. Privacy risks were not assessed. 
In completing privacy impact assessments for the FFM and data hub, CMS did not assess risks associated with the handling of PII or identify mitigating controls to address such risks. Without such an analysis, CMS cannot demonstrate that it thoroughly considered and addressed options for mitigating privacy risks associated with these systems.

Interagency agreements governing data exchanges were not complete. CMS established computer matching agreements with DHS, DOD, IRS, SSA, and VA for its data exchanges to verify eligibility for healthcare coverage and premium tax credits; however, it had not established such agreements with OPM or the Peace Corps. This increases the risk that appropriate protections will not be applied to the PII being exchanged with these agencies.

Security testing was not complete. While CMS has undertaken, through its contractors and at the agency and state levels, a series of security-related testing activities for various Healthcare.gov-related systems, these assessments did not effectively identify and test all relevant security controls prior to deploying the systems. For example, the assessments of the FFM did not include all the security controls specified by NIST and CMS, such as incident response controls and controls specified for physical and environmental protection. In addition, CMS could not demonstrate that it had tested all the security controls specified in the FFM's October 2013 security plan, and it did not test all the system's components before deployment or test them on the integrated system. Testing of all deployed eligibility and enrollment modules and plan management modules did not occur until March 2014, and as of June 2014 FFM testing remained incomplete. Without comprehensive testing, CMS lacks assurance that security controls for the FFM system are working as intended.

Alternate processing site was not fully established.
CMS developed and documented contingency plans for the FFM and data hub that identified activities, resources, responsibilities, and procedures needed to carry out operations during prolonged disruptions of the systems. It also established system recovery priorities, a line of succession based on the type of disaster, and specific procedures on how to restore both systems and their associated applications in the event of a disaster. However, although the contingency plans designated a site at which to recover the systems, this site had not been established. Specifically, according to CMS, data supporting the FFM were being backed up at the recovery site, but backup systems are not otherwise supported there, limiting the facility's ability to support disaster recovery efforts.

CMS did not effectively implement or securely configure key security controls on the systems supporting Healthcare.gov. For example:

* Strong passwords (i.e., passwords of sufficient length or complexity) were not always required or enforced on systems supporting the FFM. This increases the likelihood that an attacker could gain access to the system.
* Certain systems supporting the FFM were not restricted from accessing the Internet, increasing the risk that unauthorized users could access data from the FFM network.
* CMS did not consistently apply security patches to FFM systems in a timely manner, and several critical systems had not been patched or were no longer supported by their vendors. This increased the risk that servers supporting the FFM could be compromised through exploitation of known vulnerabilities.
* One of CMS's contractors had not properly secured its administrative network, which could allow for unauthorized access to the FFM network.

In addition to these weaknesses, we also identified weaknesses in security controls related to boundary protection, identification and authentication, authorization, and configuration management.
Collectively, these weaknesses put Healthcare.gov systems and the information they contain at increased and unnecessary risk of unauthorized access, use, disclosure, modification, and loss.

The security weaknesses we identified occurred in part because CMS did not ensure that the multiple parties contributing to the development of the FFM system had a shared understanding of how security controls were to be implemented. Specifically, CMS and contractor staff did not always agree on how security controls for the FFM were to be implemented or who was responsible for ensuring they were functioning properly. For example, although CMS identified one subcontractor as responsible for managing firewall rules, this responsibility was not included in the subcontractor's statement of work, and staff for the subcontractor said that this was the responsibility of a different contractor. Without ensuring agreement on security roles and responsibilities, CMS has less assurance that controls will function as intended, increasing the risk that attackers could compromise the system and the data it contains.

In our September 2014 report, we made the following six recommendations aimed at improving the management of the security of Healthcare.gov:

1. Ensure that system security plans for the FFM and data hub contain all information recommended by NIST.
2. Ensure that all privacy risks associated with Healthcare.gov are analyzed and documented in privacy impact assessments.
3. Develop computer matching agreements with OPM and the Peace Corps to govern data that are being compared with CMS data to verify eligibility for advance premium tax credits and cost-sharing reductions.
4. Perform a comprehensive security assessment of the FFM, including the infrastructure, platform, and all deployed software elements.
5. Ensure that the planned alternate processing site for the systems supporting Healthcare.gov is established and made operational in a timely fashion.
6.
Establish detailed security roles and responsibilities for contractors, including participation in security control reviews, to better ensure effective communication among individuals and entities with responsibility for the security of the FFM and its supporting infrastructure. In an associated report with limited distribution, we also made 22 recommendations to resolve technical security weaknesses related to access controls, configuration management, and contingency planning. Implementing these recommendations will enable HHS and CMS to better ensure that Healthcare.gov systems and the information they collect and process are effectively protected from threats to their confidentiality, integrity, and availability. In its comments on our draft reports, HHS concurred with 3 of the 6 recommendations to fully implement its information security program, partially concurred with the remaining 3 recommendations, and concurred with all 22 of the recommendations to resolve technical weaknesses in security controls, describing actions it had under way or planned related to each of them. In conclusion, Healthcare.gov and its related systems represent a complex system of systems that interconnects a broad range of federal agency systems, state agencies and systems, and other entities, such as contractors and issuers of health plans. Ensuring the security of such a system poses a significant challenge. While CMS has taken important steps to apply security and privacy safeguards to Healthcare.gov and its supporting systems, significant weaknesses remain that put these systems and the sensitive, personal information they contain at risk of compromise. 
Given the complexity of the systems and the many interconnections among external partners, it is particularly important to analyze privacy risks, effectively implement technical security controls, comprehensively test the security controls over the system, and ensure that an alternate processing site for the systems is fully established. Chairman Issa, Ranking Member Cummings, and Members of the Committee, this concludes my statement. I would be pleased to answer any questions you have.

If you have any questions about this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or Dr. Nabajyoti Barkakati at (202) 512-4499. We can also be reached by e-mail at [email protected] and [email protected]. Other key contributors to this testimony include John de Ferrari, Lon Chin, West Coile, and Duc Ngo (assistant directors); Mark Canter; Marisol Cruz; Sandra George; Nancy Glover; Torrey Hardee; Tammi Kalugdan; Lee McCracken; Monica Perez-Nelson; Justin Palk; and Michael Stevens.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
PPACA requires the establishment of health insurance marketplaces in each state to assist individuals in comparing, selecting, and enrolling in health plans offered by participating issuers. CMS is responsible for overseeing these marketplaces, including establishing a federally facilitated marketplace in states that do not establish their own. These marketplaces are supported by an array of IT systems, including Healthcare.gov, the website that serves as the consumer portal to the marketplace. This statement is based on two September 2014 reports examining the security and privacy of the Healthcare.gov website and related systems. The specific objectives of this work were to (1) describe the planned exchanges of information between the Healthcare.gov website and other organizations and (2) assess the effectiveness of programs and controls implemented by CMS to protect the security and privacy of the information and IT systems supporting Healthcare.gov. Enrollment through Healthcare.gov is supported by the exchange of information among many systems and entities. The Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS) has overall responsibility for key information technology (IT) systems supporting Healthcare.gov. These include, among others, the Federally Facilitated Marketplace (FFM) system, which facilitates eligibility and enrollment, plan management, and financial management, and the Federal Data Services Hub, which acts as the single portal for exchanging information between the FFM and other systems or external partners. CMS relies on a variety of federal, state, and private-sector entities to support Healthcare.gov activities. 
For example, it exchanges information with the Department of Defense, Department of Homeland Security, Department of Veterans Affairs, Internal Revenue Service, Office of Personnel Management, Peace Corps, and the Social Security Administration to help determine applicants' eligibility for healthcare coverage and/or financial assistance. Healthcare.gov-related systems are also accessed and used by CMS contractors, issuers of qualified health plans, state agencies, and others.

While CMS has security and privacy-related protections in place for Healthcare.gov and related systems, weaknesses exist that put these systems and the sensitive personal information they contain at risk. Specifically, CMS established security-related policies and procedures for Healthcare.gov, including interconnection security agreements with the federal agencies with which it exchanges information. It also instituted certain required privacy protections, such as notifying the public of the types of information that will be maintained in the system. However, weaknesses remained in the security and privacy protections applied to Healthcare.gov and its supporting systems. For example, CMS did not

* ensure system security plans contained all required information, which makes it harder for officials to assess the risks involved in operating those systems;
* analyze privacy risks associated with Healthcare.gov systems or identify mitigating controls; or
* fully establish an alternate processing site for Healthcare.gov systems to ensure that they could be recovered in the event of a disruption or disaster.

In addition, a number of weaknesses in specific technical security controls jeopardized Healthcare.gov-related systems. These included certain systems supporting the FFM not being restricted from accessing the Internet and inconsistent implementation of security patches, among others.
An underlying reason for many of these weaknesses is that CMS did not establish a shared understanding of security roles and responsibilities with all parties involved in securing Healthcare.gov systems. Until these weaknesses are addressed, the systems and the information they contain remain at increased risk of unauthorized use, disclosure, modification, or loss. In its September 2014 reports GAO made 6 recommendations to HHS to implement security and privacy controls to enhance the protection of systems and information related to Healthcare.gov. In addition, GAO made 22 recommendations to resolve technical weaknesses in security controls. HHS agreed with 3 of the 6 recommendations, partially agreed with 3, agreed with all 22 technical recommendations, and described plans to implement them.
Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT investments and how these funds are to be allocated. In fiscal year 2011, government IT spending reported to OMB totaled approximately $79 billion. OMB plays a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. To assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments. Further, the act places responsibility for managing investments with the heads of agencies and establishes chief information officers (CIO) to advise and assist agency heads in carrying out this responsibility. The Clinger-Cohen Act strengthened the requirements of the Paperwork Reduction Act of 1995, which established agency responsibility for maximizing value and assessing and managing the risks of major information systems initiatives. The Paperwork Reduction Act also requires that OMB develop and oversee policies, principles, standards, and guidelines for federal agency IT functions, including periodic evaluations of major information systems. Another key law is the E-Government Act of 2002, which requires OMB to report annually to Congress on the status of e-government. In these reports, referred to as Implementation of the E-Government Act reports, OMB is to describe the administration's use of e-government principles to improve government performance and the delivery of information and services to the public. 
To help carry out its oversight role, in 2003, OMB established the Management Watch List, which included mission-critical projects that needed to improve performance measures, project management, IT security, or overall justification for inclusion in the federal budget. Further, in August 2005, OMB established a High-Risk List, which consisted of projects identified by federal agencies, with the assistance of OMB, as requiring special attention from oversight authorities and the highest levels of agency management.

Over the past several years, we have reported and testified on OMB's initiatives to highlight troubled IT projects, justify investments, and use project management tools. We have made multiple recommendations to OMB and federal agencies to improve these initiatives to further enhance the oversight and transparency of federal projects. Among other things, we recommended that OMB develop a central list of projects and their deficiencies and analyze that list to develop governmentwide and agency assessments of the progress and risks of the investments, identifying opportunities for continued improvement. In addition, in 2006 we also recommended that OMB develop a single aggregate list of high-risk projects and their deficiencies and use that list to report to Congress on progress made in correcting high-risk problems. As a result, OMB started publicly releasing aggregate data on its Management Watch List and disclosing the projects' deficiencies. Furthermore, OMB issued governmentwide and agency assessments of the projects on the Management Watch List and identified risks and opportunities for improvement, including in the areas of risk management and security.

More recently, to further improve the transparency and oversight of agencies' IT investments, in June 2009, OMB publicly deployed a website, known as the IT Dashboard, which replaced the Management Watch List and High-Risk List.
It displays federal agencies' cost, schedule, and performance data for the approximately 800 major federal IT investments at 27 federal agencies. According to OMB, these data are intended to provide a near-real-time perspective on the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB; other oversight bodies, including Congress; and the general public to hold the government agencies accountable for results and progress. The Dashboard was initially deployed in June 2009 based on each agency's exhibit 53 and exhibit 300 submissions. After the initial population of data, agency CIOs have been responsible for updating cost, schedule, and performance fields on a monthly basis, which is a major improvement from the quarterly reporting cycle OMB previously used for the Management Watch List and High-Risk List.

For each major investment, the Dashboard provides performance ratings on cost and schedule, a CIO evaluation, and an overall rating, which is based on the cost, schedule, and CIO ratings. As of July 2010, the cost rating is determined by a formula that calculates the amount by which an investment's total actual costs deviate from the total planned costs. Similarly, the schedule rating is the variance between the investment's planned and actual progress to date. Figure 1 displays the rating scale and associated categories for cost and schedule variations.

Each major investment on the Dashboard also includes a rating determined by the agency CIO, which is based on his or her evaluation of the performance of each investment. The rating is expected to take into consideration the following criteria: risk management, requirements management, contractor oversight, historical performance, and human capital. This rating is to be updated when new information becomes available that would affect the assessment of a given investment.
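As an illustration, the variance-based cost and schedule ratings described above can be sketched in a few lines of Python. The threshold percentages below are assumptions for illustration only; the actual category boundaries are those shown in figure 1.

```python
# Illustrative sketch of the Dashboard's variance-based ratings.
# The 10%/30% thresholds are assumed for illustration; the actual
# category boundaries appear in figure 1 of the report.

def variance_pct(planned, actual):
    """Percentage by which actual cost (or progress) deviates from planned."""
    return abs(actual - planned) / planned * 100

def rate(variance):
    """Map a variance percentage to a color rating (illustrative thresholds)."""
    if variance < 10:
        return "green"   # normal
    if variance <= 30:
        return "yellow"  # needs attention
    return "red"         # significant concerns

# Hypothetical investment that planned $50M to date but has spent $56M:
cost_rating = rate(variance_pct(planned=50.0, actual=56.0))  # 12% over -> "yellow"
```

The same comparison of planned versus actual values, applied to work progress instead of dollars, yields the schedule rating.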
Last, the Dashboard calculates an overall rating for each major investment. This overall rating is an average of the cost, schedule, and CIO ratings, with each representing one-third of the overall rating. However, when the CIO's rating is lower than both the cost and schedule ratings, the CIO's rating will be the overall rating. Figure 2 shows the overall performance ratings of the 797 major investments on the Dashboard as of August 2011.

We have previously reported that the cost and schedule ratings on OMB's Dashboard were not always accurate for selected agencies. In July 2010, we reviewed investments at the Departments of Agriculture, Defense, Energy, Health and Human Services, and Justice, and found that the cost and schedule ratings on the Dashboard were not accurate for 4 of 8 selected investments and that the ratings did not take into consideration current performance; specifically, the ratings calculations factored in only completed activities. We also found that there were large inconsistencies in the number of investment activities that agencies report on the Dashboard. In the report, we recommended that OMB report on the effect of planned changes to the Dashboard and provide guidance to agencies to standardize activity reporting. We further recommended that the selected agencies comply with OMB's guidance to standardize activity reporting. OMB and the Department of Energy concurred with our recommendations, while the other selected agencies provided no comments. In July 2010, OMB updated the Dashboard's cost and schedule calculations to include both ongoing and completed activities.

In March 2011, we reported that agencies and OMB need to do more to ensure the Dashboard's data accuracy. Specifically, we reviewed investments at the Departments of Homeland Security, Transportation, the Treasury, and Veterans Affairs, and the Social Security Administration.
We found that cost ratings were inaccurate for 6 of 10 selected investments and schedule ratings were inaccurate for 9 of 10. We also found that weaknesses in agency and OMB practices contributed to the inaccuracies on the Dashboard; for example, agencies had uploaded erroneous data, and OMB's ratings did not emphasize current performance. We therefore recommended that the selected agencies provide complete and accurate data to the Dashboard on a monthly basis and ensure that the CIOs' ratings of investments disclose issues that could undermine the accuracy of investment data. Further, we recommended that OMB improve how it rates investments related to current performance and schedule variance. The selected agencies generally concurred with our recommendation. OMB disagreed with the recommendation to change how it reflects current investment performance in its ratings because Dashboard data are updated on a monthly basis. However, we maintained that current investment performance may not always be as apparent as it should be; while data are updated monthly, the ratings include historical data, which can mask more recent performance.

Most of the cost and schedule ratings on the Dashboard were accurate, but did not provide sufficient emphasis on recent performance to inform oversight and decision making. Performance rating discrepancies were largely due to missing or incomplete data submissions from the agencies. However, we generally found fewer such discrepancies than in previous reviews, and in all cases the selected agencies found and corrected these inaccuracies in subsequent submissions. In the case of GSA, officials did not disclose that performance data on the Dashboard were unreliable for one investment because of an ongoing baseline change. Without proper disclosure of pending baseline changes, the Dashboard will not provide the appropriate insight into investment performance needed for near-term decision making.
Additionally, because of the Dashboard's ratings calculations, the current performance for certain investments was not as apparent as it should be for near-real-time reporting purposes. If fully implemented, OMB's recent and ongoing changes to the Dashboard, including new cost and schedule rating calculations and updated investment baseline reporting, should address this issue. These Dashboard changes could be important steps toward improving insight into current performance and the utility of the Dashboard for effective executive oversight.

In general, the number of discrepancies we found in our reviews of selected investments has decreased since July 2010. According to our assessment of the eight selected investments, half had accurate cost ratings and nearly all had accurate schedule ratings on the Dashboard. Table 1 shows our assessment of the selected investments during a 6-month period from October 2010 through March 2011.

As shown above, the Dashboard's cost ratings for four of the eight selected investments were accurate, and four did not match the results of our analyses during the period from October 2010 through March 2011. Specifically, State's Global Foreign Affairs Compensation System and Interior's Land Satellites Data System investments had inaccurate cost ratings for at least 5 months, GSA's System for Tracking and Administering Real Property/Realty Services was inaccurate for 3 months, and Interior's Financial and Business Management System was inaccurate for 2 months. In all of these cases, the Dashboard's cost ratings showed poorer performance than our assessments. For example, State's Global Foreign Affairs Compensation System investment's cost performance was rated "yellow" (i.e., needs attention) in October and November 2010, and "red" (i.e., significant concerns) from December 2010 through March 2011, whereas our analysis showed its cost performance was "green" (i.e., normal) during those months.
Additionally, GSA's System for Tracking and Administering Real Property/Realty Services investment's cost performance was rated "yellow" from October 2010 through December 2010, while our analysis showed its performance was "green" for those months. Regarding schedule, the Dashboard's ratings for seven of the eight selected investments matched the results of our analyses over this same 6-month period, while the ratings for one did not. Specifically, Interior's Land Satellites Data System investment's schedule ratings were inaccurate for 2 months; its schedule performance on the Dashboard was rated "yellow" in November and December 2010, whereas our analysis showed its performance was "green" for those months. As with cost, the Dashboard's schedule ratings for this investment for these 2 months showed poorer performance than our assessment.

There were three primary reasons for the inaccurate cost and schedule Dashboard ratings described above: agencies did not report data to the Dashboard or uploaded incomplete submissions, agencies reported erroneous data to the Dashboard, and the investment baseline on the Dashboard was not reflective of the investment's actual baseline (see table 2).

Missing or incomplete data submissions: Four selected investments did not upload complete and timely data submissions to the Dashboard. For example, State officials did not upload data for one of the Global Foreign Affairs Compensation System investment's activities from October 2010 through December 2010. According to a State official, the department's investment management system was not properly set to synchronize all activity data with the Dashboard. The official stated that this issue was corrected in December 2010.

Erroneous data submissions: One selected investment--Interior's Land Satellites Data System--reported erroneous data to the Dashboard.
Specifically, Interior officials mistakenly reported certain activities as fully complete rather than partially complete in data submissions from September 2010 through December 2010. Agency officials acknowledged the error and stated that they submitted correct data in January and February 2011 after they realized there was a problem.

Inconsistent investment baseline: One selected investment--GSA's System for Tracking and Administering Real Property/Realty Services--reported a baseline on the Dashboard that did not match the actual baseline tracked by the agency. In June 2010, OMB issued new guidance on rebaselining, which stated that agencies should update investment baselines on the Dashboard within 30 days of internal approval of a baseline change and that this update will be considered notification to OMB. The GSA investment was rebaselined internally in November 2010, but the baseline on the Dashboard was not updated until February 2011. GSA officials stated that they submitted the rebaseline information to the Dashboard in January 2011 and thought that it had been successfully uploaded; however, in February 2011, officials realized that the new baseline was not on the Dashboard. GSA officials successfully uploaded the rebaseline information in late February 2011.

Additionally, OMB's guidance states that agency CIOs should update the CIO evaluation on the Dashboard as soon as new information becomes available that affects the assessment of a given investment. During an agency's internal process to update an investment baseline, the baseline on the Dashboard will not be reflective of the current state of the investment; thus, investment CIO ratings should disclose such information. However, the CIO evaluation ratings for GSA's System for Tracking and Administering Real Property/Realty Services investment did not provide such a disclosure.
Without proper disclosure of pending baseline changes and resulting data reliability weaknesses, OMB and other external oversight groups will not have the appropriate information to make informed decisions about these investments. In all of the instances where we identified inaccurate cost or schedule ratings, agencies had independently recognized that there was a problem with their Dashboard reporting practices and taken steps to correct them. Such continued diligence by agencies to report accurate and timely data will help ensure that the Dashboard's performance ratings are accurate.

According to OMB, the Dashboard is intended to provide a near-real-time perspective on the performance of all major IT investments. Furthermore, our work has shown cost and schedule performance information from the most recent 6 months to be a reliable benchmark for providing this perspective on investment status. This benchmark for current performance provides information needed by OMB and agency executive management to inform near-term budgetary decisions, to obtain early warning signs of impending schedule delays and cost overruns, and to ensure that actions taken to reverse negative performance trends are timely and effective. The use of such a benchmark is also consistent with OMB's exhibit 300 guidelines, which specify that project activities should be broken into segments of 6 months or less.

In contrast, the Dashboard's cost and schedule ratings calculations reflect a more cumulative view of investment performance dating back to the inception of the investment. Thus, a rating for a given month is based on information from the entire history of each investment. While a historical perspective is important for measuring performance over time relative to original cost and schedule targets, this information may be dated for near-term budget and programmatic decisions. Moreover, combining more recent and historical performance can mask the current status of the investment.
As more time elapses, the impact of this masking effect will increase because current performance becomes a relatively smaller factor in an investment's cumulative rating. In addition to our assessment of cumulative investment performance (as reflected in the Dashboard ratings), we determined whether the ratings were also reflective of current performance. Our analysis showed that two selected investments had a discrepancy between cumulative and current performance ratings.

Specifically, State's Global Foreign Affairs Compensation System investment's schedule performance was rated "green" on the Dashboard from October 2010 through March 2011, whereas our analysis showed its current performance was "yellow" for most of that time. From a cumulative perspective, the Dashboard's ratings for this investment were accurate (as previously discussed in this report); however, these take into account activities dating back to 2003. Interior's Financial and Business Management System investment's cost performance was rated "green" on the Dashboard from December 2010 through March 2011; in contrast, our analysis showed its current performance was "yellow" for those months. The Dashboard's cost ratings accurately reflected cumulative cost performance from 2003 onward.

Further analysis of the Financial and Business Management System's schedule performance ratings on the Dashboard showed that because of the amount of historical performance data factored into its ratings as of July 2011, it would take a minimum schedule variance of 9 years on the activities currently under way in order to change its rating from "green" to "yellow," and a variance of more than 30 years before turning "red." We have previously recommended to OMB that it develop cost and schedule Dashboard ratings that better reflect current investment performance.
At that time, OMB disagreed with the recommendation, stating that real-time performance is always reflected in the ratings since current investment performance data are uploaded to the Dashboard on a monthly basis. However, in September 2011, officials from OMB's Office of E-Government & Information Technology stated that changes designed to improve insight into current performance on the Dashboard have either been made or are under way. If OMB fully implements these actions, the changes should address our recommendation. Specifically:

New project-level reporting: In July 2011, OMB issued new guidance to agencies regarding the information that is to be reported to the Dashboard. In particular, beginning in September 2011, agencies are required to report data to the Dashboard at a detailed project level, rather than at the investment level previously required. Further, the guidance emphasizes that ongoing work activities should be broken up and reported in increments of 6 months or less.

Updated investment baseline reporting: OMB officials stated that agencies are required to update existing investment baselines to reflect planned fiscal year 2012 activities, as well as data from the last quarter of fiscal year 2011 onward. OMB officials stated that historical investment data that are currently on the Dashboard will be maintained, but plans have yet to be finalized on how these data may be displayed on the new version of the Dashboard.

New cost and schedule ratings calculations: OMB officials stated that work is under way to change the Dashboard's cost and schedule ratings calculations. Specifically, officials said that the new calculations will emphasize ongoing work and reflect only development efforts, not operations and maintenance activities. In combination with the first action on defining 6-month work activities, the calculations should result in ratings that better reflect current performance.
OMB plans for the new version of the Dashboard to be fully viewable by the public upon release of the President's Budget for fiscal year 2013. Once OMB implements these changes, they could be significant steps toward improving insight into current investment performance on the Dashboard. We plan to evaluate the new version of the Dashboard once it is publicly available in 2012. Since our first review in July 2010, the accuracy of investment ratings on the Dashboard has improved because of OMB's refinement of its cost and schedule calculations, and the number of discrepancies found in our reviews has decreased. While rating inaccuracies continue to exist, for the discrepancies we identified, the Dashboard's ratings generally showed poorer performance than our assessments. Reasons for inaccurate Dashboard ratings included missing or incomplete agency data submissions, erroneous data submissions, and inconsistent investment baseline information. In all cases, the selected agencies detected the discrepancies and corrected them in subsequent Dashboard data submissions. However, in GSA's case, officials did not disclose that performance data on the Dashboard were unreliable for one investment because of an ongoing baseline change. Additionally, the Dashboard's ratings calculations reflect cumulative investment performance--a view that is important but does not meet OMB's goal of reporting near-real-time performance. Our IT investment management work has shown a 6-month view of performance to be a reliable benchmark for current performance, as well as a key component of informed executive decisions about the budget and program. OMB's Dashboard changes could be important steps toward improving insight into current performance and the utility of the Dashboard for effective executive oversight. 
To better ensure that the Dashboard provides accurate cost and schedule performance ratings, we are recommending that the Administrator of GSA direct its CIO to comply with OMB's guidance related to Dashboard data submissions by updating the CIO rating for a given GSA investment as soon as new information becomes available that affects the assessment, including when an investment is in the process of a rebaseline. Because we have previously made recommendations addressing the development of Dashboard ratings calculations that better reflect current performance, we are not making additional recommendations to OMB at this time. We provided a draft of our report to the five agencies selected for our review and to OMB. In written comments on the draft, Commerce's Acting Secretary concurred with our findings. Also in written comments, GSA's Administrator stated that GSA agreed with our finding and recommendation and would take appropriate action. Letters from these agencies are reprinted in appendixes III and IV. In addition, we received oral comments from officials from OMB's Office of E-Government & Information Technology and written comments via e-mail from an Audit Liaison from Interior. These comments were technical in nature and we incorporated them as appropriate. OMB and Interior neither agreed nor disagreed with our findings. Finally, an Analyst from Education and a Senior Management Analyst from State indicated via e-mail that they had no comments on the draft. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Director of OMB; the Secretaries of Commerce, Education, the Interior, and State; the Administrator of GSA; and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. 
If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objective was to examine the accuracy of the cost and schedule performance ratings on the Dashboard for selected investments. We selected 5 agencies and 10 investments to review. To select these agencies and investments, we used the Office of Management and Budget's (OMB) fiscal year 2011 exhibit 53 to identify 6 agencies with the largest information technology (IT) budgets, after excluding the 10 agencies included in our first two Dashboard reviews. We then excluded the National Aeronautics and Space Administration because it did not have enough investments that met our selection criteria. As a result, we selected the Departments of Commerce, Education, the Interior, and State, as well as the General Services Administration (GSA). In selecting the specific investments at each agency, we identified the largest investments that, according to the fiscal year 2011 budget, were spending at least 25 percent of their budget on IT development, modernization, and enhancement work. To narrow this list, we excluded investments that, according to the fiscal year 2011 budget, were in the planning phase or were infrastructure-related. We then selected the top 2 investments per agency. 
The 10 final investments were Commerce's Geostationary Operational Environmental Satellite--Series R Ground Segment project and Advanced Weather Interactive Processing System, Education's Integrated Partner Management system and National Student Loan Data System, Interior's Financial and Business Management System and Land Satellites Data System, State's Global Foreign Affairs Compensation System and Integrated Logistics Management System, and GSA's Regional Business Application and System for Tracking and Administering Real Property/Realty Services. To assess the accuracy and currency of the cost and schedule performance ratings on the Dashboard, we evaluated, where available, agency or contractor documentation related to cost and schedule performance for 8 of the selected investments to determine their cumulative and current cost and schedule performance and compared our ratings with the performance ratings on the Dashboard. The analyzed investment performance-related documentation included program management reports, internal performance management system performance ratings, earned value management data, investment schedules, system requirements, and operational analyses. To determine cumulative cost performance, we weighted our cost performance ratings based on each investment's percentage of development spending (represented in our analysis of the program management reports and earned value data) and steady-state spending (represented in our evaluation of the operational analysis), and compared our weighted ratings with the cost performance ratings on the Dashboard. To evaluate earned value data, we determined cumulative cost variance for each month from October 2010 through March 2011. To assess the accuracy of the cost data, we electronically tested the data to identify obvious problems with completeness or accuracy, and interviewed agency and program officials about the earned value management systems. 
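The weighting approach described above, combining a development cost rating (derived from earned value data) with a steady-state rating (derived from operational analysis) in proportion to each investment's spending mix, can be sketched as follows. The numeric scores, rounding rule, and example spending shares are illustrative assumptions, not GAO's actual scoring method.

```python
# Hypothetical sketch of blending cost ratings by spending mix: a development
# rating and a steady-state rating are combined in proportion to each share
# of investment spending. Scores and the rounding rule are illustrative.

SCORES = {"green": 2, "yellow": 1, "red": 0}
COLORS = {2: "green", 1: "yellow", 0: "red"}

def weighted_rating(dev_rating, dev_share, steady_rating):
    """Blend two color ratings by spending shares; round to the nearest band."""
    steady_share = 1.0 - dev_share
    score = SCORES[dev_rating] * dev_share + SCORES[steady_rating] * steady_share
    return COLORS[round(score)]

# An investment spending 70% on development (rated "yellow") and 30% on
# operations and maintenance (rated "green"):
print(weighted_rating("yellow", 0.7, "green"))  # "yellow"
```

The point of the blend is that a heavily development-weighted investment is judged mostly by its earned value performance, while a mostly steady-state investment is judged mostly by its operational analysis.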
We did not test the adequacy of the agency or contractor cost-accounting systems. Our evaluation of these cost data was based on what we were told by each agency and the information it could provide. To determine cumulative schedule performance, we analyzed requirements documentation to determine whether investments were on schedule in implementing planned requirements. To perform the schedule analysis of the earned value data, we determined the investment's cumulative schedule variance for each month from October 2010 through March 2011. To determine both current cost and schedule performance, we evaluated investment data from the most recent 6 months of performance for each month from October 2010 through March 2011. We were not able to assess the cost or schedule performance of 2 selected investments, Education's Integrated Partner Management investment and National Student Loan Data System investment. During the course of our review, we determined that the department did not establish a validated performance baseline for the Integrated Partner Management investment until March 2011. Therefore, the underlying cost and schedule performance data for the time frame we analyzed were not sufficiently reliable. We also determined during our review that the department recently rescoped development work on the National Student Loan Data System investment and did not have current, representative performance data available. Further, we interviewed officials from OMB and the selected agencies to obtain additional information on agencies' efforts to ensure the accuracy of the data used to rate investment performance on the Dashboard. We used the information provided by agency officials to identify the factors contributing to inaccurate cost and schedule performance ratings on the Dashboard. We conducted this performance audit from February 2011 to November 2011 at the selected agencies' offices in the Washington, D.C., metropolitan area. 
Our work was done in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. Below are descriptions of each of the selected investments that are included in this review. The Advanced Weather Interactive Processing System is used to ingest, analyze, forecast, and disseminate operational weather data. Enhancements currently being implemented to the system are intended to improve the system's infrastructure and position the National Weather Service to meet future requirements in the years ahead. The Geostationary Operational Environmental Satellite--Series R Ground Segment includes the development of key systems needed for the on-orbit operation of the next generation of geostationary operational environmental satellites, receipt and processing of information, and distribution of satellite data products to users. The Integrated Partner Management investment is to replace five legacy applications and provide, in one solution, improved eligibility, enrollment, and oversight processes for schools, lenders, federal and state agencies, and other entities that administer financial aid to help students pay for higher education. The National Student Loan Data System includes continued operations and maintenance of an application that manages the integration of data regarding student aid applicants and recipients. The investment also includes a development portion that is intended to ensure that reporting and data collection processes are in place to efficiently determine partner eligibility to participate in higher education financial aid programs, and ensure only eligible students receive loans, grants, or work study awards.
The Financial and Business Management System is an enterprisewide system that is intended to replace most of the department's administrative systems, including budget, acquisitions, financial assistance, core finance, personal and real property, and enterprise management information systems. The Land Satellites Data System investment includes the continued operation of Landsat satellites and the IT-related costs for the ground system that captures, archives, processes, and distributes data from land-imaging satellites. The development efforts under way are intended to enable the U.S. Geological Survey to continue to capture, archive, process, and deliver images of the earth's surface to customers. The Global Foreign Affairs Compensation System is intended to enable the department to replace six obsolete legacy systems with a single system better suited to support the constant change of taxation and benefits requirements in more than 180 countries, and to help the department make accurate and timely payments to its diverse workforce and retired Foreign Service officers. The Integrated Logistics Management System is the department's enterprisewide supply chain management system. It is intended to be the backbone of the department's logistics infrastructure and provide for requisition, procurement, distribution, transportation, receipt, asset management, mail, diplomatic pouch, and tracking of goods and services both domestically and overseas. The Regional Business Application includes three systems that are intended to provide a means to transition from a semi-automated to an integrated acquisition process, and provide tools to expedite the processing of customer funding documents and vendor invoices. The System for Tracking and Administering Real Property/Realty Services investment includes continued operations of a transaction processor that supports space management, revenue generation, and budgeting.
The investment also includes development of a new system that is intended to simplify user administration and reporting, and improve overall security. Table 3 provides additional details for each of the selected investments in our review. In addition to the contact named above, the following staff also made key contributions to this report: Carol Cha, Assistant Director; Emily Longcore; Lee McCracken; Karl Seifert; and Kevin Walsh.
Each year the federal government spends billions of dollars on information technology (IT) investments. Given the importance of program oversight, the Office of Management and Budget (OMB) established a public website, referred to as the IT Dashboard, that provides detailed information on about 800 federal IT investments, including assessments of actual performance against cost and schedule targets (referred to as ratings). According to OMB, these data are intended to provide both a near-real-time and historical perspective of performance. In the third of a series of Dashboard reviews, GAO was asked to examine the accuracy of the Dashboard's cost and schedule performance ratings. To do so, GAO compared the performance of eight major investments undergoing development from four agencies with large IT budgets (the Departments of Commerce, the Interior, and State, as well as the General Services Administration) against the corresponding ratings on the Dashboard, and interviewed OMB and agency officials. Since GAO's first report in July 2010, the accuracy of investment ratings has improved because of OMB's refinement of the Dashboard's cost and schedule calculations. Most of the Dashboard's cost and schedule ratings for the eight selected investments were accurate; however, they did not sufficiently emphasize recent performance for informed oversight and decision making. (1) Cost ratings were accurate for four of the investments that GAO reviewed, and schedule ratings were accurate for seven. In general, the number of discrepancies found in GAO's reviews has decreased. In each case where GAO found rating discrepancies, the Dashboard's ratings showed poorer performance than GAO's assessment. Reasons for inaccurate Dashboard ratings included missing or incomplete agency data submissions, erroneous data submissions, and inconsistent investment baseline information. 
In all cases, the selected agencies found and corrected these inaccuracies in subsequent Dashboard data submissions. Such continued diligence by agencies to report complete and timely data will help ensure that the Dashboard's performance ratings are accurate. In the case of the General Services Administration, officials did not disclose that performance data on the Dashboard were unreliable for one investment because of an ongoing baseline change. Without proper disclosure of pending baseline changes, OMB and other external oversight bodies may not have the appropriate information needed to make informed decisions. (2) While the Dashboard's cost and schedule ratings provide a cumulative view of performance, they did not emphasize current performance--which is needed to meet OMB's goal of reporting near-real-time performance. GAO's past work has shown cost and schedule performance information from the most recent 6 months to be a reliable benchmark for providing a near-real-time perspective on investment status. By combining recent and historical performance, the Dashboard's ratings may mask the current status of the investment, especially for lengthy acquisitions. GAO found that this discrepancy between cumulative and current performance ratings was reflected in two of the selected investments. For example, a Department of the Interior investment's Dashboard cost rating indicated normal performance from December 2010 through March 2011, whereas GAO's analysis of current performance showed that cost performance needed attention for those months. If fully implemented, OMB's recent and ongoing changes to the Dashboard, including new cost and schedule rating calculations and updated investment baseline reporting, should address this issue. These Dashboard changes could be important steps toward improving insight into current performance and the utility of the Dashboard for effective executive oversight. 
GAO plans to evaluate the new version of the Dashboard once it is publicly available in 2012. GAO is recommending that the General Services Administration disclose on the Dashboard when one of its investments is in the process of a rebaseline. Since GAO previously recommended that OMB improve how it rates investments relative to current performance, it is not making further recommendations. The General Services Administration agreed with the recommendation. OMB provided technical comments, which GAO incorporated as appropriate.
Multiple executive-branch agencies have key roles and responsibilities for different steps of the federal government's personnel security clearance process. For example, in 2008, Executive Order 13467 designated the DNI as the Security Executive Agent. As such, the DNI is responsible for developing policies and procedures to help ensure the effective, efficient, and timely completion of background investigations and adjudications relating to determinations of eligibility for access to classified information and eligibility to hold a sensitive position. In turn, executive branch agencies determine which of their positions--military, civilian, or private-industry contractors--require access to classified information and, therefore, which people must apply for and undergo a personnel security clearance investigation. Investigators--often contractors--from Federal Investigative Services within the Office of Personnel Management (OPM) conduct these investigations for most of the federal government using federal investigative standards and OPM internal guidance as criteria for collecting background information on applicants. OPM provides the resulting investigative reports to the requesting agencies for their internal adjudicators, who use the information along with the federal adjudicative guidelines to determine whether an applicant is eligible for a personnel security clearance. DOD is OPM's largest customer, and its Under Secretary of Defense for Intelligence (USD(I)) is responsible for developing, coordinating, and overseeing the implementation of DOD policy, programs, and guidance for personnel, physical, industrial, information, operations, chemical/biological, and DOD Special Access Program security.
Additionally, the Defense Security Service, under the authority, direction, and control of USD(I), manages and administers the DOD portion of the National Industrial Security Program for the DOD components and other federal agencies by agreement, as well as providing security education and training, among other things. Section 3001 of the Intelligence Reform and Terrorism Prevention Act of 2004 prompted government-wide suitability and security clearance reform. The act required, among other matters, an annual report to Congress--in February of each year from 2006 through 2011--about progress and key measurements on the timeliness of granting security clearances. It specifically required those reports to include the periods of time required for conducting investigations and adjudicating or granting clearances. However, the Intelligence Reform and Terrorism Prevention Act requirement for the executive branch to annually report on its timeliness expired in 2011. More recently the Intelligence Authorization Act of 2010 established a new requirement that the President annually report to Congress the total amount of time required to process certain security clearance determinations for the previous fiscal year for each element of the Intelligence Community. The Intelligence Authorization Act of 2010 additionally requires that those annual reports include the total number of active security clearances throughout the United States government, to include both government employees and contractors. Unlike the Intelligence Reform and Terrorism Prevention Act of 2004 reporting requirement, the requirement to submit these annual reports does not expire. In 2007, DOD and the Office of the Director of National Intelligence (ODNI) formed the Joint Security Clearance Process Reform Team, known as the Joint Reform Team, to improve the security clearance process government-wide. 
In a 2008 memorandum, the President called for a reform of the security clearance and suitability determination processes and subsequently issued Executive Order 13467, which in addition to designating the DNI as the Security Executive Agent, also designated the Director of OPM as the Suitability Executive Agent. Specifically, the Director of OPM, as Suitability Executive Agent, is responsible for developing policies and procedures to help ensure the effective, efficient, and timely completion of investigations and adjudications relating to determinations of suitability, to include consideration of an individual's character or conduct. Further, the executive order established a Suitability and Security Clearance Performance Accountability Council to oversee agency progress in implementing the reform vision. Under the executive order, this council is accountable to the President for driving implementation of the reform effort, including ensuring the alignment of security and suitability processes, holding agencies accountable for implementation, and establishing goals and metrics for progress. The order also appointed the Deputy Director for Management at the Office of Management and Budget as the chair of the council. In the first step of the personnel security clearance process, executive branch officials determine the requirements of a federal civilian position, including assessing the risk and sensitivity level associated with that position, to determine whether it requires access to classified information and, if required, the level of access. Security clearances are generally categorized into three levels: top secret, secret, and confidential. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national defense or foreign relations. 
A sound requirements process is important because requests for clearances for positions that do not need a clearance or need a lower level of clearance increase investigative workloads and costs. In 2012, we reported that the DNI, as the Security Executive Agent, had not provided agencies clearly defined policy and procedures to consistently determine if a position requires a security clearance, or established guidance to require agencies to review and revise or validate existing federal civilian position designations. We recommended that the DNI issue policy and guidance for the determination, review, and validation of requirements, and ODNI concurred with those recommendations, stating that it recognized the need to issue or clarify policy. Currently, OPM and ODNI are in the process of issuing a joint revision to the regulations guiding requirements determination. Specifically, according to officials from the ODNI, these offices had obtained permission from the President to re-issue the federal regulation jointly, drafted the proposed rule, and obtained public input on the regulation by publishing it in the Federal Register. According to ODNI and OPM officials, they will jointly review and address comments and prepare the final rule for approval from the Office of Management and Budget. Once an applicant is selected for a position that requires a personnel security clearance, the applicant must obtain a security clearance in order to gain access to classified information. While different departments and agencies may have slightly different personnel security clearance processes, the phases that follow--application submission, investigation, and adjudication--are illustrative of a typical process.
Since 1997, federal agencies have followed a common set of personnel security investigative standards and adjudicative guidelines for determining whether federal civilian workers, military personnel, and others, such as private industry personnel contracted by the government, are eligible to hold a security clearance. Figure 1 illustrates the steps in the personnel security clearance process, which is representative of the general process followed by most executive branch agencies and includes procedures for appeals and renewals. During the application submission phase, a security officer from an executive branch agency (1) requests an investigation of an individual requiring a clearance; (2) forwards a personnel security questionnaire (Standard Form 86) using OPM's electronic Questionnaires for Investigations Processing (e-QIP) system or a paper copy of the Standard Form 86 to the individual to complete; (3) reviews the completed questionnaire; and (4) sends the questionnaire and supporting documentation, such as fingerprints and signed waivers, to OPM or its investigation service provider. During the investigation phase, investigators--often contractors--from OPM's Federal Investigative Services use federal investigative standards and OPM's internal guidance to conduct and document the investigation of the applicant. The scope of information gathered in an investigation depends on the needs of the client agency and the personnel security clearance requirements of an applicant's position, as well as whether the investigation is for an initial clearance or a reinvestigation to renew a clearance. For example, in an investigation for a top secret clearance, investigators gather additional information through more time-consuming efforts, such as traveling to conduct in-person interviews to corroborate information about an applicant's employment and education. However, many background investigation types have similar components. 
For instance, for all investigations, information that applicants provide on electronic applications is checked against numerous databases. Both secret and top secret investigations contain credit and criminal history checks, while top secret investigations also contain citizenship, public record, and spouse checks as well as reference interviews and an Enhanced Subject Interview to gain insight into an applicant's character. Table 1 highlights the investigative components generally associated with the secret and top secret clearance levels. After OPM, or the designated provider, completes the background investigation, the resulting investigative report is provided to the adjudicating agency. During the adjudication phase, adjudicators from the hiring agency use the information from the investigative report to determine whether an applicant is eligible for a security clearance. To make clearance eligibility decisions, the adjudication guidelines specify that adjudicators consider 13 specific areas that elicit information about (1) conduct that could raise security concerns and (2) factors that could allay those security concerns and permit granting a clearance. If a clearance is denied or revoked, appeals of the adjudication decision are possible. We have work under way to review the process for security revocations. We expect to issue a report on this process by spring of 2014. Once an individual has obtained a personnel security clearance, and as long as he or she remains in a position that requires access to classified national security information, that individual is reinvestigated periodically at intervals that are dependent on the level of security clearance. For example, top secret clearance holders are reinvestigated every 5 years, and secret clearance holders are reinvestigated every 10 years.
Some of the information gathered during a reinvestigation would focus specifically on the period of time since the last approved clearance, such as a check of local law enforcement agencies where an individual lived and worked since the last investigation. Further, the Joint Reform Team began an effort to review the possibility of continuous evaluations, which would ascertain on a more frequent basis whether an eligible employee with access to classified information continues to meet the requirements for access. Specifically, the team proposed to move from periodic review to that of continuous evaluation, meaning annually for top secret and similar positions and at least once every five years for secret or similar positions, as a means to reveal security-relevant information earlier than the previous method, and provide increased scrutiny on populations that could potentially represent risk to the government because they already have access to classified information. The current federal investigative standards state that the top secret level of security clearances may be subject to continuous evaluation. The executive branch has developed some metrics to assess quality at different phases of the personnel security clearance process; however, those metrics have not been fully developed and implemented. To promote oversight and positive outcomes, such as maximizing the likelihood that individuals who are security risks will be scrutinized more closely, we have emphasized, since the late 1990s, the need to build and monitor quality throughout the personnel security clearance process. Having assessment tools and performance metrics in place is a critical initial step toward instituting a program to monitor and independently validate the effectiveness and sustainability of corrective measures. 
However, we have previously reported that executive branch agencies have not fully developed and implemented metrics to measure quality in key aspects of the personnel security clearance process, including: (1) investigative reports; (2) adjudicative files; and (3) the reciprocity of personnel security clearances, which is an agency's acceptance of a background investigation or clearance determination completed by any authorized investigative or adjudicative executive branch agency. We have previously identified deficiencies in OPM's investigative reports--results from background investigations--but as of August 2013 OPM had not yet implemented metrics to measure the completeness of these reports. OPM supplies about 90 percent of all federal clearance investigations, including those for DOD. For example, in May 2009 we reported that, with respect to DOD initial top secret clearances adjudicated in July 2008, documentation was incomplete for most OPM investigative reports. We independently estimated that 87 percent of about 3,500 investigative reports that DOD adjudicators used to make clearance decisions were missing at least one type of documentation required by federal investigative standards. The type of documentation most often missing from investigative reports was verification of all of the applicant's employment, followed by information from the required number of social references for the applicant and complete security forms. We also estimated that 12 percent of the 3,500 investigative reports did not contain a required personal subject interview. At the time of our 2009 review, OPM did not measure the completeness of its investigative reports, which limited the agency's ability to explain the extent or the reasons why some reports were incomplete. 
As a result of the incompleteness of OPM's investigative reports on DOD personnel, we recommended in May 2009 that OPM measure the frequency with which its investigative reports meet federal investigative standards, so that the executive branch can identify the factors leading to incomplete reports and take corrective actions. In a subsequent February 2011 report, we noted that OMB, ODNI, DOD, and OPM leaders had provided congressional members with metrics to assess the quality of the security clearance process, including investigative reports and other aspects of the process. For example, the Rapid Assessment of Incomplete Security Evaluations was one tool the executive branch agencies planned to use for measuring quality, or completeness, of OPM's background investigations. However, according to an OPM official in June 2012, OPM chose not to use this tool. Instead, OPM opted to develop another tool. In following up on our 2009 recommendations, as of August 2013, OPM had not provided enough details about its tool for us to determine whether it met the intent of our 2009 recommendation or included the attributes of successful performance measures identified in best practices, nor could we determine the extent to which the tool was being used. OPM also assesses the quality of investigations based on voluntary reporting from customer agencies. Specifically, OPM tracks investigations that are (1) returned for rework from the requesting agency, (2) identified as deficient using a web-based customer satisfaction survey, or (3) identified as deficient through adjudicator calls to OPM's quality hotline. However, in our past work, we have noted that the number of investigations returned for rework is not by itself a valid indicator of the quality of investigative work because DOD adjudication officials told us that they have been reluctant to return incomplete investigations in anticipation of delays that would affect timeliness. 
Further, relying on agencies to voluntarily provide information on investigation quality may not reflect the quality of OPM's total investigation workload. We are beginning work to further review OPM's actions to improve the quality of investigations. We have also reported that deficiencies in investigative reports affect the quality of the adjudicative process. Specifically, in November 2010, we reported that agency officials who utilize OPM as their investigative service provider cited challenges related to deficient investigative reports as a factor that slows agencies' abilities to make adjudicative decisions. The quality and completeness of investigative reports directly affects adjudicator workloads, including whether additional steps are required before adjudications can be made, as well as agency costs. For example, some agency officials noted that OPM investigative reports do not include complete copies of associated police reports and criminal record checks. Several agency officials stated that in order to avoid further costs or delays that would result from working with OPM, they often choose to perform additional steps internally to obtain missing information. According to ODNI and OPM officials, OPM investigators provide a summary of police and criminal reports and assert that there is no policy requiring inclusion of copies of the original records. However, ODNI officials also stated that adjudicators may want or need entire records as critical elements may be left out. For example, according to Defense Office of Hearings and Appeals officials, in one case, an investigator's summary of a police report incorrectly identified the subject as a thief when the subject was actually the victim. DOD has taken some intermittent steps to implement measures to determine the completeness of adjudicative files to address issues identified in our 2009 report regarding the quality of DOD adjudications. 
In 2009, we found that some clearances were granted by DOD adjudicators even though some required data were missing from the OPM investigative reports used to make such determinations. For example, we estimated in our 2009 review that 22 percent of the adjudicative files for about 3,500 initial top secret clearances that were adjudicated favorably did not contain all the required documentation, even though DOD regulations require that adjudicators maintain a record of each favorable and unfavorable adjudication decision and document the rationale for granting clearance eligibility to applicants with security concerns revealed during the investigation. Documentation most frequently missing from adjudicative files was the rationale for granting security clearances to applicants with security concerns related to foreign influence, financial considerations, and criminal conduct. At the time of our 2009 review, DOD did not measure the completeness of its adjudicative files, which limited the agency's ability to explain the extent or the reasons why some files are incomplete. In 2009, we made two recommendations to improve the quality of adjudicative files. First, we recommended that DOD measure the frequency with which adjudicative files meet requirements, so that the executive branch can identify the factors leading to incomplete files and include the results of such measurement in annual reports to Congress on clearances. In November 2009, DOD subsequently issued a memorandum that established a tool to measure the frequency with which adjudicative files meet the requirements of DOD regulation. Specifically, the DOD memorandum stated that it would use a tool called the Review of Adjudication Documentation Accuracy and Rationales, or RADAR, to gather specific information about adjudication processes at the adjudication facilities and assess the quality of adjudicative documentation. 
In following up on our 2009 recommendations, as of 2012, a DOD official stated that RADAR had been used in fiscal year 2010 to evaluate some adjudications, but was not used in fiscal year 2011 due to funding shortfalls. DOD restarted the use of RADAR in fiscal year 2012. Second, we recommended that DOD issue guidance to clarify when adjudicators may use incomplete investigative reports as the basis for granting clearances. In response to our recommendation, DOD's November 2009 guidance that established RADAR also outlines the minimum documentation requirements adjudicators must adhere to when documenting personnel security clearance determinations for cases with potentially damaging information. In addition, DOD issued guidance in March 2010 that clarifies when adjudicators may use incomplete investigative reports as the basis for granting clearances. This guidance provides standards that can be used for the sufficient explanation of incomplete investigative reports. While some efforts have been made to develop quality metrics, agencies have not yet implemented metrics for tracking the reciprocity of personnel security clearances, which is an agency's acceptance of a background investigation or clearance determination completed by any authorized investigative or adjudicative executive branch agency. Although executive branch agency officials have stated that reciprocity is regularly granted, as it is an opportunity to save time as well as reduce costs and investigative workloads, we reported in 2010 that agencies do not consistently and comprehensively track the extent to which reciprocity is granted government-wide. ODNI guidance requires, except in limited circumstances, that all Intelligence Community elements "accept all in-scope security clearance or access determinations." 
Additionally, Office of Management and Budget guidance requires agencies to honor a clearance when (1) the prior clearance was not granted on an interim or temporary basis; (2) the prior clearance investigation is current and in-scope; (3) there is no new adverse information already in the possession of the gaining agency; and (4) there are no conditions, deviations, waivers, or unsatisfied additional requirements (such as polygraphs) if the individual is being considered for access to highly sensitive programs. While the Performance Accountability Council has identified reciprocity as a government-wide strategic goal, we have found that agencies do not consistently and comprehensively track when reciprocity is granted, and lack a standard metric for tracking reciprocity. Further, while OPM and the Performance Accountability Council have developed quality metrics for reciprocity, the metrics do not measure the extent to which reciprocity is being granted. For example, OPM created a metric in early 2009 to track reciprocity, but this metric only measures the number of investigations requested from OPM that are rejected based on the existence of a previous investigation and does not track the number of cases in which an existing security clearance was or was not successfully honored by the agency. Without comprehensive, standardized metrics to track reciprocity and consistent documentation of the findings, decision makers will not have a complete picture of the extent to which reciprocity is granted or the challenges that agencies face when attempting to honor previously granted security clearances. In 2010, we reported that executive branch officials routinely honor other agencies' security clearances, and personnel security clearance information is shared between OPM, DOD, and, to some extent, Intelligence Community databases. 
However, we found that some agencies find it necessary to take additional steps to address limitations with available information on prior investigations, such as insufficient information in the databases or variances in the scope of investigations, before granting reciprocity. For instance, OPM has taken steps to ensure certain clearance data necessary for reciprocity are available to adjudicators, such as holding interagency meetings to determine new data fields to include in shared data. However, we also found that the shared information available to adjudicators contains summary-level detail that may not be complete. As a result, agencies may take steps to obtain additional information, which creates challenges to immediately granting reciprocity. Further, in 2010 we reported that because there is no government-wide standardized training and certification process for investigators and adjudicators, according to agency officials, a subject's prior clearance investigation and adjudication may not meet the standards of the inquiring agency. Although OPM has developed some training, security clearance investigators and adjudicators are not required to complete a certain type or number of classes. As a result, the extent to which investigators and adjudicators receive training varies by agency. Consequently, as we have previously reported, agencies are reluctant to be accountable for investigations and/or adjudications conducted by other agencies or organizations. To achieve fuller reciprocity, clearance-granting agencies seek to have confidence in the quality of prior investigations and adjudications. Consequently, we recommended in 2010 that the Deputy Director of Management, Office of Management and Budget, in the capacity as Chair of the Performance Accountability Council, should develop comprehensive metrics to track reciprocity and then report the findings from the expanded tracking to Congress. 
Although OMB agreed with our recommendation, a 2011 ODNI report found that Intelligence Community agencies experienced difficulty reporting on reciprocity. The agencies are required to report on a quarterly basis the number of security clearance determinations granted based on a prior existing clearance as well as the number not granted when a clearance existed. The numbers of reciprocal determinations made and denied are categorized by the individual's originating and receiving organizational type: (1) government to government, (2) government to contractor, (3) contractor to government, and (4) contractor to contractor. The report stated that the data fields necessary to collect this information do not currently reside in any of the available datasets and that the process was completed using an agency-specific, semi-manual method. Further, the Deputy Assistant Director for Special Security of the Office of the Director of National Intelligence noted in testimony in June 2012 that measuring reciprocity is difficult, and despite an abundance of anecdotes, real data is hard to come by. To address this problem, ODNI is developing a web-based form for individuals to submit their experiences with reciprocity issues to the ODNI. According to ODNI, this will allow the agency to collect empirical data, perform systemic trend analysis, and assist agencies with achieving workable solutions. As previously discussed, DOD accounts for the majority of security clearances within the federal government. We initially placed DOD's personnel security clearance program on our high-risk list in 2005 because of delays in completing clearances. It remained on our list until 2011 because of ongoing concerns about delays in processing clearances and problems with the quality of investigations and adjudications. 
In February 2011, we removed DOD's personnel security clearance program from our high-risk list largely because of the department's demonstrated progress in reducing the time needed to process clearances. We also noted DOD's efforts to develop and implement tools to evaluate the quality of investigations and adjudications. Even with the significant progress leading to removal of DOD's program from our high-risk list, we noted in June 2012 that sustained leadership would be necessary to continue to implement, monitor, and update outcome-focused performance measures. The initial development of some tools and metrics to monitor and track quality, not only for DOD but government-wide, was a positive step; however, these tools and measures have not yet been fully implemented government-wide. While progress in DOD's personnel security clearance program resulted in the removal of this area from our high-risk list, significant government-wide challenges remain in ensuring that personnel security clearance investigations and adjudications are high-quality. In conclusion, oversight of the reform efforts to measure and improve the quality of the security clearance process--including background investigations--is an imperative next step. Failing to provide such oversight increases the risk of damaging, unauthorized disclosures of classified information. The progress that was made in reducing the time needed to process clearances would not have been possible without committed and sustained congressional oversight and the leadership of the Performance Accountability Council. Further actions are needed now to fully develop and implement metrics to oversee quality at every step in the process. Chairman Carper, Ranking Member Coburn, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time. For further information on this testimony, please contact Brenda S. 
Farrell, Director, Defense Capabilities and Management, who may be reached at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Lori Atkinson (Assistant Director), Darreisha Bates, Renee Brown, John Van Schaik, and Michael Willems.

Personnel Security Clearances: Further Actions Needed to Improve the Process and Realize Efficiencies. GAO-13-728T. Washington, D.C.: June 20, 2013.
Managing for Results: Agencies Should More Fully Develop Priority Goals under the GPRA Modernization Act. GAO-13-174. Washington, D.C.: April 19, 2013.
Security Clearances: Agencies Need Clearly Defined Policy for Determining Civilian Position Requirements. GAO-12-800. Washington, D.C.: July 12, 2012.
Personnel Security Clearances: Continuing Leadership and Attention Can Enhance Momentum Gained from Reform Effort. GAO-12-815T. Washington, D.C.: June 21, 2012.
2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue. GAO-12-342SP. Washington, D.C.: February 28, 2012.
Background Investigations: Office of Personnel Management Needs to Improve Transparency of Its Pricing and Seek Cost Savings. GAO-12-197. Washington, D.C.: February 28, 2012.
GAO's 2011 High-Risk Series: An Update. GAO-11-394T. Washington, D.C.: February 17, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
Personnel Security Clearances: Overall Progress Has Been Made to Reform the Governmentwide Security Clearance Process. GAO-11-232T. Washington, D.C.: December 1, 2010.
Personnel Security Clearances: Progress Has Been Made to Improve Timeliness but Continued Oversight Is Needed to Sustain Momentum. GAO-11-65. Washington, D.C.: November 19, 2010.
DOD Personnel Clearances: Preliminary Observations on DOD's Progress on Addressing Timeliness and Quality Issues. GAO-11-185T. Washington, D.C.: November 16, 2010.
Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009.
Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009.
Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009.
DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009.
High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008.
Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008.
Employee Security: Implementation of Identification Cards and DOD's Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008.
Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008.
DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 13, 2008.
DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008.
DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007.
High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006.
DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006.
DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006.
DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD's Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005.
DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO's High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005.
High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A high-quality personnel security clearance process is necessary to minimize the associated risks of unauthorized disclosures of classified information and to help ensure that information about individuals with criminal activity or other questionable behavior is identified and assessed as part of the process for granting or retaining clearances. Personnel security clearances allow individuals access to classified information that, through unauthorized disclosure, can in some cases cause exceptionally grave damage to U.S. national security. In 2012, the DNI reported that more than 4.9 million federal government and contractor employees held or were eligible to hold a security clearance. GAO has reported that the federal government spent over $1 billion to conduct background investigations (in support of security clearances and suitability determinations--the consideration of character and conduct for federal employment) in fiscal year 2011. This testimony addresses the (1) overall security clearance process, including roles and responsibilities; and (2) extent that executive branch agencies have metrics to help determine the quality of the security clearance process. This testimony is based on GAO work issued between 2008 and 2013 on DOD's personnel security clearance program and governmentwide suitability and security clearance reform efforts. As part of that work, GAO (1) reviewed statutes, federal guidance, and processes, (2) examined agency data on the timeliness and quality of investigations and adjudications, (3) assessed reform efforts, and (4) reviewed samples of case files for DOD personnel. Multiple executive branch agencies are responsible for different steps of the multi-phased personnel security clearance process that includes: determination of whether a position requires a clearance, application submission, investigation, and adjudication. 
Agency officials must first determine whether a federal civilian position requires access to classified information. The Director of National Intelligence (DNI) and the Office of Personnel Management (OPM) are in the process of issuing a joint revision to the regulations guiding this step in response to GAO's 2012 recommendation that the DNI issue policy and guidance for the determination, review, and validation of requirements. After an individual has been selected for a federal civilian position that requires a personnel security clearance and the individual submits an application for a clearance, investigators--often contractors--from OPM conduct background investigations for most executive branch agencies. Adjudicators from requesting agencies use the information from these investigations and consider federal adjudicative guidelines to determine whether an applicant is eligible for a clearance. Further, individuals are subject to reinvestigations at intervals that are dependent on the level of security clearance. For example, top secret and secret clearance holders are to be reinvestigated every 5 years and 10 years, respectively. Executive branch agencies have not fully developed and implemented metrics to measure quality throughout the personnel security clearance process. For more than a decade, GAO has emphasized the need to build and monitor quality throughout the personnel security clearance process to promote oversight and positive outcomes such as maximizing the likelihood that individuals who are security risks will be scrutinized more closely. For example, GAO reported in May 2009 that, with respect to initial top secret clearances adjudicated in July 2008 for the Department of Defense (DOD), documentation was incomplete for most of OPM's investigative reports. 
GAO independently estimated that 87 percent of about 3,500 investigative reports that DOD adjudicators used to make clearance eligibility decisions were missing some required documentation, such as the verification of all of the applicant's employment. GAO also estimated that 12 percent of the 3,500 reports did not contain the required personal subject interview. In 2009, GAO recommended that OPM measure the frequency with which its investigative reports met federal investigative standards in order to improve the quality of investigation documentation. As of August 2013, however, OPM had not implemented this recommendation. GAO's 2009 report also identified issues with the quality of DOD adjudications. Specifically, GAO estimated that 22 percent of about 3,500 initial top secret clearances that were adjudicated favorably did not contain all the required documentation. As a result, in 2009 GAO recommended that DOD measure the frequency with which adjudicative files meet requirements. In November 2009, DOD issued a memorandum that established a tool called the Review of Adjudication Documentation Accuracy and Rationales (RADAR) to measure the frequency with which adjudicative files meet the requirements of DOD regulation. According to a DOD official, RADAR had been used in fiscal year 2010 to evaluate some adjudications, but was not used in fiscal year 2011 due to funding shortfalls. DOD restarted the use of RADAR in fiscal year 2012.
The Small Business Act of 1953 created SBA, whose function is to aid, counsel, assist, and protect the interests of small businesses. The Act also stipulated that SBA would ensure small businesses a fair proportion of government contracts. The Business Opportunity Development Reform Act of 1988 amended the Small Business Act to require the President to establish an annual government-wide goal of awarding not less than 20 percent of prime contract dollars to small businesses. The Small Business Reauthorization Act of 1997 further amended the Small Business Act to increase this goal to not less than 23 percent. SBA is responsible for coordinating with executive branch agencies to ensure that the federal government meets the mandated goal. Of the federal agencies with procurement authority, 20 agencies accounted for over 99 percent of total government contract dollars in fiscal year 2000. These 20 agencies and their fiscal year 2000 procurement dollars, as reported to the Federal Procurement Data Center (FPDC), are listed in appendix IV. FPDC collects data on prime contract actions from over 50 executive branch agencies. These agencies report their prime contract actions to FPDC on standard forms. Since fiscal year 1998, FPDC has used this information to compile the "Report on Annual Procurement Preference Goal Achievements," which summarizes total government prime contract actions and each agency's small business contract actions. In fiscal year 2000, SBA assigned small business prime contract goals directly to agencies after initial agency goals did not total the 23-percent government-wide goal. At the same time, SBA assigned fiscal year 2001 goals without engaging in a formal negotiation process as had been done in the past. SBA adopted this change in strategy to ensure that the 23-percent goal was established, and that it was done in a timely manner. The government had difficulty establishing the 23-percent goal in fiscal year 2000. 
In addition, only 22.26 percent of procurement dollars were actually awarded to small businesses. The difficulties arose primarily because the Department of Energy was directed to change its method of calculating prime contract small business awards. While SBA's direct assignment of goals was intended to meet the statutory goal-setting requirement, SBA has not documented the criteria it used to derive the assigned goals. Furthermore, the direct assignment of goals has reduced the consultation and negotiation process envisioned by Congress. Some agency officials noted that they did not have the opportunity to negotiate fiscal year 2001 goals. Federal agencies' initial goal submissions to SBA for fiscal year 2000 totaled only 20.4 percent in the aggregate, falling short of the mandated 23-percent government-wide goal. SBA's requests to agencies to increase their goals resulted in a government-wide goal of only 21.2 percent. In February 2000, SBA decided to assign goals directly to the 20 agencies that account for over 99 percent of procurement dollars so that the 23-percent goal could be met. At the same time, in a conference call between SBA and the agencies, SBA assigned fiscal year 2001 goals that were identical to the 2000 goals. In a memorandum to the agencies, the SBA Associate Deputy Administrator cited a significant change in the way the Department of Energy calculates its small business achievements as a key reason for the difficulties in setting the fiscal year 2000 goal and a primary justification for SBA's decision to unilaterally assign goals. As shown in appendix IV, the Department of Energy is second only to the Department of Defense in the procurement dollars it reports. Based on a 1991 letter from the Office of Management and Budget's Office of Federal Procurement Policy, the Department of Energy had been counting contracts awarded by its management and operating contractors as prime contracts rather than subcontracts. 
Specifically, the letter stated that "a strong case can be made for management and operating contractors to be treated, for the purposes of small business goaling, as Government prime contracts." The ruling noted that procurements made by management and operating contractors are for the direct benefit of the federal government and that these contractors are required to follow Department of Energy procurement rules and policies that are similar to those government agencies must use in awarding contracts. Using this methodology, the Department of Energy awarded about 18 percent of its prime contracts to small businesses in fiscal year 1998 and about 17 percent in 1999, according to FPDC reports. However, in the opinion of SBA officials, awards made by the Department's management and operating contractors are actually subcontracts, not prime contracts. Nevertheless, the Department of Energy continued to support its practice of counting them as prime contracts. In 1999, SBA and the Department of Energy asked the Office of Federal Procurement Policy to resolve this disagreement. In November 1999, the Office of Federal Procurement Policy reversed its earlier position and, supporting SBA's position, determined that the contracts awarded by the Department's management and operating contractors should be counted as subcontracts. The Administrator, Office of Federal Procurement Policy, stated in the decision that federal agencies should be consistent in the types of awards counted as prime contracts. As a result of the change in methodology, the Department of Energy's reported prime contract actions to small businesses fell sharply. In fiscal year 2000, the Department's prime contract goal dropped to 5 percent from its 18-percent goal in fiscal year 1999. This reduction in the Department of Energy's small business goal affected the government's overall ability to establish the 23-percent goal. 
According to SBA officials, the direct assignment of goals--as was done in fiscal years 2000 and 2001--has not only ensured that the mandated 23-percent goal will be established, but it has also ensured that goals will be established in a timely manner. Guidance issued by the Office of Management and Budget's Office of Federal Procurement Policy stipulates that small business prime contract goals are to be established by the start of the fiscal year. However, both SBA and the agencies have been delinquent in setting goals in a timely manner. For example, in fiscal years 1998, 1999, and to a lesser extent in fiscal year 2000, many agency goals were established after the start of the fiscal year. However, we found no link between the timeliness of goal-setting and actual awards to small businesses. According to SBA records, all 20 large agencies submitted goals after the start of fiscal year 1999. One reason for the delay was that SBA's letter requesting goals from individual agencies was not distributed until more than 2 weeks after the fiscal year began. In addition, SBA's deadline for agencies to submit their 1998 and 1999 goals was 1 to 2 months after the start of the fiscal year. SBA officials explained that designating a grace period enabled agencies to evaluate the prior year's performance and develop strategies for improvement. The officials acknowledged, however, that the period was not used for this purpose because FPDC does not issue preliminary prior-year results until the second quarter of the following fiscal year. Despite SBA's lenient deadlines, most agencies did not submit their goals on time. For example, in fiscal year 1999 only one agency--the Department of Defense--met SBA's deadline of November 1, 1998. Three agencies submitted goals in the second quarter, three submitted goals in the third quarter, and one submitted its goal in the fourth quarter. Some small agencies did not establish any goals at all. 
Timeliness improved in fiscal year 2000, when 11 agencies met SBA's deadline. Once again, some small agencies did not submit goals at all. When the large agencies were late in submitting goals, SBA followed up with letters. However, SBA conducted little if any follow-up with the small agencies because they represent a very small fraction of federal procurement dollars. Timeliness of goal submissions was not a problem in fiscal year 2001, because SBA assigned the goals directly. We did not find a link between timeliness of goal-setting and actual small business achievements. For example, the Department of Commerce, which submitted its fiscal year 1999 goal 4 months after SBA's deadline, exceeded its goal of 35 percent, awarding 40.83 percent of its prime contract actions to small businesses. On the other hand, the Department of Agriculture, which missed SBA's deadline by only 2 days, did not achieve its goal of 45.1 percent, awarding small businesses only 37.96 percent of its prime contract actions. The approach and criteria SBA used to derive individual agency goals in fiscal years 2000 and 2001 have not been formalized or shared with the procurement agencies. Some agency officials expressed confusion about how SBA had determined the assigned goal for their agencies. The extent to which SBA changed individual agencies' fiscal year 2000 goals from the negotiated fiscal year 1999 goals varied by agency. The February 2000 memorandum from SBA's Associate Deputy Administrator stated that every agency was assigned an increased goal compared to the 1999 goals. However, while most of the agency goals were increased, goals for four agencies--in addition to the Department of Energy--decreased. SBA officials could not explain their methodology for assigning fiscal year 2000 goals. Table 1 compares fiscal year 1999 negotiated goals with SBA's assigned goals for fiscal year 2000. 
According to SBA officials, given SBA's mandate to establish a goal of not less than 23 percent and the difficulties in setting that goal in fiscal year 2000, they had little choice other than to assign goals that year. SBA notified agencies in a conference call early in 2000 that their fiscal year 2001 goals would be identical to their 2000 goals. Some agency officials said that they appreciated knowing their fiscal year 2001 goals well ahead of the start of the fiscal year. Other officials, however, noted that they did not have the opportunity to consult with SBA about the 2001 goals. The Small Business Act provides that "[t]he head of each federal agency shall, after consultation with the Administration, establish goals for participation by small business concerns...in procurement contracts of such agency" and that "[g]oals established under this subsection shall be jointly established by the Administration and the head of each Federal agency...." Agencies have recourse if they disagree with SBA. The law provides for agencies to submit their disagreement on established goals to the Administrator, Office of Federal Procurement Policy, for final determination. Thus far, no agencies have done so in response to the assigned goals. Since fiscal year 1998, SBA has directed FPDC to exclude certain types of contracts when calculating annual small business prime contract achievements. SBA officials explained that the excluded contracts fall into three broad categories of contract actions: (1) those for which small businesses' chances to compete are limited or nonexistent, (2) those using non-appropriated funds, and (3) those made by agencies that are not subject to the Federal Acquisition Regulation or are otherwise exempt from federal procurement regulations. SBA officials' decision to exclude certain types of contracts from the small business calculations is consistent with SBA's authority under the Small Business Act. However, SBA's rationale for making these exclusions is not documented. 
Prior to 1998, agencies reported their small business achievements directly to SBA and excluded from their calculations certain types of contracts, such as those for which small businesses had a limited or no chance to compete. SBA then published an annual report summarizing each agency's achievements. SBA officials said that in some cases they were not aware of all exclusions the agencies made when reporting their numbers. In 1998, the reporting process changed, with FPDC reporting small business achievements based on information received from the agencies. With this change, some of the exclusions were no longer made. An example of this change is a contract between NASA and the California Institute of Technology to operate the Jet Propulsion Laboratory, a federally funded research and development center. SBA and NASA had agreed that this contract would be excluded from NASA's small business reports to SBA, because small businesses would have little chance to compete for it. Since 1998, however, when the reporting method changed and FPDC began to report small business goals, the contract has been included in NASA's business achievement results. We found that one exclusion, made on the basis that small businesses would have a limited chance to compete, has not been applied consistently across the government. In 1998, SBA granted an exclusion to the Federal Highway Administration for its anticipated congressionally-directed contract actions, based on the premise that small businesses would have a limited chance to compete for these contracts. However, this exclusion has not been used, nor has such an exclusion been granted to any other government agency. In addition, while SBA excludes the United States Mint's contract actions on the basis that it is a non-appropriated activity, an additional reason to exclude these actions is the Mint's legislated exemption from federal procurement regulations. 
Table 2 shows the types of contracts that are excluded from the small business achievement calculations and SBA's explanation of its rationale for the exclusions. According to officials at the Department of Transportation and FPDC, the Department of Transportation's Senior Procurement Executive requested in 1998 that FPDC exclude from the small business baseline those contract actions that Congress had encouraged the Federal Highway Administration to award to certain sources, primarily universities and research centers. SBA agreed, based on the rationale that small businesses would have no chance to compete for these contracts; and in 1998 FPDC implemented the exclusion. The Federal Highway Administration is the only government agency with this type of exclusion. In practice, however, according to Department of Transportation officials, the Federal Highway Administration has awarded no contract actions to the sources cited by Congress. Rather, these awards are made in the form of assistance agreements (grants or cooperative agreements), which are not reported to FPDC, in accordance with FPDC's guidelines. The officials said that when the Senior Procurement Executive requested the exclusion in 1998, it was anticipated that the Federal Highway Administration might award contracts--as opposed to assistance agreements--to congressionally-directed sources, but that this has not occurred to date. Nevertheless, according to FPDC records, the Federal Highway Administration has reported contract actions meeting the exclusion criteria. In 1998, 1999, and 2000, FPDC subtracted $298,000, $20,000, and $1.7 million, respectively, from the Federal Highway Administration's small business baseline based on the 1998 agreement. Further, FPDC data show that all of these actions were awarded to small businesses. 
Department of Transportation officials stated that Federal Highway Administration personnel had miscoded these actions and that they should not have been excluded from the baseline. The officials stated that the errors have been corrected in the agency's database. As noted in table 2, four Treasury bureaus report contract actions to FPDC, and SBA in turn has directed FPDC to exclude these contracts from the small business baseline on the basis that they use non-appropriated funds. The U.S. Mint operates under the Public Enterprise Fund. In the Public Enterprise Act of 1995, Congress exempted the Mint from all federal procurement regulations. This statutory exemption is an additional reason to exclude the Mint's contract actions. From fiscal year 1998 through 2000, the excluded contracts that we could quantify accounted for about 10 percent of all federal procurement dollars. The vast majority of the exclusions are for Department of Defense contracts for foreign sales and contracts performed outside the United States. The excluded contract dollars are shown in table 3. The Department of State's policy is not to report its personal services contract actions to FPDC. However, according to FPDC data, some State Department contracting officers did report personal services contracts, in the amount of $6.5 million in fiscal year 2000. The Department of State could not quantify the total dollar value of its personal services contract actions. In fiscal year 2000, Department of Defense contract actions accounted for about $17.4 billion, or 77 percent of the $22.6 billion in total exclusions for fiscal year 2000. Most of the Department's excluded dollars were for foreign sales and contracts performed outside the United States. Figure 1 shows the percent of exclusions by agency. Figure 2 shows the types of exclusions for the Department of Defense, which accounts for 77 percent of the excluded dollars. Appendix VI lists the exclusions by agency for fiscal year 2000. 
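In concept, the achievement percentage FPDC reports is computed against a baseline of procurement dollars net of the excluded contract actions. The arithmetic can be sketched as follows; all dollar figures in the example are hypothetical and are not FPDC data:

```python
def small_business_achievement(total_dollars, excluded_dollars, small_business_dollars):
    """Percent of the goaling baseline awarded to small businesses.

    The baseline is total procurement dollars minus excluded contract
    actions (e.g., foreign sales, work performed outside the United States).
    """
    baseline = total_dollars - excluded_dollars
    return 100.0 * small_business_dollars / baseline

# Hypothetical illustration: $200 billion total, $20 billion excluded
# (about 10 percent, consistent with the share reported above), and
# $41 billion awarded to small businesses.
pct = small_business_achievement(200e9, 20e9, 41e9)
print(f"{pct:.2f}%")  # prints 22.78%
```

The sketch shows why the choice of exclusions matters: removing dollars from the denominator raises the reported achievement percentage even when awards to small businesses are unchanged.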
Although SBA's annual guidance on goal-setting lists the types of contracts that FPDC excludes in its annual calculations of small business achievements, the guidance is confusing and incomplete. The absence of a rationale for each exclusion--as discussed above--and the lack of distinction between categories of exclusions, along with other shortcomings, have made the guidance a less than "user-friendly" document. The guidance is the only source of information available to Congress, the small business community, and federal agencies on the contracts excluded from the small business achievement calculations. However, the guidance presents an unclear picture of the contract exclusions. Examples of weaknesses in the guidance follow.

* FPDC instructs federal agencies not to report contracts that use predominantly non-appropriated funds and contracts from agencies that are not subject to the Federal Acquisition Regulation, such as the Federal Aviation Administration. However, when listing the exclusions, SBA's guidance does not distinguish between these types of contracts--that are never included in the FPDC database--and contracts that SBA explicitly directed FPDC to exclude for purposes of calculating small business achievements (e.g., the Federal Highway Administration exclusion). Consequently, readers reviewing the guidance come away with the impression that SBA is directing more exclusions than is actually the case.

* The guidance states that the exclusions include "Wholesale Supply Sources, such as stock programs of the Defense Logistics Agency, the Department of Veterans Affairs, and military inventory control points." Wholesale supply sources are mandatory sources under the Federal Acquisition Regulation. SBA officials said that this exclusion pertains to transactions between federal agencies. For example, the military services purchase spare parts from the Defense Logistics Agency, which is a mandatory source. However, these transactions are not contract actions. Rather, they are simply intra-governmental transfers of funds and, as such, are not reported to FPDC. Thus, no exclusions are made in practice for the wholesale supply source category. The inclusion of this category in the guidance as an "exclusion" is confusing and misleading.

* The guidance lists contracts awarded and performed outside the United States as a type of exclusion. In practice, however, the exclusion applies only to the place of performance, not to the location at which the contract was awarded. The exception is contract actions reported by certain Department of State embassies, which are specifically identified in FPDC's programming logic and automatically excluded from the small business achievements. SBA officials explained that, except for these embassies, FPDC does not currently have a mechanism for capturing the location of the contract award. All other contract actions performed outside the United States are excluded, regardless of where the contract was awarded. SBA's guidance is misleading in stating that excluded contracts necessarily have to be awarded and performed outside the United States.

The lack of transparency in SBA's process for deriving individual agency goals is a matter of concern. SBA's methodology for establishing these goals is neither clearly documented nor communicated to the procurement agencies. A transparent methodology is especially critical in light of the fact that the Small Business Act directs goals to be established through a consultation process and that this process has been weakened with the direct assignment of goals. SBA's failure to document the reasons for excluding certain types of contracts precludes a clear picture of how small business achievements are calculated. The lack of documentation has also contributed to confusion about the Federal Highway Administration exclusion. 
In addition, the lack of sufficient detail in SBA's guidance makes it difficult for Congress, procurement agencies, and the small business community to be aware of the excluded contracts and the rationale for the exclusions. We recommend that the Administrator of SBA

* Set forth clearly the approach and criteria used to establish individual agency goals. This documentation should be presented in SBA's annual guidance and in letters to individual procurement agencies.

* Ensure that all agencies have an opportunity to negotiate goals for fiscal year 2002 and future years.

* Determine whether the exclusion for the Federal Highway Administration is appropriate.

* Document SBA's rationale for excluding contracts from the small business baseline and ensure that this documentation reflects the fact that the United States Mint is legislatively exempt from the Federal Acquisition Regulation.

* Revise the goaling guidance to (1) clarify the types of contracts that are excluded at the behest of SBA versus those that are not reported to FPDC, (2) delete reference to the wholesale supply source exclusion if it is determined to be inapplicable, and (3) reflect the fact that FPDC excludes contracts performed outside the United States and, with the exception of specific State Department embassies, does not consider where the contract was awarded.

We received written comments on a draft of this report from SBA, the Department of State, and NASA. We also received oral comments and comments via e-mail from 8 other agencies, as discussed below. All agencies generally agreed with our findings and recommendations. SBA concurred with our findings and recommendations and offered additional technical comments which we have incorporated where appropriate. SBA's comments appear in appendix I. The Department of State noted that we had failed to distinguish between those contracts that are performed outside the United States and those that are awarded and performed outside the United States. 
We have clarified the wording on this issue in the report. The State Department's comments appear in appendix II. NASA concurred with our recommendations to the extent that they affect NASA as a procuring agency and noted that the agency continues to work closely with SBA in establishing and exceeding its small business goals. NASA's comments appear in appendix III. We received oral comments or comments via e-mail from the Departments of Defense, Energy, Treasury, and Transportation; the General Services Administration; the Office of Federal Procurement Policy; the U.S. Agency for International Development; and FPDC. The Departments of Defense and Energy and the Office of Federal Procurement Policy concurred with our findings and had no further comments. The General Services Administration's Office of Enterprise Development concurred with our findings and recommendations. The Office added that, to enhance agency performance and results related to small business participation in federal procurement, it is imperative that a collaborative process, informed by reliable trend analysis, be instituted between federal agencies and SBA. The Administration remains committed to providing small businesses with maximum practical procurement opportunities and working with SBA to implement the recommendations in the report. The Department of Transportation and FPDC concurred with the report's findings and offered technical comments that we have incorporated where appropriate. FPDC noted that it was unaware that the State Department policy was not to report personal service contract actions. FPDC provided us with data showing that about $6.5 million had in fact been reported in fiscal year 2000. The Department of Treasury had no comments on the substance of the report, but suggested the following ideas for improving the SBA goaling process: (1) SBA drafts a set of recommended goals for each agency based on statutory requirements, past performance, and prior goals. 
(2) SBA sends goals to agencies for review. (3) Agencies may accept goals or negotiate them based on special circumstances such as major special projects, budget, etc. The U.S. Agency for International Development concurred with the State Department's comment about clarifying our discussion regarding contracts awarded and performed overseas. To identify the process by which small business prime contract goals are established, we reviewed the Small Business Reauthorization Act of 1997 and other pertinent legislation; guidance issued by the Office of Federal Procurement Policy, SBA, and FPDC; prior GAO reports; and FPDC's final reports on small business achievements for fiscal years 1998, 1999, and 2000. We reviewed correspondence between SBA and the Departments of Defense and Energy; the General Services Administration; and NASA. These four agencies awarded about 83 percent of federal prime contract dollars in fiscal year 2000. We also held discussions with officials at SBA, FPDC, and the Office of Federal Procurement Policy; the Departments of Defense, Energy, and State; the General Services Administration; NASA; and the Chair of the Office of Small and Disadvantaged Business Utilization Interagency Council. To determine (1) the types of contracts that are excluded when FPDC calculates small business achievements, as well as the rationale for excluding these contracts and (2) the adequacy of SBA's guidance, we reviewed SBA guidance from fiscal years 1998 through 2001, FPDC guidance and programming logic, relevant legislation, and prior GAO reports. We held discussions with officials at SBA and FPDC; the Departments of Defense, Energy, State, Transportation, and Treasury; the General Services Administration; the Office of Federal Procurement Policy; and NASA. 
To determine the dollar value of the excluded contracts and the dollar value of total procurements, we used FPDC's annual reports on the small business program from fiscal year 1998 through 2000 and special reports generated by FPDC. We obtained the dollar value of contracts awarded by the Federal Aviation Administration directly from the Administration, as these contracts are no longer reported to FPDC. The government-wide dollar value of contracts awarded with non-appropriated funds was not available. We conducted our review between November 2000 and July 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees and the Secretaries of Defense, Energy, State, Transportation, and Treasury. We also are sending copies to the Director, Office of Management and Budget; the Administrator, General Services Administration; the Administrator, NASA; the Administrator, SBA; the Administrator, Office of Federal Procurement Policy; and the Administrator, U.S. Agency for International Development. We will make copies available to others upon request. As requested by your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. Key contributors to this assignment were Michele Mackin, William McPhail, and James Smoak. If you have any questions regarding this report, please contact me at (202) 512-4125 or Hilary Sullivan at (214) 777-5652. Table 4 shows dollars obligated to small businesses for all federal agencies having procurement authority. As previously stated, 20 agencies account for over 99 percent of all federal purchases. 
The table also shows the percent of procurement dollars awarded to small businesses after taking into account each agency's exclusions of certain procurement actions as discussed above. Table 5 shows the total dollar value of fiscal year 2000 procurements and exclusions for all federal agencies having procurement authority. Twenty agencies account for 99.5 percent of all federal purchases.
The Small Business Reauthorization Act of 1997 directed the President to set a goal of awarding not less than 23 percent of the federal government's prime contracting dollars to small business for each fiscal year. The Small Business Administration (SBA) is charged with working with federal agencies to ensure that agency goals, in the aggregate, meet or exceed the overall goal. To help SBA determine if agency goals are being met, the Federal Procurement Data Center (FPDC)--part of the General Services Administration--collects data on all federal contract actions and calculates the government's annual small business achievements on the basis of procurement information received from the agencies. This report reviews (1) SBA's process for establishing annual small business prime contract goals and the reasons for recent changes to the process; (2) the types of contracts that are excluded when achievements are calculated, as well as SBA's rationale for excluding them; and (3) the dollar value of the excluded contracts. GAO found that in fiscal year 2000, SBA began assigning goals directly to individual agencies because the goals that agencies proposed did not in the aggregate reach the mandated 23-percent government-wide goal. When calculating the percentage of federal procurements awarded to small businesses, FPDC excludes (a) those contracts for which small businesses' chances to compete are limited or nonexistent, (b) those using non-appropriated funds, and (c) those made by agencies that are not subject to the Federal Acquisition Regulation or are otherwise exempted by statute. The excluded contracts total about 10 percent of federal procurement dollars and are usually military contracts for foreign sales and contracts performed outside the United States. 
SBA's annual guidance is the only source of information to which federal agencies, the small business community, and Congress can turn for information on the contracts that are excluded from the small business baseline. However, the guidance is unclear and incomplete, precluding a clear picture of the universe of contracts reflected in FPDC's annual reports of small business achievements.
The SSI program is administered by SSA. Claims representatives at SSA field offices are responsible for processing all applications and determining the amount of monthly SSI payments. SSI is a program based on need. The maximum monthly SSI benefit in 1996 was $470. Clients receive less than the maximum or become ineligible for the program altogether if their earned and unearned income and resources exceed certain thresholds. Monthly changes in the amount of non-SSI income that clients receive increase or decrease the amount of SSI benefits to which they are entitled. If clients do not report these income fluctuations promptly to SSA, an overpayment or underpayment will accrue. SSI clients are required to self-report any income that they receive to claims representatives when applying for SSI and are also required to self-report any changes in their monthly income by the 10th day of the following month. SSA policy requires that claims representatives verify this reported income, but does not require that claims representatives check for unreported income unless they suspect that clients are not reporting all their income during initial or subsequent eligibility determinations. SSA uses both financial eligibility reviews, known as redeterminations, and computer matching to identify and prevent overpayments. During redeterminations, clients report their income on mailed questionnaires or during face-to-face or telephone interviews. The method used to contact the client and the frequency of such contacts depend on the likelihood that a client's financial situation will change. Computer matches detect some types of income that clients have not reported. They consist of comparing the SSI payment records against client information contained in the payment files of other government agencies. 
In order to detect unreported income, the earnings and unemployment insurance (UI) benefits reported by SSI clients are compared or matched, for example, against earnings and UI information that employers report to state agencies. However, no computer matches are done that could identify unreported or underreported income from the Aid to Families With Dependent Children (AFDC) and workers' compensation (WC) programs. In 1994, SSA began establishing on-line connections between its field offices and state agencies that had automated databases that could easily be linked to SSA's computer system. SSA did this so that information on earnings and AFDC and UI benefits contained in these databases could be automatically obtained as soon as claims representatives requested it. Claims representatives use such information for a variety of purposes, including verifying the amount of income that clients report when applying for SSI and during redeterminations of their continuing financial eligibility. On-line access began in a limited number of SSA field offices in Nashville, Tennessee, in July 1994. Currently, claims representatives at all 30 Tennessee field offices are able to access state data on earnings, UI, and AFDC for any client using the computers on their desks. In addition, at the time of this report, on-line access to WC information was also being negotiated in Tennessee. This access takes only a minute or two to retrieve the pertinent data. As of mid-1996, on-line access to state wage, UI, and AFDC data was also fully implemented in SSA field offices in North Carolina, South Carolina, and Kentucky, and partially implemented in SSA field offices in 2 other states. Initial contacts or negotiations for on-line access were also being conducted in 11 other states. SSA estimates that in Tennessee, on-line access saves at least $6.50 in administrative costs every time a claims representative obtains data on-line, which occurs thousands of times each month. 
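The computer matches described above amount to joining SSA's SSI payment records with state earnings and UI files on a common identifier and flagging income the client never reported or underreported. A minimal sketch of that comparison, with entirely hypothetical record layouts and values (SSA's actual matching operations are far more involved):

```python
def match_unreported_income(ssi_records, state_records):
    """Compare income clients reported to SSA against state-held records.

    Both arguments map a Social Security number to monthly income in
    dollars: ssi_records holds what the client reported to SSA, and
    state_records holds what the state agency's files show. Returns a
    list of (ssn, reported, state_income) tuples where the state figure
    exceeds the reported one.
    """
    discrepancies = []
    for ssn, reported in ssi_records.items():
        state_income = state_records.get(ssn, 0.0)
        if state_income > reported:
            discrepancies.append((ssn, reported, state_income))
    return discrepancies

# Hypothetical case: one client reported no income, but state wage
# files show $350 a month; the other client's figures agree.
ssi = {"111-22-3333": 0.0, "444-55-6666": 200.0}
state = {"111-22-3333": 350.0, "444-55-6666": 200.0}
print(match_unreported_income(ssi, state))  # [('111-22-3333', 0.0, 350.0)]
```

On-line access changes when this comparison can happen: instead of a periodic batch match that surfaces discrepancies months later, a claims representative can run the equivalent lookup for a single client at application or redetermination time.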
SSA further estimates that having such access results in clients receiving their first benefit check at least 1 week earlier. These improvements occur because in processing an initial claim and in conducting redeterminations, claims representatives no longer have to telephone, write, fax, or sometimes even visit state agencies to obtain necessary documentation--they simply obtain the information on their computers. Claims representatives use on-line access for a variety of purposes, including verifying the amount of AFDC or other benefit income a client reports receiving. According to SSA, in one field office, SSA staff found that on-line access saves 30 to 45 minutes of a claims representative's time per claim. In another field office, a claims representative told us that she was able to obtain needed AFDC information on-line for a co-worker in a field office in another state who had been waiting for 6 weeks to get this information through conventional channels. Another way that claims representatives use on-line access is to follow up on computer-matching results indicating that clients may not have reported all their earnings or UI benefits. This entails contacting the recipient and often an employer or government agency to confirm the receipt of earnings or benefits. One claims representative told us, for example, that she uses on-line access to obtain more up-to-date addresses on the clients she is investigating and that without such information it has taken her up to 2 months to find correct addresses. Other claims representatives told us that they routinely check earnings on-line when investigating computer-matching results because the on-line access can provide more current earnings information and identify additional employers who are not listed on SSA's computer matches with their own earnings data or state data. 
A final way that claims representatives use on-line access is to obtain miscellaneous personal information, such as the Social Security numbers and birthdays of members of a client's household, as well as the types of public assistance household members receive. Because this type of information is frequently available on-line from the states, claims representatives no longer have to delay claims processing while clients search for this information. SSA believes that the benefits of on-line access have been sufficiently demonstrated in field offices in the six states where it has been fully or partially implemented to warrant expanding it to other states. These benefits, which we observed during our field office visits, include reducing the amount of time and paperwork required for income verification as well as the amount of time required for claims processing. At the time of our work, SSA was developing a long-range plan to expand on-line access nationwide, which included developing benchmarks to better measure the amount of time and money such access will save. SSA was also actively pursuing such expansion in field offices in at least 11 additional states. In addition to expanding on-line access to other states, however, SSA could also use it to improve the accuracy of its payments. Our analysis of SSA overpayment data shows that an estimated $131.3 million in overpayments nationwide occurred between June 1993 and May 1994 because some SSI clients did not report or underreported income they received. More than $34 million of these overpayments resulted from unreported or underreported AFDC, UI, and WC benefit income, and more than $97 million resulted from unreported or underreported earned income. If SSA field offices nationwide had used on-line access to state databases, SSI program dollars could have been saved because the overpayments might have been prevented or more quickly detected. 
Preventing overpayments saves program dollars for two reasons: overpayments would never be made and SSA would not have to spend additional money trying to recover them. Detecting overpayments sooner also saves program dollars because, according to SSA officials, the sooner overpayments are detected, the more likely it is that they will be recovered. Of the $34.1 million in overpayments related to AFDC, UI, and WC income, about 89 percent ($30.5 million) occurred because both newly eligible and ongoing clients did not report to the claims representatives handling their cases that they were receiving state-administered benefits. The remaining 11 percent ($3.6 million) resulted when newly eligible and ongoing clients who had reported receiving benefits did not report increases in the amount of their benefits or did so only after they had received their SSI check covering the period during which the increase occurred. (See table 1.) The $30.5 million in overpayments resulting from unreported AFDC, UI, and WC benefits shown in table 1 might have been prevented if claims representatives had used on-line access to state information on these benefits. State welfare and employment departments maintain such information and, according to officials we spoke with, generally update it at least monthly. Through an on-line connection to these data, SSA could easily check to see if SSI clients were receiving these benefits before becoming eligible for SSI or whether clients were receiving these benefits and SSI simultaneously. Alternatively, programming could be put in place that would automatically compare SSA records to relevant state earnings and benefit data and generate lists of discrepancies that were found. Verification would make it less likely that such income would be undetected. On-line access would not have prevented all of the $3.6 million in overpayments resulting from underreported AFDC, UI, and WC from occurring, but might have detected the overpayments sooner. 
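The automated comparison described above can be sketched in outline. The following is a hypothetical illustration only, not SSA's actual programming; the record layouts, program names, and amounts are all invented for the example.

```python
# Hypothetical sketch of an automated check of SSA records against state
# benefit data (AFDC, UI, WC). All field names and amounts are invented;
# this is not SSA's actual system.

def flag_unreported_benefits(reported, state_records):
    """Return (program, reported_amount, state_amount) discrepancies.

    reported      -- monthly benefit amounts the client reported to SSA,
                     keyed by program name ("AFDC", "UI", "WC")
    state_records -- monthly amounts the state agency's database shows
    """
    discrepancies = []
    for program, state_amount in state_records.items():
        reported_amount = reported.get(program, 0.0)
        if state_amount > reported_amount:
            discrepancies.append((program, reported_amount, state_amount))
    return discrepancies

# Example: a client reports UI income but omits an AFDC grant that the
# state's database shows.
issues = flag_unreported_benefits(
    reported={"UI": 120.0},
    state_records={"UI": 120.0, "AFDC": 185.0},
)
```

Run against each claim at intake or redetermination, such a comparison would surface unreported benefit income before an overpayment is made, which is the preventive use of on-line access discussed above.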
Moreover, had they been discovered sooner, the amount overpaid would have been less. These overpayments resulted because clients either did not self-report increases in their income to claims representatives or did so only after receiving their SSI checks. On-line access might not have prevented these overpayments from occurring because the benefit income that caused these overpayments would have to have been paid in order for this income to be reflected in the databases that claims representatives access on-line. On-line access might have detected these overpayments sooner than they would otherwise have been detected, however, because such access allows SSA to verify the amount of these benefits as soon as the benefit amount is updated at the relevant state agency, which occurs once a month or more frequently. The primary ways SSA currently detects SSI overpayments are through client self-reporting; miscellaneous tips from third parties; or, in the case of the UI program only, computer matching. All three methods are limited: both self-reporting and third-party tips may not occur or may not be timely and computer matching, which is done only for UI, cannot detect overpayments until 6 months after they have begun. Many SSI overpayments were not discovered for periods ranging from several months to more than 1 year after they began. According to fiscal year 1993 and 1994 SSA nationwide data, it took on average 9 months to discover overpayments made to newly eligible clients that were caused by the receipt of AFDC, UI, WC, and other types of unearned income. For ongoing clients, it took SSA on average 15 months to discover such overpayments. Of the $97.2 million in overpayments related to earned income, 34.2 percent ($33.2 million) resulted because clients failed to disclose to claims representatives that they had any earned income at all. Table 2 refers to these overpayments as unreported earnings. 
The remaining 65.8 percent ($64.0 million) in overpayments occurred because clients earned more in some months than they had reported to SSA or reported it only after receiving SSI benefits covering the same period. Table 2 refers to these overpayments as underreported earnings. Newly eligible clients commonly either did not report or underreported earnings. Ongoing clients, however, much more commonly underreported earnings as opposed to simply not reporting them. (See table 2.) On-line access has the potential to detect some overpayments caused by unreported or underreported earnings earlier than they can be detected using current SSA procedures. On-line detection cannot prevent overpayments caused by either unreported or underreported earnings, however, because of the time lag between when these earnings occur and when the data pertaining to them are available on-line. In states where on-line access has been implemented, the earnings data that SSA accesses from state employment departments are 4 to 6 months old and are updated four times a year. Thus, by using these data, SSA can detect overpayments 4 to 6 months after they begin. The data that SSA currently uses to detect earnings overpayments in computer matches, on the other hand, are between 6 and 21 months old. Thus, under current review procedures, some overpayments may exist for nearly 2 years before they are detected. Two of SSA's predominant methods of detecting unreported and underreported earnings are (1) requiring that clients self-report all non-SSI income that they receive and (2) conducting computer matches to detect any unreported income once clients are on the rolls. Because self-reporting can result in information on earnings that is even more up to date than what can be obtained on-line, some overpayments could be detected sooner through self-reporting than with on-line access. 
The problem, however, is that SSA cannot rely on clients to fully report all the income that they receive at or near the time that they receive it. In order to detect income that clients do not report, SSA conducts computer matches. The earliest that computer matching could detect an overpayment varies with the age of the data used in the match. For example, the earned-income computer matches rely on earnings data that are anywhere from 6 to 21 months old. Therefore, the earliest that detection could occur would be 6 to 21 months after the overpayment began. The earliest that on-line access could detect an unreported or underreported earnings overpayment, on the other hand, would be 4 to 6 months after it began, because that is the age of the data being used. In field offices that have on-line access, information resulting from SSA's computer matching of state earnings data to detect overpayments duplicates information that claims representatives already have on-line. This is because on-line access is using the same data that SSA headquarters obtains twice a year to conduct wage and UI computer matches. Two SSA officials we spoke with mentioned that replacing the state computer matches with an automatic computerized interface that notified claims representatives when earnings information needed to be checked would result in more timely notification and would also free up SSA headquarters resources currently used to conduct the matches. SSA's state computer-matching program compares the earnings that SSI clients report with the earnings data that employers submit to the states. Employees at SSA headquarters prepare computer tapes of the SSI recipients in each state and mail them to the appropriate state employment departments. State employees compare these files against their own earnings databases, adding the names of any SSI recipients who received earnings over the previous 6 months, and mail the tapes back to SSA. 
At SSA headquarters, computers then compare the state earnings data against the earnings reported by SSI recipients and generate lists of recipients with unreported earnings. SSA headquarters then provides these lists to the appropriate field offices where claims representatives investigate the unreported earnings. With an automatic interface using the same telecommunication lines now connecting SSA field offices on-line with the states, this earnings comparison could be done without having SSA headquarters employees prepare computer tapes and issue lists to field offices. Such an interface would consist of programming that would automatically do this comparison and issue these lists over the telecommunication lines, thereby saving administrative costs. The claims representatives who have on-line access in the three states we visited--Tennessee, South Carolina, and Wisconsin--continue to rely on the traditional methods of self-reporting and computer matching to identify overpayments caused by unreported or underreported income. They normally do not use on-line access to identify overpayments resulting either from state-administered benefits or from earnings. When asked why, they said it was because they did not believe that on-line access would prevent or more quickly detect a significant amount of overpayments. This is because claims representatives believe that they are normally able to tell from experience when a client is hiding income and, in such situations, they then try to uncover it. They further stated that SSA policy normally does not require that they check independent information sources to determine whether clients may be receiving various types of income that they have not reported, a process referred to as negative verification. 
However, several SSA officials we spoke with mentioned that on-line access could make checking for unreported income sources feasible in states where data are sufficiently automated because it would be inexpensive and nearly instantaneous. Using benefit and earnings overpayment data from Tennessee between September 1994 and August 1995, we compared conventional overpayment detection methods with detection using on-line access. On-line access can discover overpayments related to AFDC, UI, and WC within 1 month or less. However, we determined that only 18 percent of such overpayments were detected within this time frame using conventional methods. We also analyzed Tennessee overpayment cases to see how many earned-income overpayments might have been detected sooner if claims representatives had used on-line access as opposed to current methods. Because on-line access can discover overpayments related to earnings in 4 to 6 months, we determined how many earnings overpayments detected through conventional means were discovered more than 6 months after they occurred. We found that on-line access could have detected nearly 60 percent of the overpayment dollars in Tennessee occurring between September 1, 1994, and August 31, 1995, more quickly than was actually done using conventional methods. Moreover, on-line access was about equally effective in detecting overpayments more quickly for both newly eligible and ongoing clients. Establishing on-line connections between SSA field offices and the appropriate agencies within a state is not technically difficult or costly in many states. Considerable effort, however, may be required to identify state agencies willing to give SSA on-line access and to negotiate the necessary agreements. Some state agencies may not want to grant SSA such access because they are concerned that the privacy of individuals may be violated by sharing such personal information as income and Social Security numbers on-line. 
Others, while agreeing to give SSA on-line access in principle, may refuse to negotiate the necessary agreements to implement such access until SSA agrees to grant them reciprocal on-line access to SSA data. The technology needed for on-line access consists of (1) telecommunication lines that link SSA field office computers to state databases and (2) programming that establishes the actual on-line connections. Most states only need upgrades to existing telecommunication lines and programming to implement on-line access. Telecommunication lines that already exist between SSA headquarters and the states can be used in many instances to link state databases with SSA field offices. In cases where upgraded lines may be needed, SSA officials told us that, based on their experience where on-line access has been implemented, such lines are neither costly nor difficult to install. According to an SSA systems official, software that is currently used to route data between SSA's headquarters and state agencies can be used, with minor programming changes, for the actual on-line transmissions. This same official further explained that the programming changes that must be made at SSA headquarters take about 15 to 30 minutes for each state whose data are being accessed. The programming that must be done by the state agencies, however, is more involved. It consists of inputting the names and passwords of the new SSA users so that the state computer systems will (1) allow these individuals access to the state data and (2) block access to any data elements containing personal information about individuals that these users do not have a legal right to see. SSA has not tracked the costs it has incurred when making the necessary changes to its systems to implement on-line access because they are viewed as minimal. Moreover, these costs are not projected to be extensive nationwide, should on-line access be implemented in all field offices. 
One systems official estimated, for example, that it would take one-fourth of 1 work year to implement on-line access nationwide. According to SSA's district managers in states that have implemented on-line access, SSA has committed to pay a total of $67,000 for hardware and software changes necessary for SSA to access state systems. Average state charges when a claims representative accesses state records range from nothing to 1.5 cents. Moreover, a number of state agencies have expressed willingness to provide free access to state records in exchange for on-line access to SSA data. (For more information on reciprocal access, see p. 14 under the heading "States Have Resisted On-Line Access Until SSA Gives Them Reciprocal Access.") Finally, some states that do not have fully automated data are willing to explore giving SSA on-line access in exchange for SSA sharing the costs of automating their data. SSA would incur more significant costs should an on-line system be developed that would permit claims representatives in one state to access on-line state information in any other state. These additional costs would include developing commonly formatted computer screens displaying state data for SSA claims representatives; single names and passwords that SSA claims representatives could use for all states so that the audit trails would still be maintained; and a common menu from which SSA claims representatives could access data from all states. Some states have resisted allowing on-line access to their information because of their concerns about privacy. Privacy concerns center around ensuring that personal information that an individual provides to one government agency is protected from being disclosed to other agencies that do not have a legal right to it. Granting SSA on-line access to state data does not violate the privacy of individuals who provide this information, because SSA is simply using on-line connections to access information to which it has a legal right. 
SSA already routinely obtains this information from government agencies during the claims-handling process. Moreover, state agencies can decide which parts of a record claims representatives will view. Although on-line access has not changed the type of information that claims representatives obtain from other agencies, it has made obtaining this information faster and easier. One state official responsible for the security of data told us that this can cause personal privacy concerns, because in making information easier to obtain for official use, it also becomes easier to obtain for unofficial or illegal use. SSA and state officials heavily involved with on-line access believe that it is possible to develop on-line systems that minimize the possibility of accessing personal data for unofficial or illegal use and also identify the perpetrators, should such abuses occur. They further believe that abusing the access to personal information is not more likely with on-line access than with other data-sharing methods. SSA and the states have taken steps specified in federal security standards that, we were told, ensure the confidentiality and security of on-line data in states where on-line access has been implemented. 
These include (1) installing software that screens each user before establishing an on-line connection to ensure that the connection is being made from an SSA field office by a valid user and also screens specific requests for items of information, thereby limiting what queries a user can get answered; (2) instituting written agreements between SSA and the state agencies regarding how the on-line data will be used; (3) requiring that SSA claims representatives sign releases stating that they will not access state data for any unofficial reasons; (4) using computer lines dedicated solely to the transmission of data between government agencies; and (5) issuing all SSI claims representatives passwords that they must enter before gaining access to the on-line data. This last feature leaves an audit trail each time an agency's data are accessed on-line. If abuse is suspected, an agency official can check this trail to see under which SSA employee's identification code the state data were accessed, which data screens were viewed, and when they were viewed. Another reason that some state agencies have been reluctant to grant SSA field offices on-line access to their data is that, until recently, SSA has refused to give states on-line access to SSA data. State agencies use information on recipients of SSA benefits for a variety of reasons. For example, states are required by law to determine whether welfare clients receive any SSA benefits. This is accomplished by comparing an electronic file of SSA recipients of benefits against the client databases maintained by state agencies. Historically, SSA has had concerns about granting on-line access to state agencies for some of the same reasons that have made states reluctant to grant SSA on-line access. For example, like some state officials, some SSA officials are also concerned that on-line access could compromise the security and confidentiality of the personal information contained in SSA databases. 
Although we did not evaluate the effectiveness of SSA's security procedures, SSA system and policy officials told us that these procedures will be stringent enough to grant government agencies on-line access to its data without compromising confidentiality. SSA intends, for example, to evaluate the type of telecommunication connections a state will use to access SSA data and then institute procedures specified in federal security standards that are commensurate with the security risks such on-line access may pose. These measures will include up-front screening to ensure that the on-line data requests are from valid users in valid states and the creation of audit trails that can track on-line usage back to individual users. SSA will also employ software that automatically checks for sudden changes in usage patterns, which might indicate questionable use of the data. SSA is beginning to develop policies for granting on-line access to its databases. These policies detail the circumstances under which agencies will be granted on-line access and how the security and privacy of SSA data will be maintained. According to one official involved in developing these on-line policies, SSA is moving in this direction so that state agencies will agree to grant SSA on-line access to their data. As part of its initial move toward implementing reciprocal on-line access, SSA is planning to pilot state access to SSA data in North Carolina and Tennessee in early 1997. The same connections that give SSA field offices on-line access to state data in these states will be used to give state agencies on-line access to SSA data. Only those agencies that are legally entitled to SSA data will be able to access such data on-line. Furthermore, these agencies will only be able to access those portions of SSA data to which they are already legally entitled. 
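The safeguards described above--up-front screening of users, per-user limits on which data screens may be viewed, and an audit trail of every query--can be illustrated with a small sketch. Everything here, including the class name, field names, and credentials, is hypothetical and not drawn from SSA's or any state's actual system.

```python
# Hypothetical sketch of the on-line access safeguards: screen each user
# before establishing a connection, block data screens the user has no
# legal right to see, and record every access in an audit trail.
import datetime

class StateDataGateway:
    def __init__(self, authorized_users, permitted_screens):
        self.authorized_users = authorized_users    # user id -> password
        self.permitted_screens = permitted_screens  # user id -> allowed screens
        self.audit_trail = []                       # (user, screen, timestamp)

    def query(self, user, password, screen):
        # Screen the user before establishing the on-line connection.
        if self.authorized_users.get(user) != password:
            raise PermissionError("not a valid user")
        # Block data elements the user has no legal right to view.
        if screen not in self.permitted_screens.get(user, set()):
            raise PermissionError("screen not permitted for this user")
        # Leave an audit trail entry for each access.
        self.audit_trail.append((user, screen, datetime.datetime.now()))
        return f"data for screen {screen}"

# A valid claims representative views a permitted screen; the access
# is recorded in the audit trail.
gateway = StateDataGateway(
    authorized_users={"ssa_rep_01": "s3cret"},
    permitted_screens={"ssa_rep_01": {"AFDC", "UI"}},
)
record = gateway.query("ssa_rep_01", "s3cret", "AFDC")
```

If abuse were suspected, an official could inspect `gateway.audit_trail` to see under which user's identification the data were accessed, which screens were viewed, and when, mirroring the audit-trail check described above.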
On-line access has demonstrated that programmatic and administrative savings can be realized that reduce SSA's costs and improve service to the public by reducing the time it takes to process a claim. With appropriate attention to privacy and computer security issues, the expanded use of on-line access can enhance SSA's ability to efficiently and effectively manage the SSI program. We found that on-line access could help SSA prevent or more quickly detect many overpayments. Preventing or detecting overpayments more quickly would bolster the integrity of the SSI program by better ensuring that clients are only receiving those benefits to which they are entitled. We estimated that about $131.3 million in overpayments could have been avoided or more quickly detected with the use of on-line access. Although SSA has efforts under way to implement on-line access nationwide, it has not (1) examined its policy regarding how on-line access can be used in overpayment detection and prevention in places where it has been implemented, (2) investigated how such access can be used to replace less timely and labor-intensive computer-matching methods for overpayment detection such as those used for earnings and unemployment insurance, and (3) determined how on-line access can be used to identify overpayments that are not currently detected through computer matching with the payment systems of the AFDC and WC programs. 
To prevent overpayments or detect them sooner, we recommend that the Commissioner of SSA (1) require claims representatives to use on-line access to routinely check for unreported sources of income when initial and subsequent assessments of eligibility are done, provided that it is cost-effective to do so and that the data available on-line pertain to the time periods covered by SSI payments, and (2) develop automatic interfaces with state databases that comply with laws and standards governing computer matching, privacy, and security and that can (a) more fully automate the earnings and UI computer matches and (b) identify additional income sources that do not currently have computer matches. In commenting on a draft of this report, SSA agreed that on-line access can be a useful tool for reducing overpayments, and it also agreed with our recommendations. However, SSA officials noted that although on-line access is easy and inexpensive in many states, this may not be true for all states. They cited, for example, that some state agencies may not have automated data or the systems within an agency or between agencies in the same state may not be compatible. SSA also noted that because on-line access will presumably be more difficult and costly in some states than in others, a more thorough analysis of its costs and benefits is necessary before it is used for overpayment prevention. Our report indicates that part of SSA's expansion of on-line access would include developing benchmarks to better measure the costs and benefits of such a system. However, as our report also indicates, there are states where on-line connections can now access data inexpensively and easily. Thus, there is no reason why such access cannot be used for overpayment prevention in these instances while SSA pursues the cost-effectiveness of on-line access for other states. SSA's formal response is in appendix II. 
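The recommended automatic interface is, at its core, the same comparison now performed with mailed tapes, carried out in software over the existing telecommunication lines. A minimal sketch follows; the data shapes, Social Security numbers, and office names are invented for illustration and do not describe SSA's actual records.

```python
# Hypothetical sketch of an automatic matching interface: compare state
# earnings data against the earnings SSI recipients reported and route a
# discrepancy list to each servicing field office.

def build_field_office_lists(ssa_reported, state_earnings, office_of):
    """Return {field_office: [ssn, ...]} for recipients whose
    state-recorded earnings exceed what they reported to SSA."""
    lists = {}
    for ssn, state_amount in state_earnings.items():
        if state_amount > ssa_reported.get(ssn, 0):
            lists.setdefault(office_of[ssn], []).append(ssn)
    return lists

# One recipient with unreported earnings, one whose report matches.
lists = build_field_office_lists(
    ssa_reported={"111-22-3333": 0, "222-33-4444": 500},
    state_earnings={"111-22-3333": 900, "222-33-4444": 500},
    office_of={"111-22-3333": "Nashville", "222-33-4444": "Memphis"},
)
```

Because the discrepancy lists would be generated and transmitted automatically, no headquarters staff would need to prepare computer tapes or mail results to field offices, which is the administrative saving the recommendation anticipates.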
We are sending copies of this report to relevant congressional committees, the Commissioner of Social Security, and other interested parties. If you have any questions about this report, please contact me on (202) 512-7215. Major contributors to this report are listed in appendix III. This report focuses on the extent to which on-line access can (1) improve the administration of the SSI program, (2) reduce overpayments, and (3) be easily implemented in SSA offices nationwide. To accomplish our objectives we (1) visited states where SSA field offices had on-line access to state data, (2) interviewed officials at SSA headquarters and regional and field offices, and (3) analyzed overpayment data from several sources. We conducted field visits in two states where on-line access to state earnings, UI, and AFDC data was fully implemented--Tennessee and South Carolina. We also visited Wisconsin, where on-line access was being piloted in the largest SSA field offices and interviewed officials involved with the project at SSA's headquarters in Baltimore. During these visits, we met with SSA officials and claims representatives as well as with the state government officials whose data were being accessed. We discussed (1) the pros and cons of on-line access, (2) how such access was being used in each state, (3) how it could potentially be used to prevent overpayments and increase SSA staff efficiency, (4) the start-up and continuing costs associated with such access, and (5) the issues involved in replicating on-line access in other states. In regard to overpayments, the report examines those caused by unreported or underreported earnings, UI, WC, and AFDC. These sources were chosen because (1) they were common causes of overpayments and (2) many of these overpayments could be prevented or detected sooner by claims representatives electronically accessing state data. 
We obtained nationwide aggregate data from SSA overpayment studies on all overpayments resulting from earnings, UI, WC, and AFDC between June 1, 1993, and May 31, 1994. We then determined how many of these overpayments resulted because clients did not report or underreported the income they received from the above sources. We did this by analyzing information contained in the data regarding why these overpayments occurred. The nationwide aggregate overpayment data that we obtained were limited in that they only showed how long the overpayments lasted on average. We obtained more detailed information, however, by examining all SSI beneficiary cases in Tennessee that had overpayments caused by wages, AFDC, UI, and WC. In order to determine the point at which on-line access might have prevented or more quickly detected these overpayments, we analyzed (1) when each of these overpayments began and ended and (2) what percentage of them could have been more quickly detected had on-line access been used. This determination was made on the basis of the age of the on-line data in Tennessee and how frequently they were updated. The records we examined of Tennessee residents pertained to clients who were assessed for initial or continuing eligibility between September 1, 1994, and August 31, 1995. In addition to those named above, the following individuals made important contributions to this report: Christopher C. Crissman assisted as an adviser, James Wright and John Smale assisted with design support, and Nancy Crothers assisted in report design. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Social Security Administration's (SSA) use of online access to state databases on income information to determine whether such access could: (1) improve the administration of the Supplemental Security Income (SSI) Program; (2) reduce overpayments; and (3) be easily implemented nationwide in SSA field offices. GAO found that the use of online access to state-maintained income data could: (1) improve the administration of the SSI program by cutting the time needed to verify client information; (2) prevent or detect overpayments due to underreported and unreported benefit income; (3) replace the current computer-matching system, which relies on old data; (4) be safeguarded by security measures that protect client confidentiality; and (5) be inexpensively implemented nationwide with minimal programming.
The United States imports substantial amounts of food. In 1993, the value of food imports from countries other than Canada amounted to about $21 billion. Similarly, the value of Canadian food imports from countries other than the United States totaled about $3.2 billion. The United States and Canada are concerned about imported foods since these foods are produced and processed under unknown conditions. Each country has several federal agencies that regulate and monitor the safety of imported foods. In the United States, the Department of Health and Human Services' Food and Drug Administration (FDA) is the federal agency responsible for overseeing the safety of most domestic and imported food products, including fish and seafood. The U.S. Department of Agriculture's Food Safety and Inspection Service (FSIS) is responsible for ensuring the safety of domestic and imported meat and poultry products. In general, Health Canada establishes the standards for food safety and has overall responsibility for ensuring that all food sold in Canada meets federal health and safety standards. Health Canada shares responsibility for inspections with Agriculture and Agri-Food Canada, which is responsible for inspecting meat, poultry, fruits, vegetables, dairy products, and eggs, and with Fisheries and Oceans Canada, which is responsible for inspecting fish and seafood. The two countries' systems and standards for ensuring the safety of imported foods are similar. For meat and poultry, both FSIS and Agriculture and Agri-Food Canada certify that foreign countries' processing and inspection systems are equivalent to the respective U.S. and Canadian domestic systems, then supplement that certification with inspections of foreign plants and spot checks of imports. For other foods, the two countries generally exercise control by selectively inspecting imports as they enter the country, although FDA and Fisheries and Oceans Canada inspect some foreign plants as well. 
The United States and Canada each inspect a limited amount of imported foods. The countries determine which foods to inspect on the basis of factors such as experience with the products and producers and their resources. In the United States, FSIS samples and examines about 15 percent of the meat and poultry imported from countries other than Canada. FDA samples and analyzes, on average, less than 2 percent of all other imported foods; inspection rates are higher for high-risk foods, such as seafood and low-acid canned foods, and those with a significant history of violations. In Canada, Agriculture and Agri-Food Canada inspects about 20 percent of the imported meat and poultry and lesser amounts of other foods. Fisheries and Oceans Canada inspects, on average, about 17 percent of the imported seafood. In both countries, foods that do not pass inspection may be conditioned, destroyed, or reexported at the discretion of the importer with one exception--meat rejected by Canada cannot be conditioned. Some imported products, such as those with a history of violations, are detained automatically when they enter either the United States or Canada; inspectors must specifically determine that these foods comply with applicable standards. Other products are inspected according to a sampling plan determined by such factors as the risk of contamination. Recent international events are smoothing the way for increased trade in foods. Under the General Agreement on Tariffs and Trade, the world's nations are moving toward equivalent food safety standards that are expected to facilitate trade and thus increase food imports into the United States and Canada. Furthermore, the North American Free Trade Agreement (NAFTA) promises to lessen customs restrictions on trade between the United States and Canada, making it easier for foods imported into one country to pass into the other. Finally, U.S. 
and Canadian efforts under the Canada-United States Free Trade Agreement and NAFTA are helping harmonize the two countries' food safety standards, making it easier for the two countries to share information and to rely on each other's food safety information. Recognizing the value of sharing information about imported foods, the United States and Canada have, over time, developed an ad hoc system for communicating selected information about unsafe food imports. Agency-to-agency arrangements have been established between (1) FSIS and Agriculture and Agri-Food Canada for meat and poultry products, (2) FDA and Fisheries and Oceans Canada for fish and seafood products, and (3) FDA and Health Canada for all other food products. In addition, some officials communicate with one another at the regional level. For example, FDA officials in Blaine, Washington, work closely with officials of Fisheries and Oceans Canada, located about 40 miles away in Vancouver, Canada. Table 1 describes selected regional and agency-to-agency arrangements for sharing information on potentially unsafe imports, foods rejected as unsafe, and inspections of foreign plants. Opportunities exist for improving the current U.S.-Canada information-sharing system in two areas: (1) shipments of unsafe foods refused at one country's port of entry and (2) inspections of foreign food-processing plants. In addition, although each country inspects some foreign plants that export to it, the two countries do not maximize the use of limited resources by coordinating inspections of plants that export to both countries. While the current ad hoc system alerts each country to some problems with unsafe imported foods detected by the other, it does not ensure that all relevant information is exchanged. Neither the United States nor Canada informs the other country of refused shipments being returned to the country of origin, even though those shipments could be rerouted once they leave port. 
Furthermore, the two countries do not always notify each other about shipments rejected at their respective borders that are then sent directly to the other country. For example, in 1993 the Canadian government notified U.S. officials about rejected shipments in 25 of 37 instances. Similar information on U.S. notifications to Canada was not available because the U.S. agencies do not consistently document this information. The United States is even less systematic in notifying Canada of such refused shipments, in part because FDA officials, unlike their Canadian counterparts, usually do not know where the shipments are going until they have left the country. The U.S. Customs Service, which is responsible for ensuring that rejected shipments of food leave the United States, generally does not notify FDA until after the shipments have left. Even when U.S. officials are notified of problem shipments, their follow-up is sporadic. For example, for the 25 rejected shipments that Canadian officials reported to the United States in 1993, the United States traced 11 shipments and part of another, while 13 shipments and part of another remained unaccounted for. FSIS was responsible for eight of the unaccounted-for shipments. FSIS either did not track or did not document its tracking of these shipments. FDA, which was responsible for the remaining unaccounted-for shipments, could not track them because it either could not identify the port of entry or had no record of the Canadian notification. Officials from FDA and FSIS cited scarce resources as their reason for not putting more emphasis on tracking each rejected shipment. For details on Canada's tracking of shipments rejected by the United States, see the accompanying OAG report. The United States and Canada have an opportunity to build on each other's information about foreign food-processing plants that ship products to North America. 
Although both countries inspect these plants, they share little information on the results of those inspections or recurring problems with the plants. For meat-processing plants, where most U.S. foreign inspections occur, the only inspection information shared is FSIS' required annual list of plants that have been certified and decertified. Agriculture and Agri-Food Canada receives a copy of this published list. However, neither FSIS nor Agriculture and Agri-Food Canada asks for or provides the results of its inspections to its counterpart agency. For foreign seafood-processing plants, FDA and Fisheries and Oceans Canada began, in February 1994, to discuss sharing the results of their inspections annually. To date, FDA has provided Fisheries and Oceans Canada with a list of the foreign plants it has inspected, along with the inspection results. A more routine exchange of information would enable both countries to learn where duplication is occurring or coverage is lacking and help them identify problem plants for future inspections. Additional information about each country's experiences in inspecting foreign plants could, in turn, enable the United States and Canada to maximize scarce inspection resources by coordinating such inspections. For example, between 1991 and 1993, FSIS and Agriculture and Agri-Food Canada inspected the same meat and poultry plants 103 times--6 percent of the United States' annual inspections and 76 percent of Canada's inspections. During the same period, FDA and Fisheries and Oceans Canada inspected five of the same tuna-processing plants--3 percent of FDA's inspections of low-acid canned food plants and 33 percent of Fisheries and Oceans Canada's inspections. At the same time, many foreign food-processing plants were not inspected by either country. For example, in 1991, 1992, and 1993, neither FSIS nor Agriculture and Agri-Food Canada inspected 300 (on average) of the 750 foreign meat-processing plants certified to export to the United States.
For the same period, neither country inspected over 35,000 of the estimated 36,000 processing plants that export seafood or low-acid canned food to the United States. The disparity between the way the United States covers meat-processing plants and other food-production plants in foreign countries occurs largely because of the way U.S. laws divide responsibility and resources for inspecting such plants between FSIS and FDA. For example, FSIS, which oversees approximately 750 foreign plants certified to export to the United States, spent $2.5 million to inspect foreign plants in fiscal year 1993. FDA, which spent about $300,000 to inspect foreign plants in the same period, is responsible for the safety of all other imported foods, including high-risk foods, from over 36,000 foreign plants. U.S. and Canadian officials acknowledge the need to avoid duplicating effort and to enhance coverage by sharing inspection results. According to officials from both governments, the two nations would have to establish that their foreign inspection systems were comparable before they could fully depend on the results of each other's foreign inspections. The domestic inspection programs for meat and poultry in both countries are considered to be equivalent. Therefore, U.S. agency officials believe that the two countries' systems for inspecting all foods are probably similar enough so that the United States and Canada could use each other's inspection results when planning upcoming inspections in order to target their resources more efficiently and effectively. As the border between the United States and Canada becomes more open, the two countries are becoming increasingly aware of the value of cooperating fully to ensure that unsafe food does not enter either country and of making better use of each country's limited resources. 
Agencies and some agency officials have taken actions on their own to establish informal cross-border arrangements to share information about unsafe imported foods. We believe these efforts are commendable. By notifying each other about rejected shipments and making each other aware of which processing plants have passed or failed inspection, the United States and Canada could build on the current system and better ensure that unsafe food does not enter either country. Furthermore, inspection coverage of foreign food-processing plants could be more comprehensive if the two countries coordinated inspections. To better ensure the safety of imported foods and to make better use of limited resources, we recommend that the Secretaries of Agriculture and of Health and Human Services take the lead in developing, in concert with their Canadian counterparts and to the extent necessary with the U.S. Customs Service, a more comprehensive system for sharing crucial information on and coordinating activities for unsafe imported foods. As part of this comprehensive system, the agencies should consider coordinating U.S. and Canadian inspections of foreign food-processing plants. While developing a comprehensive bilateral system will take some time, there are shorter-term steps that U.S. agencies could take to tighten control over unsafe food that has been rejected by one country and routed to the other. Specifically, we recommend that the Secretaries of Agriculture and of Health and Human Services direct that FSIS and FDA ensure that available information on rejected shipments being sent to Canada is transmitted to the Canadian government and that information from the Canadian government on such shipments being sent to the United States is consistently followed up. We discussed a draft of this report with FSIS' Director, Review and Assessment Programs, and FDA's Director, Division of Import Operations Policy. 
They generally agreed with the information we presented, and we incorporated their suggestions where appropriate. In developing information for this report, we spoke with and obtained documentation from FDA and FSIS officials at headquarters and at selected regional and port sites in the states of Washington, California, and New York. We provided relevant parts of this information to our counterpart OAG team. In turn, we received from the OAG team information from officials at Agriculture and Agri-Food Canada, Fisheries and Oceans Canada, and Health Canada in headquarters and corresponding regional locations. We conducted our review between November 1993 and October 1994 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees; interested Members of Congress; the Canadian Parliament; the Secretaries of Agriculture and Health and Human Services; the Commissioner, Food and Drug Administration; the Acting Administrator, Food Safety and Inspection Service; and other interested parties. We will also make copies available to others on request.

Robert A. Robinson, Associate Director
Edward M. Zadjura, Assistant Director
Karla J. Springer, Project Leader
Keith W. Oleson, Adviser
Marci D. Kramer, Evaluator
Donya Fernandez, Evaluator
Carol Herrnstadt Shulman, Communications Analyst

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066.
Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO reviewed how the United States and Canada share information on and coordinate activities for shipments of unsafe imported foods, focusing on whether opportunities exist to make better use of limited inspection resources and thereby increase the likelihood that unsafe imported foods would be stopped from entering the United States and Canada. GAO found that: (1) U.S. and Canadian food safety officials share information through generally informal agency-to-agency exchanges and cross-border contacts at ports of entry; (2) U.S.-Canadian information sharing efforts focus primarily on shipments of potentially unsafe foods, food shipments refused at one port of entry that may be rerouted to the other port, and inspections of foreign food-processing plants; (3) opportunities exist for the United States and Canada to develop a more comprehensive system for sharing information about shipments of unsafe foods and inspections of foreign food-processing plants and for coordinating these inspections; and (4) improvements in U.S. and Canadian information sharing efforts would enable the two nations to better target their limited inspection resources.
When the Congress enacted FACA in 1972, one of the principal concerns it was responding to was that certain special interests had too much influence over federal agency decision makers. In this act, the Congress articulated certain principles regarding advisory committees, including broad requirements for balance, independence, and transparency. Specifically, FACA requires that the membership of committees be "fairly balanced in terms of points of view presented and the functions to be performed by the advisory committee." Courts have interpreted this requirement as providing agencies with broad discretion in balancing their committees. Further, FACA requires that any legislation or agency action that creates a committee contain provisions to ensure that the advice and recommendations of the committee will be independent and not inappropriately influenced by the appointing authority (the agency) or any special interest. Finally, FACA generally requires that agencies announce committee meetings ahead of time and give notice to interested parties about such meetings. With some exceptions, the meetings are to be open to the public, and agencies are to prepare meeting minutes and make them available to interested parties. FACA also set broad guidelines for the creation and management of federal advisory committees, most of which are created or authorized by the Congress. Agencies also establish committees using their general statutory authority, and some are created by presidential directives. Further, the act requires that all committees have a charter, and that each charter contain specific information, including the committee's scope and objectives, a description of duties, and the number and frequency of meetings. As required by FACA, advisory committee charters generally expire at the end of 2 years unless renewed by the agency or by the Congress. This requirement encourages agencies to periodically reexamine their need for specific committees. 
GSA, through its Committee Management Secretariat, is responsible for prescribing administrative guidelines and management controls applicable to advisory committees governmentwide. However, GSA does not have the authority to approve or deny agency decisions regarding the creation or management of advisory committees. To fulfill its responsibilities, GSA has developed guidance to assist agencies in implementing FACA requirements, provides training to agency officials, and was instrumental in creating the Interagency Committee on Federal Advisory Committee Management. GSA also has created and maintains an online FACA database (available to the public at www.fido.gov/facadatabase) for which the agencies provide and verify the data, which include committee charters; membership rosters; budgets; and, in many cases, links to committee meeting schedules, minutes, and reports. The database also includes information about a committee's classification (e.g., scientific and technical, national policy issue, or grant review). While GSA's Committee Management Secretariat provides FACA guidance to federal agencies, each agency also develops its own policies and procedures for following FACA requirements. Under FACA, agency heads are responsible for issuing administrative guidelines and management controls applicable to their agency's advisory committees. Generally, federal agencies have a reasonable amount of discretion with regard to creating committees, drafting their charters, establishing their scope and objectives, classifying the committee type, determining what type of advice they are to provide, and appointing members to serve on committees. In addition, to assist with the management of their federal advisory committees, agency heads are required to appoint a committee management officer to oversee the agency's compliance with FACA requirements, including recordkeeping. 
Finally, agency heads must appoint a designated federal official for each committee to oversee its activities. Among other things, the designated federal official must approve or call the meetings of the committee, approve the agendas (except for presidential advisory committees), and attend the meetings. OGE is responsible for issuing regulations and guidance for agencies to follow in complying with statutory conflict-of-interest provisions that apply to all federal employees, including special government employees serving on federal advisory committees. A special government employee is statutorily defined as an officer or employee who is retained, designated, appointed, or employed by the government to perform temporary duties, with or without compensation, for not more than 130 days during any period of 365 consecutive days. Many agencies use special government employees, either as advisory committee members or as individual experts or consultants. Special government employees, like regular federal employees, are to provide their own best judgment in a manner that is free from conflicts of interest and without acting as a stakeholder to represent any particular point of view. Accordingly, special government employees appointed to federal advisory committees are hired for their expertise and skills and are expected to provide advice on behalf of the government on the basis of their own best judgment. Special government employees are subject to the federal financial conflict-of-interest requirements, although ones that are somewhat less restrictive than those for regular federal government employees. 
Specifically, special government employees serving on federal advisory committees are provided with an exemption that allows them to participate in particular matters that have a direct and predictable effect on their financial interest if the interest arises from their nonfederal employment and the matter will not have a special or distinct effect on the employee or employer other than as part of a class. This exemption does not extend to a committee member's personal financial and other interests in the matter, such as stock ownership in the employer. If a committee member has a potential financial conflict of interest that is not covered under this or other exemptions, a waiver of the conflict-of-interest provisions may be granted if the appointing official determines that the need for the special government employee's services outweighs the potential for conflict of interest or that the conflict is not significant. This standard for granting waivers is less stringent than the standard for regular government employees. The principal tool that agencies use to assess whether nominees or members of advisory committees have conflicts of interest is the OGE Form 450, Executive Branch Confidential Financial Disclosure Report, which special government employees are required to submit annually. The Form 450 requests financial information about the committee member and the member's spouse and dependent children, such as sources of income and identification of assets, but it does not request filers to provide the related dollar amounts, such as salaries. Even if committees are addressing broad or general issues, rather than particular matters, committee members hired as special government employees are generally required to complete the confidential financial disclosure form.
Agencies appoint ethics officials who are responsible for ensuring agency compliance with the federal conflict-of-interest statutes, and OGE conducts periodic audits of agency ethics programs to evaluate their compliance and, as warranted, makes recommendations to agencies to correct deficiencies in their ethics programs. Under administrative guidance initially developed in the early 1960s, a number of members of federal advisory committees are not hired as special government employees, but are instead appointed as representatives. Members appointed to advisory committees as representatives are expected to represent the views of relevant stakeholders with an interest in the subject of discussion, such as an industry, a union, an environmental organization, or other such entity. That is, representative members are expected to represent a particular and known bias--it is understood that information, opinions, and advice from representatives are to reflect the bias of the particular group that they are appointed to represent. Because these individuals are to represent outside interests, they do not meet the statutory definition of federal employee or special government employee and are therefore not subject to the criminal financial conflict-of-interest statute. According to GSA and OGE officials, in 2004 reliable governmentwide data on the number of representative members serving on federal advisory committees were not available. In 2004, we concluded that additional governmentwide guidance could help agencies better ensure the independence of federal advisory committee members and the balance of federal advisory committees. We found that OGE guidance to federal agencies had shortcomings and did not adequately ensure that agencies appropriately appoint individuals selected to provide advice on behalf of the government as special government employees. 
We found that some agencies were inappropriately appointing members as representatives who, as a result, were not subject to conflict-of-interest reviews. In addition, GSA guidance to federal agencies, and agency-specific policies and procedures, needed to be improved to better ensure that agencies elicit from potential committee members information that could be helpful in determining their viewpoints regarding the subject matters being considered--information that could help ensure that committees are, and are perceived as being, balanced. Specifically, we found the following: OGE guidance on the appropriate use of representative or special government employee appointments to advisory committees had limitations that, we believed, contributed to three of the agencies we reviewed continuing the long-standing practice of appointing essentially all members as representatives. That is, the Department of Energy, the Department of the Interior, and the Department of Agriculture had appointed most or all members to their federal advisory committees as representatives--even in cases where the members were called upon to provide advice on behalf of the government and thus would be more appropriately appointed as special government employees. Because conflict-of-interest reviews are required only for federal or special government employees, agencies do not conduct conflict-of-interest reviews for members appointed as representatives. As a result, the agencies could not be assured that the real or perceived conflicts of interest of their committee members who provided advice on behalf of the government were identified and appropriately mitigated. Further, allegations that the members had conflicts of interest could call into question the independence of the committee and jeopardize the credibility of the committee's work.
In addition to the FACA requirement for balance, it is important that committees are perceived as balanced in order for their advice to be credible and effective. However, we reported that GSA guidance did not address what types of information could be helpful to agencies in assessing the points of view of potential committee members, nor did agency procedures identify what information should be collected about potential members to make decisions about committee balance. Consequently, many agencies did not identify and systematically collect and evaluate information pertinent to determining the points of view of committee members regarding the subject matters being considered. For example, of the nine agencies we reviewed, only the Environmental Protection Agency (EPA) consistently (1) collected information on committee members appointed as special government employees that enabled the agency to assess the points of view of the potential members and (2) used this information to help achieve balance. Without sufficient information about prospective committee members prior to appointment, agencies cannot ensure that their committees are, and are perceived as being, balanced. We identified several promising practices for forming and managing federal advisory committees that could better ensure that committees are, and are perceived as being, independent and balanced. These practices include (1) obtaining nominations for committees from the public, (2) using clearly defined processes to obtain and review pertinent information on potential members regarding potential conflicts of interest and points of view, and (3) prescreening prospective members using a structured interview. In our view, these measures reflect the principles of FACA by employing clearly defined procedures to promote systematic, consistent, and transparent efforts to achieve independent and balanced committees. 
In addition, we identified selected measures that could promote greater transparency in the federal advisory committee process and improve the public's ability to evaluate whether agencies have complied with conflict-of-interest requirements and FACA requirements for balance, such as providing information on how the members of the committees are identified and screened and indicating whether the committee members are providing independent or stakeholder advice. Implemented effectively, these practices could help agencies avoid the public criticisms to which some committees have been subjected. That is, if more agencies adopted and effectively implemented these practices, they would have greater assurance that their committees are, and are perceived as being, independent and balanced. Because the effectiveness of competent federal advisory committees can be undermined if the members are, or are perceived as, lacking in independence or if committees as a whole do not appear to be properly balanced, we made 12 recommendations to GSA and OGE to provide additional guidance to federal agencies under three broad categories: (1) the appropriate use of representative appointments; (2) information that could help ensure committees are, in fact and in perception, balanced; and (3) practices that could better ensure independent and balanced committees and increase transparency in the federal advisory process. While our report focused primarily on scientific and technical federal advisory committees, the limitations of the guidance and the promising practices we identified pertaining to independence and balance are pertinent to federal advisory committees in general. Thus, our recommendations were directed to GSA and OGE because of their responsibilities for providing governmentwide guidance on federal ethics and advisory committee management requirements. GSA and OGE have taken steps to implement many, but not all, of the recommendations we made in 2004.
Regarding representative appointments, we recommended that guidance from OGE to agencies could be improved to better ensure that members appointed to committees as representatives were, in fact, representing a recognizable group or entity. OGE agreed with our conclusion that some agencies may have been inappropriately identifying certain advisory committee members as representatives instead of special government employees and issued OGE guidance documents in July 2004 and August 2005 that clarified the distinction between special government employees and representative members. In particular, as we recommended, OGE clarified that (1) members should not be appointed as representatives purely on the basis of their expertise, (2) appointments as representatives are limited to circumstances in which the members are speaking as stakeholders for the entities or groups they represent, and (3) the term "representative" or similar terms in an advisory committee's authorizing legislation or other documents does not necessarily mean that members are to be appointed as representatives. We also recommended that OGE and GSA modify their FACA training materials to incorporate the changes in guidance regarding the appointment process, which they have done. In addition, we recommended that GSA expand its FACA database to identify each committee member's appointment category and, for representative members, the entity or group represented. GSA quickly implemented this recommendation and now has data on appointments beginning in 2005. We also recommended that OGE and GSA direct agencies to review their appointments of representative and special government employee committee members to make sure that they were appropriate. OGE's 2004 and 2005 guidance documents addressed this issue by, among other things, recommending that agency ethics officials periodically review appointment designations to ensure that they are proper.
OGE's guidance expressed the concern that some agencies may be designating their committee members as representatives primarily to avoid subjecting them to the disclosure statements required for special government employees to identify potential conflicts of interest. The guidance further stated that such improper appointments should be corrected immediately. OGE also suggested that for the committees required to renew their charters every 2 years, agencies use the rechartering process to ensure that the appointment designations are correct. In March 2008, the Director of GSA's Committee Management Secretariat told us that while GSA has not issued formal guidance directing agencies to review appointment designations, it has addressed this recommendation by examining the types of appointments agencies are planning when it conducts desk audits of committee charters for both new and renewed committees and by providing information on appropriate appointments at quarterly meetings with committee management staff and at FACA training classes. The GSA official said that when GSA sees questionable appointments--for example, subject matter experts being appointed as representatives instead of as special government employees--it recommends that agency staff clear this decision with their legal counsel. However, he added that agencies are not compelled to respond to GSA guidance, and some have not changed their long-standing appointment practices despite GSA's questions and suggestions. He noted that, under FACA, GSA has the authority to issue guidance but not regulations. Neither OGE nor GSA implemented our recommendation aimed at ensuring that committee members serving as representative members do not have points of view or biases other than the known interests they are representing. 
Because members appointed to committees as representatives do not undergo the conflict-of-interest review that special government employees receive, we recommended that representative members, at a minimum, receive ethics training and be asked whether they know of any reason their participation on the committee might reasonably be questioned--for example, because of any personal benefits that could ensue from financial holdings, patents, or other interests. OGE neither agreed nor disagreed with this recommendation when commenting on our draft report but subsequently stated in its comments on the published report that it does not have the authority to prescribe rules of conduct for persons who are not employees or officers of the executive branch, such as committee members appointed as representatives. The GSA official said that while the agency supports the intent of our recommendation, it defers to OGE on ethics matters. However, in this case, given the limitations OGE identified, it may be more appropriate for GSA to take the lead on implementing this recommendation under FACA. Regarding the importance of ensuring that committees are, in fact and in perception, balanced in terms of points of view and functions to be performed, we recommended that GSA issue guidance to agencies on the types of information that they should gather about prospective committee members. While GSA has not issued formal guidance in this regard, it does include in its FACA training materials examples of agency practices that ask prospective members about, for example, their previous or ongoing involvement with the issue or public statements or positions on the matter being reviewed.
Finally, to better ensure independent and balanced committees and increase transparency in the federal advisory process, we recommended that GSA issue guidance to agencies to help ensure that the committee members, agency and congressional officials, and the public better understand the committee formation process and the nature of the advice provided by advisory committees. Specifically, we recommended that GSA issue guidance that agencies should identify the committee formation process used for each committee, particularly how members are identified and screened and how the committees are assessed for balance; state in the appointment letters whether the members are special government employees or representatives and, in cases where appointments are as representatives, the letters should further identify the entity or group that they are to represent; and state in the committee products the nature of the advice that was to be provided--that is, whether the product is based on independent advice or on consensus among the various identified interests or stakeholders. In its comments on our draft 2004 report and in a July 2004 letter regarding the published report, GSA stated that addressing these recommendations would require further consultation with OGE and affected executive agencies. In the ensuing years, GSA has not issued formal guidance implementing these recommendations. In March 2008, the Director of the Committee Management Secretariat told us that he generally supports the intent of the recommendations but that GSA is reluctant to direct agencies to carry out these aspects of their personnel or advisory committee practices without the statutory authority to do so. 
He noted that regarding the recommendation addressing the committee formation process, GSA's FACA management training materials provide information on the best practice employed by some of EPA's federal advisory committees of articulating their committee formation process and providing this information on their committees' Web pages. We consider this action a partial implementation of the recommendation. You asked us to provide recommendations for improving the Federal Advisory Committee Act. Regarding the key recommendations we made aimed at addressing the inappropriate use of representative appointments, while both OGE and GSA were fully responsive to our recommendations to issue guidance to federal agencies clarifying such appointments, appointment data we reviewed raise questions about agency compliance. For example, in 2004, we reported that three of the nine agencies we reviewed had historically used representative appointments for all or most of their advisory committees, even when the agencies called upon the members to provide independent advice on behalf of the government. Overall, based on our review of the latest data on committee appointments, for these three agencies, this appointment practice continued through fiscal year 2007. Further, of these three agencies, which we identified as having questionable practices with respect to appointments for scientific and technical committees in 2004, one is still appointing members to scientific and technical committees primarily as representatives, and one has reduced the number of representative appointments but still has a majority of representative appointments. The third shifted substantially away from representative appointments for its scientific and technical committees in 2006 following our report--but made appointments to two new committees in 2007 with representative members that might be more appropriately appointed as special government employees. 
Regarding the agency that is still primarily using representative members on its scientific and technical committees, not only do the subject matters being considered by many of these committees suggest that the government would be seeking independent expert advice rather than stakeholder advice, but the agency's identification of the entities or persons some representatives are speaking for suggests this agency is not abiding by the OGE and GSA guidance regarding representative appointments. For example, for some committees, this agency identifies the entity that all of the individual representative members are speaking for as the advisory committee itself. We believe these instances likely reflect an inappropriate use of representative rather than special government employee appointments. In addition, we note that some members appointed as representatives are described in the FACA database as representing an expertise or "academia" generally. As discussed above, the OGE guidance clarified that generally members may not be appointed as representatives to represent classes of expertise. Thus, it is not clear that agencies inappropriately using representative appointments have taken sufficient corrective action or that such actions will be sustained despite steps OGE and GSA have taken to clarify the appropriate use of representatives in response to our recommendations. Governmentwide data collected by GSA show that from 2005 (when GSA began to collect the data in response to our recommendation to do so) through 2007, the percentage of committee members appointed as special government employees increased from about 28 percent to about 32 percent; the percentage of members appointed as representatives declined from just over 17 percent to about 16 percent. In March 2008, the Director of the Committee Management Secretariat at GSA told us that it is not clear whether these data indicate that the problem of inappropriate use of representative appointments has been fixed. 
He emphasized that GSA can suggest to agencies that they change the type of committee appointments they make but cannot direct them to do so. He noted that the agencies that historically have relied on representative appointments may not feel compelled to comply with the guidance because "it is not in the law." Finally, he said GSA would support incorporating the substance of our recommendations regarding representative and special government employees into FACA. Clarifying appointment issues in the act could resolve questions about or challenges to GSA's authorities and thereby better support agency compliance with GSA and OGE guidance on this critical issue. In consideration of the above, the Subcommittee may want to consider amendments to FACA that could help prevent the inappropriate use of representative appointments and better ensure the independence of committee members by clarifying the nature of advice to be provided by special government employees versus representative members of advisory committees and by requiring that all committee members, not just special government employees, be provided ethics training. In addition, as discussed above, our 2004 recommendations to GSA addressing (1) committee balance and (2) practices that could better ensure independent and balanced committees and increase transparency have either not been implemented or have been partially addressed. We believe it is significant that, on the basis of its understanding of its authorities and its experience in overseeing federal advisory committees-- including trying to convince agencies to follow its guidance and training materials--GSA told us in March 2008 that it would support incorporating the substance of our recommendations in these areas into FACA. 
Not only are our recommendations consistent with four categories (or objectives) of amendments to the act that GSA told us the agency generally supports, but they identify actions that GSA believes could help achieve its objectives, such as enhancing the federal advisory committee process and increasing the public's confidence both in the process and in committee recommendations. Consequently, we believe the Subcommittee may also wish to incorporate into FACA the substance of our recommendations addressing (1) the types of information agencies should consider in assessing prospective committee members' points of view to better ensure the overall balance of committees, (2) the committee formation process and clarity in appointment letters as to the type of advice members are being asked to provide, and (3) identifying in committee products the nature of the advice provided. Along these lines, we understand that the proposed legislative amendments to FACA that may be introduced today may incorporate some of our 2004 recommendations. Overall, we believe that additions to FACA along the lines discussed in our testimony and detailed in our 2004 report could provide greater assurance that committees are, and are perceived as being, independent and balanced. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Robin M. Nazzaro at (202) 512-3841 or [email protected]. Contact points for our Congressional Relations and Public Affairs Offices may be found on the last page of this statement. Contributors to this testimony include Christine Fishkin (Assistant Director), Ross Campbell, Carol Kolarik, Nancy Crothers, Richard P. Johnson, and Jeanette Soares. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
This published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Because advisory committees provide input to federal decision makers on significant national issues, it is essential that their membership be, and be perceived as being, free from conflicts of interest and balanced as a whole. The Federal Advisory Committee Act (FACA) was enacted in 1972, in part, because of concerns that special interests had too much influence over federal agency decision makers. The General Services Administration (GSA) develops guidance on establishing and managing FACA committees. The Office of Government Ethics (OGE) develops regulations and guidance for statutory conflict-of- interest provisions that apply to some advisory committee members. As requested, this testimony discusses key findings and conclusions in our 2004 report, Federal Advisory Committees: Additional Guidance Could Help Agencies Better Ensure Independence and Balance; GAO's recommendations to GSA and OGE and their responses; and potential changes to FACA that could better ensure the independence and balance of advisory committees. For our 2004 work, we reviewed policies and procedures issued by GSA, OGE, and nine federal agencies that sponsor many committees. For this testimony, we obtained information from GSA and OGE on actions they have taken to implement our recommendations; we also reviewed data in GSA's FACA database on advisory committee appointments. In 2004, GAO concluded that additional governmentwide guidance could help agencies better ensure the independence of federal advisory committee members and the balance of federal advisory committees. For example, OGE guidance to federal agencies did not adequately ensure that agencies appoint individuals selected to provide advice on behalf of the government as "special government employees" subject to conflict-of-interest regulations. 
Further, GAO found that some agencies were inappropriately appointing most or all members as "representatives"--expected to reflect the views of the entity or group they are representing and not subject to conflict-of-interest reviews--even when the agencies call upon the members to provide advice on behalf of the government, in which case the members should have been appointed as special government employees. In addition, GSA guidance to federal agencies and agency-specific policies and procedures needed to be improved to better ensure that agencies collect and evaluate information, such as previous or ongoing research, that could be helpful in determining the viewpoints of potential committee members regarding the subject matters being considered and in ensuring that committees are, and are perceived as being, balanced. GAO also identified several promising practices for forming and managing federal advisory committees that could better ensure that committees are independent and balanced as a whole, such as providing information on how the members of the committee are identified and screened and indicating whether the committee members are providing independent or stakeholder advice. To help improve the effectiveness of federal advisory committees so that members are, and are perceived as being, independent and committees as a whole are properly balanced, GAO made 12 recommendations to GSA and OGE to provide additional guidance to federal agencies under three broad categories: (1) the appropriate use of representative appointments; (2) information that could help ensure committees are, in fact and in perception, balanced; and (3) practices that could better ensure independent and balanced committees and increase transparency in the federal advisory process. GSA and OGE implemented GAO's recommendations to clarify the use of representative appointments. 
However, current data on appointments indicate that some agencies may continue to inappropriately use representatives rather than special government employees on some committees. Further, GSA said it agrees with GAO's other recommendations, including those relating to committee balance and measures that would promote greater transparency in the federal advisory committee process, but has not issued guidance in these areas as recommended, because of limitations in its authority to require agencies to comply with its guidance. In light of indications that some agencies may continue to use representative appointments inappropriately and GSA's support for including GAO's 2004 recommendations in FACA--including those aimed at enhancing balance and transparency--the Subcommittee may wish to incorporate the substance of GAO's recommendations into FACA as it considers amendments to the act.
The Energy Policy Act of 1992 and Executive Order 12902 require federal agencies to reduce their consumption of energy in federal buildings. The act set a goal for the agencies of lowering their consumption (measured in British thermal units per square foot) by 20 percent below fiscal year 1985 levels by fiscal year 2000. The executive order, issued in March 1994, increased this goal to 30 percent by the year 2005. Because performance contracting enables federal agencies to implement energy efficiencies at no capital cost to the government, the act directed the agencies to use this approach and required DOE to establish methods and procedures for the agencies to use in performance contracting. DOE's performance contracting procurement regulations went into effect on April 10, 1995. Performance contracting presents an alternative to appropriations as a means of financing energy-saving capital improvements for federal facilities. Under this approach, a federal agency may enter into a multiyear contract with an energy service company, which pays all of the up-front costs of implementing the improvements. These costs may include identifying a federal building's energy requirements and acquiring, installing, operating, and maintaining energy-efficient equipment. In addition, the contractor is responsible for training government personnel in operating and maintaining the energy conservation equipment and measuring the energy savings. In exchange, after the contracting federal agency accepts the newly installed equipment, the contractor receives a share of the savings--in both utility and related operations and maintenance costs--resulting from the improvements until the contract expires. After that time, the federal government retains all of the savings and equipment. Figure 1 shows how performance contracting pays for energy-saving improvements and lowers federal agencies' energy costs. 
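The savings-sharing mechanism described above can be sketched with a simple cash-flow model. All dollar figures below are hypothetical illustrations, not amounts drawn from the contracts discussed in this report:

```python
# Illustrative sketch of performance-contract cash flows. The agency's
# utility bill drops after the retrofit; during the contract term the
# contractor receives a scheduled payment out of the savings, and after
# the term expires the government retains all savings and the equipment.

def annual_flows(baseline_cost, retrofit_cost, contractor_payment, years, term):
    """Yield (year, agency_cost, contractor_receipt) for each modeled year.

    baseline_cost      - agency utility bill before the improvements
    retrofit_cost      - agency utility bill after the improvements
    contractor_payment - scheduled annual payment to the contractor
    years              - total years to model
    term               - length of the performance contract, in years
    """
    for year in range(1, years + 1):
        if year <= term:
            # During the term, the agency pays its (lower) utility bill
            # plus the contractor's scheduled share of the savings.
            agency_cost = retrofit_cost + contractor_payment
            contractor_receipt = contractor_payment
        else:
            # After the contract expires, all savings accrue to the government.
            agency_cost = retrofit_cost
            contractor_receipt = 0
        yield year, agency_cost, contractor_receipt

# Hypothetical example: a $100,000 baseline bill cut to $70,000 by the
# retrofit, with the contractor paid $27,000 per year over a 15-year term.
total_savings = sum(100_000 - cost
                    for _, cost, _ in annual_flows(100_000, 70_000, 27_000, 20, 15))
```

Under these assumed numbers the agency keeps a small net saving each year during the term and the full $30,000 difference annually thereafter, which is why the government's share grows sharply once the contract expires.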
Under DOE's performance contracting procurement regulations, the contracting federal agency is to prepare a solicitation for prospective offerors using a model developed by DOE. Although the solicitation can be sent to any firm, under the Energy Policy Act of 1992, the contracting federal agency can negotiate only with firms designated as qualified by DOE or determined to be qualified by the contracting agency using the same selection methods and procedures as DOE. At DOE, a qualification review board evaluates a firm's application package to determine whether the firm is qualified. At the time DOE developed its performance contracting procurement regulations for federal civilian agencies, the Department of Defense (DOD) had already developed a similar policy for the military services based, in part, on its own legislative authority. In addition, DOD had already developed its own list of qualified firms. Consequently, DOE decided to accept as qualified any firm approved by DOD. If a firm has been approved by DOD, DOE does not evaluate the firm's qualifications but instead requests a copy of the application package that the firm sent to DOD and checks to ensure that the firm is, in fact, on DOD's list of qualified firms. DOE's performance contracting procurement regulations direct federal agencies to consider using DOE's model solicitation to the maximum extent practicable. The model solicitation establishes criteria for evaluation and selection, including not only the cost of the proposed work but also the firm's contracting experience and technical expertise. Using the model solicitation, the contracting federal agency rates proposals against the various criteria for firms responding to the solicitation and selects the firm whose overall rating reflects the best value for the government. 
According to FEMP, the benefits of performance contracting for the federal government generally include (1) reducing energy costs, (2) improving energy efficiency and helping agencies meet their energy savings requirements, (3) eliminating the costs of maintaining and repairing aging or obsolete energy-consuming equipment, (4) making contractors rather than the government responsible for operating and maintaining energy-saving equipment, and (5) creating an incentive for contractors to develop highly efficient improvements by linking their compensation to the savings achieved through their work. As of April 10, 1996, two civilian agencies, the National Park Service and the Federal Bureau of Prisons, had each awarded a performance contract using DOE's April 10, 1995, performance contracting procurement regulations. The Park Service was the first federal civilian agency to award a performance contract under DOE's performance contracting procurement regulations. This contract, for about $2.3 million in energy conservation measures at the Statue of Liberty National Monument and Ellis Island, was awarded on July 25, 1995. The contractor is to reduce energy consumption at both Ellis and Liberty islands by installing energy-efficient interior and exterior lighting, highly efficient motors for the air handling and pumping systems, and an energy management control system. The contractor is to provide, finance, install, and maintain the equipment for 15 years in exchange for a portion of the energy savings realized each year. After the Park Service accepts the equipment, the contractor will, for the duration of the contract, receive compensation from the Park Service from funds in its budget that would otherwise have gone to pay its utility bill. According to the contract, the contractor will be reimbursed for its costs, which include capital and financing costs and a profit, in accordance with a multiyear schedule contained in the contract. 
The contractor is also to receive a rebate of about $1.1 million from the local utility--Public Service Electric and Gas Company--in New Jersey after the equipment included in the rebate program has been installed and its performance has been verified. This rebate made the project "economically attractive" for the contractor, according to the Park Service's contracting officer. The Park Service, meanwhile, is guaranteed at least $1 annually and, at the end of the 15-year contract period, will acquire energy-saving equipment valued in 1995 at about $1.2 million, thereby eliminating the need to obtain appropriations for this capital equipment. All savings in excess of $1 will also go to the agency. For example, the total savings to the agency during the first year are expected to be about $27,000, according to an official at DOE's National Renewable Energy Laboratory (NREL). The Park Service began the performance contracting process by preparing a solicitation for prospective offerors. In this solicitation, it identified the terms of the contract and included the criteria for evaluating proposals and the measures that the Park Service believed would increase the facilities' energy efficiency. The Park Service sent the solicitation to about 160 prospective firms and received proposals from 3 of them. Two of the three were on DOE's list of qualified firms when they submitted their proposals; one was not. The Park Service reviewed but did not select the proposal from the firm that was not on the list. Had the Park Service wanted to select that proposal, it could not have done so until the firm had been approved for DOE's list. For 11 months after receiving the proposals, the Park Service discussed them with the offerors, exchanged information, and amended the solicitation to reflect the results of mutual decisions or of decisions made by the Park Service to add some items or delete others that had proved infeasible. 
The offerors modified their proposals in response to the amended solicitation, and an evaluation team, consisting of staff from the Park Service and advisers from DOE's Lawrence Berkeley National Laboratory (LBNL) and NREL, reviewed the final proposals using technical and cost criteria. The Park Service assigned the highest technical score to the proposal offered by CES/Way International, Inc., of Houston, Texas, determining that it was the "best value" and "most advantageous to the government." According to Park Service staff, the Park Service's performance contract is unique not only because it was the first awarded under DOE's April 1995 final regulations but also because it involved work on nationally significant structures that warranted special consideration. For example, the buildings' historic or aesthetic qualities had to be preserved, and the work had to be scheduled so as not to interfere with the museums' normal operations. We discussed the practicality of the contract provision that guarantees that the government will receive $1 in energy savings and all energy savings exceeding the guaranteed amount during our visit to Ellis Island. Some of the on-site Park Service staff and the on-site CES/Way representative we interviewed said that the arrangement did not provide a strong incentive to the contractor to maximize the potential savings available at the facility. The NREL staff person who helped DOE develop the model solicitation said that FEMP and NREL staff had discussed the feasibility of including language in the solicitation that would have created such an incentive, giving the contractor a share in any savings exceeding the guaranteed amount. They did not include the language, the NREL staff person explained, because they were unable to develop criteria specifically for evaluating proposals containing an incentive option. 
FEMP and NREL staff agreed to reconsider an incentive option and acknowledged that such an option, where and when applicable, could bring further benefits to both the government and the contractors. On February 13, 1996, the Bureau awarded a 20-year performance contract for about $700,000 for a solar hot water system at a federal prison in Phoenix, Arizona. In part because the project would demonstrate the viability of solar technology to reduce the use of conventional energy and would therefore support both FEMP's mission to assist agencies in reducing their use of conventional energy and NREL's mission to promote renewable energy technologies, a cooperative research and development agreement was used to develop the proposal, according to an NREL official who assisted with the project. Since the specific solar technology to be installed under this performance contract was available from only one source, the NREL official said, only one firm was considered for the award. The contract was awarded to the Industrial Solar Technology Corporation of Golden, Colorado. Construction for the project is not scheduled to begin until the fall of 1996, according to an official in the Bureau's Facilities Management Branch. For 1995, DOE received 97 applications from 88 applicants for inclusion on a list of qualified firms, which the Energy Policy Act of 1992 directed DOE to prepare. In total, 58 firms were found to be qualified, including 20 that DOD had previously approved. Ten of the 88 were not found to be qualified. An additional 20 applications were pending at the end of the 1995 application cycle. To identify the characteristics of the qualifying firms, we reviewed the application files that DOE was able to provide at the time of our review. Some of the applicants whose files we reviewed did not respond to all of DOE's requests for data. 
Our review of the application files that were available for 53 of the 58 approved firms revealed substantial differences among these firms. Under the Small Business Administration's criteria, 25 of these firms were classified as small and 28 were not classified as small. Two classified themselves as disadvantaged and 51 did not classify themselves as disadvantaged. Three classified themselves as woman-owned and 50 did not classify themselves as woman-owned. The average number of employees for the 51 approved firms providing this information ranged from 6 to 54,800; the median number was 55. The net worth of the 45 approved firms providing this information ranged from $100,000 to over $3.9 billion; the median figure was about $2.3 million. The average sales of the 48 approved firms providing this information ranged from $10,000 to $6.4 billion; the median figure was $5 million. Some of the firms with the largest average number of employees, net worth, and/or average sales were utility companies. In other respects, the approved firms also differed substantially from one another. For the 53 whose files were available, the number of years' experience as an energy service company ranged from 2 years to 179 years; the median number of years was 12. For the 52 firms providing this information, the maximum dollar amount of the contract that a firm would accept ranged from $1 million to $100 million; the median amount was $20 million. Eleven firms indicated that they would accept a contract of any amount. Of the 53 firms, 39 indicated that they would apply for performance contracts nationwide while 14 indicated that they would work only in specific regions. Consistent with the Energy Policy Act of 1992, DOE's April 1995 performance contracting guidance permits contracts to be awarded on the basis of the best value to the government rather than the lowest price. 
Consequently, the Park Service's evaluation board, after reviewing each of three offerors' proposals, selected the firm whose technical proposal represented the best value to the government. The board determined that this firm provided the most comprehensive and technically sound energy-efficient proposal. One of the three firms that submitted a proposal to the Park Service was not on DOE's list of qualified firms when it submitted the proposal. As noted earlier, an offeror does not have to be on DOE's list of qualified firms at the time it submits a proposal. The firm must submit an application to DOE in time for the qualification review board to review and, if appropriate, approve it before contract negotiations begin. For the Bureau contract, only one firm was considered because the specific solar technology to be used was available from only one source, according to an NREL official. The contracting agencies, such as the Park Service and the Bureau, are responsible for developing solicitations, mailing requests for proposals to prospective offerors, and evaluating contract proposals. DOE, FEMP, and three of DOE's national laboratories provide technical assistance to the contracting agencies. Quantifying the administrative costs that these federal agencies have incurred through their involvement has been difficult because the agencies, in general, do not have accounting systems that can track the costs of their work on individual contracts. These costs include, for example, salaries and travel for the full range of activities needed to successfully enter into a performance contract. Performance contracting involved the Park Service and the Bureau in a variety of administrative activities, such as developing solicitations and mailing them to prospective offerors, placing notices in the Commerce Business Daily, conducting site tours for prospective offerors, evaluating contract proposals, and negotiating with the successful offeror. 
DOE's agencies--FEMP and up to three of the national laboratories, NREL, LBNL, and the Pacific Northwest National Laboratory (PNNL)--provided technical assistance to the contracting agencies. They helped to prepare solicitations, evaluate firms for DOE's list of qualified firms, evaluate project proposals, develop rules and regulations, and finalize contract awards. In addition, FEMP trains staff from federal agencies interested in entering into performance contracts. FEMP has acted as a facilitator, linking the federal agency seeking energy-saving improvements with the laboratory that can best assist the agency. To link the two, FEMP prepares a work order for the laboratory and sends it to a central location--the field office in Golden, Colorado--to be assigned. This work order is a task order or a modification to a master contract already in place with the laboratory. According to FEMP and NREL staff, NREL assisted in developing the model solicitation for energy savings performance contracting, which is available for any agency to use in developing its performance contract. Specifically, NREL provided technical assistance to both the Park Service and the Bureau in developing and/or evaluating their individual performance contracts. LBNL led the development of new guidelines for federal energy projects, which can be used to measure and verify the energy and cost savings associated with federal agencies' performance contracts. LBNL staff conduct the metrics portion of the performance contracting training provided by FEMP. Specifically, LBNL provided an adviser to the technical board for the Statue of Liberty/Ellis Island contract's evaluation process. PNNL staff performed the baseline energy-use audits for the Park Service's performance contract. 
The federal agencies that worked on the Park Service's and the Bureau's performance contracts estimated their administrative costs because they do not have accounting systems that can track the costs of their work on individual contracts. We obtained estimates from DOE, the Park Service, and/or the Bureau of about $246,000 for work on the Park Service's contract and $70,500 for work on the Bureau's contract. These estimates include the applicable costs for DOE's national laboratories. The Park Service was unable to provide exact data on the administrative costs associated with its performance contract. The staff who worked on this contract also had other responsibilities, and their record-keeping process did not provide for charging time to specific performance contracting tasks. The Park Service did, however, provide the number of staff that worked on the contract and their estimated salary costs. A Park Service official estimated salary costs of $87,559 for the 12 staff who worked on the contract during 1993-96. In addition, he estimated other administrative costs of $5,000, for travel, training, mailing, telephone, and paper costs. According to this official, the costs for subsequent performance contracts would probably be lower because the agency would benefit from its experience with the first contract. Department of the Interior officials stressed that a team needs to be formed to assist civilian agencies in developing and implementing performance contracts, which are much more complex than other, more traditional forms of contracting. We obtained administrative cost estimates for the three laboratories that assisted with this contract. NREL staff estimated total costs of about $17,000 for two NREL staff for about 2 to 3 months each and one NREL technical consultant for about 9 days. This estimate includes travel expenses. 
NREL was able to estimate its costs for the Park Service's contract because several staff worked for extended periods with the Park Service and FEMP in developing the solicitation, serving on the evaluation panel for the project proposals, and providing assistance to the Park Service at the facility. The cost of PNNL's assistance in performing energy audits was $125,000, according to FEMP staff. An LBNL official estimated administrative costs of about $2,500, including travel costs, for the one staff person who participated on the evaluation panel for this contract. FEMP officials noted that FEMP's accounting system is not set up to track specific project support costs because many of FEMP's activities support a number of agencies simultaneously. FEMP officials, however, estimated administrative costs for the three staff persons associated with this contract at $8,900, including travel costs. FEMP staff, for example, assisted the Park Service by providing a list of firms to which the solicitation could be mailed. According to Bureau officials, the Bureau's administrative costs are estimated because tracking these costs would be labor intensive. The Bureau estimated that it incurred costs of $17,500 for salaries for two staff, related travel expenses during fiscal years 1993-96, and other miscellaneous contract-related administrative expenses. NREL estimated that its administrative costs for the Bureau's contract were about $53,000, including travel expenses. These costs covered the work of three staff who visited the site and/or helped to prepare the solicitation, develop baseline data, and review the technical proposal. FEMP had no administrative costs directly associated with this contract. We transmitted a draft of this report to the Secretary of Energy for review and comment. We met with officials of the Department, including the Director of FEMP, who generally agreed with the report's findings. 
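The component estimates above can be reconciled against the roughly $246,000 and $70,500 totals cited earlier. A quick tally (all figures taken from the agencies' estimates as reported; the totals were rounded in the text):

```python
# Tally of the estimated administrative costs for the two performance
# contracts, using the component figures reported by the agencies.

park_service_contract = {
    "Park Service salaries (12 staff, 1993-96)": 87_559,
    "Park Service travel, training, mailing, telephone, paper": 5_000,
    "NREL (two staff, ~2-3 months each; consultant, ~9 days)": 17_000,
    "PNNL baseline energy-use audits": 125_000,
    "LBNL evaluation-panel adviser": 2_500,
    "FEMP (three staff, including travel)": 8_900,
}

bureau_contract = {
    "Bureau salaries, travel, and miscellaneous (FY 1993-96)": 17_500,
    "NREL (three staff, including travel)": 53_000,
}

park_total = sum(park_service_contract.values())
bureau_total = sum(bureau_contract.values())

print(f"Park Service contract: ${park_total:,}")   # $245,959, i.e., about $246,000
print(f"Bureau contract:       ${bureau_total:,}") # $70,500
```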
They provided technical and editorial revisions, which we incorporated as appropriate. We also transmitted a draft of this report to the Secretary of the Interior for review and comment. We met with officials of the Department, including the National Park Service's Deputy Superintendent of the Statue of Liberty/Ellis Island, who agreed with the report's findings. They provided technical and editorial revisions, which we incorporated as appropriate. We transmitted pertinent sections of a draft of this report to officials with the Department of Justice's Federal Bureau of Prisons for review and comment. The Bureau suggested wording concerning its tracking of contract costs, which we incorporated as appropriate. We performed our work from December 1995 through August 1996 in accordance with generally accepted government auditing standards. Appendix I provides more information on our objectives, scope, and methodology. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies to the appropriate congressional committees, the Secretary of Energy, the Secretary of the Interior, the Attorney General, and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix II. The Energy Policy Act of 1992 requires GAO to review the 5-year pilot program for energy savings performance contracting. 
As agreed with congressional staff, we provided information on the number of performance contracts awarded by civilian agencies, from April 10, 1995, to April 10, 1996, under DOE's final performance contracting procurement regulations; the characteristics of the firms on DOE's list of the firms qualified for performance contracting; the firms that submitted project proposals but were not awarded contracts and the reasons why; and the responsibilities of the federal civilian agencies involved in performance contracting activities and the administrative costs they incurred through their involvement. To determine the number of energy savings performance contracts awarded during the first year after the issuance of DOE's final regulations, we contacted DOE's FEMP office and reviewed its data files of awarded contracts. We did not review any energy-related performance contracts awarded before DOE issued the final regulations. To determine the specifics of the Statue of Liberty/Ellis Island contract, we visited Liberty and Ellis islands and interviewed Park Service staff, including the superintendent, the professional services division chief, and the contracting officer. In addition, we interviewed the awardee contractor and representatives of the participating utility company. For information on the Bureau's performance contract, we contacted Bureau and NREL staff. To determine the number and characteristics of the applicant firms and of those approved for DOE's list of qualified firms, we reviewed the applications submitted to DOE from April 10, 1995, through February 26, 1996. These files were maintained by a DOE contractor--Enterprise Advisory Services, Inc./Advanced Sciences, Inc.--that assisted with the review and evaluation process for the list of qualified firms. We obtained information from the Park Service on the reasons why it rejected qualified offerors' project proposals.
To determine the responsibilities of the federal civilian agencies involved in performance contracting and to gather relevant administrative cost data, we obtained information from the awarding agencies (the Park Service and the Bureau), FEMP, and NREL.

Energy Conservation: Contractors' Efforts at Federally Owned Sites (GAO/RCED-94-96, Apr. 29, 1994).
Energy Conservation: Federal Agencies' Funding Sources and Reporting Procedures (GAO/RCED-94-70, Mar. 30, 1994).
Energy Conservation: DOE's Efforts to Promote Energy Conservation and Efficiency (GAO/RCED-92-103, Apr. 16, 1992).
Pursuant to a legislative requirement, GAO reviewed the implementation of energy savings performance contracting by two federal civilian agencies, focusing on the: (1) characteristics of the firms that the Department of Energy (DOE) deemed qualified for performance contracts; (2) firms that submitted project proposals but were not awarded contracts; and (3) civilian agencies' responsibilities and administrative costs. GAO found that: (1) the National Park Service and the Federal Bureau of Prisons (BOP) awarded energy savings performance contracts in 1995 and 1996; (2) the Park Service awarded a contract for about $2.3 million to a firm whose technical expertise and contracting experience were determined to offer the best value; (3) energy savings improvements at the Park Service's Ellis Island and Statue of Liberty included new lighting, more efficient motors for air handling and pumping systems, and an energy management control system; (4) BOP awarded a contract for about $700,000 for a solar hot water system to the only firm qualified by DOE to provide such a service; (5) the benefits of performance contracting included reducing energy costs, helping agencies meet their energy savings requirements, and creating an incentive for contractors by linking their compensation to the savings achieved through their work; and (6) the Park Service spent about $246,000 and BOP spent about $70,500 in administrative costs associated with the performance contracts.
Federal funds for subsidizing child care for low-income families, particularly those on welfare, are primarily provided to the states through two block grants--CCDF and TANF. Within certain guidelines established by the block grants, states have discretion in deciding how these funds will support child care, including who will be eligible, the payment mechanism to be used to pay providers, and the portion of TANF funds to be used for child care versus other eligible support services. The cost of child care can create a barrier to employment, especially for low-income families. To help these families meet their child care needs, PRWORA created CCDF by repealing three former child care programs and modifying a fourth one; it also included in CCDF the target populations of the programs it replaced. Between fiscal years 1997 and 2002, CCDF will provide states with a total of $20 billion in federal funds--ranging from $2.9 billion in fiscal year 1997 to $3.7 billion in fiscal year 2002--to subsidize child care for both welfare and nonwelfare families. Each state's annual federal allocation consists of separate discretionary, mandatory, and matching funds. A state does not have to obligate or spend any state funds to receive the discretionary and mandatory funds. However, to receive federal matching funds--and thus its full CCDF allocation--a state must maintain its program spending at a specified level, referred to as a state's maintenance of effort (MOE), and spend additional state funds above that level. Further, states may be spending more of their own funds on child care than the amount actually accounted for under CCDF's MOE and match requirements. States must also spend at least 4 percent of their total CCDF expenditures for a given fiscal year on activities intended to improve the quality and availability of child care. 
These activities can include but are not limited to improving consumer education about child care, providing grants or loans to providers to assist them in meeting applicable child care standards, giving financial assistance to child care resource and referral agencies, improving monitoring and enforcement of child care standards, improving provider compensation, and providing training and technical assistance to providers. In addition to the 4 percent states must spend improving the quality and availability of child care, the Congress specifically earmarked money in CCDF's discretionary fund in fiscal years 1998 and 1999 for certain activities and age groups: $19 million for school-age care and resource and referral services, and $223 million for quality-related activities. States may provide child care assistance to families whose income is as high as 85 percent of the SMI, thus including families at both the lowest and more moderate income levels. States may also establish a maximum income eligibility below this level. Looking across all states, 85 percent of SMI for a family of four in calendar year 1998--the most recent year for which data are available--ranged from a low of $36,753 per year to a high of $64,203 per year. In addition to establishing the maximum income level at which a family is eligible for a child care subsidy, the states also determine which groups of low-income families within that income eligibility limit will have priority over others in receiving subsidies, such as a family with a special needs child. Families who receive child care subsidies under CCDF must be offered the choice of using a voucher, which is a certificate assuring a provider that the state will pay a portion of the child care fee, or using a provider who has a contract with the state to provide care to subsidized families. Vouchers can be used to pay any type of provider, including those providers who may also have a contract with the state.
Information about a state's use of vouchers and contracts, the income level of families to whom the state will provide assistance, and its priorities for funding those families is contained in a state's CCDF plan, which must be submitted to and approved by HHS every 2 years. TANF, which is currently authorized through fiscal year 2002, ended the individual entitlement to welfare benefits afforded under the Aid to Families with Dependent Children (AFDC) established by the Social Security Act in 1935. In its place, PRWORA created TANF block grants, which provide an entitlement to eligible states of $16.5 billion annually. Federal funding under the TANF grant is fixed, and states are required to maintain a significant portion of their own historic financial commitment to their welfare programs, discussed earlier as a state's MOE, as a condition of receiving their TANF grant. These two sources of funds--federal funds and state funds for MOE--represent the bulk of resources available to states as they design, finance, and implement their low-income assistance programs under TANF. TANF includes provisions to ensure that cash assistance to eligible families is temporary and that those receiving TANF assistance either work or prepare to work. To support state efforts in helping welfare families make this transition to work, PRWORA allowed states wide discretion over how to design their TANF programs. Instead of prescribing in detail how programs are to be structured, the new law authorizes states to use their block grants in any manner reasonably calculated to accomplish the purposes of TANF. For example, states are allowed to set their own criteria for defining who will be eligible and what assistance and services will be available. These services can include cash assistance, work-related activities such as job search assistance, substance abuse counseling, transportation assistance, and child care.
In addition, states can choose to use their TANF money to help a broader population of low-income families through programs that, for example, provide refundable tax credits or job retention and advancement services. To ensure the temporary nature of TANF assistance and provide an impetus for moving recipients toward self- reliance, the law established a 5-year lifetime limit on assistance to families and required that states ensure that specified levels of recipients participate in work activities. States can incur financial penalties if these levels are not met. These levels started at 25 percent of a state's welfare caseload for fiscal year 1997 and will increase to 50 percent in fiscal year 2002. In addition to giving states more flexibility to design their welfare programs, TANF also shifted much of the fiscal responsibility to the states. In doing so, the importance of state fiscal planning was underscored as states faced greater choices about how to allocate TANF dollars among the competing needs and priorities of various low-income programs that help families find and keep jobs and prevent them from returning to welfare. Under AFDC, the federal government and the states shared any increased welfare costs because welfare benefits were a matched, open-ended entitlement to the states. But under TANF, states receive a fixed amount of funds regardless of any changes in state spending or the number of people the program serves. Because of a combination of declining welfare caseloads, higher federal grant levels than would have been provided under AFDC, and MOE requirements that states maintain a specified level of welfare spending at 75 to 80 percent of their historical spending on welfare, states currently have more total budgetary resources available for their welfare programs than they would have had under AFDC. 
These additional resources presented states with numerous decisions to make about the families they would serve, the mix of support services they would offer and the extent to which these services would be funded, and the amount of TANF funds they would reserve for use in later years, particularly in the event of an economic downturn when welfare costs could rise. In addition, PRWORA allows states the flexibility to use TANF funds directly from the block grant to pay for child care or transfer it to other block grants. States may transfer up to 30 percent of their TANF funds to CCDF or 10 percent to the Social Services Block Grant (SSBG), which can also be used by states to fund child care and other social services, depending on their child care needs and priorities. Between fiscal years 1997 and 1999, states' reported expenditures for child care from CCDF, TANF, and their own funds increased annually. For example, CCDF expenditures almost doubled in this time period--growing from $2.5 billion to $4.5 billion--while funds spent from the TANF block grant for child care grew from $14 million to almost $600 million in these 3 years. However, while states spent increased amounts from these sources and their own funds, they still had unspent TANF and CCDF balances at the end of fiscal year 1999. Nationwide, states spent increasingly larger amounts of their CCDF, TANF, and state money on child care between fiscal years 1997 and 1999--a total of more than $16 billion, as shown in table 1. CCDF expenditures made up almost two-thirds of the total amount spent on child care from these sources. These expenditures included funds that states transferred from TANF into CCDF, which more than tripled in 3 years--increasing from $483 million in fiscal year 1997 to around $1.7 billion in fiscal year 1999. (See app. I, tables 3 through 5, for more detailed information on TANF transfers for fiscal years 1997 through 1999 in current dollars.) 
The CCDF expenditure figures also include federal matching dollars for which states must spend a specified amount of state funds in order to receive their maximum CCDF matching allocation. Forty-seven states received the maximum fiscal year 1997 CCDF federal match while 49 received the maximum fiscal year 1998 match. By the end of fiscal year 1999, almost two-thirds of the states had already spent the required amount of state funds to receive their full fiscal year 1999 federal match even though they had until the end of fiscal year 2000 to do so. As with TANF transfers, states reported spending increasingly more federal TANF dollars on child care directly from the TANF block grant for fiscal years 1997 through 1999. These expenditures grew more than 40-fold, from $14 million in fiscal year 1997 to around $583 million in fiscal year 1999. Spending on child care programs for low-income families increased substantially in the seven states we reviewed in more depth. As table 2 shows, total spending on child care programs in state fiscal year 1994-95 ranged from $58 million in Wisconsin to $661 million in California. By state fiscal year 1999-2000, spending on these programs had grown, ranging from $77 million in Oregon to around $1.8 billion in California. Thus, the percentage increase for these seven states during this period ranged from 20 to 186 percent in constant 1997 dollars. In state fiscal year 1999-2000, five of the seven states relied on significant amounts of federal funds--between 54 and 70 percent--to finance their growing child care programs. Only Connecticut and Texas reported spending more of their own funds than federal funds on these programs for that year. The amount of money states ultimately choose to spend on child care is a result of their budget processes--which decide the extent to which the competing needs of different programs and priorities statewide will be supported--and the requirements imposed by the block grant. 
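The growth characterizations used above ("almost doubled," "more than tripled," "more than 40-fold") follow directly from the reported dollar figures, as a quick calculation shows:

```python
# Growth in child care spending from the three funding streams,
# fiscal year 1997 versus fiscal year 1999 (figures as reported).

spending = {
    "CCDF expenditures":                  (2.5e9, 4.5e9),
    "TANF transfers into CCDF":           (483e6, 1.7e9),
    "Direct TANF spending on child care": (14e6, 583e6),
}

for stream, (fy1997, fy1999) in spending.items():
    print(f"{stream}: grew {fy1999 / fy1997:.1f}x")
# CCDF expenditures nearly doubled (1.8x), TANF transfers more than
# tripled (3.5x), and direct TANF spending grew more than 40-fold (41.6x).
```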
As part of these decisions, the states we reviewed made choices about how to spend TANF, CCDF, and other funds to provide many different support services to low-income families. However, while CCDF funds have to be spent on child care, TANF funds can be spent on a range of support services, including child care, assuming these services meet the goals of PRWORA. In addition, these states attempted to strike a balance between spending TANF funds on the current needs of their low-income families and reserving portions of these funds for future spending. For example, both Maryland and Wisconsin plan to use a significant amount of their TANF funds to expand their child care programs in addition to funding other parts of their welfare programs for low-income families. Maryland budget officials are projecting that the state will have $160 million in federal TANF carryover balances to use in fiscal year 2001 in addition to their annual TANF block grant. Using these funds, the state will finance more than 5,700 new child care spaces. Similarly, Wisconsin budget officials assumed that almost $350 million in TANF carryover balances in the fiscal year 1999-2000 budget would be available in addition to its $317 million annual TANF block grant. According to state budget officials, these resources will help pay for a number of new expansions to their child care programs, including increasing the income eligibility of families who can receive child care subsidies from 165 to 185 percent of the poverty level and reducing copayment amounts for families. California, Connecticut, Michigan, Oregon, and Texas also increased their child care spending between state fiscal years 1994-95 and 1999-2000 to meet the increased need for child care as more families made the transition from welfare to work, but these states were not planning to use TANF funds for large expansions of their child care programs.
For example, Texas increased its child care funding for state fiscal year 2000-01 to a level where it was able to serve about half the children on its waiting list at a given point in time with child care subsidies, but it also chose to leave about $107 million in TANF funds in reserve. Connecticut had about $41 million in unspent TANF funds at the end of state fiscal years 1998-99 and 1999-2000 but chose to use these funds to replace state funds already allocated for other programs. Budget officials in Oregon told us that they adjusted their budget twice in the last 2 years because the number of applicants for child care subsidies was lower than expected. Some of the state funds from these adjustments were reinvested into the program to reduce the child care copayment amount; the rest--about $40 million--was used for other state priorities. Finally, counties in California have received more than $685 million in TANF funds from the state as a reward for reducing welfare caseloads. These funds must be used for TANF-allowable purposes, including child care, although the counties have wide discretion over how to spend this money. However, at the time of our study, about 1 percent of it had been spent. While states are spending more federal and state funds on child care, portions of their CCDF and TANF funds remain unspent. CCDF funds, for example, must be spent within certain timeframes prescribed by the regulations. Our end-of-year analysis shows that, on average, states spent about 70 percent of their CCDF funds and retained approximately 30 percent in unspent funds for each of the three fiscal years, 1997 through 1999. It appears that most states have met or will meet the prescribed timeframes for spending these remaining monies. The amount of unspent CCDF funds varies by state and fiscal year, however, and appendix II, tables 9 through 11, provides detailed information by state for fiscal years 1997 through 1999, in current dollars.
Along with unspent CCDF funds in a given fiscal year, states also reported about $8 billion in unspent TANF funds at the end of fiscal year 1999. This represented about 41 percent of the total TANF funds available to the states for expenditure in fiscal year 1999 and included both fiscal year 1999 and prior year funds. States also reported that $5 billion of unspent TANF funds have been obligated, although the lack of uniformity in the way states report the status of these funds makes it difficult to determine exactly how much has been obligated. As with CCDF funds, the amount of unspent TANF funds varied by state. Appendix I, tables 6 through 8, provides information on TANF balances by state for fiscal years 1997 through 1999, in current dollars. States must provide parents who receive child care subsidies under CCDF with flexibility and choice in selecting child care providers. Parents receiving subsidized child care through CCDF most often selected child care centers to provide care to their children. The subsidies parents receive are most often paid for through vouchers--a payment mechanism many think provides the most flexibility to parents--rather than contracts, although this can vary by state. Data on the type of care used by children whose parents receive TANF and the payment mechanisms states used to pay for their care are not available. Center care is the predominant type of child care used by children subsidized with CCDF funds as indicated by fiscal year 1998 data reported by states to HHS. Nationwide, 55 percent of children whose care is paid for by CCDF are in centers, 30 percent are in family child care homes, 11 percent are in the child's own home, and 4 percent are in group homes. The use of center care varied by state, however. HHS data show that the use of center care by CCDF subsidized children ranged from 19 percent in Michigan to 94 percent in the District of Columbia.
Three of the seven states we visited--California, Texas, and Wisconsin--reported that between 60 and 80 percent of the children subsidized with CCDF funds used center care. On the other hand, Connecticut, Maryland, Michigan, and Oregon reported much lower use of center care by CCDF-subsidized children, ranging from 19 to 37 percent. While CCDF data can tell us what care parents choose, they cannot provide information on why parents make their choices. Many factors influence the choice of care selected by parents. Some factors can affect the choice of a particular provider over another, while others affect the choice of one provider type over another. For example, data show that younger children--those under 3 years of age--tend to be cared for in family child care homes or by relatives; older children are more often cared for in centers. For families subsidized with CCDF funds, the age of the child may be a factor that explains their greater use of center care. CCDF data show that over 70 percent of the CCDF-subsidized children are 3 to 12 years old, 37 percent are 3 to under 6 years old, and 35 percent are 6 to 12 years old. A lack of accessible and reliable transportation between home, work, and the child care provider can limit a family's child care options and affect the type of care a family chooses. Over the years, states have reported to us that TANF families lack reliable private transportation to get their children to child care providers and themselves to work. Moreover, some communities lack public transportation to get TANF participants where they need to go, especially in rural areas. Even when public transportation is available, families' child care options can be limited due to the difficulty and time it takes to navigate trips with children to a particular provider and then to work. An inadequate supply of providers is another barrier to obtaining care and a factor in selecting child care.
In our previous work, we found that the supply of infant care, care for special needs children, and care during nonstandard hours has been much more limited than the overall supply. Low-income neighborhoods tend to have less overall child care supply as well as less supply for these particular care groups than do higher-income neighborhoods. The price of care can affect a low-income parent's choice of a particular provider. In general, child care is less affordable to poor families than to nonpoor families because it can consume a much larger percentage of their budget. Forty percent of families with incomes at or below 200 percent of poverty paid for child care and spent, on average, 16 percent of their annual earnings; however, 27 percent of these families paid more than 20 percent of their annual earnings for care. Nonpoor families--those with earnings above 200 percent of poverty--paid, on average, 6 percent of their annual earnings for child care, with only 1 percent paying more than 20 percent of earnings for care. For families who receive a subsidy, affordability may not be an issue if the full cost of the care selected is within the subsidy amount. However, affordability can be affected by the amount of the copayment, which most states require subsidized parents to pay, again affecting parents' choice of a provider. For example, a recent HHS study shows that state variation in the amount charged to subsidized parents for copayments can represent 4 to 17 percent of their monthly income. CCDF regulations require that a parent eligible for a CCDF child care subsidy be offered the choice of receiving a voucher to pay a provider or enrolling the child with a provider that has a contract with or grant from the state to serve eligible children. A voucher is a certificate that documents that the state will pay a specified amount of the cost of care for an eligible child.
The primary advantage of a voucher is that its portability provides maximum parental choice--it can be used to pay any available provider of the parent's choosing, including a relative. A contract, which is an agreement the state usually has with centers, allows the state to target funds to underserved areas, such as poorer parts of a city, or to specific populations, such as migrant farm children, and thus help stabilize the supply of care in these areas. Contracts can also help improve the quality of the child care by stipulating that certain requirements must be met, such as providing staff training or health screenings to the children in care. Vouchers are the most common method used by states to pay for child care subsidized with CCDF funds. Fiscal year 1998 data reported by the states to HHS, which are the most current data available, show that, nationwide, parents of 84 percent of the children receiving CCDF subsidies used a voucher to pay for child care while 10 percent used a provider that had a contract with or grant from the state. For the remaining CCDF-subsidized children, the states paid cash directly to the parent. However, the extent to which one type of payment mechanism is used over another varies among the states. For example, 21 states reported to HHS that they use contracts or grants; the percent of CCDF-subsidized children served by this payment method ranged from less than 1 percent in Vermont and Colorado to almost 73 percent in Florida. Six of the seven states we reviewed use vouchers as the primary method to pay for child care. California uses vouchers to a lesser extent than the other states we visited: 58 percent of California's children subsidized with CCDF funds were with contracted providers, 34 percent used vouchers, and 8 percent were subsidized through cash payments to parents. 
National data on the type of care used by children subsidized with TANF funds are not available because TANF regulations do not require states to collect and report this information to HHS. Officials in the seven states we reviewed reported that they currently have adequate funding to meet the child care needs of families on TANF and those who have recently left. In five of these states, other eligible families who applied for child care subsidies were also served. However, some officials raised concerns that their states' current funding levels are not sufficient to provide subsidies to all eligible low-income families who may need them, such as those on waiting lists, or to fully support important child care initiatives. State officials noted that one reason that the funding levels for these and other program goals are not higher is states' uncertainty about the continued level of federal funding. According to CCDF plans for fiscal years 2000 through 2001, more than half the states list TANF and TANF-transitional families either first or second on their priority list of families who are eligible for receiving child care subsidies. Likewise, four of the states we reviewed--California, Texas, Connecticut, and Maryland--also give priority for child care subsidies to those on welfare and those transitioning from welfare to work. The three remaining states--Michigan, Oregon, and Wisconsin--reported that they primarily rely on income, not welfare status, as a means of giving priority to certain families over others, with families earning the lowest incomes receiving child care subsidies first. Child care officials in the seven states we examined in more depth reported that their states have allocated adequate funding to meet the child care needs of families on TANF and those in the process of transitioning from welfare to work. 
However, some of these officials expressed uncertainty about their ability to continue to do this because, with the reauthorization of TANF and CCDF scheduled for the next fiscal year, the future level of federal funding for these block grants is unknown. Michigan and Wisconsin program officials expressed concern that any funding reductions may make it necessary for them to provide child care subsidies to TANF families first, over non-TANF families. But, among the seven states we examined, no state reported that it was currently unable to fund the child care needs of these families who requested services. Nationwide, 22 states placed non-TANF families third or lower in priority order for receiving child care subsidies according to CCDF plans approved by HHS for fiscal years 2000 through 2001. According to the CCDF plans of the states that we reviewed, California, Maryland, and Texas placed low-income families third or fourth after TANF and transitioning TANF families, while Connecticut placed these families fifth after other groups such as teen parents and children with special needs. As stated above, Michigan, Oregon, and Wisconsin did not establish priorities based on welfare status, but rather on income. Notwithstanding these priorities, program officials in Connecticut, Maryland, Michigan, Oregon, and Wisconsin reported that their states' funding allocations have been adequate to serve all eligible families who have applied. Further, data for state fiscal year 1999-2000 show that in four of these states non-TANF children represent the largest percentage of children in their subsidy program. A similar finding is reported in a recent HHS study that examined child care for low-income families in 25 communities nationwide. 
It found that, while states' funding policies favor TANF families over non-TANF families for receiving child care subsidies, children of non-TANF families represented the largest percentage of children receiving child care subsidies in most of the states that were examined. However, because many states do not track former TANF families for an extended period after leaving TANF, it is not known how many of these current non-TANF families are former TANF families who began receiving their subsidies when they were on welfare. While child care program officials in most of the states we reviewed reported serving all eligible low-income families who applied, officials in California, Connecticut, Texas, and Oregon expressed concern that their funding of child care was not sufficient to provide child care subsidies for all eligible families. These program officials noted that their states' eligibility ceilings were established at levels below the maximum federal level of 85 percent of SMI, yet even at these lower ceiling levels, they do not serve all eligible families. For example, both Connecticut and California set maximum eligibility for receiving child care subsidies at 75 percent of SMI, but because these states did not allocate sufficient funding to serve families up to these eligibility levels, their child care programs serve families mostly at or below 50 percent of SMI. In both California and Texas, this has resulted in waiting lists for child care subsidies. Nationwide, most states have not established income eligibility levels at the maximum level allowed under CCDF--85 percent of SMI. According to states' CCDF plans for fiscal year 2000 through 2001, eight states established eligibility at this level. Of the remaining 42 states and the District of Columbia, half set eligibility between 58 and 84 percent of SMI while the other half set it below 58 percent. 
A gap between the number of children eligible for child care subsidies under states' income eligibility criteria and those who actually receive them appears to exist nationwide. A 1998 HHS study shows that about one-fifth of all states are serving less than 10 percent of the children eligible for CCDF subsidies as defined by state income eligibility ceilings; three-fifths are serving between 10 and 25 percent; and one-fifth are serving 25 percent or more. While not all families who are eligible for child care subsidies want or need them, there are many reasons why families who are eligible and want child care subsidies do not apply for them. For example, they may already know that waiting lists for subsidies exist in their community; they may think they are not eligible; or the amount of the subsidy, once the copayment required of subsidized families is taken into account, may be too small to make applying worthwhile. Although all seven states increased the amount of CCDF funds spent on quality initiatives from fiscal year 1997 through fiscal year 1999, child care program officials in four states were concerned about funding levels for activities to improve the quality of child care. CCDF expenditures for quality reported to HHS by these seven states show that expenditures grew from around $22 million in fiscal year 1997 to about $98 million in fiscal year 1999, totaling over $180 million for this period. The states spent this money on a range of activities to improve child care quality, most commonly to support child care resource and referral agencies, to provide training and technical assistance to providers, and to improve provider compliance with state child care regulations through state licensing agencies. Child care program officials in California, Connecticut, Oregon, and Texas reported that their states did not sufficiently fund some child care initiatives that could improve both child care supply and quality in their states. 
For example, child care program officials in California, Connecticut, and Oregon mentioned the need for more funding to provide higher wages to providers--either through paying higher payment rates or other compensation initiatives--in order to curtail the large numbers of providers leaving the field, typically referred to as turnover. High turnover could affect the adequacy of child care supply. It also disrupts the continuity of care for children, which is important to their development, especially for infants, and interferes with parents' job stability, particularly welfare parents who are new to the workforce. Child care program officials in Texas, Connecticut, and Oregon also discussed the need for funding to build capacity for care that is more difficult to find, such as care for infants and during nonstandard work hours, which is particularly important to welfare families transitioning to work. We received technical comments from program officials in the Administration for Children and Families' Child Care Bureau and Office of Family Assistance in the course of completing our work. We incorporated these comments where appropriate. We also received written comments from six of the seven states discussed in the report--California, Connecticut, Maryland, Michigan, Texas, and Wisconsin. In general, state comments focused on the differences in the expenditure data in the draft report compared with their own current expenditure figures. Because our analysis provides a snapshot of expenditures at several different points in time, the data we present vary from current year data or data that subsequently may have been reconciled or corrected. We expressed expenditure data in constant dollars in the report body to capture real growth in spending over time, but also provided these data in current year dollars in an appendix so that states would recognize the expenditures they reported to HHS. 
Two states, California and Connecticut, expressed concern with the way we characterized their budget decisions for using TANF funds. California officials believed that our discussion of the fiscal incentive payments that certain counties received for reducing TANF caseloads implied that these funds were for the purpose of increasing the counties' child care expenditures. We clearly state why counties were given these funds and that the counties have discretion about how the funds will be spent. Thus, the California counties that received these funds could decide to spend them on child care or any other activities consistent with TANF's goals and allowable under the law. Officials in Connecticut raised concerns about two issues. They thought that our statement that Connecticut was not planning to use TANF funds for a large expansion of child care implied that Connecticut had not increased its child care funding. We think the report clearly states just the opposite. Table 2 shows that Connecticut has significantly increased its child care expenditures in the time periods on which we gathered data. The report also states that Connecticut was one of only two states that we reviewed that spent more of their own funds than federal funds on these increases. Officials also wanted to make sure that we understood that they do not have unspent TANF funds. We agree, and believe that the report clearly states, that the $41 million Connecticut had in unspent TANF funds at one point in time was spent to reimburse the state for previous state expenditures on TANF-related purposes. Our reason for discussing this in the report was to illustrate the competing choices states face in spending TANF funds and that they do not always choose to spend them on child care. As agreed to with your staff, unless you publicly release its contents earlier, we will make no further distribution of this report until 30 days after its issue date. 
At that time, we will send copies of this report to the Honorable William Thomas, Chairman, and the Honorable Charles Rangel, Ranking Minority Member, House Committee on Ways and Means; the Honorable Benjamin Cardin, Ranking Minority Member, Subcommittee on Human Resources, House Committee on Ways and Means; the Honorable Charles Grassley, Chairman, and the Honorable Max Baucus, Ranking Member, Senate Committee on Finance; the Honorable Dr. David Satcher, Acting Secretary of HHS; and the Honorable Diann Dawson, Acting Assistant Secretary for Children and Families, HHS. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-7215, or Karen A. Whiten at (202) 512-7291. Other GAO contacts and staff acknowledgments are listed in appendix III. Key contributors to this report include Janet Mascia, Martha Elbaum, Susan Higgins, and Bill Keller.

The first copy of each GAO report is free. Additional copies of reports are $2 each. A check or money order should be made out to the Superintendent of Documents. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail: U.S. General Accounting Office, P.O. Box 37050, Washington, DC 20013
Orders by visiting: Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC
Orders by phone: (202) 512-6000; fax: (202) 512-6061; TDD: (202) 512-2537

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.

Web site: http://www.gao.gov/fraudnet/fraudnet.htm; e-mail: [email protected]; 1-800-424-5454 (automated answering system)
Nationwide, states reported that federal and state expenditures for child care under the Child Care and Development Fund (CCDF) block grant and the Temporary Assistance for Needy Families (TANF) block grant grew from $4.1 billion in fiscal year 1997 to $6.9 billion in fiscal year 1999 and totaled over $16 billion in constant fiscal year 1997 dollars for this three-year period. More than half of the children whose child care was subsidized with CCDF funds were cared for in centers, and CCDF subsidies for all types of care were primarily provided through vouchers. Eligible parents who were subsidized by CCDF were offered a choice of receiving a voucher to pay a provider of their choosing or using a provider who had a contract with the state. More than half of all the states gave TANF and former TANF families transitioning to work first or second priority for receiving child care subsidies while other eligible low-income families were assigned lower priorities. Officials reported that their states funded the child care needs of their TANF and former TANF families transitioning to work, and were serving all of these families who requested child care assistance. However, some of these officials were concerned that their states' funding levels were not sufficient to serve all other low-income families who were eligible for aid.
Addressing the Year 2000 problem is a tremendous challenge for the federal government. To meet this challenge and monitor individual agency efforts, OMB directed the major departments and agencies to submit quarterly reports on their progress, beginning May 15, 1997. These reports contain information on where agencies stand with respect to the assessment, renovation, validation, and implementation of mission-critical systems, as well as other management information on items such as business continuity and contingency plans and costs. The federal government's most recent reports show improvement in addressing the Year 2000 problem. While much work remains, the federal government has significantly increased the percentage of mission-critical systems that are reported to be Year 2000 compliant, as figure 1 illustrates. In particular, while the federal government did not meet its goal of having all mission-critical systems compliant by March 1999, 92 percent of these systems were reported to have met this goal. While this progress is notable, 11 agencies did not meet OMB's deadline for all of their mission-critical systems. Some of the systems that were not yet compliant support vital government functions. For example, many of the Federal Aviation Administration's (FAA) systems were not compliant as of the March deadline. As we testified last month, several of these systems provide critical functions, ranging from communications to radar processing to weather surveillance. Among other systems that did not meet the March 1999 deadline are those operated by Health Care Financing Administration (HCFA) contractors. As we testified in February 1999, these systems are critical to processing Medicare claims. Additionally, not all systems have undergone an independent verification and validation process. 
For example, the Environmental Protection Agency and the Department of the Interior reported that 57 and 3 of their systems, respectively, deemed compliant were still undergoing independent verification and validation. In some cases, independent verification and validation of compliant systems have found serious problems. For example, as we testified before you this February, none of HCFA's 54 external mission-critical systems reported by the Department of Health and Human Services as compliant as of December 31, 1998, was Year 2000 ready, based on serious qualifications identified by the independent verification and validation contractor. Other examples have been cited in agency quarterly reports. In February 1999, the Department of Commerce reclassified a system from compliant to noncompliant because an independent verification and validation contractor had concerns about some of the commercial-off-the-shelf software used in the system and wanted to review additional test data. In February 1999, the Environmental Protection Agency reported that its independent third-party review process found a Year 2000 error in a system that was later repaired, tested, and returned to production. In November 1998, the Department of Health and Human Services reported that it removed four Indian Health Service systems from compliant status because an independent verification and validation contractor found that their data exchanges were not compliant. Achieving individual system compliance, although important, does not necessarily ensure that a business function will continue to operate through the change of century--the ultimate goal of Year 2000 efforts. Key actions, such as end-to-end testing and business continuity and contingency planning, are vital to ensuring that this goal is met. Further, OMB has recently taken action on our April 1998 recommendation to set governmentwide priorities and has identified the government's high-impact programs. 
This is an excellent step toward ensuring the continuing delivery of vital services. To ensure that their mission-critical systems can reliably exchange data with other systems and that they are protected from errors that can be introduced by external systems, agencies must perform end-to-end testing of their critical core business processes. The purpose of end-to-end testing is to verify that a defined set of interrelated systems, which collectively support an organizational core business area or function, will work as intended in an operational environment. In the case of the year 2000, many systems in the end-to-end chain will have been modified or replaced. As a result, the scope and complexity of testing--and its importance--are dramatically increased, as is the difficulty of isolating, identifying, and correcting problems. Consequently, agencies must work early and continually with their data exchange partners to plan and execute effective end-to-end tests (our Year 2000 testing guide sets forth a structured approach to testing, including end-to-end testing). In January 1999, we testified that with the time available for end-to-end testing diminishing, OMB should consider, for the government's most critical functions, setting target dates, and having agencies report against them, for the development of end-to-end test plans, the establishment of test schedules, and the completion of the tests. On March 31, OMB and the Chair of the President's Council on Year 2000 Conversion announced that one of the key priorities that federal agencies will be pursuing during the rest of 1999 will be cooperative efforts regarding end-to-end testing to demonstrate the Year 2000 readiness of federal programs with states and other partners critical to the administration of those programs. We are also encouraged by some agencies' recent actions. 
For example, we testified this March that the Department of Defense's Principal Staff Assistants are planning to conduct end-to-end tests to ensure that systems that collectively support core business areas can interoperate as intended in a Year 2000 environment. Further, our March 1999 testimony found that FAA had addressed our prior concerns with the lack of detail in its draft end-to-end test program plan and had developed a detailed end-to-end testing strategy and plans. Business continuity and contingency plans are essential. Without such plans, when unpredicted failures occur, agencies will not have well-defined responses and may not have enough time to develop and test alternatives. Federal agencies depend on data provided by their business partners as well as on services provided by the public infrastructure (e.g., power, water, transportation, and voice and data telecommunications). One weak link anywhere in the chain of critical dependencies can cause major disruptions to business operations. Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. Accordingly, in April 1998, we recommended that the Council require agencies to develop contingency plans for all critical core business processes. OMB has clarified its contingency plan instructions and, along with the Chief Information Officers Council, has adopted our business continuity and contingency planning guide. In particular, on January 26, 1999, OMB called on federal agencies to identify and report on the high-level core business functions that are to be addressed in their business continuity and contingency plans as well as to provide key milestones for development and testing of business continuity and contingency plans in their February 1999 quarterly reports. 
Accordingly, in their February 1999 reports, almost all agencies listed their high-level core business functions. Indeed, major departments and agencies listed over 400 core business functions. For example, the Department of Veterans Affairs classified its core business functions into two critical areas: benefits delivery (six business lines supported this area) and health care. Our review of the 24 major departments' and agencies' February 1999 quarterly reports found that business continuity and contingency planning was generally well underway. However, we also found cases in which agencies (1) were in the early stages of business continuity and contingency planning, (2) did not indicate when they planned to complete and/or test their plan, (3) did not intend to complete their plans until after April 1999, or (4) did not intend to finish testing the plans until after September 1999. In January 1999, we testified before you that OMB could consider setting a target date, such as April 30, 1999, for the completion of business continuity and contingency plans, and require agencies to report on their progress against this milestone. This would encourage agencies to expeditiously develop and finalize their plans and would provide the President's Council on Year 2000 Conversion and OMB with more complete information on agencies' status on this critical issue. To provide assurance that agencies' business continuity and contingency plans will work if they are needed, we also suggested that OMB may want to consider requiring agencies to test their business continuity strategy and set a target date, such as September 30, 1999, for the completion of this validation. 
On March 31, OMB and the Chair of the President's Council on Year 2000 Conversion announced that completing and testing business continuity and contingency plans as insurance against disruptions to federal service delivery and operations from Year 2000-related failures will be one of the key priorities that federal agencies will be pursuing through the rest of 1999. OMB also announced that it planned to ask agencies to submit their business continuity and contingency plans in June. In addition to this action, we would encourage OMB to implement the suggestion that we made in our January 20 testimony and establish a target date for the validation of these business continuity and contingency plans. While individual agencies have been identifying and remediating mission-critical systems, the government's future actions need to be focused on its high-priority programs and ensuring the continuity of these programs, including the continuity of federal programs that are administered by states. Accordingly, governmentwide priorities need to be based on such criteria as the potential for adverse health and safety effects, adverse financial effects on American citizens, detrimental effects on national security, and adverse economic consequences. In April 1998, we recommended that the President's Council on Year 2000 Conversion establish governmentwide priorities and ensure that agencies set agencywide priorities. On March 26, 1999, OMB implemented our recommendation by issuing a memorandum to federal agencies designating lead agencies for the government's 42 high-impact programs (e.g., food stamps, Medicare, and federal electric power generation and delivery); the attachment contains a list of these programs and lead agencies. 
For each program, the lead agency was charged with identifying to OMB the partners integral to program delivery; taking a leadership role in convening those partners; assuring that each partner has an adequate Year 2000 plan and, if not, helping each partner without one; and developing a plan to ensure that the program will operate effectively. According to OMB, such a plan might include testing data exchanges across partners, developing complementary business continuity and contingency plans, sharing key information on readiness with other partners and the public, and taking other steps necessary to ensure that the program will work. OMB directed the lead agencies to provide a schedule and milestones of key activities in the plan by April 15. OMB also asked agencies to provide monthly progress reports. OMB's March 1999 memorandum identifies several high-impact state-administered programs, such as Food Stamps, Medicaid, and Temporary Assistance for Needy Families, in which both the federal government and the states have a huge vested interest, both financial and social. Reports by us and the federal lead agencies have indicated the need for the lead federal agency to work together with the states to ensure that programs vital to so many individuals can continue through the change of century. As we reported in November 1998, many systems that support such human services programs were at risk and much work remained to ensure continued services. In February 1999, we testified that while some progress had been achieved, many states' systems have been reported to be at risk and were not scheduled to become compliant until the last half of 1999. Further, progress reports had been based largely on state self-reporting, which, upon site visits, has occasionally been found to be overly optimistic. 
Accordingly, we concluded that given these risks, business continuity and contingency planning was even more important in ensuring continuity of program operations and benefits in the event of systems failures. In January 1999, OMB implemented a requirement that federal oversight agencies include the status of selected state human services systems in their quarterly reports. Specifically, OMB requested that the agencies describe actions to help ensure that federally supported, state-run programs will be able to provide services and benefits. OMB further asked that agencies report the date when each state's systems will be Year 2000 compliant. Table 1 summarizes the information gathered by the Departments of Agriculture, Health and Human Services, and Labor on how many state-level organizations are compliant or when in 1999 they planned to be compliant. This table illustrates the need for federal/state partnerships to ensure the continuity of these vital services, since a considerable number of state-level organizations are not due to be compliant until the last half of 1999, and the agencies have not received reports from many states. Such partnerships could include the coordination of federal and state business continuity and contingency plans for human resources programs. One agency that could serve as a model to other federal agencies in working with state partners is the Social Security Administration, which relies on states to help process claims under its disability insurance program. In October 1997, we made recommendations to the Social Security Administration to improve its monitoring and oversight of state disability determination services and to develop contingency plans that consider the disability claims processing functions within state disability determination services systems. The Social Security Administration agreed with these recommendations and, as we testified this February, has taken several actions. 
For example, it established a full-time disability determination services project team, designating project managers and coordinators and requesting biweekly status reports. The agency also obtained from each state disability determination service (1) a plan specifying the specific milestones, resources, and schedules for completing Year 2000 conversion tasks and (2) contingency plans. Such an approach could be valuable to other federal agencies in helping ensure the continued delivery of services. In addition to the state systems that support federal programs, another important aspect of the federal government's Year 2000 efforts with the states is data exchanges. For example, the Social Security Administration exchanges data files with the states to determine the eligibility of disabled persons for disability payments and the National Highway Traffic Safety Administration provides states with information needed for driver registration. As part of addressing this issue, the General Services Administration is collecting information from federal agencies and the states on the status of their exchanges through a secured Internet World Wide Web site. According to an official at the General Services Administration, 70 percent of federal/state data exchanges are Year 2000 compliant. However, this official would not provide us with supporting documentation for this statement nor would the General Services Administration allow us access to its database. Accordingly, we could not verify the status of federal/state data exchanges. In conclusion, it is clear that much progress has been made in addressing the Year 2000 challenge. It is equally clear, however, that much additional work remains to ensure the continued delivery of vital services. The federal government and its partners must work diligently and cooperatively so that such services are not disrupted. Mr. Chairman, Ms. Chairwoman, this concludes my statement. 
I will be pleased to respond to any questions that you or other members of the Subcommittees may have at this time.
Homeland security is a complex mission that involves a broad range of functions performed throughout government, including law enforcement, transportation, food safety and public health, information technology, and emergency management, to mention only a few. Federal, state, and local governments have a shared responsibility in preparing for catastrophic terrorist attacks as well as other disasters. The initial responsibility for planning, preparedness, and response falls upon local governments and their organizations--such as police, fire departments, emergency medical personnel, and public health agencies--which will almost invariably be the first responders to such an occurrence. For its part, the federal government has principally provided leadership, training, and funding assistance. The federal government's role in responding to major disasters has historically been defined by the Stafford Act, which makes most federal assistance contingent on a finding that the disaster is so severe as to be beyond the capacity of state and local governments to respond effectively. Once a disaster is declared, the federal government--through the Federal Emergency Management Agency (FEMA)--may reimburse state and local governments for between 75 and 100 percent of eligible costs, including response and recovery activities. In addition to post-disaster assistance, there has been an increasing emphasis over the past decade on federal support of state and local governments to enhance national preparedness for terrorist attacks. After the nerve gas attack in the Tokyo subway system on March 20, 1995, and the Oklahoma City bombing on April 19, 1995, the United States initiated a new effort to combat terrorism. In June 1995, Presidential Decision Directive 39 was issued, enumerating responsibilities for federal agencies in combating terrorism, including domestic terrorism. 
Recognizing the vulnerability of the United States to various forms of terrorism, the Congress passed the Defense against Weapons of Mass Destruction Act of 1996 (also known as the Nunn-Lugar-Domenici program) to train and equip state and local emergency services personnel who would likely be the first responders to a domestic terrorist event. Other federal agencies, including FEMA; the Departments of Justice, Health and Human Services, and Energy; and the Environmental Protection Agency, have also developed programs to assist state and local governments in preparing for terrorist events. As emphasis on terrorism prevention and response grew, however, so did concerns over coordination and fragmentation of federal efforts. More than 40 federal entities have a role in combating and responding to terrorism, and more than 20 in bioterrorism alone. Our past work, conducted prior to the establishment of an Office of Homeland Security and a proposal to create a new Department of Homeland Security, has shown coordination and fragmentation problems stemming largely from a lack of accountability within the federal government for terrorism-related programs and activities. Further, our work found that the absence of a central focal point led to a less cohesive effort and to the development of similar and potentially duplicative programs. Also, as the Gilmore Commission report notes, state and local officials have voiced frustration about their attempts to obtain federal funds from different programs administered by different agencies and have argued that the application process is burdensome and inconsistent among federal agencies. President Bush took a number of important steps in the aftermath of the terrorist attacks of September 11th to address the concerns of fragmentation and to enhance the country's homeland security efforts, including the creation of the Office of Homeland Security in October 2001. 
The creation of such a focal point is consistent with a previous GAO recommendation. The Office of Homeland Security achieved some early results in suggesting a budgetary framework and emphasizing homeland security priorities in the President's proposed budget. The proposal to create a statutorily based Department of Homeland Security holds promise to better establish the leadership necessary in the homeland security area. It can more effectively capture homeland security as a long-term commitment grounded in the institutional framework of the nation's governmental structure. As we have previously noted, the homeland security area must span the terms of various administrations and individuals. Establishing a Department of Homeland Security by statute will ensure legitimacy, authority, sustainability, and the appropriate accountability to Congress and the American people. The President's proposal calls for the creation of a Cabinet department with four divisions, including Chemical, Biological, Radiological, and Nuclear Countermeasures; Information Analysis and Infrastructure Protection; Border and Transportation Security; and Emergency Preparedness and Response. Table 1 shows the major components of the proposed department with associated budgetary estimates. The DHS would be responsible for coordination with other executive branch agencies involved in homeland security, including the Federal Bureau of Investigation and the Central Intelligence Agency. 
Additionally, the proposal to establish the DHS calls for coordination with nonfederal entities and directs the new Secretary to reach out to state and local governments and the private sector in order to: ensure that adequate and integrated planning, training, and exercises occur, and that first responders have the equipment they need; coordinate and, as appropriate, consolidate the federal government's communications systems relating to homeland security with state and local governments' systems; direct and supervise federal grant programs for state and local emergency response providers; and distribute or, as appropriate, coordinate the distribution of warnings and information to state and local government personnel, agencies and authorities, and the public. Many aspects of the proposed consolidation of homeland security programs are in line with previous recommendations and show promise toward reducing fragmentation and improving coordination. For example, the new department would consolidate federal programs for state and local planning and preparedness from several agencies and place them under a single organizational umbrella. Based on its prior work, GAO believes that the consolidation of some homeland security functions makes sense and will, if properly organized and implemented, over time lead to more efficient, effective, and coordinated programs, better intelligence sharing, and more robust protection of our people, borders, and critical infrastructure. However, as the Comptroller General has recently testified, implementation of the new department will be an extremely complex task; in the short term, the magnitude of the challenges the new department faces will clearly require substantial time and effort, and additional resources will be needed to make it effective. Further, some aspects of the new department, as proposed, may raise other concerns. 
As we reported on June 25, 2002, the new department would include public health assistance programs that have both basic public health and homeland security functions. These dual-purpose programs have important synergies that should be maintained but could be disrupted, as the President's proposal was not sufficiently clear on how both the homeland security and public health objectives would be accomplished. In addition, the recent proposal for establishing DHS should not be considered a substitute for, nor should it supplant, the timely issuance of a national homeland security strategy. At this time, a national homeland security strategy does not exist. Once developed, the national strategy should define and guide the roles and responsibilities of federal, state, and local entities, identify national performance goals and measures, and outline the selection and use of appropriate tools as the nation's response to the threat of terrorism unfolds. The new department will be a key player in the daunting challenge of defining the roles of the various actors within the intergovernmental system responsible for homeland security. In areas ranging from fire protection to drinking water to port security, the new threats are prompting a reassessment and shift of longstanding roles and responsibilities. However, proposed shifts in roles and responsibilities are being considered on a piecemeal and ad hoc basis without the benefit of an overarching framework and criteria to guide this process. A national strategy could provide such guidance by more systematically identifying the unique capacities and resources of each level of government and matching them to the job at hand. The proposed legislation provides for the new department to reach out to state and local governments and the private sector to coordinate and integrate planning, communications, information, and recovery efforts addressing homeland security. 
This is important recognition of the critical role played by nonfederal entities in protecting the nation from terrorist attacks. State and local governments play primary roles in performing functions that will be essential to effectively addressing our new challenges. Much attention has already been paid to their role as first responders in all disasters, whether caused by terrorist attacks or natural hazards. State and local governments also have roles to play in protecting critical infrastructure and providing public health and law enforcement response capability. Achieving national preparedness and response goals hinges on the federal government's ability to form effective partnerships with nonfederal entities. Therefore, federal initiatives should be conceived as national, not federal, in nature. Decisionmakers have to balance the national interest of prevention and preparedness with the unique needs and interests of local communities. A "one-size-fits-all" federal approach will not serve to leverage the assets and capabilities that reside within state and local governments and the private sector. By working collectively with state and local governments, the federal government gains the resources and expertise of the people closest to the challenge. For example, responsibility for protecting infrastructure such as water and transit systems lies first and most often with nonfederal levels of government. Just as partnerships offer opportunities, they also pose risks based upon the different interests reflected by each partner. From the federal perspective, there is the concern that state and local governments may not share the same priorities for use of federal funds. This divergence of priorities can result in state and local governments simply replacing ("supplanting") their own previous levels of commitment in these areas with the new federal resources. From the state and local perspective, engagement in federal programs opens them up to potential federal preemption and mandates. 
From the public's perspective, partnerships, if not clearly defined, risk blurring responsibility for the outcome of public programs. Our fieldwork at federal agencies and at local governments suggests a shift is potentially underway in the definition of roles and responsibilities among federal, state, and local governments, with far-reaching consequences for homeland security and accountability to the public. The challenges posed by the new threats are prompting officials at all levels of government to rethink longstanding divisions of responsibilities for such areas as fire services, local infrastructure protection, and airport security. The proposals on the table recognize that the unique scale and complexity of these threats call for a response that taps the resources and capacities of all levels of government as well as the private sector. In many areas, the proposals would impose a stronger federal presence in the form of new national standards or assistance. For instance, the Congress is debating proposals to mandate new vulnerability assessments and protective measures on local communities for drinking water facilities. Similarly, new federal rules have mandated local airport authorities to provide new levels of protection for security around airport perimeters. The block grant proposal for first responders would mark a dramatic upturn in the magnitude and role of the federal government in providing assistance and standards for fire service training and equipment. Although promising greater levels of protection than before, these shifts in roles and responsibilities have been developed on an ad hoc, piecemeal basis without the benefit of common criteria. An ad hoc process may not capture the real potential each actor in our system offers. Moreover, a piecemeal redefinition of roles risks the further fragmentation of the responsibility for homeland security within local communities, blurring lines of responsibility and accountability for results. 
While federal, state, and local governments all have roles to play, care must be taken to clarify who is responsible for what so that the public knows whom to contact to address their problems and concerns. The development of a national strategy provides a window of opportunity to more systematically identify the unique resources and capacities of each level of government and better match these capabilities to the particular tasks at hand. If developed in a collaborative fashion, such a strategy can also promote the participation, input, and buy-in of state and local partners whose cooperation is essential for success. Governments at the local level are also moving to rethink roles and responsibilities to address the unique scale and scope of the contemporary threats from terrorism. Numerous local general-purpose governments and special districts coexist within metropolitan regions and rural areas alike. Many regions are starting to assess how to restructure relationships among contiguous local entities to take advantage of economies of scale, promote resource sharing, and improve coordination of preparedness and response on a regional basis. For example, mutual aid agreements provide a structure for assistance and for sharing resources among jurisdictions in preparing for and responding to emergencies and disasters. Because individual jurisdictions may not have all the resources they need to acquire equipment and respond to all types of emergencies and disasters, these agreements allow for resources to be regionally distributed and quickly deployed. The terms of mutual aid agreements vary for different services and different localities. These agreements provide opportunities for state and local governments to share services, personnel, supplies, and equipment. We have found in our fieldwork that mutual aid agreements can be both formal and informal and provide for cooperative planning, training, and exercises in preparation for emergencies and disasters. 
Additionally, some of these agreements involve private companies and local military bases, as well as local entities. The proposed Department, in fulfilling its broad mandate, has the challenge of developing a performance focus. The nation does not have a baseline set of performance goals and measures upon which to assess and improve preparedness. The capability of state and local governments to respond to catastrophic terrorist attacks remains uncertain. The president's fiscal year 2003 budget proposal acknowledged that our capabilities for responding to a terrorist attack vary widely across the country. The proposal also noted that even the best prepared states and localities do not possess adequate resources to respond to the full range of terrorist threats we face. Given the need for a highly integrated approach to the homeland security challenge, performance measures may best be developed in a collaborative way involving all levels of government and the private sector. Proposed measures have been developed for state and local emergency management programs by a consortium of emergency managers from all levels of government and have been pilot tested in North Carolina and North Dakota. Testing at the local level is planned for fiscal year 2002 through the Emergency Management Accreditation Program (EMAP). EMAP is administered by the National Emergency Management Association--an association of directors of state emergency management departments--and funded by FEMA. Its purpose is to establish minimum acceptable performance criteria, by which emergency managers can assess and enhance current programs to mitigate, prepare for, respond to, and recover from disasters and emergencies. 
For example, one such standard is the requirement (1) that the program must develop the capability to direct, control, and coordinate response and recovery operations, (2) that an incident management system must be utilized, and (3) that organizational roles and responsibilities shall be identified in the emergency operational plans. In recent meetings, FEMA officials have said that EMAP is a step in the right direction towards establishing much needed national standards for preparedness. FEMA officials have suggested they plan on using EMAP as a building block for a set of much more stringent, quantifiable standards. Standards are being developed in other areas associated with homeland security. For example, the Coast Guard is developing performance standards as part of its port security assessment process. The Coast Guard is planning to assess the security condition of 55 U.S. ports over a 3-year period, and will evaluate the security of these ports against a series of performance criteria dealing with different aspects of port security. According to the Coast Guard's Acting Director of Port Security, it also plans to have port authority or terminal operators develop security plans based on these performance standards. Communications is an example of an area for which standards have not yet been developed, but various emergency managers and other first responders have continuously highlighted that standards are needed. State and local governments often report there are deficiencies in their communications capabilities, including the lack of interoperable systems. Additionally, FEMA's Director has stressed the importance of improving communications nationwide. The establishment of national measures for preparedness will not only go a long way towards assisting state and local entities determine successes and areas where improvement is needed, but could also be used as goals and performance measures as a basis for assessing the effectiveness of federal programs. 
At the federal level, measuring results for federal programs has been a longstanding objective of the Congress. The Congress enacted the Government Performance and Results Act of 1993 (commonly referred to as the Results Act). The legislation was designed to have agencies focus on the performance and results of their programs rather than on program resources and activities, as they had done in the past. Thus, the Results Act became the primary legislative framework through which agencies are required to set strategic and annual goals, measure performance, and report on the degree to which goals are met. The outcome-oriented principles of the Results Act include (1) establishing general goals and quantifiable, measurable, outcome-oriented performance goals and related measures; (2) developing strategies for achieving the goals, including strategies for overcoming or mitigating major impediments; (3) ensuring that goals at lower organizational levels align with and support general goals; and (4) identifying the resources that will be required to achieve the goals. However, FEMA has had difficulty in assessing program performance. As the president's fiscal year 2003 budget request acknowledges, FEMA generally performs well in delivering resources to stricken communities and disaster victims quickly. The agency performs less well in its oversight role of ensuring the effective use of such assistance. Further, the agency has not been effective in linking resources to performance information. FEMA's Office of Inspector General has found that FEMA did not have an ability to measure state disaster risks and performance capability, and it concluded that the agency needed to determine how to measure state and local preparedness programs. 
In the area of bioterrorism, the Centers for Disease Control and Prevention (CDC) within the Department of Health and Human Services is requiring state and local entities to meet certain performance criteria in order to qualify for grant funding. The CDC has made available 20 percent of the fiscal year 2002 funds for the cooperative agreement program to upgrade state and local public health jurisdictions' preparedness for and response to bioterrorism and other public health threats and emergencies. However, the remaining 80 percent of the available funds is contingent on receipt, review, and approval of a work plan that must contain 14 specific critical benchmarks. These include the preparation of a timeline for assessment of emergency preparedness and response capabilities related to bioterrorism, the development of a state-wide plan for responding to incidents of bioterrorism, and the development of a system to receive and evaluate urgent disease reports from all parts of their state and local public health jurisdictions on a 24-hour-a-day, 7-day-a-week basis. Performance goals and measures should be used to guide the nation's homeland security efforts. For the nation's homeland security programs, however, outcomes of where the nation should be in terms of domestic preparedness have yet to be defined. The national homeland security strategy, when developed, should contain such goals and measures and provide a framework for assessing program results. Given the recent and proposed increases in homeland security funding as well as the need for real and meaningful improvements in preparedness, establishing clear goals and performance measures is critical to ensuring both a successful and fiscally responsible effort. The choice and design of the policy tools the federal government uses to engage and involve other levels of government and the private sector in enhancing homeland security will have important consequences for performance and accountability. 
Governments have a variety of policy tools including grants, regulations, tax incentives, and information-sharing mechanisms to motivate or mandate other levels of government or the private sector to address security concerns. The choice of policy tools will affect sustainability of efforts, accountability and flexibility, and targeting of resources. The design of federal policy will play a vital role in determining success and ensuring that scarce federal dollars are used to achieve critical national goals. The federal government often uses grants to state and local governments as a means of delivering federal assistance. Categorical grants typically permit funds to be used only for specific, narrowly defined purposes. Block grants typically can be used by state and local governments to support a range of activities aimed at achieving a broad, national purpose and to provide a great deal of discretion to state and local officials. In designing grants, it is important to (1) target the funds to states and localities with the greatest need, based on the highest risk and the lowest capacity to meet these needs from their own resource base, (2) discourage the replacement of state and local funds with federal funds, commonly referred to as supplantation, with a maintenance-of-effort requirement that recipients maintain their level of previous funding, and (3) strike a balance between accountability and flexibility. At their best, grants can stimulate state and local governments to enhance their preparedness to address the unique threats posed by terrorism. Ideally, grants should stimulate higher levels of preparedness and avoid simply subsidizing local functions that are traditionally state or local responsibilities. One approach used in other areas is the "seed money" model in which federal grants stimulate initial state and local activity with the intent of transferring responsibility for sustaining support over time to state and local governments. 
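The targeting principle described above can be illustrated with a small sketch. The testimony does not specify any allocation formula, so everything here is hypothetical: the jurisdiction names, the risk and capacity scores, and the rule of weighting each jurisdiction's share by risk divided by fiscal capacity are invented solely to show how "highest risk, lowest capacity" targeting might work in practice.

```python
# Hypothetical illustration of a risk- and capacity-weighted grant formula.
# All names, scores, and the formula itself are invented for this sketch;
# the testimony does not prescribe an allocation method.

def allocate(total_funds, jurisdictions):
    """Split total_funds in proportion to risk / fiscal capacity."""
    weights = {name: risk / capacity
               for name, (risk, capacity) in jurisdictions.items()}
    total_weight = sum(weights.values())
    return {name: total_funds * w / total_weight
            for name, w in weights.items()}

# (risk score, fiscal capacity): higher risk and lower capacity earn a larger share.
jurisdictions = {
    "City A": (9.0, 3.0),   # high risk, low capacity -> largest share
    "City B": (4.0, 4.0),
    "City C": (2.0, 8.0),   # low risk, high capacity -> smallest share
}

# The $3.5 billion figure echoes the first-responder block grant proposal.
shares = allocate(3_500_000_000, jurisdictions)
for name, amount in sorted(shares.items()):
    print(f"{name}: ${amount:,.0f}")
```

A real formula would also need the maintenance-of-effort check discussed above, comparing each recipient's current own-source spending against its prior-year baseline before releasing funds.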
Recent funding proposals, such as the $3.5 billion block grant for first responders contained in the president's fiscal year 2003 budget, have included some of these provisions. This grant would be used by state and local governments to purchase equipment, train personnel, exercise, and develop or enhance response plans. FEMA officials have told us that the agency is still in the early stages of grant design and is in the process of holding various meetings and conferences to gain input from a wide range of stakeholders including state and local emergency management directors, local law enforcement responders, fire responders, health officials, and FEMA staff. Once the details of the grant have been finalized, it will be useful to examine the design to assess how well the grant will target funds, discourage supplantation, and provide the appropriate balance between accountability and flexibility, and whether it provides temporary "seed money" or represents a long-term funding commitment. Other federal policy tools can also be designed and targeted to elicit a prompt, adequate, and sustainable response. In the area of regulatory authority, federal, state, and local governments share authority for setting standards through regulations in several areas, including infrastructure and programs vital to preparedness (for example, transportation systems, water systems, public health). In designing regulations, key considerations include how to provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state and local authorities and between the public and private sectors. 
An example of an infrastructure regulation is the new federal mandate requiring that local drinking water systems in cities above a certain size provide a vulnerability assessment and a plan to remedy vulnerabilities as part of ongoing EPA reviews. The new Transportation Security Act is representative of a national preparedness regulation, as it grants the Department of Transportation authority to order deployment of local law enforcement personnel to provide perimeter access security at the nation's airports. In designing a regulatory approach, the challenges include determining who will set the standards and who will implement or enforce them. Several models of shared regulatory authority offer a range of approaches that could be used in designing standards for preparedness. Examples of these models range from preemption through fixed federal standards to state and local adoption of voluntary standards formulated by quasi-official or nongovernmental entities. As the Administration noted, protecting America's infrastructure is a shared responsibility of federal, state, and local government, in active partnership with the private sector, which owns approximately 85 percent of our nation's critical infrastructure. To the extent that private entities will be called upon to improve security over dangerous materials or to protect critical infrastructure, the federal government can use tax incentives to encourage such activities. Tax incentives are the result of special exclusions, exemptions, deductions, credits, deferrals, or tax rates in the federal tax laws. Unlike grants, tax incentives do not generally permit the same degree of federal oversight and targeting, and they are generally available by formula to all potential beneficiaries who satisfy congressionally established criteria. 
Since the events of September 11th, a task force of mayors and police chiefs has called for a new protocol governing how local law enforcement agencies can assist federal agencies, particularly the FBI, and be given the information needed to do so. As the U.S. Conference of Mayors noted, a close working partnership of local and federal law enforcement agencies, which includes the sharing of intelligence, will expand and strengthen the nation's overall ability to prevent and respond to domestic terrorism. The USA Patriot Act provides for greater sharing of intelligence among federal agencies. An expansion of this act has been proposed (S. 1615; H.R. 3285) that would provide for information sharing among federal, state, and local law enforcement agencies. In addition, the Intergovernmental Law Enforcement Information Sharing Act of 2001 (H.R. 3483), which you sponsored Mr. Chairman, addresses a number of information sharing needs. For instance, the proposed legislation provides that the Attorney General expeditiously grant security clearances to Governors who apply for them and to state and local officials who participate in federal counterterrorism working groups or regional task forces. The proposal to establish a new Department of Homeland Security represents an important recognition by the Administration and the Congress that much still needs to be done to improve and enhance the security of the American people. The DHS will clearly have a central role in the success of efforts to strengthen homeland security, but it is a role that will be made stronger within the context of a larger, more comprehensive and integrated national homeland security strategy. Moreover, given the unpredictable characteristics of terrorist threats, it is essential that the strategy be formulated at a national rather than federal level with specific attention given to the important and distinct roles of state and local governments. 
Accordingly, decisionmakers will have to balance the federal approach to promoting homeland security with the unique needs, capabilities, and interests of state and local governments. Such an approach offers the best promise for sustaining the level of commitment needed to address the serious threats posed by terrorism. This completes my prepared statement. I would be pleased to respond to any questions you or other members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-2834 or Paul Posner at (202) 512-9573. Other key contributors to this testimony include Matthew Ebert, Thomas James, Kristen Massey, David Laverny-Rafter, Yvonne Pufahl, Jack Schulze, and Amelia Shachoy. Homeland Security: Proposal for Cabinet Agency Has Merit, But Implementation Will Be Pivotal to Success. GAO-02-886T. Washington, D.C.: June 25, 2002. Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002. National Preparedness: Integrating New and Existing Technology and Information Sharing into an Effective Homeland Security Strategy. GAO-02-811T. Washington, D.C.: June 7, 2002. Homeland Security: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: April 11, 2002. Combating Terrorism: Enhancing Partnerships Through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: March 28, 2002. Homeland Security: Progress Made, More Direction and Partnership Sought. GAO-02-490T. Washington, D.C.: March 12, 2002. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. 
Homeland Security: Need to Consider VA's Role in Strengthening Federal Preparedness. GAO-02-145T. Washington, D.C.: October 15, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001.
Homeland Security: A Framework for Addressing the Nation's Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001.
Combating Terrorism: Intergovernmental Cooperation in the Development of a National Strategy to Enhance State and Local Preparedness. GAO-02-550T. Washington, D.C.: April 2, 2002.
Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002.
Combating Terrorism: Intergovernmental Partnership in a National Strategy to Enhance State and Local Preparedness. GAO-02-547T. Washington, D.C.: March 22, 2002.
Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002.
Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Combating Terrorism: Actions Needed to Improve DOD's Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001.
Combating Terrorism: Comments on H.R. 525 to Create a President's Council on Domestic Preparedness. GAO-01-555T. Washington, D.C.: May 9, 2001.
Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001.
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001.
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response. GAO-01-15. Washington, D.C.: March 20, 2001.
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000.
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training. GAO/NSIAD-00-64. Washington, D.C.: March 21, 2000.
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999.
Combating Terrorism: Observations on Growth in Federal Programs. GAO/T-NSIAD-99-181. Washington, D.C.: June 9, 1999.
Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs. GAO/NSIAD-99-151. Washington, D.C.: June 9, 1999.
Combating Terrorism: Use of National Guard Response Teams Is Unclear. GAO/NSIAD-99-110. Washington, D.C.: May 21, 1999.
Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999.
Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998.
Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program. GAO/T-NSIAD-99-16. Washington, D.C.: October 2, 1998.
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.
Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997.
Homeland Security: New Department Could Improve Coordination but May Complicate Public Health Priority Setting. GAO-02-883T. Washington, D.C.: June 25, 2002.
Bioterrorism: The Centers for Disease Control and Prevention's Role in Public Health Protection. GAO-02-235T. Washington, D.C.: November 15, 2001.
Bioterrorism: Review of Public Health and Medical Preparedness. GAO-02-149T. Washington, D.C.: October 10, 2001.
Bioterrorism: Public Health and Medical Preparedness. GAO-02-141T. Washington, D.C.: October 10, 2001.
Bioterrorism: Coordination and Preparedness. GAO-02-129T. Washington, D.C.: October 5, 2001.
Bioterrorism: Federal Research and Preparedness Activities. GAO-01-915. Washington, D.C.: September 28, 2001.
Chemical and Biological Defense: Improved Risk Assessments and Inventory Management Are Needed. GAO-01-667. Washington, D.C.: September 28, 2001.
West Nile Virus Outbreak: Lessons for Public Health Preparedness. GAO/HEHS-00-180. Washington, D.C.: September 11, 2000.
Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999.
Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/NSIAD-99-159. Washington, D.C.: August 16, 1999.
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives. GAO/T-NSIAD-99-112. Washington, D.C.: March 16, 1999.
Disaster Assistance: Improvement Needed in Disaster Declaration Criteria and Eligibility Assurance Procedures. GAO-01-837. Washington, D.C.: August 31, 2001.
The challenges posed by homeland security exceed the capacity and authority of any one level of government. Protecting the nation against these threats calls for a truly integrated approach, bringing together the resources of all levels of government. The proposed Department of Homeland Security will have a central role in efforts to enhance homeland security. The proposed consolidation of homeland security programs has the potential to reduce fragmentation, improve coordination, and clarify roles and responsibilities. However, formation of a department should not be considered a replacement for the timely issuance of a national homeland security strategy to guide implementation of the complex mission of the department. Appropriate roles and responsibilities within and between the government and private sector need to be clarified. New threats are prompting a reassessment and shifting of long-standing roles and responsibilities, but these shifts are being considered on a piecemeal and ad hoc basis without benefit of an overarching framework and criteria. A national strategy could provide guidance by more systematically identifying the unique capacities and resources at each level of government to enhance homeland security and by providing increased accountability within the intergovernmental system. The nation does not yet have performance goals and measures upon which to assess and improve preparedness and develop common criteria that can demonstrate success; promote accountability; and determine areas where resources are needed, such as improving communications and equipment interoperability. A careful choice of the most appropriate tools is critical to achieve and sustain national goals. The choice and design of policy tools, such as grants, regulations, and tax incentives, will enable all levels of government to target areas of highest risk and greatest need, promote shared responsibilities, and track and assess progress toward achieving preparedness goals.
The use of information technology (IT) to electronically collect, store, retrieve, and transfer clinical, administrative, and financial health information has great potential to help improve the quality and efficiency of health care. Historically, patient health information has been scattered across paper records kept by many different caregivers in many different locations, making it difficult for a clinician to access all of a patient's health information at the time of care. Lacking access to these critical data, a clinician may be challenged to make the most informed decisions on treatment options, potentially putting the patient's health at greater risk. The use of electronic health records can help provide this access and improve clinical decisions. Electronic health records are particularly crucial for optimizing the health care provided to military personnel and veterans. While in military status and later as veterans, many VA and DOD patients tend to be highly mobile and may have health records residing at multiple medical facilities within and outside the United States. Making such records electronic can help ensure that complete health care information is available for most military service members and veterans at the time and place of care, no matter where it originates. Although they have identified many common health care business needs, both departments have spent large sums of money to develop and operate separate electronic health record systems that they rely on to create and manage patient health information. VA uses its integrated medical information system--the Veterans Health Information Systems and Technology Architecture (VistA)--which was developed in-house by VA clinicians and IT personnel. 
The system consists of 104 separate computer applications, including 56 health provider applications; 19 management and financial applications; 8 registration, enrollment, and eligibility applications; 5 health data applications; and 3 information and education applications. Besides being numerous, these applications have been customized at all 128 VA sites. According to the department, this customization increases the cost of maintaining the system, as it requires that maintenance also be customized. In 2001, the Veterans Health Administration undertook an initiative to modernize VistA by standardizing patient data and modernizing the health information software applications. In doing so, its goal was to move from the hospital-centric environment that had long characterized the department's health care operations to a veteran-centric environment built on an open, robust systems architecture that would more efficiently provide the same functions and benefits as the existing system, as well as enhanced functions based on computable data. VA planned to take an incremental approach to the initiative, based on six phases (referred to as "blocks") that were to be completed in 2018. Under this strategy, the department planned to replace the 104 VistA applications that are currently in use with 67 applications, 3 databases, and 10 common services. VA reported spending almost $600 million from 2001 to 2007 on eight projects, including an effort that resulted in a repository containing selected standardized health data, as part of the effort to modernize VistA. In April 2008, the department estimated an $11 billion total cost to complete, by 2018, the modernization that was planned at that time. However, according to VA officials, the modernization effort was terminated in August 2010.
For its part, DOD relies on its Armed Forces Health Longitudinal Technology Application (AHLTA), which comprises multiple legacy medical information systems that the department developed from commercial software products that were customized for specific uses. For example, the Composite Health Care System (CHCS), which was formerly DOD's primary health information system, is still in use to capture information related to pharmacy, radiology, and laboratory order management. In addition, the department uses Essentris (also called the Clinical Information System), a commercial health information system customized to support inpatient treatment at military medical facilities. DOD obligated approximately $2 billion for AHLTA between 1997 and 2010. A key goal for sharing health information among providers, such as between VA's and DOD's health care systems, is achieving interoperability. Interoperability enables different information systems or components to exchange information and to use the information that has been exchanged. This capability allows patients' electronic health information to move with them from provider to provider, regardless of where the information originated. If electronic health records conform to interoperability standards, they can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers the necessary information required for optimal care. (Paper-based health records--if available--also provide necessary information, but unlike electronic health records, do not provide decision support capabilities, such as automatic alerts about a particular patient's health, or other advantages of automation.) Interoperability can be achieved at different levels. At the highest level, electronic data are computable (that is, in a format that a computer can understand and act on to, for example, provide alerts to clinicians on drug allergies). 
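The distinction drawn above between merely viewable data and computable data can be made concrete with a brief sketch. The record structure, field names, and drug names below are hypothetical illustrations only; they are not drawn from VistA, AHLTA, or any actual health data standard.

```python
# Illustrative sketch only: all fields and values below are hypothetical.
# Because the allergy information is stored as structured (computable) data,
# a program can match it against a new prescription and generate an alert,
# rather than relying on a clinician to spot it in free text.

def check_drug_allergies(patient_record, prescribed_drug):
    """Return alert messages for any recorded allergy to the prescribed drug."""
    alerts = []
    for allergy in patient_record.get("allergies", []):
        if allergy["substance"].lower() == prescribed_drug.lower():
            alerts.append(
                "ALERT: patient is allergic to %s (reaction: %s)"
                % (prescribed_drug, allergy["reaction"])
            )
    return alerts

# A computable record: discrete fields a program can act on directly.
record = {
    "patient_id": "12345",
    "allergies": [{"substance": "penicillin", "reaction": "hives"}],
}

print(check_drug_allergies(record, "penicillin"))
```

Had the same allergy been recorded only as unstructured narrative text (or on paper), a clinician could still read it, but no automatic alert of this kind would be possible, which is what distinguishes the lower interoperability levels discussed here.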
At a lower level, electronic data are structured and viewable, but not computable. The value of data at this level is that they are structured so that data of interest to users are easier to find. At a still lower level, electronic data are unstructured and viewable, but not computable. With unstructured electronic data, a user would have to find needed or relevant information by searching uncategorized data. Beyond these, paper records can also be considered interoperable (at the lowest level) because they allow data to be shared, read, and interpreted by human beings. Since 1998, VA and DOD have relied on a patchwork of initiatives involving their health information systems to achieve electronic health record interoperability. These have included efforts to: share viewable data in existing (legacy) systems; link and share computable data between the departments' modernized health data repositories; establish interoperability objectives to meet specific data-sharing needs; develop a virtual lifetime electronic health record to track patients through active service and veteran status; and implement IT capabilities for the first joint federal health care center. While, collectively, these initiatives have yielded increased data-sharing in various capacities, a number of them have nonetheless been plagued by persistent management challenges, which have created barriers to achieving the fully interoperable electronic health record capabilities long sought. Among the departments' earliest efforts to achieve interoperability was the Government Computer-Based Patient Record (GCPR) initiative, which was begun in 1998 with the intent of providing an electronic interface that would allow physicians and other authorized users of VA's and DOD's health facilities to access data from either of the other agency's health facilities. 
The interface was expected to compile requested patient health information in a temporary, "virtual" record that could be displayed on a user's computer screen. However, in reporting on this initiative in April 2001, we found that accountability for GCPR was blurred across several management entities and that basic principles of sound IT project planning, development, and oversight had not been followed, thus creating barriers to progress. For example, clear goals and objectives had not been set; detailed plans for the design, implementation, and testing of the interface had not been developed; and critical decisions were not binding on all partners. While both departments concurred with our recommendations that they, among other things, create comprehensive and coordinated plans for the effort, progress on the initiative continued to be disappointing. The departments subsequently revised the strategy for GCPR and, in May 2002, narrowed the scope of the initiative to focus on enabling DOD to electronically transfer service members' electronic health information to VA upon their separation from active duty. The initiative--renamed the Federal Health Information Exchange (FHIE)--was completed in 2004. Building on the architecture and framework of FHIE, VA and DOD also established the Bidirectional Health Information Exchange (BHIE) in 2004, which was aimed at allowing clinicians at both departments viewable access to records on shared patients (that is, those who receive care from both departments, such as veterans who receive outpatient care from VA clinicians and then are hospitalized at a military treatment facility). The interface also enabled DOD sites to see previously inaccessible data at other DOD sites.
Further, in March 2004, the departments began an effort to develop an interface linking VA's Health Data Repository and DOD's Clinical Data Repository, as part of a long-term initiative to achieve the two-way exchange of health information between the departments' modernized systems--known as CHDR. The departments had planned to be able to exchange selected health information through CHDR by October 2005. However, in June 2004, we reported that the efforts of VA and DOD in this area demonstrated a number of management weaknesses. Among these were the lack of a well-defined architecture for describing the interface for a common health information exchange; an established project management lead entity and structure to guide the investment in the interface and its implementation; and a project management plan defining the technical and managerial processes necessary to satisfy project requirements. Accordingly, we recommended that the departments address these weaknesses, and they agreed to do so. In September 2005, we testified that the departments had improved the management of the CHDR program, but that this program continued to face significant challenges--in particular, with developing a project management plan of sufficient specificity to be an effective guide for the program. In a subsequent testimony, in June 2006, we noted that the project did not meet a previously established milestone: to be able to exchange outpatient pharmacy data, laboratory results, allergy information, and patient demographic information on a limited basis by October 2005. By September 2006, the departments had taken actions which ensured that the CHDR interface linked the departments' separate repositories of standardized data to enable a two-way exchange of computable outpatient pharmacy and medication allergy information. Nonetheless, we noted that the success of CHDR would depend on the departments instituting a highly disciplined approach to the project's management. 
To increase the exchange of electronic health information between the two departments, the National Defense Authorization Act (NDAA) for Fiscal Year 2008 included provisions directing VA and DOD to jointly develop and implement, by September 30, 2009, fully interoperable electronic health record systems or capabilities. To facilitate compliance with the act, the departments' Interagency Clinical Informatics Board, made up of senior clinical leaders who represent the user community, began establishing priorities for interoperable health data between VA and DOD. In this regard, the board was responsible for determining clinical priorities for electronic data sharing between the departments, as well as what data should be viewable and what data should be computable. Based on its work, the board established six interoperability objectives for meeting the departments' data-sharing needs:

* Refine social history data: DOD was to begin sharing with VA the social history data that are currently captured in the DOD electronic health record. Such data describe, for example, patients' involvement in hazardous activities and tobacco and alcohol use.

* Share physical exam data: DOD was to provide an initial capability to share with VA its electronic health record information that supports the physical exam process when a service member separates from active military duty.

* Demonstrate initial network gateway operation: VA and DOD were to demonstrate the operation of secure network gateways to support joint VA-DOD health information sharing.

* Expand questionnaires and self-assessment tools: DOD was to provide all periodic health assessment data stored in its electronic health record to VA such that questionnaire responses are viewable with the questions that elicited them.

* Expand Essentris in DOD: DOD was to expand its inpatient medical records system (CliniComp's Essentris product suite) to at least one additional site in each military medical department (one Army, one Air Force, and one Navy, for a total of three sites).

* Demonstrate initial document scanning: DOD was to demonstrate an initial capability for scanning service members' medical documents into its electronic health record and sharing the documents electronically with VA.

The departments asserted that they took actions that met the six objectives and, in conjunction with capabilities previously achieved (e.g., FHIE, BHIE, and CHDR), had met the September 30, 2009, deadline for achieving full interoperability as required by the act. Nonetheless, the departments planned additional work to further increase their interoperable capabilities, stating that these actions reflected the departments' recognition that clinicians' needs for interoperable electronic health records are not static. In this regard, the departments focused on additional efforts to meet clinicians' evolving needs for interoperable capabilities in the areas of social history and physical exam data, expanding implementation of Essentris, and additional testing of document scanning capabilities. Even with these actions, however, we identified a number of challenges the departments faced in managing their efforts in response to the 2008 NDAA. Specifically, we identified challenges with respect to performance measurement, project scheduling, and planning. For example, in a January 2009 report, we noted that the departments' key plans did not identify results-oriented (i.e., objective, quantifiable, and measurable) performance goals and measures that are characteristic of effective planning and can be used as a basis to track and assess progress toward the delivery of new interoperable capabilities.
We pointed out that without establishing results-oriented goals and reporting progress using measures relative to the established goals, the departments and their stakeholders would not have the comprehensive picture that they need to effectively manage their progress toward achieving increased interoperability. Accordingly, we recommended that DOD and VA take action to develop such goals and performance measures to be used as a basis for providing meaningful information on the status of the departments' interoperability initiatives. In response, the departments stated that such goals and measures would be included in the next version of the VA/DOD Joint Executive Council Joint Strategic Plan (known as the joint strategic plan). However, that plan was not approved until April 2010, 7 months after the departments asserted they had met the deadline for achieving full interoperability. In addition to its provisions directing VA and DOD to jointly develop fully interoperable electronic health records, the 2008 NDAA called for the departments to set up an Interagency Program Office (IPO) to be accountable for their efforts to implement these capabilities by the September deadline. Accordingly, in January 2009, the office completed its charter, articulating, among other things, its mission and functions with respect to attaining interoperable electronic health data. The charter further identified the office's responsibilities in carrying out its mission in areas such as oversight and management, stakeholder communication, and decision making. Among the specific responsibilities identified in the charter was the development of a plan, schedule, and performance measures to guide the departments' electronic health record interoperability efforts. 
In July 2009, we reported that the IPO had not fulfilled key management responsibilities identified in its charter, such as the development of an integrated master schedule and a project plan for the department's efforts to achieve full interoperability. Without these important tools, the office was limited in its ability to effectively manage and provide meaningful progress reporting on the delivery of interoperable capabilities. We recommended that the IPO establish a project plan and a complete and detailed integrated master schedule. In response to our recommendation, the office began to develop an integrated master schedule and project plan that included information about its ongoing interoperability activities. It is important to note, however, that in testifying before this committee in July 2011, the office's former Director stated that the IPO charter established a modest role for the office, which did not allow the office to be the single point of accountability for the development and implementation of interoperable electronic health records. Instead, the office served the role of coordination and oversight for the departments' efforts. Additionally, as pointed out by this official, control of the budget, contracts, and technical development remained with VA and DOD. As a result, each department had continued to pursue separate strategies and implementation paths, rather than coming together to build a unified, interoperable approach. In another attempt at furthering efforts to increase electronic health record interoperability, in April 2009, the President announced that VA and DOD would work together to define and build the Virtual Lifetime Electronic Record (VLER) to streamline the transition of electronic medical, benefits, and administrative information between the two departments. VLER is intended to enable access to all electronic records for service members as they transition from military to veteran status, and throughout their lives. 
Further, the initiative is to expand the departments' health information sharing capabilities by enabling access to private sector health data. Shortly after the April 2009 announcement, VA, DOD, and the IPO began working to define and plan for the initiative. In June 2009, the departments adopted a phased implementation strategy consisting of a series of 6-month pilot projects to deploy a set of health data exchange capabilities between existing electronic health record systems at local sites around the country. Each VLER pilot project was intended to build upon the technical capabilities of its predecessor, resulting in a set of baseline capabilities to inform project planning and guide the implementation of VLER nationwide. The first pilot, which started in August 2009, in San Diego, California, resulted in VA, DOD, and Kaiser Permanente being able to share a limited set of test patient data. Subsequently, between March 2010 and January 2011, VA and DOD conducted another pilot in the Tidewater area of southeastern Virginia, which focused on sharing the same data as the San Diego pilot plus additional laboratory data. The departments planned additional pilots, with the goal of deploying VLER nationwide at or before the end of 2012. In June 2010, DOD informed us that it planned to spend $33.6 million in fiscal year 2010, and $61.9 million in fiscal year 2011 on the initiative. Similarly, VA stated that it planned to spend $23.5 million in fiscal year 2010, and had requested $52 million for fiscal year 2011. 
However, in a February 2011 report on the departments' efforts to address their common health IT needs, we noted that although VA and DOD identified a high-level approach for implementing VLER and designated the IPO as the single point of accountability for the effort, they had not developed a comprehensive plan identifying the target set of capabilities that they intended to demonstrate in the pilot projects and then implement on a nationwide basis at all domestic VA and DOD sites by the end of 2012. Moreover, the departments conducted VLER pilot projects without attending to key planning activities that are necessary to guide the initiative. For example, as of February 2011, the IPO had not developed an approved integrated master schedule, master program plan, or performance metrics for the VLER initiative, as outlined in the office's charter. We noted that if the departments did not address these issues, their ability to effectively deliver capabilities to support their joint health IT needs would be uncertain. We recommended that the Secretaries of VA and DOD strengthen their ongoing efforts to establish VLER by developing plans that include scope definition, cost and schedule estimation, and project plan documentation and approval. Officials from both departments agreed with the recommendation, and we are monitoring their actions toward implementing it. Nevertheless, the departments were not successful in meeting their goal of implementing VLER nationwide by the end of 2012. VA and DOD also continued their efforts to share health information and resources in 2010 following congressional authorization of a 5-year demonstration project to more fully integrate the two departments' facilities that were located in proximity to one another in the North Chicago, Illinois, area.
As authorized by the National Defense Authorization Act for fiscal year 2010, VA and DOD facilities in and around North Chicago were integrated into a first-of-its-kind system known as the Captain James A. Lovell Federal Health Care Center (FHCC). The FHCC is unique in that it is to be the first fully integrated federal health care center for use by both VA and DOD beneficiaries, with an integrated workforce, a joint funding source, and a single line of governance. In April 2010, the Secretaries of VA and DOD signed an Executive Agreement that established the FHCC and defined the relationship between the two departments for operating the new, integrated facility, in accordance with the 2010 NDAA. Among other things, the Executive Agreement specified three key IT capabilities that VA and DOD were required to have in place by the FHCC's opening day, in October 2010, to facilitate interoperability of their electronic health record systems: medical single sign-on, which would allow staff to use one screen to access both the VA and DOD electronic health record systems; single patient registration, which would allow staff to register patients in both systems simultaneously; and orders portability, which would allow VA and DOD clinicians to place, manage, and update clinical orders from either department's electronic health records systems for radiology, laboratory, consults (specialty referrals), and pharmacy services. However, in a February 2011 report that identified improvements the departments could make to the FHCC effort, we noted that project planning for the center's IT capabilities was incomplete. We specifically noted that the departments had not defined the project scope in a manner that identified all detailed activities. Consequently, they were not positioned to reliably estimate the project cost or establish a baseline schedule that could be used to track project performance. 
Based on these findings, we expressed concern that VA and DOD had jeopardized their ability to fully and expeditiously provide the FHCC's needed IT system capabilities. We recommended that the Secretaries of VA and DOD strengthen their efforts to establish the joint IT system capabilities for the FHCC by developing plans that included scope definition, cost and schedule estimation, and project plan documentation and approval. Although officials from both departments stated agreement with our recommendation, the departments' actions were not sufficient to preclude delays in delivering the FHCC's IT system capabilities, as we subsequently described in July 2011 and June 2012. Specifically, our 2011 report noted that none of the three IT capabilities had been implemented by the time of the FHCC's opening, as required by the Executive Agreement; however, FHCC officials reported that the medical single sign-on and single patient registration capabilities subsequently became operational in December 2010. In June 2012, we again reported on the departments' efforts to implement the FHCC's required IT capabilities, and found that portions of the orders portability capability--related to the pharmacy and consults components--remained delayed. VA and DOD officials described workarounds that the departments had implemented as a result of the delays, but did not have a timeline for completion of the pharmacy component, and estimated completion of the consults component by March 2013. The officials reported that as of March 2012, the departments had spent about $122 million on developing and implementing IT capabilities at the FHCC. However, they were unable to quantify the total cost for all the workarounds resulting from delayed IT capabilities. 
Beyond the aforementioned initiatives, in March 2011 the Secretaries of VA and DOD committed the two departments to developing a new common integrated electronic health record (iEHR), and in May 2012 announced their goal of implementing it across the departments by 2017. According to the departments, the decision to pursue iEHR would enable VA and DOD to align resources and investments with common business needs and programs, resulting in a platform that would replace the two departments' electronic health record systems with a common system. In addition, because it would involve both departments using the same system, this approach would largely sidestep the challenges they have encountered in trying to achieve interoperability between separate systems. To oversee this new effort, in October 2011, the IPO was re-chartered and given authority to expand its staffing level and provided with new authorities under the charter, including control over the budget. According to IPO officials, the office was expected to have a staff of 236 personnel--more than 7 times the number of staff originally allotted to the office by VA and DOD--when hiring under the charter was completed. However, IPO officials told us that, as of January 2013, the office was staffed at approximately 62 percent and that hiring additional staff remained one of its biggest challenges. Earlier this month, the Secretaries of VA and DOD announced that instead of developing a new common integrated electronic health record system, the departments would now focus on integrating health records from separate VA and DOD systems, while working to modernize their existing electronic health record systems. VA has stated that it will continue to modernize VistA while pursuing the integration of health data, while DOD has stated that it plans to evaluate whether it will adopt VistA or purchase a commercial off-the-shelf product. 
The Secretaries offered several reasons for this new direction, including cutting costs, simplifying the problem of integrating VA and DOD health data, and meeting the needs of veterans and service members sooner rather than later. The numerous challenges that the departments have faced in past efforts to achieve full interoperability between their existing health information systems heighten longstanding concerns about whether this latest initiative will be successful. We have ongoing work--undertaken at the request of the Chairman and Ranking Member of the Senate Committee on Veterans Affairs--to examine VA's and DOD's decisions and activities related to this endeavor. VA's and DOD's revised approach to developing iEHR highlights the need for the departments to address barriers they have faced in key IT management areas. Specifically, in a February 2011 report, we highlighted barriers that the departments faced to jointly addressing their common health care system needs in the areas of strategic planning, enterprise architecture, and investment management. In particular, the departments had not articulated explicit plans, goals, and time frames for jointly addressing the health IT requirements common to both departments' electronic health record systems, and their joint strategic plan did not discuss how or when they propose to identify and develop joint solutions to address their common health IT needs. In addition, although DOD and VA had taken steps toward developing and maintaining artifacts related to a joint health architecture (i.e., a description of business processes and supporting technologies), the architecture was not sufficiently mature to guide the departments' joint health IT modernization efforts. 
Further, the departments had not established a joint process for selecting IT investments based on criteria that consider cost, benefit, schedule, and risk elements, limiting their ability to pursue joint health IT solutions that both meet their needs and provide better value and benefits to the government as a whole. We noted that without having these key IT management capabilities in place, the departments would continue to face barriers to identifying and implementing IT solutions that addressed their common needs. In our report, we identified several actions that the Secretaries of Defense and Veterans Affairs could take to overcome these barriers, including the following:
* Revise the departments' joint strategic plan to include information discussing their electronic health record system modernization efforts and how those efforts will address the departments' common health care business needs.
* Further develop the departments' joint health architecture to include their planned future state and transition plan from their current state to the next generation of electronic health record capabilities.
* Define and implement a process, including criteria that consider costs, benefits, schedule, and risks, for identifying and selecting joint IT investments to meet the departments' common health care business needs.
Officials from both VA and DOD agreed with these recommendations, and we have been monitoring their actions toward implementing them. Nonetheless, important work remains, and it takes on increased urgency in light of the departments' revised approach to developing the iEHR. For example, with respect to planning, the departments' joint strategic plan does not describe the new approach to how the departments will address their common health care business needs. 
Regarding architecture, in February 2012, the departments established the Health Architecture Review Board to provide architecture oversight, approval, and decision support for joint VA and DOD health information technology programs. While the board has generally met monthly since May 2012 and has been working to establish mechanisms for overseeing architecture activities, the extent to which the departments' revised approach to iEHR is guided by a joint health architecture remains to be seen. With regard to defining a process for identifying and selecting joint investments, the departments have established such a governance structure, but the effectiveness of this structure has not yet been demonstrated. In particular, the departments have not yet demonstrated the extent to which criteria that consider costs, benefits, schedule, and risks have been or will be used to identify and select planned investments. In summary, while VA and DOD have made progress in increasing interoperability between their health information systems over the past 15 years, these efforts have faced longstanding challenges. In large part, these have been the result of inadequate program management and accountability. In particular, there has been a persistent absence of clearly defined, measurable goals and metrics, together with associated plans and time frames, that would enable the departments to report progress in achieving full interoperability. Moreover, the Integrated Program Office has not functioned as it was intended--as a single point of accountability for efforts to implement fully interoperable electronic health record systems or capabilities. The 2011 decision to develop a single, integrated electronic health record system to be used across both departments could have avoided or mitigated some of these challenges. 
However, the more recent decision to reverse course and continue to operate separate systems and develop additional interoperable capabilities raises concern in light of historical challenges. Further, although the departments have asserted that their now planned approach will deliver capabilities sooner and at lower cost, deficiencies in key IT management areas of strategic planning, enterprise architecture, and investment management could continue to stand in the way of VA's and DOD's attempts to jointly address their common health care system needs in the most efficient and effective manner. Chairman Miller, Ranking Member Michaud, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions that you may have. If you have any questions concerning this statement, please contact Valerie C. Melvin, Director, Information Management and Technology Resources Issues, at (202) 512-6304 or [email protected]. Other individuals who made key contributions include Mark T. Bird, Assistant Director; Heather A. Collins; Kelly R. Dodson; Lee A. McCracken; Umesh Thakkar; and Eric L. Trout. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
VA and DOD operate two of the nation's largest health care systems--systems that serve populations of veterans and active service members and their dependents. To better serve these populations, VA and DOD have been collaborating for about 15 years on a variety of initiatives to share data among the departments' health information systems. The use of IT to electronically collect, store, retrieve, and transfer such data has the potential to improve the quality and efficiency of health care. Particularly important in this regard is developing electronic health records that can be accessed throughout a patient's transition from military to veteran status. Making such information electronic can ensure greater availability of health care information for service members and veterans at the time and place of care. Although they share many common business needs, both VA and DOD have spent large sums of money to develop and maintain separate electronic health record systems that they use to create and manage patient health information. GAO was asked to testify on (1) the departments' efforts, and challenges faced, in electronically sharing health information and (2) the recent change in their approach to developing an integrated electronic health record. In preparing this statement, GAO relied primarily on previously published work in this area. The Departments of Veterans Affairs (VA) and Defense (DOD) have undertaken a number of patchwork efforts over the past 15 years to achieve interoperability (i.e., the ability to share data) of records between their information systems; however, these efforts have faced persistent challenges. The departments' early efforts to achieve interoperability included enabling DOD to electronically transfer service members' electronic health information to VA; allowing clinicians at both departments viewable access to records on shared patients; and developing an interface linking the departments' health data repositories. 
As GAO reported, however, several of these efforts were plagued by project planning and management weaknesses, inadequate accountability, and poor oversight, limiting their ability to realize full interoperability. To further expedite data sharing, the National Defense Authorization Act of 2008 directed VA and DOD to jointly develop and implement fully interoperable electronic health record capabilities by September 30, 2009. The departments asserted that they met this goal, though they planned additional work to address clinicians' evolving needs. GAO identified weaknesses in the departments' management of these initiatives, such as a lack of defined performance goals and measures that would provide a comprehensive picture for managing progress. In addition, the departments' Interagency Program Office, which was established to be a single point of accountability for electronic health data sharing, had not fulfilled key management responsibilities. In 2009, the departments began work on the Virtual Lifetime Electronic Record initiative to enable access to all electronic records for service members transitioning from military to veteran status, and throughout their lives. To carry this out, the departments initiated several pilot programs but had not defined a comprehensive plan that defined the full scope of the effort or its projected cost and schedule. Further, in 2010, VA and DOD established a joint medical facility that was, among other things, to have certain information technology (IT) capabilities to facilitate interoperability of the departments' electronic health record systems. Deployment of these capabilities was delayed, however, and some have yet to be implemented. In 2011, the VA and DOD Secretaries committed to developing a new common integrated electronic health record system, with a goal of implementing it across the departments by 2017. 
This approach would largely sidestep the challenges in trying to achieve interoperability between separate systems. However, in February 2013, the Secretaries announced that the departments would focus on modernizing their existing systems, rather than developing a single system. They cited cost savings and meeting needs sooner rather than later as reasons for this decision. Given the long history of challenges in achieving interoperability, this reversal of course raises concerns about the departments' ability to successfully collaborate to share electronic health information. Moreover, GAO has identified barriers to the departments jointly addressing their common needs arising from deficiencies in key IT management areas, which could continue to jeopardize their pursuits. GAO is monitoring the departments' progress in overcoming these barriers and has additional ongoing work to evaluate their activities to develop integrated electronic health record capabilities. Since 2001, GAO has made numerous recommendations to improve VA's and DOD's management of their efforts to share health information.
Our preliminary results indicate that, in the absence of RAMP, FPS currently is not assessing risk at the over 9,000 federal facilities under the custody and control of GSA in a manner consistent with federal standards such as NIPP's risk management framework, as FPS originally planned. According to this framework, to be considered credible a risk assessment must specifically address the three components of risk: threat, vulnerability, and consequence. As a result, FPS has accumulated a backlog of federal facilities that have not been assessed for several years. According to FPS data, more than 5,000 facilities were to be assessed in fiscal years 2010 through 2012. However, we were not able to determine the extent of the FSA backlog because we found FPS's FSA data to be unreliable. Specifically, our analysis of FPS's December 2011 assessment data showed nearly 800 (9 percent) of the approximately 9,000 federal facilities did not have a date for when the last FSA was completed. We have reported that timely and comprehensive risk assessments play a critical role in protecting federal facilities by helping decision makers identify and evaluate potential threats so that countermeasures can be implemented to help prevent or mitigate the facilities' vulnerabilities. Although FPS is not currently assessing risk at federal facilities, FPS officials stated that the agency is taking steps to ensure federal facilities are safe. According to FPS officials, its inspectors (also referred to as law enforcement security officers) monitor the security posture of federal facilities by responding to incidents, testing countermeasures, and conducting guard post inspections. 
In addition, since September 2011, FPS's inspectors have collected information--such as location, purpose, agency contacts, and current countermeasures (e.g., perimeter security, access controls, and closed-circuit television systems) at over 1,400 facilities--which will be used as a starting point to complete FPS's fiscal year 2012 assessments. However, FPS officials acknowledged that this approach is not consistent with NIPP's risk management framework. Moreover, several FPS inspectors told us that they received minimal training or guidance on how to collect this information, and expressed concern that the facility information collected could become outdated by the time it is used to complete an FSA. We reported in February 2012 that multiple federal agencies have been expending additional resources to conduct their own risk assessments, in part because they have not been satisfied with FPS's past assessments. These assessments are taking place even though, according to FPS's Chief Financial Officer, FPS received $236 million in basic security fees from federal agencies to conduct FSAs and other security services in fiscal year 2011. For example, officials we spoke with at the Internal Revenue Service, Federal Emergency Management Agency, Environmental Protection Agency and the U.S. Army Corps of Engineers stated that they conduct their own risk assessments. GSA is also expending additional resources to assess risk. We reported in October 2010 that GSA officials did not always receive timely FPS risk assessments for facilities GSA considered leasing. GSA seeks to have these assessments completed before it takes possession of a property and leases it to tenant agencies. However, our preliminary work indicates that as of June 2012, FPS has not coordinated with GSA and other federal agencies to reduce or prevent duplication of its assessments. 
In September 2011, FPS signed an interagency agreement with Argonne National Laboratory for about $875,000 to develop an interim tool for conducting vulnerability assessments by June 30, 2012. According to FPS officials, on March 30, 2012, Argonne National Laboratory delivered this tool, called the Modified Infrastructure Survey Tool (MIST), to FPS on time and within budget. MIST is an interim vulnerability assessment tool that FPS plans to use until it can develop a permanent solution to replace RAMP. According to MIST project documents and FPS officials, among other things, MIST will: allow FPS's inspectors to review and document a facility's security posture, current level of protection, and recommend countermeasures; provide FPS's inspectors with a standardized way for gathering and recording facility data; and allow FPS to compare a facility's existing countermeasures against the Interagency Security Committee's (ISC) countermeasure standards based on the ISC's predefined threats to federal facilities (e.g., blast-resistant windows for a facility designed to counter the threat of an explosive device) to create the facility's vulnerability report. According to FPS officials, MIST will provide several potential improvements over FPS's prior assessment tools, such as using a standard way of collecting facility information and allowing edits to GSA's facility data when FPS inspectors find it is inaccurate. In addition, according to FPS officials, after completing a MIST vulnerability assessment, inspectors will use additional threat information gathered outside of MIST by FPS's Threat Management Division as well as local crime statistics to identify any additional threats and generate a threat assessment report. FPS plans to provide the facility's threat and vulnerability reports along with any countermeasure recommendations to the federal tenant agencies. 
In May 2012, FPS began training inspectors on MIST and how to use the threat information obtained outside MIST and expects to complete the training by the end of September 2012. According to FPS officials, inspectors will be able to use MIST once they have completed training and a supervisor has determined, based on professional judgment, that the inspector is capable of using MIST. At that time, an inspector will be able to use MIST to assess level I or II facilities. According to FPS officials, once these assessments are approved, FPS will subsequently determine which level III and IV facilities the inspector may assess with MIST. Our preliminary analysis indicates that in developing MIST, FPS increased its use of GAO's project management best practices, including alternatives analysis, managing requirements, and conducting user acceptance testing. For example, FPS completed, although it did not document, an alternatives analysis prior to selecting MIST as an interim tool to replace RAMP. It appears that FPS also better managed MIST's requirements. Specifically, FPS's Director required that MIST be an FSA-exclusive tool and thus helped avoid changes in requirements that could have resulted in cost or schedule increases during development. In March 2012, FPS completed user acceptance testing of MIST with some inspectors and supervisors, as we recommended in 2011. According to FPS officials, user feedback on MIST from the user acceptance test was positive, and MIST produced the necessary output for FPS's FSA process. However, FPS did not obtain GSA or federal tenant agencies' input in developing MIST's requirements. Without this input, FPS's customers may not receive the information they need to make well-informed countermeasure decisions. FPS has yet to decide what tool, if any, will replace MIST, which is intended to be an interim vulnerability assessment tool. According to FPS officials, the agency plans to use MIST for at least the next 18 months. 
Consequently, until FPS decides what tool, if any, will replace MIST and RAMP, it will still not be able to assess risk at federal facilities in a manner consistent with NIPP, as we previously mentioned. Our preliminary work suggests that MIST has several limitations: Assessing Consequence. FPS did not design MIST to estimate consequence, a critical component of a risk assessment. Assessing consequence is important because it combines vulnerability and threat information to evaluate the potential effects of an adverse event on a federal facility. Three of the four risk assessment experts we spoke with generally agreed that a tool that does not estimate consequences does not allow an agency to fully assess the risks to a federal facility. However, FPS officials stated that incorporating consequence information into an assessment tool is a complex task. FPS officials stated that they did not include consequence assessment in MIST's design because it would have required additional time to develop, validate, and test MIST. As a result, while FPS may be able to identify a facility's vulnerabilities to different threats using MIST, without consequence information, federal tenant agencies may not be able to make fully informed decisions about how to allocate resources to best protect federal facilities. FPS officials do not know if this capability can be developed in the future, but they said that they are working with the ISC and DHS's Science and Technology Directorate to explore the possibility. Comparing Risk across Federal Facilities. FPS did not design MIST to present comparisons of risk assessment results across federal facilities. Consequently, FPS cannot take a comprehensive approach to managing risk across its portfolio of 9,000 facilities to prioritize recommended countermeasures to federal tenant agencies. 
Instead, FPS takes a facility-by-facility approach to risk management, where all facilities with the same security level are assumed to have the same security risk, regardless of their location. We reported in 2010 that FPS's approach to risk management provides limited assurance that the most critical risks at federal facilities across the country are being prioritized and mitigated. FPS recognized the importance of having such a comprehensive approach to its FSA program when it developed RAMP, and FPS officials stated that they may develop this capability for the next version of MIST. Measuring Performance. FPS has not developed metrics to measure MIST's performance, such as feedback surveys from tenant agencies. Measuring performance allows organizations to track progress toward their goals and gives managers critical information on which to base decisions for improving their programs. This is a necessary component of effective management and should provide agency managers with timely, action-oriented information. Without such metrics, FPS's ability to improve MIST will be hampered. FPS officials stated that they are planning to develop performance measures for MIST, but did not give a time frame for when they will do so. Although we identified these challenges in 2011, FPS did not stop using RAMP for guard oversight until June 2012, when the RAMP operations and maintenance contract was due to expire. (See GAO, Homeland Security: The Federal Protective Service Faces Several Challenges That Hamper its Ability to Protect Federal Facilities, GAO-08-683 (Washington, D.C.: June 11, 2008).) In the absence of RAMP, in June 2012, FPS decided to deploy an interim method to enable inspectors to record post inspections. FPS officials said this capability is separate from MIST, will not allow FPS to generate post inspection reports, and does not include a way for FPS inspectors to check guard training and certification data during a post inspection. 
FPS officials acknowledged that this method is not a comprehensive system for guard oversight. Consequently, it is now more difficult for FPS to verify that guards on post are trained and certified and that inspectors are conducting guard post inspections as required. Although FPS collects guard training and certification information from the companies that provide contract guards, it appears that FPS does not independently verify that information. FPS currently requires its guard contractors to maintain their own files containing guard training and certification information and began requiring them to submit a monthly report with this information to FPS's regions in July 2011. To verify the guard companies' reports, FPS conducts monthly audits. As part of its monthly audit process, FPS's regional staff visits the contractor's office to select 10 percent of the contractor's guard files and check them against the reports guard companies send FPS each month. In addition, in October 2011, FPS undertook a month-long audit of every guard file to verify that guards had up-to-date training and certification information for its 110 contracts across its 11 regions. FPS provided preliminary October 2011 data showing that 1,152 (9 percent) of the 12,274 guard files FPS reviewed at that time were deficient, meaning that they were missing one or more of the required certification document(s). However, FPS does not have a final report on the results of the nation-wide audit that includes an explanation of why the files were deficient and whether deficiencies were resolved. FPS's monthly audits of contractor data provide limited assurance that qualified guards are standing post, as FPS is verifying that the contractor- provided information matches the information in the contractor's files. 
We reported in 2010 that FPS's reliance on contractors to self-report guard training and certification information without a reliable tracking system of its own may have contributed to a situation in which a contractor allegedly falsified training information for its guards. In addition, officials at one FPS region told us they maintain a list of the files that have been audited previously to avoid reviewing the same files, but FPS has no way of ensuring that the same guard files are not repeatedly reviewed during the monthly audits, while others are never reviewed. In the place of RAMP, FPS plans to continue using its administrative audit process and the monthly contractor-provided information to verify that qualified contract guards are standing post in federal facilities. We plan to finalize our analysis and report to the Chairman in August 2012, including recommendations. We discussed the information in this statement with FPS and incorporated technical comments as appropriate. Chairman Lungren, Ranking Member Clarke, and members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For further information on this testimony, please contact me at (202) 512-2834, or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tammy Conquest, Assistant Director; Geoffrey Hamilton; Greg Hanna; Justin Reed; and Amy Rosewarne. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
FPS provides security and law enforcement services to over 9,000 federal facilities managed by the General Services Administration (GSA). GAO has reported that FPS faces challenges providing security services, particularly completing FSAs and managing its contract guard program. To address these challenges, FPS spent about $35 million and 4 years developing RAMP--essentially a risk assessment and guard oversight tool. However, RAMP ultimately could not be used to do either because of system problems. This testimony is based on preliminary work for the Chairman and discusses the extent to which FPS is (1) completing risk assessments, (2) developing a tool to complete FSAs, and (3) managing its contract guard workforce. GAO reviewed FPS documents, conducted site visits at 3 of FPS's 11 regions, and interviewed officials from FPS, Argonne National Laboratory, GSA, Department of Veterans Affairs, the Federal Highway Administration, Immigration and Customs Enforcement, and guard companies, as well as 4 risk management experts. GAO's preliminary results indicate that the Department of Homeland Security's (DHS) Federal Protective Service (FPS) is not assessing risks at federal facilities in a manner consistent with standards such as the National Infrastructure Protection Plan's (NIPP) risk management framework, as FPS originally planned. Instead of conducting risk assessments, since September 2011, FPS's inspectors have collected information, such as the location, purpose, agency contacts, and current countermeasures (e.g., perimeter security, access controls, and closed-circuit television systems). This information notwithstanding, FPS has a backlog of federal facilities that have not been assessed for several years. According to FPS's data, more than 5,000 facilities were to be assessed in fiscal years 2010 through 2012. However, GAO was not able to determine the extent of FPS's facility security assessment (FSA) backlog because the data were unreliable. 
Multiple agencies have expended resources to conduct risk assessments, even though they already pay FPS for this service. FPS has an interim vulnerability assessment tool, referred to as the Modified Infrastructure Survey Tool (MIST), which it plans to use to assess federal facilities until it develops a longer-term solution. In developing MIST, FPS generally followed GAO's project management best practices, such as conducting user acceptance testing. However, GAO's preliminary analysis indicates that MIST has some limitations. Most notably, MIST does not estimate the consequences of an undesirable event occurring at a facility. Three of the four risk assessment experts GAO spoke with generally agreed that a tool that does not estimate consequences does not allow an agency to fully assess risks. FPS officials stated that they did not include consequence information in MIST because it was not part of the original design and thus requires more time to validate. MIST also was not designed to compare risks across federal facilities. Thus, FPS has limited assurance that critical risks at federal facilities are being prioritized and mitigated. GAO's preliminary work indicates that FPS continues to face challenges in overseeing its approximately 12,500 contract guards. FPS developed the Risk Assessment and Management Program (RAMP) to help it oversee its contract guard workforce by verifying that guards are trained and certified and for conducting guard post inspections. However, FPS faced challenges using RAMP for guard oversight, such as verifying guard training and certification information, and has recently determined that it would no longer use RAMP. Without a comprehensive system, it is more difficult for FPS to oversee its contract guard workforce. FPS is verifying guard certification and training information by conducting monthly audits of guard information maintained by guard contractors. However, FPS does not independently verify the contractor's information. 
Additionally, according to FPS officials, FPS recently decided to deploy a new interim method to record post inspections that replaces RAMP. GAO is not making any recommendations in this testimony. GAO plans to finalize its analysis and report to the Chairman in August 2012, including recommendations. GAO discussed the information in this statement with FPS and incorporated technical comments as appropriate.
| 3,037 | 870 |
In 2009, approximately 156 million nonelderly individuals obtained health insurance through their employer and another 16.7 million purchased health insurance in the individual market. Of those with employer-sponsored group health plans, in 2009, 43 percent were covered under a fully insured plan, where the employer pays a per-employee premium to an insurance company. The remaining 57 percent were covered under self-funded plans, where, instead of purchasing health insurance from an insurance company, the employer sets aside its own funds to pay for at least some of its employees' health care. Application denials result when an insurer determines that it will not offer coverage to an applicant, either because the applicant does not meet eligibility requirements or because the insurer determines that the applicant is too high a risk to insure. Underwriting is a process conducted by insurers to assess an applicant's health status and other risk factors to determine whether and on what terms to offer coverage. Many consumers are protected from having their application for enrollment denied. Consumers who obtain health coverage through their employment by enrolling in a group health plan sponsored by their employer have certain protections against application denials. For example, under federal law, individuals enrolling in group health plan coverage are protected from being denied enrollment because of their health status. Under federal law, insurers also generally are prohibited from denying applications for individual health coverage for certain individuals leaving group health plan coverage and applying for coverage in the individual market. Currently, some consumers who apply for private health insurance through the individual market can have their applications denied for eligibility reasons or as a result of underwriting.
For example, applications filed by some consumers with preexisting health conditions can be denied, unless prohibited by state or federal law. Additionally, insurers may accept the application but offer coverage at a premium level that is higher than the standard rate or that excludes coverage for certain benefits. The options for appealing application denials in the individual market can be limited to filing a complaint with the state department of insurance. However, in 35 states, individuals who--due to a preexisting health condition--have been denied enrollment or charged higher premiums in the individual market are typically eligible for coverage through high-risk health insurance pools (HRP). Additionally, as required under PPACA, individuals who have preexisting health conditions and have been uninsured for 6 months are eligible for enrollment in a temporary national HRP program. Coverage for medical services can be denied before or after the service has been provided, either through denial of preauthorization requests or denial of claims for payment. As a condition for coverage of some services, providers or consumers are required to request authorization prior to providing or receiving the service. Preauthorization denials occur when a determination is made that (1) the consumer is not eligible to receive the requested service, for example, because the service is not covered under the individual's policy, or (2) the service is not appropriate, meaning that it is not medically necessary or is experimental or investigational. Denials of claims occur for various reasons. Claims may be denied for billing reasons, such as the provider failing to include a required piece of information on the claim (for example, documentation that the provider received preauthorization for a service) or submitting a duplicate claim. Claims may also be denied because of eligibility issues.
For example, a claim may be submitted for a service provided before an individual's coverage began or after it was terminated, or a claim may be submitted for a service that has been excluded from coverage under an individual's policy. Another reason for denials reported by some insurers is that the individual has not met the cost-sharing requirements of his or her policy, such as the required deductible. Finally, claim denials can occur when a determination is made that the service provided was not appropriate, specifically that the service was not medically necessary or was experimental or investigational. Depending on the reason for a claim denial, either the provider or the consumer may bear the financial responsibility for the denied coverage amount. Claims that are denied because of billing errors, such as the provider omitting a required piece of information, can be resubmitted and ultimately paid. For claim denials, the full claim may be denied or, if the claim contained multiple lines, such as a surgery with charges for multiple procedures and supplies, only certain lines of the claim may be denied. How insurers and self-funded group health plans track claim denials and the reasons for denials may vary. For example, AMA officials noted that there is no guidebook for how reason codes should be assigned to claim denials. Officials noted that denials are often assigned the code for the most general reason even when the denial was made for a more specific reason. Consumers have several avenues available to dispute coverage denials. First, consumers can file an appeal of a denial with the insurer or self-funded group health plan for review, referred to as an internal appeal. Internal appeals can result in the denial being upheld or reversed. In addition, consumers in most states can have their appeal reviewed by an external party, such as an independent medical review panel established by the state.
These appeals, referred to as external appeals, can also result in denials being reversed and in states recovering funds for consumers for the cost of the denied service. State external appeal options may only be available once the consumer has exhausted the internal appeal process or for consumers with certain types of coverage. Historically, those with self-funded group health plans generally did not have access to an external appeal process, but consumers could file suit against a health plan in court to challenge a denial. PPACA, however, required that group health plans, including self-funded plans, provide access to an external appeal process that meets federal standards for plan years beginning on or after September 2010. Finally, consumers may file complaints regarding coverage denials with the state, generally the department of insurance, or, for those with group health plans, with DOL. Filing a complaint can be a less formal mechanism for disputing a coverage denial than filing an appeal; however, complaints can result in reversals of denials and in financial recoveries for consumers. States have responsibility for regulating private health insurance, including insurers operating in the individual market and the fully insured group market. In overseeing insurer activity, states vary in the data they require insurers to submit on denials and internal appeals of denials. According to NAIC officials, few states require insurers to report data regularly on the frequency of denials and internal appeals, and NAIC has not issued any model laws or regulations that include requirements for insurers to report such data. States also may use data on complaints and external appeals to identify trends in the practices of insurers and target examinations of specific insurers' practices. Nearly all states and the District of Columbia regularly report complaint data, which includes information on the numbers of, reasons for, and outcomes of complaints, to NAIC. 
Historically, the federal government's role in oversight of private health insurance has included establishing requirements for states to enforce. For example, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) established consumer protections on access, portability, and renewability of coverage. In addition, with respect to group health plans, the federal government enforces disclosure, reporting, fiduciary, and claims-filing requirements under the Employee Retirement Income Security Act of 1974 (ERISA). DOL conducts a number of efforts to enforce the ERISA requirements. For example, the department conducts civil investigations that can result in corrective actions, such as monetary recoveries for consumers who are enrolled in employment-based plans. In addition to these formal methods, DOL also works to resolve complaints filed with the department. These efforts are considered informal resolutions, although complaints can also serve as a trigger for formal enforcement actions. PPACA expanded the federal oversight role by requiring HHS to begin collecting, monitoring, and publishing data from certain insurers. Specifically, PPACA required the establishment of an internet Web site through which individuals can identify affordable health insurance coverage options in their state. To implement this requirement, in May 2010, HHS issued an interim final rule requiring insurers in the individual and small group markets to submit data to HHS on their products, including data on the number of enrollees, geographic availability of the products, and customer service contact information, by May 21, 2010, and annually after that. In July 2010, HHS began publishing these data on the new Web site, which is designed for individuals and small businesses to obtain information on coverage options available in their state. 
In October 2010, HHS began posting additional data collected from insurers, including data on the percentage of applications denied for each product offered in the individual market. The interim final rule also required insurers to submit other data, such as data on the percentage of claims denied in the individual and small group markets, and the number and outcomes of appeals of denials to insure, pay claims, and provide preauthorization, in accordance with guidance to be issued by HHS. As of December 2010, HHS had not issued any guidance on reporting these additional data. Nationwide data from HHS showed variation in application denial rates across insurers operating in the individual market. Specifically, data collected by HHS from 459 state-licensed insurers on the number of applications received and denied from January through March 2010 indicated that, while the aggregate rate of application denials was 19 percent nationally, the rate varied significantly across insurers. For example, just over a quarter of insurers had application denial rates from 0 percent to 15 percent while another quarter of insurers had rates of 40 percent or higher. However, the insurers with rates of 40 percent or higher reported fewer applications. See table 1 for additional information on the range in application denial rates across insurers. HHS officials noted that the data the department collected on application denials, which represent a single calendar quarter of applications, are only a starting point. They told us that as insurers report additional quarters of data, the value and usefulness of the data will increase. In addition, officials said that they have taken steps to ensure the accuracy of the data and noted that the accuracy of these data is critical to HHS, because no other source of information on private health insurance has a complete catalog of insurers operating in the individual market and what products those insurers are selling. 
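One reason a 19 percent aggregate rate can coexist with a quarter of insurers denying 40 percent or more of applications is that the aggregate is volume-weighted: insurers with high denial rates but few applications contribute little to the total. A minimal sketch of the two calculations, using hypothetical per-insurer counts rather than HHS data:

```python
# Hypothetical per-insurer counts, chosen only to illustrate the
# arithmetic; these are not HHS figures.
insurers = [
    {"received": 50_000, "denied": 5_000},  # 10% denial rate, high volume
    {"received": 20_000, "denied": 4_000},  # 20% denial rate
    {"received": 1_000, "denied": 450},     # 45% denial rate, low volume
]

# Aggregate rate: total denials over total applications (volume-weighted),
# the way a single nationwide figure is formed.
total_received = sum(i["received"] for i in insurers)
total_denied = sum(i["denied"] for i in insurers)
aggregate_rate = total_denied / total_received

# Unweighted mean of per-insurer rates treats every insurer equally,
# so a small insurer with a high rate pulls it well above the aggregate.
per_insurer_rates = [i["denied"] / i["received"] for i in insurers]
mean_rate = sum(per_insurer_rates) / len(per_insurer_rates)

print(f"aggregate rate: {aggregate_rate:.1%}")   # 13.3%
print(f"unweighted mean: {mean_rate:.1%}")       # 25.0%
```

The gap between the two figures widens as application volume becomes more concentrated among low-denial insurers, which is why per-insurer ranges carry information the aggregate alone does not.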
Data reported by Maryland--the only state we identified as collecting data on the incidence of application denials--indicated that variation in application denial rates across insurers operating in the state's individual market has occurred in that state for several years. Maryland data showed that the range of application denial rates across insurers was 26 percentage points or more in each of three reporting periods, 2008, 2009, and the first half of 2010. (See table 2 for the range in denial rates in the data reported by Maryland.) Data reported in studies by AHIP also showed variation in application denial rates. The AHIP data illustrated that application denial rates varied across age groups, with denial rates increasing as the age of the primary applicant increased. In 2008, when AHIP data showed that 13 percent of all medically underwritten applications were denied, in general the denial rate progressively increased as the applicant's age increased, from a low of 5 percent for applicants under 18 years of age to a high of 29 percent for applicants from 60 to 64 years of age. Similar variation in AHIP application denial rates was seen in data from 2006. (See fig. 1.) The available data on application denial rates provided little information on the reasons that applications were denied. For instance, the HHS and Maryland data did not include any information on the reasons for application denials. The AHIP data, however, provided limited information. Specifically, AHIP's data showed that a higher percentage of applications were denied because of the applicant's health status than for nonmedical reasons, such as the plan not being offered in the applicant's geographic area. 
AHIP data showed that in 2008, of the 1.8 million applications for enrollment that insurers either denied or accepted with an offer of coverage, 1 percent were denied for nonmedical reasons and 12 percent were denied after underwriting, when the applicant's health status and other risk factors were assessed. According to an AHIP official, applications that were denied after underwriting were presumably denied because the applicant's medical questionnaire responses were beyond the insurer's threshold for issuing a policy. There are several issues to consider when interpreting application denial rates. First, application denial rates may not provide a clear estimate of the number of individuals who were ultimately able to secure health coverage, because individuals may submit applications with more than one insurer and be denied by one insurer but offered enrollment by another. Second, denial rates also do not reflect applications that have been withdrawn. For example, AHIP data for 2008 indicated that 8 percent of applicants withdrew their applications before underwriting occurred. Experts also noted that some individuals may not submit applications for health coverage because they believe or have been advised, for example by an insurance agent, that their application would likely be denied. Third, an insurer's denial rates may be affected by requirements of the states in which the insurer operates. For example, officials from one insurance company explained that for applicants in the state for which they are the insurer of last resort, state law prohibits them from denying applications for enrollment based on the health status of the applicant. Officials told us that a denial can occur only for nonmedical eligibility reasons, which the AHIP data indicate are far less frequent.
Another consideration when interpreting application denial rates is that the rates do not reflect applications that have been accepted by an insurer but for coverage with a premium that is higher than the standard rate or with exclusions for coverage of specified services. Data from HHS, Maryland, and AHIP all indicated that some portion of applicants received offers at a premium that was higher than the standard rate. For example, the HHS data demonstrated that from January through March of 2010, about 20 percent of individual market applicants were offered coverage with premiums higher than the standard rate. Maryland data also indicated that for the first half of 2010, 8 percent of applicants were offered either coverage with premiums higher than the standard rate or coverage that excluded specified health conditions. Finally, AHIP data from 2008 showed that 34 percent of offers for coverage were for coverage at a higher premium rate. The AHIP data also showed that 6 percent of offers for coverage were for coverage that excluded specified health conditions. Data from selected states and others indicated that the rates of coverage denials, including denials for preauthorizations and claims, varied significantly, and a number of factors may have contributed to that variation. The data also indicated that coverage denials occurred for a variety of reasons, frequently for billing errors and eligibility issues and less often for judgments about the appropriateness of a service. Further, the data we reviewed indicated that coverage denials, if appealed, were frequently reversed in the consumer's favor and that appeals and complaints related to coverage denials sometimes resulted in financial recoveries for consumers. State data that we reviewed showed that rates of coverage denials by insurers operating in the group and individual markets varied significantly across states. 
Specifically, aggregate claim denial rates for the three states that we identified as collecting such data ranged from 11 percent in Ohio in 2009 to 24 percent in California in the same year. Data reported by the remaining state, Maryland, indicated a claim denial rate of 16 percent in 2007. A fourth state, Connecticut, collected data on a different measure, preauthorization denials, and these data indicated a denial rate of 14 percent in 2009. In addition, claim denial rates indicated by AMA data--3 percent during 2 months of 2010--varied from coverage denial rates in the four states. Several factors may have contributed to the variation in rates across the four states and the AMA data. For example, Ohio and AMA data were based on denials of electronic claims. AMA officials told us that providers with electronic billing systems and insurers that accept electronic claims are more sophisticated in terms of billing management, and therefore the denial rates calculated by AMA may be lower than rates of denials for all claims, including both electronic and paper-based. In another example, Maryland's rate was calculated using data for categories of denials that accounted for about 90 percent of all claims denied. In contrast, according to California officials, California's data represented all claim denials. Differences in the time frames for the data may have also contributed to the variation. AMA officials noted that their data were from a 2-month period of the year (February through March) when there was less contractual activity, such as open enrollment periods, and when denials related to meeting deductible requirements--which according to officials from one insurance company can be significant--have already been resolved. In contrast, data from the four states, except Ohio, covered a full year and therefore reflect all denials for the year, including those related to enrollment and deductible issues. 
See table 3 for the rates of coverage denials indicated by state data and a description of the characteristics of the data, some of which may have contributed to the variation in rates. In addition to variation across states in aggregated rates, state and other data also indicated that coverage denial rates varied significantly across insurers. For example, the California data indicated that in 2009 claim denial rates ranged from 6 percent to 40 percent across six of the largest managed care organizations operating in the state. Similarly, preauthorization denial rates in Connecticut varied across 21 insurers, with rates among the seven largest insurers ranging from 4 percent to 29 percent in 2009. Somewhat narrower variation across insurers was also evident in the AMA data, with claim denial rates in 2010 that ranged from less than 1 percent to over 4 percent across the seven insurers represented in those data. State and other officials told us about several factors that may have contributed to the variation across insurers and make it difficult to compare data across insurers. First, California officials told us that insurers may interpret a state's reporting requirements differently and noted that some insurers may count certain claims transactions as denials that the state would not consider a denial. This was evidenced by discussions with one insurer who told us that if asked to report the number of claims denied, some insurers might include claims where the service was approved but the insurer paid nothing because the member was liable for the charge, which California officials would not characterize as a denial. Officials from the insurer said that their current overall denial rate is 27 percent, but it would be 18 percent if member liability denials were excluded. Officials from California and AMA also indicated that circumstances unique to an insurer may affect their denial rate. 
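The 27 versus 18 percent example above comes down to which transactions are counted in the numerator of the denial rate. A sketch of that arithmetic, with hypothetical claim counts chosen to match the insurer's two reported rates (the actual volumes were not disclosed):

```python
# Hypothetical claim volumes chosen to reproduce the insurer's reported
# 27 and 18 percent figures; the real counts were not disclosed.
total_claims = 100_000
all_denials = 27_000        # includes "member liability" transactions
member_liability = 9_000    # service approved, but the member owed the charge

# Broad definition: every claim the insurer paid nothing on counts
# as a denial, including those where the member was liable.
rate_broad = all_denials / total_claims

# Narrow definition: exclude transactions where the service was approved
# but the insurer paid nothing because the member was liable.
rate_narrow = (all_denials - member_liability) / total_claims

print(f"broad definition:  {rate_broad:.0%}")   # 27%
print(f"narrow definition: {rate_narrow:.0%}")  # 18%
```

Because the states and insurers described here used different numerator definitions, differences of this size can appear in reported rates even when underlying claims practices are similar.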
For example, California officials told us one insurer's denials rose sharply in a month because providers were submitting claims to the insurer's HMO when they should have gone to the preferred provider organization (PPO). Rather than transferring the claims, the HMO denied all of them, and the PPO paid the claims shortly after that.

According to state and other data, coverage denials occurred for various reasons.

Claim denials were often made for billing errors such as duplicate claims and missing information on the claim. For example, data from Maryland showed that the most prevalent reason for claim denials in 2007 was duplicate claim submissions, accounting for 32 percent of all denials. Among six of the largest managed care organizations in California, the four that reported on the most prevalent reasons for claim denials in 2009 all reported duplicate claims as one of those reasons. With regard to claims missing required information, the 2010 AMA data indicated that five of the seven insurers represented in the data made 15 percent or more of denials on the basis that the claim was missing information, such as documentation of preauthorization. Data from Maryland showed that 74 percent of denied claims did not meet the state's criteria for "clean" claims, that is, claims that include all of the information required for processing.

Denials of claims also frequently resulted from eligibility issues. For example, for six of the seven insurers in the 2010 AMA data, over 20 percent of claim denials occurred as a result of eligibility issues, such as services being provided before coverage was initiated or after coverage was terminated.

Insurers also denied preauthorizations and claims as a result of judgments about the appropriateness of the service, such as that the service was not medically necessary or was experimental or investigational, although less frequently than for billing errors and eligibility issues.
Data from Maryland showed that in 2007 insurers denied nearly 40,000 preauthorizations or claims because they determined the services were not medically necessary. This was a relatively small number compared to the 6.3 million claim denials reported in the same year. The 2010 AMA data showed that only one of the seven insurers denied claims on the basis that services were not appropriate, specifically that the service was experimental or investigational, with about 9 percent of denials made for that reason. NAIC data on complaints filed with states in 2009 also provided some information on coverage denials related to the appropriateness of services. Specifically, the data showed that of the approximately 14,000 complaints related to coverage denials, at least 8 percent were related to the insurer's determination that the service was not medically necessary and 2 percent were related to the determination that the service was experimental. HHS provided us with written comments on a draft version of this report. These comments are reprinted in appendix III. HHS agreed with our findings, noting in particular the need to improve the quality and scope of existing data, and suggested clarifications, which we incorporated. HHS and DOL also provided technical comments to the draft report, which we incorporated as appropriate. In its written comments, HHS emphasized the importance--for policymakers, regulators, and consumers--of data on health insurance application and coverage denials. HHS noted that data on application and coverage denials can help increase transparency in the private health insurance market and that these data can also provide an important baseline measure for evaluating the impact of changes resulting from PPACA. 
In its comments, HHS also noted that data collection on application and coverage denials has been uneven across insurers, plans, and states and that very little information is available to help analysts understand the causes or sources of variation in the data that are available. According to HHS, more effort is needed to improve the quality and scope of existing data collection to give policymakers and regulators better and richer data to evaluate health insurance plan practices and market changes and to produce measures that may be useful to consumers when they are shopping for insurance. In its written comments, HHS also identified a limitation to our data that needed some clarification. Specifically, HHS pointed out--correctly--that while our draft report provided information on the percentage of claims that were denied, as well as data on the outcomes of internal appeals and external reviews of denied claims, our draft report did not provide data on the frequency with which claim denials are appealed by consumers. These data were not included in the report because the data we reviewed did not allow for a systematic calculation of an "appeal rate"--the number of coverage denials for which an appeal was initiated--for several reasons, including different sources or years of denials and appeals data we reviewed. In response to HHS' comments, we added language to the report clarifying this limitation. For context, we also added information on the appeal rate from one quarter for one state--the only information we identified on internal claims appeal rates. HHS also noted that the statement in our draft report that "denials are frequently reversed" upon appeal may be confusing, because readers may assume a large number of claim denials are ultimately overturned. We revised the language in our draft report to prevent this misinterpretation of our data, by stating that coverage denials, if appealed, were frequently reversed in the consumer's favor. 
We are sending copies of this report to the Secretaries of HHS and DOL, the congressional committees of jurisdiction, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In order to describe the data on denials of applications for enrollment and coverage of medical services, we contacted six states to interview officials and to obtain data the states collect and track on denials and appeals related to denials. The six states we selected included states identified in the literature, through searches of state insurance department Web sites, or in interviews with experts as a state collecting data on the incidence of application or coverage denials. These also included states that collect or track data on appeals related to coverage denials reviewed by insurers (internal appeals) or reviewed by external parties (external appeals). The six states accounted for at least 20 percent of national enrollment in private health insurance. Once we selected the states, we asked officials from each state whether they collected the following types of data: (1) incidence of application denials; (2) incidence of coverage denials, including incidence of denials of preauthorizations and claims; (3) incidence and outcomes of appeals reviewed by insurers (that is, internal appeals); and (4) incidence and outcomes of appeals reviewed by external parties (that is, external appeals). If state officials reported collecting the data, we reviewed at least the most recent year of data available. 
We reviewed data from one state on the incidence of application denials, from four states on the incidence of coverage denials, from four states on the number and outcomes of internal appeals, and from all six states on the number and outcomes of external appeals. (See table 6.) To identify research that examined private health insurance denials, including the incidence of denials of applications for enrollment and of coverage for medical services (i.e., "coverage denials") and the incidence and outcomes of appeals related to coverage denials, we conducted a structured literature review. This review resulted in 24 studies that we determined to be relevant to our objectives. To conduct this review, we searched 23 reference databases for articles or studies published from January 2000 through July 2010, using a combination of search terms, such as "denial" and "insurer." We determined that a study was directly relevant to our objectives if it (1) included empirical analysis related to the incidence of application denials, the incidence of coverage denials, or the incidence and outcomes of appeals related to such denials; and (2) analyzed, at minimum, denial or appeal data from an entire state or from two or more insurers. In addition to searching the reference databases, we checked the bibliographies of the relevant studies to identify other potentially relevant research and interviewed several private health insurance experts about research done on denials. We identified 24 studies in the literature that included empirical analyses examining (1) the frequency of denials of applications for enrollment or (2) the frequency of or reasons for denials of coverage for medical services and the outcomes of appeals related to such denials. Table 7 identifies the number of studies that address these topics, with some studies addressing more than one topic. The 24 studies that GAO identified in the literature are as follows:

1. American Association of Health Plans.
Independent Medical Review of Health Plan Coverage Decisions: Empowering Consumers with Solutions. Washington, D.C., 2001. 2. America's Health Insurance Plans. Individual Health Insurance 2009: A Comprehensive Survey of Premiums, Availability, and Benefits. Washington, D.C., 2009. 3. -----. Individual Health Insurance 2006-2007: A Comprehensive Survey of Premiums, Availability, and Benefits. Washington, D.C., 2007. 4. -----. Update on State External Review Programs. Washington, D.C., 2006. 5. American Medical Association. 2010 National Health Insurer Report Card. Chicago, Ill., 2010. 6. -----. 2009 National Health Insurer Report Card. Chicago, Ill., 2009. 7. -----. 2008 National Health Insurer Report Card. Chicago, Ill., 2008. 8. California Healthcare Foundation. Independent Medical Review Experiences in California, Phase I: Cases of Investigational/Experimental Treatments. Prepared by the Institute for Medical Quality for the California Healthcare Foundation, Oakland, Calif., 2002. 9. Chuang, K. H., W. M. Aubry, and R. A. Dudley. "Independent Medical Review of Health Plan Coverage Denials: Early Trends." Health Affairs, vol. 23, no. 6 (November/December 2004), 163-169. 10. Collins, S. R., J. L. Kriss, M. M. Doty, and S. D. Rustgi. Losing Ground: How the Loss of Adequate Health Insurance is Burdening Working Families: Findings from the Commonwealth Fund Biennial Health Insurance Surveys, 2001-2007. New York, N.Y., 2008. 11. Doty, M. M., S. R. Collins, J. L. Nicholson, and S. D. Rustgi. Failure to Protect: Why the Individual Insurance Market is not a Viable Option for Most U.S. Families. Findings from the Commonwealth Fund Biennial Health Insurance Survey, 2007. New York, N.Y., 2009. 12. Foote, S. B., B. A. Virnig, L. Bockstedt, and Z. Lomax. "External Review of Health Plan Denials of Mental Health Services: Lessons from Minnesota." Administration and Policy in Mental Health and Mental Health Services Research, vol. 34 (2007), 38-44. 13. Gresenz, C. R., and D. M. 
Studdert. External Review of Coverage Denials by Managed Care Organizations in California. Working Paper No. WR-264-ICJ, RAND Institute for Civil Justice, Santa Monica, Calif., 2005. 14. Gresenz, C. R., D. M. Studdert, N. Campbell, and D. R. Hensler. "Patients In Conflict With Managed Care: A Profile of Appeals in Two HMOs." Health Affairs, vol. 21, no. 4 (July/August 2002), 189-196. 15. Gresenz, C. R., and D. M. Studdert. "Disputes over Coverage of Emergency Department Services: A Study of Two Health Maintenance Organizations." Annals of Emergency Medicine, vol. 43, no. 2 (February 2004), 155-162. 16. Kaiser Family Foundation / Harvard School of Public Health. National Survey on Consumer Experiences With and Attitudes Toward Health Plans: Key Findings. Washington, D.C., 2001. 17. Kapur, K., C. R. Gresenz, and D. M. Studdert. "Managed Care: Utilization Review in Action at Two Capitated Medical Groups." Health Affairs, Web exclusive (2003), W3-275-282. 18. Karp, N., and E. Wood. Understanding Health Plan Dispute Resolution Practices. Washington, D.C., 2000. 19. Pearson, S. D. "Patient Reports of Coverage Denial: Association with Ratings of Health Plan Quality and Trust in Physician." The American Journal of Managed Care (March 2003), 238-244. 20. Pollitz, K., R. Sorian, and K. Thomas. How Accessible Is Individual Health Insurance for Consumers in Less-Than-Perfect Health? Prepared for the Henry J. Kaiser Family Foundation, Menlo Park, Calif., 2001. 21. Pollitz, K., J. Crowley, K. Lucia, and E. Bangit. Assessing State External Review Programs and the Effects of Pending Federal Patients' Rights Legislation. Prepared for the Henry J. Kaiser Family Foundation, Menlo Park, Calif., 2002. 22. Schauffler, H. H., S. McMenamin, J. Cubanski, and H. S. Hanley.
"Differences in the Kinds of Problems Consumers Report in Staff/Group Health Maintenance Organizations, Independent Practice Association/Network Health Maintenance Organizations, and Preferred Provider Organizations in California." Medical Care, vol. 39, no. 1 (2001), 15-25. 23. Studdert, D. M., and C. R. Gresenz. "Enrollee Appeals of Preservice Coverage Denials at 2 Health Maintenance Organizations." The Journal of the American Medical Association, vol. 289, no. 7 (Feb. 19, 2003), 864-870. 24. Young, G. P., J. Ellis, J. Becher, C. Yeh, J. Kovar, and M. A. Levitt. "Managed Care Gatekeeping, Emergency Medicine Coding, and Insurance Reimbursement Outcomes for 980 Emergency Department Visits from Four States Nationwide." Annals of Emergency Medicine, vol. 39, no. 1 (January 2002), 24-30. In addition to the contact named above, Kristi Peterson, Assistant Director; Susan Barnidge; Krister Friday; Jawaria Gilani; Teresa Tam; and Hemi Tewarson made key contributions to this report.
The large percentage of Americans who rely on private health insurance for health care coverage could expand with enactment of the Patient Protection and Affordable Care Act (PPACA) of 2010. Until PPACA is fully implemented, some consumers seeking coverage can have their applications for enrollment denied, and those enrolled may face denials of coverage for specific medical services. PPACA required GAO to study the rates of such application and coverage denials. GAO reviewed the data available on denials of (1) applications for enrollment and (2) coverage for medical services. GAO reviewed newly available nationwide data collected by the Department of Health and Human Services (HHS) from 459 insurers operating in the individual market on application denials from January through March 2010. GAO also reviewed a year or more of the available data from six states on the rates of application and coverage denials and the rates and outcomes of appeals related to coverage denials. The six states included all states identified by experts and in the literature as collecting data on the rates of application or coverage denials and together represented over 20 percent of private health insurance enrollment nationally. GAO conducted a literature review to identify studies related to application and coverage denials and reviewed data from selected studies. GAO interviewed HHS and state officials and researchers about factors to consider when interpreting the data. The available data indicated variation in application denial rates, and there are several issues to consider in interpreting those rates. Nationwide data collected by HHS from insurers showed that the aggregate application denial rate for the first quarter of 2010 was 19 percent, but that denial rates varied significantly across insurers. For example, just over a quarter of insurers had application denial rates from 0 percent to 15 percent, while another quarter of insurers had rates of 40 percent or higher.
Data reported by Maryland--the only one of the six states in GAO's review identified as collecting data on the incidence of application denials--indicated that variation in application denial rates across insurers has occurred for several years, with rates ranging from about 6 percent to over 30 percent in each of 3 years. The available data provided little information on the reasons that applications were denied. There are also several issues to consider when interpreting application denial rates. For example, the rates may not provide a clear estimate of the number of individuals who were ultimately able to secure coverage, as individuals can apply to multiple insurers, and the rates do not reflect applicants who were offered coverage with a premium that is higher than the standard rate. The available data from the six states in GAO's review and others indicated that the rates of coverage denials, including rates of denials of preauthorizations and claims, also varied significantly. The state data indicated that coverage denial rates varied significantly across states, with aggregate rates of claim denials ranging from 11 percent to 24 percent across the three states that collected such data. In addition, rates varied significantly across insurers, with data from one state indicating a range in claim denial rates from 6 percent to 40 percent across six large insurers operating in the state. There are several factors that may have contributed to the variation in rates across states and insurers, such as states varying in the types of denials they require insurers to report. The data also indicated that coverage denials occurred for a variety of reasons, frequently for billing errors, such as duplicate claims or missing information on the claim, and eligibility issues, such as services being provided before coverage was initiated, and less often for judgments about the appropriateness of a service.
Further, the data GAO reviewed indicated that coverage denials, if appealed, were frequently reversed in the consumer's favor. For example, data from four of the six states on the outcomes of appeals filed with insurers indicated that 39 percent to 59 percent of appeals resulted in the insurer reversing its original coverage denial. Data from a national study conducted by a trade association for insurance companies on the outcomes of appeals filed with states for an independent, external review indicated that coverage denials were reversed about 40 percent of the time. GAO provided a draft of the report to HHS and the Department of Labor (DOL). HHS agreed with GAO's findings, noting the need to improve the quality and scope of existing data, and suggested clarifications, which were incorporated. HHS and DOL also provided technical comments, which were incorporated as appropriate.
The Head Start program, which is overseen by HHS's Office of Head Start, performs its mission to promote school readiness and development among children from low-income families through its nearly 1,600 grantees. More specifically, Head Start grants are awarded directly to state and local public agencies, private nonprofit and for-profit organizations, tribal governments, and school systems for the purpose of operating programs in local communities. According to Head Start program reports, more than 90,000 employees taught in grantee programs during program year 2014-2015. Of these employees, about half were preschool classroom teachers (44,677), and the other half were preschool assistant teachers (45,725). Head Start officials told us that many grantees do not run programs during the summer months, which could leave these teachers and teaching assistants without wages from these employers during that time. According to HHS, Head Start teachers were paid about $29,000 annually (compared to annual median earnings of nearly $52,000 for kindergarten teachers). Head Start teachers and teaching assistants are among the approximately 960,000 ECE employees across the nation, according to the Bureau of Labor Statistics. The nation's UI system is a joint federal-state partnership originally authorized by the Social Security Act and funded primarily through federal and state taxes on employers. Under this arrangement, states administer their own programs according to certain federal requirements and under the oversight of DOL's Office of Unemployment Insurance, but they have flexibility in setting benefit amounts, duration, and the specifics of eligibility. 
For employers in a state to receive certain UI tax benefits, the state must meet certain federal requirements, including having laws that generally prohibit instructional employees of certain educational institutions from collecting UI benefits between academic terms if they have a contract for, or "reasonable assurance" of, employment in the second term. However, the Act does not define educational institution, leaving states discretion in this classification. Federal policies regarding the eligibility of Head Start teachers for UI benefits have remained the same for decades, according to DOL officials, and are stated in a 1997 policy letter clarifying DOL's position on applying the federal provision regarding employees of educational institutions. In responding to our survey, state UI directors reported having to follow various state laws and policies that may affect whether Head Start and other ECE teachers are allowed to collect benefits during the summer. In 3 of the 53 states and territories in which we conducted a survey about laws and policies that may affect teacher eligibility, officials reported that Head Start teachers are generally not eligible for UI benefits over summer breaks. Specifically, in 2 of the states we surveyed, officials reported that Head Start teachers are generally not eligible for UI benefits over summer breaks but other ECE teachers may be. In the first state, Pennsylvania, officials told us that because of a 2007 state court decision, Head Start teachers are generally not eligible for UI benefits because the state generally considers Head Start teachers to be employees of educational institutions, and such employees are generally not eligible. For other ECE teachers, however, Pennsylvania officials said that eligibility may vary by employer.
More specifically, the officials told us that employees of for-profit institutions would not be subject to the educational institution restriction, and therefore may be eligible for UI benefits over summer breaks. In the second state, Wyoming, officials reported that their state excludes from UI eligibility those engaged in instructional work of an educational institution. The officials also reported that Head Start teachers "fall under this provision by consistent interpretation and precedent decisions" in Wyoming and therefore are not eligible for UI benefits over summer breaks, though other ECE teachers not employed by educational institutions are potentially eligible for benefits during that time. Officials in a third state, Indiana, told us that state law restricts eligibility for employees who are on a vacation period due to a contract or the employer's regular policy and practice, which affects both Head Start and other ECE teachers who are on summer break. The officials explained that these workers are not considered unemployed during regularly scheduled vacation periods and are therefore not eligible to receive UI benefits. This law was not targeted at Head Start or other teachers but instead was meant to address those individuals with predictable vacation periods, according to officials. Indiana's Department of Workforce Development determines whether the employee is on a regularly scheduled vacation period by analyzing historical data, rather than relying on the employer to notify the department. Officials told us they have an internal unit that works to detect claims patterns over time to identify these vacation periods for specific employers.
In contrast, based on what officials in the remaining 50 states and territories we surveyed reported to us, Head Start and other ECE teachers may be eligible for benefits over summer breaks in those states, usually depending on a number of factors, such as the type of employer or the program's connection to a school or board of education, as discussed below. Officials told us that eligibility for UI benefits can be affected by the type of employer in 30 states for Head Start and 28 for ECE. State officials reported having to abide by a wide-ranging set of laws and policies in this area, including those that sometimes include or exclude certain types of organizations from the state's definition of educational institution. For example: Employers included as educational institutions. Officials in some states told us that their laws include certain types of employers in the definition of an educational institution. Consequently, Head Start and ECE teachers at these institutions are generally not eligible for UI benefits over summer breaks. For example, New York officials told us that nursery schools and kindergartens would be considered educational institutions but day care providers generally would not. Alaska officials reported that the state generally defines an educational institution as a "public or not-for-profit" institution that provides an organized course of study or training. Head Start employees of private for-profit institutions, however, may be eligible. In a separate example, New Jersey officials told us that teachers who work in private preschools mandated by the state to operate in districts known as "Abbott districts" are generally not eligible for UI benefits during summer breaks, because they are considered employees of educational institutions.
In Abbott districts, they said, the state must provide preschool for all students living in that district, either directly through a public agency or by contracting with a private provider, which could also potentially be a Head Start grantee organization. They further explained that Abbott teachers are paid significantly higher salaries than Head Start teachers outside of Abbott districts are paid. As a result, they said, Abbott teachers may be less in need of UI benefits during summer breaks. Furthermore, ECE teachers who work for non-Abbott providers are potentially eligible, according to New Jersey officials. Employers not considered educational institutions. In other cases, state officials reported that their laws specified that certain types of employers are not included in the definition of an educational institution and that teachers working for these specific types of employers are potentially eligible for UI benefits over summer breaks. For instance, officials in 10 states for Head Start and 4 states for ECE reported that community action groups operating Head Start or other ECE programs are not included in the definition of an educational institution. Therefore, the general restriction against employees of educational institutions getting UI benefits during summer breaks does not apply to these teachers. For example, Kansas officials reported that private for-profit institutions are not considered educational institutions, and California officials reported that non-profits are not considered educational institutions. Head Start and ECE teachers in these types of programs may be eligible for benefits. Officials in 17 states reported that UI eligibility for Head Start teachers can be affected by the program's relationship to a school or board of education, and officials in 11 states reported similar restrictions for ECE teachers. For example: Program is an "integral part" of a school or school system.
Officials in some states reported that teachers who worked for Head Start and ECE programs that operated as integral parts of a school or school system could be affected by eligibility restrictions. For example, Illinois officials specified in their response to our survey that "integral part" means the Head Start program is conducted on the premises of an academic institution and that the staff is governed by the same employment policies as the other employees of the academic institution. Program's relationship with a school board. In other states, officials reported potentially allowing or restricting eligibility for Head Start and other ECE programs based on the program's relationship with a board of education. West Virginia officials reported that if a teacher works for a Head Start program that is "under the influence or authority" of a county board of education, and his or her wages are reportedly paid by the board of education, the teacher is generally considered a school employee and is therefore not eligible for UI benefits over the summer break. Colorado officials told us that "educational institution" does not include Head Start programs that are not part of a school administered by a board of education. We estimated that approximately 44,800 of the nearly 90,400 Head Start teachers across the country may have been eligible for UI benefits during their summer breaks in 2015. Among the teachers who were likely ineligible for UI benefits, we estimated that about 14,150 were likely not eligible because they work in school systems or charter schools, which we assumed would be included in the states' definitions of educational institutions. We also estimated that about 28,940 Head Start teachers did not have summer breaks long enough to allow them to collect UI benefits. 
Because states pay UI benefits on a weekly basis, and in most states individuals must first serve a waiting period of a week, employees must have a summer break of at least 2 weeks in most states before they can collect benefits. This break must be at least 1 week in states without a waiting period. We counted teachers at employers whose summer breaks were shorter than these thresholds as likely ineligible. Lastly, we estimated that about 2,490 were likely not eligible to receive UI benefits during summer breaks because of state restrictions, as shown in figure 1. Based on our analysis of available data, about 2,100 of these teachers work in Indiana, Pennsylvania, and Wyoming, where, as mentioned earlier, Head Start teachers are generally not eligible for UI benefits over summer breaks. The other Head Start teachers we estimated were likely not eligible for UI benefits were potentially affected by restrictions on teachers who work for certain types of employers, such as government agencies or non-profits. States reported using a variety of methods to communicate general UI eligibility information to both Head Start and ECE employers and employees. Specifically, state officials most frequently reported using websites to communicate laws, regulations, and policies regarding UI benefit eligibility to employers and employees (52 out of 53 states, or 98 percent), followed by the use of handbooks, with 39 out of 53 states (74 percent) reporting using this method for employers and 47 out of 53 states (89 percent) reporting using this method for employees. Beyond these approaches, state officials often reported being in contact with employers through call centers, with 26 out of 53 states (49 percent) reporting this method. The other most frequently used communication approach for employees was through hotlines, with 44 out of 53 states (83 percent) reporting using this method.
See figure 2 for more information on communication methods states reported using to employers and figure 3 for communication methods states reported using to employees. Even with the information states reported providing, the Head Start and ECE employer and employee representatives we interviewed said that state UI programs can remain difficult to understand because of the complexity of the various federal and state laws, regulations, and policies governing the programs. For example, one employer in Wyoming told us that she did not feel she could effectively advise her employees on eligibility policies because there was no clear, readily available information or guidance from the state. More specifically, the employer told us that a general overview of what the benefits are, how long they last, and what requirements claimants have to meet to keep receiving benefits would be helpful. However, through our survey, officials in 51 of 53 states (96 percent) reported that they did not provide any additional communication specific to Head Start or other ECE regarding eligibility policies. As mentioned earlier, the impact of these eligibility rules can vary greatly across states and even within a state, and an employee's eligibility can be affected by certain circumstances, such as the type of employer and the employer's relationship with a school. State officials also told us that misunderstandings about program eligibility can be perpetuated when state officials inconsistently administer the policies or delay implementation of new state policies. For example, Indiana officials told us the legislature passed a law that eliminated UI benefits during regularly scheduled vacation periods in July 2011, but officials were not able to fully implement this new law until October 2012 because it was difficult to identify all of the industries or employees that would be affected. 
As mentioned earlier, instead of relying on employers' reports, the Indiana office analyzes historical data by employer to help identify claim patterns that may indicate a regularly scheduled break. Officials said they did not enforce the changes to employees claiming benefits during the summer of 2011 and started attempting to enforce it during the summer of 2012 to the extent that they were able to detect regularly scheduled vacation periods. These officials told us that in hindsight they should have reached out to Head Start and ECE employees after they began to implement the regularly scheduled vacation period provision, but they did not realize the full impact of the policy change until after the provision was fully implemented. Similarly, New Jersey officials told us that they recently had to hold a meeting among key internal state officials responsible for initial case determinations and appeals because they realized not all staff understood state eligibility policies. According to a New Jersey official, in some cases, when the initial claims adjudicator denied the case, the employees had wrongly obtained benefits through appeals. They also mentioned that they planned to distribute a letter to the ECE community to clarify the state's eligibility policy. Due to the complexity of eligibility issues and the potential for inconsistent adjudications that may affect the integrity of the program, officials in New Jersey and Indiana told us they developed dedicated offices to handle claims and monitor improper payments for groups that include Head Start and ECE employees. New Jersey officials told us they have a centralized office that handles all school employee claims to ensure that the adjudication process is uniform and all claims are handled appropriately. According to a New Jersey official, 10 of the best examiners were selected from various field offices to receive specialized training to handle school employee claims from all over the state. 
Similarly, Indiana officials told us they have an office dedicated to resolving claims that may be affected by a regularly scheduled vacation period. To some extent, the concerns raised by stakeholders can be associated with the fact that states do not generally assess the effectiveness of their communication approaches. Thirty-four of 53 states (64 percent) reported that they do not conduct evaluations to assess the effectiveness of their communications with employers. Of the remaining 19 states that reported conducting evaluations, 12 states said they conducted those evaluations on an ongoing basis, such as providing employers the opportunity to regularly provide feedback through an evaluation form posted on their website. Of the states that reported conducting evaluations of their communication efforts with employers, 16 of 19 states reported using the results to make program changes. For example, one state reported that it has made changes to its online employer handbook by providing a search tool that allows employers to find information more quickly. Other states reported that they have created employer training materials or mandated customer service training for all employees after assessing their communication efforts. Similarly, 29 of 53 states (54 percent) also reported that they do not conduct evaluations of their communication with employees. Of the remaining 23 states that reported conducting evaluations, 18 states said they conduct them on an ongoing basis by providing an evaluation form posted on their website. When feedback is collected, these states reported using the information to make program changes. For example, one state official told us they have an ongoing satisfaction survey on their website that employees can fill out after they apply for benefits and that feedback from that survey is used to improve the eligibility determination process. 
Other states reported making significant changes to their claims processing systems and making the language on the applications more reader friendly and understandable. We provided a draft of this report to the Department of Health and Human Services and the Department of Labor for review and comment. Officials from both agencies provided technical comments, which we incorporated in the draft as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Secretary of Labor, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Please contact me at (202) 512-7215 or at [email protected] if you or your staff have questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. We examined (1) the extent to which states have laws or policies that affect whether Head Start and other ECE teachers can claim UI benefits during summer breaks; (2) how many Head Start teachers may have been eligible for UI benefits during their summer breaks in 2015; and (3) what is known about how states communicate information about eligibility for UI benefit payments to Head Start and ECE employees and the effectiveness of these efforts. To address all three objectives, we conducted a web-based survey of state UI directors (including all 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands) from February to April 2016.
The survey included questions about state laws, regulations, and policies that might affect whether Head Start employees and other ECE teachers are able to receive UI benefits during summer breaks. For example, we asked whether the type of employer would affect eligibility for benefits. The survey also included questions about key internal controls, especially those related to communication and monitoring, such as whether states targeted communication to Head Start or other ECE employers or employees and whether states were aware of improper payments to Head Start or other ECE employees. We received responses from all 53 states. We followed up with states when necessary to clarify their responses, but we did not independently verify the information they provided. For example, while we asked states to provide a description of relevant state laws, regulations, and policies, we did not confirm their descriptions with an independent review. Thus, in this report all descriptions and analysis of state laws, regulations, and policies are based solely on what states reported to us. We used standard descriptive statistics to analyze responses to the questionnaire. Because we surveyed all states, the survey did not involve sampling errors. To minimize non-sampling errors, and to enhance data quality, we employed recognized survey design practices in the development of the questionnaire and in the collection, processing, and analysis of the survey data. For example, we pretested the questionnaire with three state UI directors to minimize errors arising from differences in how questions might be interpreted and to reduce variability in responses that should be qualitatively the same. We further reviewed the survey to ensure the ordering of survey sections was appropriate and that the questions within each section were clearly stated and easy to comprehend. An independent survey specialist within GAO also reviewed a draft of the questionnaire prior to its administration. 
To reduce nonresponse, another source of non-sampling error, we followed up by e-mail with states that had not responded to the survey to encourage them to complete it. We reviewed the data for missing or ambiguous responses and followed up with states when necessary to clarify their responses. On the basis of our application of recognized survey design practices and follow-up procedures, we determined that the data were of sufficient quality for our purposes. To address our second objective, we used data from the Office of Head Start's Program Information Report, which each Head Start grantee is required to submit on an annual basis through HHS's Head Start Enterprise System. We interviewed knowledgeable HHS officials to determine the reliability of the data, and we concluded that they were sufficiently reliable for the purposes of our audit. We analyzed data from the 2014-2015 program year, the most recent available. In conjunction with these data, we used information from our survey on state laws, regulations, and policies to estimate the number of Head Start teachers and teaching assistants who may have been eligible for UI benefits during summer breaks in 2015. We assessed the potential eligibility of teachers and assistant teachers based solely on their wages while employed by Head Start programs. We were not able to identify whether these teachers had wages from other employment that would affect their eligibility for benefits based on those wages. In addition, we were not able to determine whether a Head Start grantee may be offering work to its employees during the summer that is outside of the Head Start program, which may affect eligibility for benefits. In conducting this analysis, we made various assumptions that could impact the results. For example, we assumed that two grantee types--charter schools and school systems--are classified as educational institutions by all states and are therefore ineligible for UI benefits.
This assumption may not always be correct, however, as there may be instances in which charter schools or school systems are not defined as educational institutions in their state. In addition, according to HHS officials, grantees self-report their category, and HHS does not verify this information. Therefore, there may be grantees that would be classified as educational institutions by states because they are charter schools or school systems that we were unable to identify in the data. We also assumed that all employees have reasonable assurance of continued employment after the summer break. However, not all employees may have such assurance, which may lead to an underestimation of the employees who are potentially eligible for UI benefits between terms. We also identified programs that did not have a long enough break to allow employees to collect UI benefits by examining the start and end of the program year. In doing so, we assumed that all teachers at such employers were employed for the full school year and were thus not eligible for UI benefits. However, we were not able to identify whether all teachers and teacher assistants in those programs were employed for the entire length of the program year. Therefore, this may be an overestimate of the population with breaks too short to collect UI benefits. In the course of following up with certain states, we asked various questions that were not asked on the survey, and as a result, these states answered additional questions and gave additional details about their state laws which affected the results of our analysis for those states. In conducting this analysis, when faced with uncertainty about the status of a Head Start grantee, we classified employees of such grantees as potentially eligible for UI benefits since we did not have enough information to conclude that they are ineligible. 
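The screening steps described in this appendix--the educational-institution assumption, the break-length threshold, and state-specific restrictions--can be sketched as a simple classification routine. This is an illustrative assumption of how the analysis could be structured, not GAO's actual analysis code; the function and parameter names are hypothetical, and uncertain cases default to "potentially eligible," as the report describes:

```python
def likely_eligible_for_summer_ui(
    grantee_type: str,
    break_weeks: float,
    state_has_waiting_week: bool,
    state_restricts_head_start: bool,
) -> bool:
    """Hypothetical screen mirroring the report's stated assumptions.

    Assumptions per the methodology: school systems and charter schools
    are treated as educational institutions in every state; benefits are
    paid weekly, so a claimant needs a break of at least 2 weeks where a
    1-week waiting period applies, and at least 1 week otherwise.
    """
    # Step 1: educational-institution screen (assumed categories).
    if grantee_type in {"school system", "charter school"}:
        return False
    # Step 2: break-length screen (waiting week plus one payable week).
    minimum_weeks = 2 if state_has_waiting_week else 1
    if break_weeks < minimum_weeks:
        return False
    # Step 3: state-specific restrictions (e.g., Indiana, Pennsylvania,
    # Wyoming, where Head Start teachers are generally not eligible).
    if state_restricts_head_start:
        return False
    # Uncertain cases fall through to "potentially eligible," matching
    # the report's treatment of grantees whose status was unclear.
    return True
```

For example, a teacher at a nonprofit grantee with a 10-week break in a waiting-week state without restrictions would be counted as potentially eligible, while the same teacher in Wyoming would not.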
Concurrent with our survey, we conducted site visits to two states, Indiana and New Jersey, and phone interviews with officials and stakeholders in Alabama, Puerto Rico, and Wyoming. We selected these states and that territory based on factors such as the number of Head Start centers and survey responses regarding eligibility and experiences with improper payments and because they are located in geographically diverse regions. In each state, we interviewed state UI program officials as well as stakeholders, such as ECE association officials and Head Start grantees. The results from our interviews with state UI programs and stakeholders are not generalizable. In our interviews with state officials, we asked about eligibility policies and changes to such policies, improper payments, communication with employers and employees, and other internal controls. In our interviews with stakeholders, we asked about their awareness of eligibility policies, the extent to which their employees collect UI benefits, and their experiences with the state UI department. We conducted this study from August 2015 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Cindy S. Brown Barnes, (202) 512-7215, [email protected].
In addition to the contact named above, Danielle Giese, Assistant Director; Amy Sweet, Analyst-in-Charge; Meredith Lilley, and Vernette Shaw made significant contributions to this report. Also contributing to this report were Amy Buck, David Chrisinger, Alex Galuten, Jill Lacey, Mimi Nguyen, Jerome Sandau, and Almeta Spencer.

Unemployment Insurance: States' Customer Service Challenges and DOL's Related Assistance. GAO-16-430. Washington, D.C.: May 12, 2016.
Unemployment Insurance: States' Reductions in Maximum Benefit Durations Have Implications for Federal Costs. GAO-15-281. Washington, D.C.: April 22, 2015.
Managing for Results: Selected Agencies Need to Take Additional Efforts to Improve Customer Service. GAO-15-84. Washington, D.C.: October 24, 2014.
Improper Payments: Government-Wide Estimates and Reduction Strategies. GAO-14-737T. Washington, D.C.: July 9, 2014.
Unemployment Insurance: Economic Circumstances of Individuals Who Exhausted Benefits. GAO-12-408. Washington, D.C.: February 17, 2012.
In 2015, the Head Start child development program provided federal funds to local grantees that employed over 90,000 teachers. Some of these grantees operate programs that do not run during the summer, and some teachers may, in turn, seek UI benefits to help meet expenses during that time. All states have laws generally prohibiting certain employees of educational institutions from collecting UI benefits between terms, though they have flexibility in setting specific eligibility restrictions. GAO was asked to review Head Start and other ECE teachers' eligibility for UI benefits during the summer months. This report examines (1) the extent to which states have laws or policies that affect whether Head Start and other ECE teachers are eligible for UI benefits during summer breaks; (2) how many Head Start teachers may have been eligible for these benefits during their summer breaks in 2015; and (3) what is known about how states communicate information about eligibility for UI benefit payments to Head Start and ECE employees and the effectiveness of these efforts. GAO surveyed UI directors in all 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands (with 100 percent responding); analyzed Head Start data from program year 2015; reviewed relevant federal laws; and interviewed federal officials and stakeholders, including employer associations and teacher associations, in five states selected using criteria such as their benefit restrictions. In response to GAO's survey, officials from all 50 states, the District of Columbia, and two territories reported that they have various laws or policies that may affect whether Head Start and other early childhood education (ECE) teachers are allowed to collect unemployment insurance (UI) benefits during summer breaks. Officials in three states--Indiana, Pennsylvania, and Wyoming--reported that Head Start teachers are generally not eligible for UI benefits over summer breaks.
In other states, officials outlined various factors that can affect eligibility. Specifically, officials from 30 states said the type of employer--for-profit, non-profit, or municipality--can influence eligibility for Head Start teachers (officials in 28 states reported this for ECE teachers). In addition, officials in 17 states reported that eligibility for Head Start teachers can be affected by the program's relationship to a school or board of education (officials in 11 states reported this for ECE teachers). For example, West Virginia officials reported that Head Start teachers considered under the authority of the board of education are generally not eligible for UI benefits. In 2015, about half of the 90,000 Head Start teachers (about 44,800) across the country may have been eligible for UI benefits during their summer break, according to GAO's analysis of available data and the information states reported about their laws, regulations, and policies in response to GAO's survey. The remaining teachers and assistant teachers were likely not eligible because they worked for school districts or charter schools (about 14,150); worked in programs with breaks that were too short to allow them to collect benefits (about 28,940); or were generally not eligible under state laws, regulations, or policies (about 2,510). To communicate UI eligibility rules to both employers and employees, state UI agencies reported using a variety of methods; however, selected stakeholders identified several concerns with these efforts. According to GAO's survey, state directors reported that they use various communication channels to provide general information to both employers and employees on matters, such as how to file a claim in their states. The three most commonly cited methods used by the states included websites, hotlines, and handbooks. 
Even though most states reported that they are using multiple methods of communication with employers and employees, some Head Start and ECE stakeholders in five selected states told GAO that the complexity of federal and state laws and policies governing state programs continues to make UI eligibility rules difficult to understand, even with information that their states are providing. While some of this confusion can be attributed to the variability and complexities of states' eligibility policies, GAO also found that states are generally not evaluating the effectiveness of their communication approaches. Specifically, over half of the states reported that they have not evaluated the effectiveness of their communication approaches with employees, and about two-thirds reported they have not evaluated the effectiveness of their communication approaches with employers. The states that were conducting evaluations reported that the feedback allowed them to make improvements in their communication materials for both employers and employees. For example, some states reported making their claims processing applications more user friendly and understandable as a result of this feedback. GAO is not making recommendations.
FBI's review and approval process for Trilogy contractor invoices, which was carried out by a review team consisting of officials from FBI, GSA, and Mitretek, did not provide an adequate basis for verifying that goods and services billed were actually received by FBI or that payments were for allowable costs. This occurred in part because responsibility for the review and approval of invoices was not clearly defined or documented. In addition, contractor invoices frequently lacked detailed information required by the contracts and other additional information that would be needed to facilitate an adequate review process. Despite this, invoices were paid without requesting additional supporting documentation necessary to determine the validity of the charges. These weaknesses in the review and approval process made FBI highly vulnerable to payment of unallowable or questionable contractor costs. While the invoice review and approval process differed for each contractor and type of invoice charge, in general the process carried out by the review team lacked key procedures to reasonably ensure that goods and services billed were actually received by FBI or that the amounts billed and paid were for allowable costs. For example, the review team did not have a systematic process for verifying that the individuals listed on labor invoices actually worked the number of hours billed or that the job classification and related billing rates were appropriate. Further, there was no documented assessment of whether overall hours billed for a particular activity were in line with expectations. In addition, the review team paid contractor invoices for subcontractor labor charges without any attempt to assess the validity of the charges. The GSA official responsible for paying the invoices stated that the review team relied on the contractors to properly bill for costs related to subcontractors and to validate the subcontractor invoices. 
However, the review team had no process in place to assess whether the contractors were properly validating their subcontractor labor charges or to assess the allowability of those charges. The insufficient invoice review and approval process was at least in part the result of a lack of clarity in the interagency agreement between FBI and GSA as well as in FBI's oversight contract with Mitretek. We have identified the management of interagency contracting as a high-risk area, in part because it is not always clear with whom the responsibility lies for critical management functions in the interagency contracting process, including contract oversight. For example, the terms and conditions of the interagency agreement with GSA only vaguely described GSA's role in contract administration. In particular, the agreement did not specify the invoice review and approval steps to be performed or who would perform them. Likewise, the Mitretek contract provided a general description of Mitretek's oversight duties, but did not specifically mention its responsibilities related to the invoice review and approval process. Additionally, the lack of clarity in roles and responsibilities was evident in our interviews with the review team, where each party indicated that another party was responsible for a more detailed review. The failure to establish an effective review process was compounded by the fact that not all invoices provided the type of detailed information required by the contracts and other information that would be needed to validate the invoice charges. For example: CSC labor invoices did not include information related to individual labor rates or indicate which overhead rates were applicable to each employee--information needed to verify mathematical accuracy and to determine that the components of the labor charges were valid. 
CSC invoices provided a summary of travel charges by category (e.g., airfare and lodging), but did not provide required information related to an individual traveler's trip costs. The travel invoices also did not provide cost detail by travel authorization number. Therefore, there was no way to determine that the trips billed were approved in advance or that costs incurred were proper and reasonable based on the location and length of travel. CSC and SAIC invoices for the other direct costs (ODC) provided a summary of charges by category (e.g., shipping and office supplies); however, CSC did not provide required cost detail by transaction. In some cases, the category of charges was not even identified. For example, as shown in figure 1, on the ODC invoice, a category entitled "Other Direct Costs" made up $1.907 million of the $1.951 million invoice current billing total. No additional information was provided on the invoice to explain what made up these costs. Even though contractor invoices, particularly those from CSC, frequently lacked key information needed for reviewing charges, we found through inquiries with the review team and the contractors that invoices were generally paid without requesting additional supporting documentation. We further found that invoices for equipment did not individually identify each asset being billed by bar code, serial number, or some other identifier that would allow verification of assets billed to assets received. This severely impeded FBI's ability to determine whether it had actually received the assets included on invoices and to subsequently track individual accountable assets on an item-by-item basis. Because of the lack of fundamental internal controls over the process used to pay Trilogy invoices, FBI was highly vulnerable to payment of unallowable contractor charges. In order to assess the effect of these vulnerabilities, we used forensic auditing techniques to select certain contractor costs for review. 
We identified about $10.1 million in questionable contractor costs paid by FBI. These costs included payments for first-class travel and other excessive airfare costs, incorrect billings for overtime hours worked, potentially overcharged labor rates, and other questionable costs. Given FBI's poor control environment over invoice payments and the fact that we reviewed only selected FBI payments to Trilogy contractors, other questionable costs may have been paid that have not been identified. During our review of CSC's supporting documentation for selected travel charges, we found 19 first-class airline tickets costing a total of $20,025. The CSC contract called for travel to be reimbursed to the extent allowable under the Joint Travel Regulations, which state that travelers must use basic economy or coach class unless the use of first-class travel is properly authorized and justified. Because the documentation provided by CSC for these first-class tickets we identified did not contain the required authorizations or justifications, we consider the cost of this travel in excess of coach-class fares as potentially unallowable. Also during our review of travel charges, we noted several instances of unusually expensive coach-class tickets, which we also considered to be questionable. Upon further inquiry with several airlines, we determined that most of these were for "full fare" coach-class tickets. We noted that the airlines used most often by the contractors indicated that it is possible to obtain a free upgrade to first class with the purchase of the more expensive full-fare coach ticket. In fact, we found that in some instances, the current price of a full-fare coach ticket was higher than the current price of a first-class ticket. We noted 62 full-fare coach tickets billed by CSC for $85,336. In contrast, we estimated that basic coach-class fares would have cost $41,978. SAIC and Mitretek also billed FBI for excessive airfare costs, but to a lesser degree. 
In total, we identified 75 unusually expensive tickets costing $100,847, which exceeded our estimate of basic coach-class fares by approximately $49,848. Table 1 provides examples of the first-class and excessive airfare travel costs we identified. Our review also showed that FBI may have paid SAIC for incorrectly billed overtime charges. The task order for SAIC work stated that the government would not object to SAIC employees working hours in excess of 40 per week if necessary. In March 2003, SAIC implemented a policy that FBI agreed to, which decreased the number of hours that would be billed to FBI. This policy stated that contractor staff would be compensated for hours worked that exceeded 90 hours in a 2-week pay period, and established a ceiling of 120 hours per pay period. We found, however, that SAIC employees frequently charged for all hours worked beyond 80 in a pay period and noted some instances where employees charged hours beyond the 120-hour ceiling. The costs of these hours were billed to and paid by FBI. SAIC management acknowledged that billings were not consistent with the March 2003 policy and indicated that it would research the issue further to determine whether corrections are necessary. Based on our review of the labor charges, FBI may have overpaid for more than 4,000 hours. Using average, fully burdened labor rates for employees who billed incorrectly, we estimated that FBI may have overpaid these overtime costs by as much as $400,000.

Questionable Labor Rates

We also found that CSC/DynCorp may have charged labor rates that exceeded ceiling rates that GSA asserts were established pursuant to a DynCorp task order. In short, GSA and CSC disagree on whether ceiling rates for a CSC/DynCorp subcontractor, DynCorp Information Systems (DynIS), were ever established.
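The March 2003 SAIC overtime policy described earlier can be expressed as a simple billable-hours rule. This is a hedged sketch of one reading of the policy (80 regular hours per 2-week pay period, overtime billable only for hours beyond 90, capped at a 120-hour ceiling); it is a hypothetical reconstruction, not SAIC's actual billing logic.

```python
REGULAR_HOURS = 80        # standard hours in a 2-week pay period
OVERTIME_THRESHOLD = 90   # overtime billable only beyond this, per the policy
BILLING_CEILING = 120     # maximum billable hours per pay period

def billable_hours(hours_worked):
    """Hours billable to FBI for one 2-week pay period under one reading
    of the March 2003 policy (hypothetical reconstruction)."""
    regular = min(hours_worked, REGULAR_HOURS)
    overtime = max(0, min(hours_worked, BILLING_CEILING) - OVERTIME_THRESHOLD)
    return regular + overtime
```

Under this reading, an employee who worked 100 hours should have billed 90 (80 regular plus 10 overtime), so charging all 100 hours, as the review found employees frequently did, would overbill by 10 hours.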
When DynCorp entered into the contractual agreement with GSA, it agreed to ceiling rates for various labor categories and agreed to negotiate subcontractor ceiling rates separately for each task order. The May 2001 DynCorp task order award document stated that ceilings were in place on all DynIS labor category and indirect rates, subject to negotiation pending the results of a Defense Contract Audit Agency audit. GSA officials told us they believed that DynIS labor category rates in DynCorp's Trilogy proposal represented established ceilings, and that they negotiated DynIS labor category ceiling rates with DynCorp. However, CSC stated that it never negotiated labor category ceiling rates with GSA. Based on our review of DynCorp's labor invoices, we noted that several of DynIS's rates charged exceeded the labor rates that GSA contended were ceiling rates. For example, CSC/DynCorp billed over 14,000 hours for work performed by senior IT analysts during 2001 on the Trilogy project based on an average hourly rate of $106.14. However, if ceiling rates were established, the DynCorp proposal indicated that the Trilogy project would be charged a maximum of $68.73 per hour for a senior IT analyst working in the field or $96.24 per hour for a senior IT analyst working at headquarters during 2001. If ceiling rates were established, we estimated that FBI overpaid CSC/DynCorp by approximately $2.1 million for DynIS labor costs.

Other Questionable Costs

We also identified about $7.5 million in other payments to contractors that were for questionable costs. In most cases, these costs were not supported by sufficient documentation to enable an objective third party to determine if each payment was a valid use of government funds. For example, CSC did not provide us adequate supporting documentation for almost $2 million of subcontractor labor charges and about $5.5 million of ODC charges we selected to review.
Because $4.7 million of these inadequately supported ODC costs were for training charges from one subcontractor, CACI Inc. - Federal (CACI), we subsequently requested supporting documentation from the subcontractor for selected charges for training costs totaling about $3.5 million. We found that CACI could not adequately support charges to FBI totaling almost $3 million that CACI paid to one event planning company (another subcontractor). CACI stated that supporting documentation was not applicable because its agreement with the event planner was "fixed priced." However, CACI's assertion was not supported by the terms of the purchase order and related statement of work that specifically required documentation to support costs claimed by the event planner and to charge only for services rendered. CSC was also unable to provide us adequate supporting documentation for $762,262 in equipment disposal costs billed by two subcontractors. The documentation provided consisted of a spreadsheet that summarized costs of the subcontractors, but did not include receipts or other support to prove that these costs were actually incurred. Our review of SAIC's subcontractor labor charges found that FBI was billed twice for the same subcontractor invoice totaling $26,335. SAIC officials agreed that they double billed and stated that they would make a correction. Our audit also disclosed that FBI did not adequately maintain accountability for equipment purchased for the Trilogy project. FBI relied extensively on contractors to account for Trilogy assets while they were being purchased, warehoused, and installed. However, FBI did not establish controls to verify the accuracy and completeness of contractor records it was relying on. Moreover, once FBI took possession of the Trilogy equipment, it did not establish adequate physical control over the assets. 
Consequently, we found that FBI could not locate over 1,200 assets purchased with Trilogy funds, which we valued at approximately $7.6 million. Because of the significant weaknesses we identified in FBI's property controls, the actual amount of missing equipment could be even higher. FBI relied on contractors to maintain records related to the purchasing, warehousing, and installation of about 62 percent of the equipment purchased for the Trilogy project. FBI's primary contractor responsible for delivering computer equipment to FBI sites was CSC. FBI officials told us they met regularly with CSC and its subcontractors to discuss FBI's equipment needs and a deployment strategy for the delivery of equipment. Based on these meetings, CSC instructed its subcontractors to purchase equipment, which was subsequently shipped to and put under the control of those same subcontractors. Once equipment arrived at the subcontractors' warehouses, the subcontractors were responsible for affixing bar codes on accountable items--all items valued above $1,000 and certain others considered sensitive that are required by FBI policy to be tracked individually. In addition, FBI directly purchased about $19.1 million of equipment for the Trilogy project that was shipped directly to either CSC or CSC subcontractors. When equipment was shipped from a subcontractor warehouse to an FBI site, the subcontractor prepared a bill of lading that listed all items shipped. However, there was no requirement for FBI officials to verify that the items were actually received. The subcontractors also prepared a "Site Acceptance Listing" of equipment that had been installed at each FBI site. While an FBI official signed this listing, based on our inquiries at two field offices, we found the officials may not have always verified the accuracy and completeness of these lists. 
FBI did not prepare its own independent lists of ordered, purchased, or paid-for assets and did not perform an overall reconciliation of total assets ordered and paid for to those received. Such a reconciliation would have been made difficult by the fact that invoices FBI received from CSC did not include item-specific information--such as bar codes, serial numbers, or shipping location. However, failure to perform such a reconciliation left FBI with no assurance that it had received all of the assets it paid for. In addition, equipment that was delivered to FBI sites was not entered into FBI's Property Management Application (PMA) in a timely manner, increasing the risk that assets could be lost or stolen without detection. We found that 71.6 percent of the CSC-purchased equipment that was recorded in PMA, representing 84 percent of the total dollar value, was entered more than 30 days after receipt, and nearly 17 percent of the equipment, representing 37 percent of the dollar value, was entered more than a year after receipt. When assets are not timely recorded in the property system, there is no systematic means of identifying where they are located or when they are removed, transferred, or disposed of and no record of their existence when physical inventories are performed. This severely limits the effectiveness of the physical inventory in detecting missing assets and in triggering investigation efforts as to the causes. FBI also could not accurately identify all accountable assets because of improper controls related to its bar codes--a key tool for maintaining accountability and control over individual assets. FBI relied on contractors to affix the bar codes, yet did not track the bar code numbers given to contractors, the bar code numbers they used, or the bar code numbers returned. Moreover, FBI provided incorrect instructions to contractors, initially directing them to bar code certain types of lower cost equipment that did not need to be tracked. 
FBI's loss of control over its bar codes and failure to timely enter assets into its property tracking system seriously hampered its ability to maintain accountability for its Trilogy equipment. Accountability for equipment was further undermined by FBI's failure to perform sufficient physical inventory procedures to ensure that all assets purchased with Trilogy funds were actually located during the physical inventory. Given the serious nature of these control weaknesses, we performed additional test work to determine whether all accountable assets purchased with Trilogy funds could be accounted for and found that FBI was unable to locate 1,404 of these assets. These were items such as desktop computers, laptops, printers, and servers. In written comments on a draft of our report, FBI told us that it had accounted for more than 1,000 of these items. During our agency comment period, FBI stated that it had found 237 items we previously identified as missing and provided us evidence, not made available during our audit, to sufficiently account for 199 of these items. We adjusted the missing assets listing in our report to reflect 1,205 (1,404 - 199) assets as still missing. FBI later informed us that the approximately 800 remaining items noted in its official agency response included (1) accountable assets not recorded in PMA because they were either incorrectly identified as nonaccountable assets or mistakenly omitted, (2) defective accountable assets that were never recorded in PMA and subsequently replaced, and (3) nonaccountable assets or components of accountable assets that were incorrectly bar coded. We considered these same issues during our audit and attempted to determine their impact. For example, as stated in our report, FBI told us that components of some nonaccountable assets that were part of a larger accountable item may have been mistakenly bar coded. 
Using FBI guidance on accountable property, we determined that 103, or about 11 percent, of the 926 missing assets purchased by CSC may have represented nonaccountable components. Because FBI could not provide us with the location information, we could not definitively determine whether the items were accountable assets. During the course of our audit, FBI was not able to provide us with any evidence to support its other statements regarding the reasons the assets could not be located. While we are encouraged by FBI's current efforts to account for these assets, its ability to definitively determine their existence has been compromised by the numerous control weaknesses identified in our report. Further, the fact that assets have not been properly accounted for to date means that they have been at risk of loss or misappropriation without detection since being delivered to FBI--in some cases, for several years. FBI's Trilogy IT project spanned 4 years and the reported costs exceeded $500 million. Our review disclosed that there were serious internal control weaknesses in the process used by FBI and GSA to approve contractor charges related to Trilogy, which made up the majority of the total reported project cost. While our review focused specifically on the Trilogy program, the significance of the issues identified during our review may indicate more systemic contract and financial management problems at FBI and GSA, in particular when using cost-reimbursable type contracts and interagency contracting vehicles. These weaknesses resulted in the payment of millions of dollars of questionable contractor costs, which may have unnecessarily increased the overall cost of the project. Unless FBI strengthens its controls over contractor payments, its ability to properly control the costs of future projects involving contractors, including its new Sentinel project, will be seriously compromised. 
Further, weaknesses in FBI's controls over the equipment acquired for Trilogy resulted in millions of dollars in missing equipment and call into question FBI's ability to adequately safeguard its equipment, as well as confidential and sensitive information that could be accessed through that equipment from unauthorized use. Our companion report includes 15 recommendations to help improve FBI's and GSA's controls over their invoice review and approval processes and to address questionable billing issues we identified. It also includes 12 recommendations to help improve FBI's accountability for assets. FBI concurred with our recommendations and outlined actions under way and further planned actions to address the weaknesses we identified. FBI also provided additional information related to Trilogy assets we identified as missing. While GSA accepted our recommendations, it did not believe that one of them was needed, and described some of the improvements to its internal controls and other business process changes already implemented. GSA also expressed concern with some of our observations and conclusions related to the invoice review and approval process and our analysis of airfare costs. We continue to believe that our report is accurate and that all recommendations should be implemented. We understand that FBI has outlined actions to implement our recommendations. While we are encouraged by these efforts, let me just emphasize the importance of continually monitoring the implementation of corrective actions to ensure that they are effective in helping to avoid the types of control lapses that we identified throughout the Trilogy project. Without such vigilant monitoring, Sentinel and other efforts will be greatly exposed to similar questionable or inappropriate payments and lack of accountability over assets. Mr. Chairman and members of the committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have. 
For more information regarding this testimony, please contact Linda M. Calbom at (202) 512-9508 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this testimony included Steven Haughton (Assistant Director), Ed Brown, Marcia Carlsen (Assistant Director), Lisa Crye, and Matt Wood. Numerous other individuals contributed to our audit and are listed in our companion report. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Trilogy project--initiated in 2001--is the Federal Bureau of Investigation's (FBI) largest information technology (IT) upgrade to date. While ultimately successful in providing updated IT infrastructure and systems, Trilogy was not a success with regard to upgrading FBI's investigative applications. Further, the project was plagued with missed milestones and escalating costs, which eventually totaled nearly $537 million. This testimony focuses on (1) the internal controls over payments to contractors, (2) payments of questionable contractor costs, and (3) FBI's accountability for assets purchased with Trilogy project funds. FBI's review and approval process for Trilogy contractor invoices, which included a review role for GSA as contracting agency, did not provide an adequate basis for verifying that goods and services billed were actually received and that the amounts billed were appropriate, leaving FBI highly vulnerable to payments of unallowable costs. This vulnerability is demonstrated by FBI's payment of about $10.1 million in questionable contractor costs we identified using data mining, document analysis, and other forensic auditing techniques. These costs included first-class travel and other excessive airfare costs, incorrect charges for overtime hours, potentially overcharged labor rates, and charges for which the contractors could not provide adequate supporting documentation to substantiate the costs purportedly incurred. FBI also failed to establish controls to maintain accountability over equipment purchased for the Trilogy project. These control lapses resulted in more than 1,200 missing pieces of equipment valued at approximately $7.6 million that GAO identified as part of its review. Given the poor control environment and the fact that GAO reviewed only selected FBI payments to Trilogy contractors, other questionable contractor costs may have been paid that have not been identified. 
If these control weaknesses go uncorrected, future contracts, including those related to Sentinel--FBI's new electronic information management system initiative--will be greatly exposed to improper payments. In addition, the lack of accountability for Trilogy equipment calls into question FBI's ability to adequately safeguard its existing assets as well as those it may acquire in the future.
Defense biometrics activities involve a number of military services, commands, and offices across the department. Figure 2 depicts the relationship among several of the key DOD biometrics organizations. Roles and responsibilities for defense biometrics activities are explained in DOD's 2008 biometrics directive, and summarized in table 1. DOD is revising its biometrics directive based on, among other things, new requirements in the Ike Skelton National Defense Authorization Act for Fiscal Year 2011. Office of the Secretary of Defense officials said that they plan to issue the revised biometrics directive in the fall of 2012. The office had started to draft an implementing instruction for biometrics based on the 2008 directive but suspended this effort pending issuance of the updated directive. According to DOD officials, the implementing instruction is expected to contain a more detailed description of roles and responsibilities based upon the revised directive. To oversee biometrics activities in Afghanistan, Central Command established Task Force Biometrics in 2009. According to the Commander's Guide to Biometrics in Afghanistan, Task Force Biometrics assists commands with integrating biometrics into their mission planning, trains individuals on biometrics collection, develops biometrics-enabled intelligence products, and manages the biometrically enabled watchlist for Afghanistan that contains the names of more than 33,000 individuals. This watchlist is a subset of the larger biometrically enabled watchlist managed by the National Ground Intelligence Center. Additionally, according to Army officials, the Army established the Training and Doctrine Command Capabilities Manager for Biometrics and Forensics with responsibilities for ensuring that user requirements are considered and incorporated in Army policy and doctrine involving biometrics. 
Further, the Army gave its Intelligence Center of Excellence responsibilities for developing and implementing biometrics training, doctrine, education, and personnel. (See U.S. Army, Commander's Guide to Biometrics in Afghanistan: Observations, Insights, and Lessons, Center for Army Lessons Learned (April 2011).) U.S. forces enroll non-U.S. persons--individuals who are neither U.S. citizens nor aliens lawfully admitted into the United States for permanent residence--during patrols and other missions. U.S. forces use three principal biometrics collection devices to enroll individuals:
* The Biometrics Automated Toolset: Consists of a laptop computer and separate peripherals for collecting fingerprints, scanning irises, and taking photographs. The Toolset system connects into any of the approximately 150 computer servers geographically distributed across Afghanistan that store biometrics data. The Toolset system is used to identify and track persons of interest and to document and store information, such as interrogation reports, about those persons. This device is primarily used by the Army and Marine Corps to enroll and identify persons of interest.
* The Handheld Interagency Identity Detection Equipment: Is a self-contained handheld biometrics collection device with an integrated fingerprint collection surface, iris scanner, and camera. The Handheld Interagency Identity Detection Equipment connects to the Biometrics Automated Toolset system to upload and download biometrics data and watchlists. This device is primarily used by the Army and Marine Corps.
* The Secure Electronic Enrollment Kit: Is a self-contained handheld biometrics collection device with a built-in fingerprint collection surface, iris scanner, and camera. Additionally, the Secure Electronic Enrollment Kit has a built-in keyboard to facilitate entering biographical and other information about individuals being enrolled.
The Kit is used primarily by the Special Operations Command, although the Army and Marine Corps have selected the Kit as the replacement biometrics collection device for the Handheld Interagency Identity Detection Equipment. The Biometrics Automated Toolset, Handheld Interagency Identity Detection Equipment, and Secure Electronic Enrollment Kit collection devices are shown in figure 3. U.S. forces in Afghanistan collect biometrics data and search for a match against the Afghanistan biometrically enabled watchlist that is stored on the biometrics collection devices in order to identify persons of interest. Soldiers and Marines connect their biometrics collection devices to the Afghanistan Biometrics Automated Toolset system's architecture, at which point the data are transmitted and replicated through a series of computer servers in Afghanistan to the ABIS database in West Virginia. Special operations forces have a classified and an unclassified Web-based portal that they use to transmit biometrics data directly from their collection devices to the ABIS database in West Virginia. Biometrics data obtained during the enrollment using the biometrics collection devices are searched against previously collected biometrics records in the Afghanistan biometrically enabled watchlist, and in some cases the Biometrics Automated Toolset servers, before being searched against biometrics records and latent fingerprints stored in ABIS. Match/no match watchlist results are reported to Task Force Biometrics and other relevant parties. The biometrics data collected during the enrollment are retained in ABIS for future matching by DOD. Once collected, biometrics data and associated information are evaluated by intelligence analysts to link a person with other people, events, and information. This biometrics-enabled intelligence is then used to identify persons of interest, which can result in their inclusion on the biometrically enabled watchlist.
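The tiered matching flow described in this report--checking a collected enrollment first against the watchlist preloaded on the device, then against in-country servers, and finally against the authoritative ABIS database--can be illustrated with a simplified sketch. The function, data structures, and identifiers below are hypothetical illustrations, not DOD's actual software or data.

```python
# Illustrative sketch of the tiered biometrics matching flow: an
# enrollment is checked against the device's preloaded watchlist first,
# then in-country servers, then the authoritative ABIS database.
# All names and identifiers here are hypothetical.

def match_enrollment(enrollment_id, device_watchlist, country_servers, abis):
    """Return (tier, matched) for the first tier that recognizes the ID."""
    if enrollment_id in device_watchlist:   # fastest: on-device check
        return ("device_watchlist", True)
    if enrollment_id in country_servers:    # in-theater server check
        return ("country_server", True)
    # Authoritative check against all stored records in ABIS.
    return ("abis", enrollment_id in abis)

# Hypothetical sets of previously enrolled identifiers at each tier.
watchlist = {"ID-0192", "ID-0417"}
servers = watchlist | {"ID-2210"}
abis_db = servers | {"ID-7788"}

print(match_enrollment("ID-0417", watchlist, servers, abis_db))  # device-level hit
print(match_enrollment("ID-7788", watchlist, servers, abis_db))  # hit only in ABIS
print(match_enrollment("ID-9999", watchlist, servers, abis_db))  # no match at any tier
```

The design mirrors the report's description: earlier tiers give faster answers from smaller data sets, while ABIS remains the authoritative source of record.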
The biometrically enabled watchlist for Afghanistan contains five levels, and according to the level of assignment, an individual who is encountered after his or her initial enrollment will be detained, questioned, denied access to U.S. military bases, disqualified from training or employment, or tracked to determine his or her activities and associations. In addition to DOD, the Federal Bureau of Investigation and the Department of Homeland Security collect and store biometrics data to identify persons of interest. The Federal Bureau of Investigation uses its biometrics system for law enforcement purposes. The Department of Homeland Security uses its biometrics system for border security, naturalization, and counterterrorism purposes, as well as for visa approval in conjunction with the Department of State. While the three biometrics organizations are able to share information, the biometrics databases operate independently from one another, as we have noted in our March 2011 report. DOD has trained thousands of personnel on the collection and transmission of biometrics data since 2004; however, training for leaders does not fully support warfighter use of biometrics because it does not instruct unit commanders and other military leaders on (1) the effective use of biometrics, (2) selecting the appropriate personnel for biometrics collection training, and (3) tracking personnel who have been trained in biometrics collection to effectively staff biometrics operations. The Army, Marine Corps, and Special Operations Command have trained thousands of personnel on the use of biometrics prior to their deployment to Afghanistan over the last 8 years. This training includes the following: Army: Offers classroom training at its three combat training centers at Fort Polk, Louisiana; Fort Irwin, California; and Hohenfels, Germany as well as home station training teams and mobile training teams that are available to travel and train throughout the United States as needed. 
In addition, the Army is developing virtual-based training software to supplement its classroom training efforts. Marine Corps: Offers classroom training at its training centers at Camp Pendleton, California and Camp LeJeune, North Carolina as well as simulation training at Twentynine Palms, California. Special Operations Command: Offers classroom and simulation training at Fort Bragg, North Carolina. Moreover, the military services and Special Operations Command have mobile training teams in Afghanistan to provide biometrics training to personnel during their deployment. Additionally, the military services rely on personnel who have been trained in biometrics prior to deployment to train others while deployed. The Office of the Secretary of Defense, the military services, and Central Command each has emphasized in key documents the importance of training. The 2008 DOD directive, which was issued by the Under Secretary of Defense for Acquisition, Technology, and Logistics 4 years after biometrics collection began in Afghanistan, emphasizes the importance of biometrics training, including the need for component-level guidance to ensure training is developed as required. The Office of the Assistant Secretary of Defense for Research and Engineering subsequently drafted an implementing instruction that includes guidance for the establishment of training programs designed to enable DOD units and leaders to effectively employ biometrics collection capabilities and utilize biometrically enabled watchlists. As noted earlier, this instruction will not be issued until the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics reissues the biometrics directive, potentially in the fall of 2012. Both the Army and Central Command have issued guidance that requires soldiers to be trained prior to deployment. 
Additionally, an Army regulation on training says that first and foremost, training must establish tasks, conditions, and standards to prepare units to perform their missions. Similarly, a Marine Corps order on training states that units focus their training effort on those missions and tasks to which they can reasonably expect to be assigned in combat. This Office of the Secretary of Defense, Army, Marine Corps, and Central Command guidance underscores the importance of biometrics training. DOD's draft instruction for biometrics emphasizes the importance of training leaders in the effective employment of biometrics. However, existing biometrics training for leaders does not instruct unit commanders and other military leaders on (1) the effective use of biometrics, (2) selecting the appropriate personnel for biometrics collection training, and (3) tracking personnel who have been trained in biometrics collection to effectively staff biometrics operations. When leaders are not invested in biometrics as a tool for identifying enemy combatants, the warfighters serving with them may remain unaware of its value because their leaders have not conveyed its importance. Moreover, these gaps in biometrics training for leaders limit the ability of military personnel to collect higher-quality biometrics enrollments that better confirm the identity of enemy combatants. The military services and Special Operations Command have developed biometrics training for leaders to varying degrees, but the existing training does not communicate how leaders can effectively use biometrics in their mission planning.
As noted in the Army's Training Needs Analysis for Tactical Biometrics Collection Devices, issued in March 2010, a majority of the Army's unit commanders were unaware of how biometrics collection contributes to identifying enemy combatants, and the failure to address biometrics in training for leaders hampers force protection measures. This analysis also stated that Army leaders need to train on how and when to incorporate biometrics in mission planning and how to subsequently deploy soldiers to use biometrics systems. As a result of the analysis, the Army developed a 1-hour briefing for unit commanders and other senior officials, but according to an Army training official, it is voluntary training provided by mobile training teams and not a part of the Army's formal, required training for leaders. Furthermore, even if leaders attend the briefing, they may still not be fully aware of the importance and use of biometrics in combat missions because the briefing focuses primarily on operating biometrics devices for collecting and transmitting biometrics data and not on the value of biometrics' contribution to identifying enemy combatants. In addition, neither the Marine Corps nor the Special Operations Command has incorporated training for leaders into its biometrics training efforts. A Marine Corps official told us that biometrics training for leaders will not be developed until the Marine Corps finalizes its Marine Corps Identity Operations Strategy 2020, which will establish biometrics as a fully integrated capability. Officials at Special Operations Command said that while they offer biometrics training for the warfighter, they do not have dedicated biometrics training for leaders. Existing Army biometrics training for leaders also does not stress the importance of (1) selecting appropriate personnel for biometrics training, and (2) tracking which personnel have completed biometrics training prior to deployment.
These two omissions likely contribute to less effective biometrics operations. For example, the Army found that unit commanders frequently chose soldiers in occupations unlikely to use biometrics in operations (e.g., vehicle drivers) to attend biometrics collection training prior to deploying to Afghanistan, rather than personnel in occupations more likely to utilize biometrics (e.g., military police). Similarly, a Marine Corps official told us that commanders have selected personnel for biometrics missions who were never identified for predeployment training, including, in one instance, musicians. With respect to tracking personnel with biometrics training, the Army requires unit commanders to document soldiers' completion of unit-level training in the Army's Digital Training Management System. However, the Army Audit Agency reported in March 2011 that units routinely do not use the system to document training--biometrics or otherwise--and Army biometrics officials with whom we spoke during the course of our review were unaware of this system or any other mechanism to track completion of biometrics training. Similarly, the Marine Corps does not have a tracking mechanism to identify personnel trained in biometrics prior to deployment. A Marine Corps training official said that because the Marine Corps has not developed biometrics doctrine and training guidance, biometrics training is not tracked. Consequently, Army and Marine Corps unit commanders in Afghanistan do not have accurate information on which and how many of their personnel have received training for conducting biometrics operations. This lack of accurate information impedes unit commanders' ability to assess whether they have sufficient expertise among their personnel to effectively staff biometrics operations. Since 2004, U.S.
forces in Afghanistan have collected biometrics from more than 1.2 million individuals with approximately 3,000 successful matches to enemy combatants, but factors during the transmission process limit the timely identification of enemy combatants using biometrics. Every week, thousands of biometrics enrollments are collected in Afghanistan and transmitted to ABIS in West Virginia; however, responsibility for assuring the completeness and accuracy of the biometrics data during the transmission process is unclear. According to the DOD biometrics directive, the Executive Manager for DOD Biometrics is responsible for developing and maintaining policies and procedures for the collection, processing, and transmission of biometrics data. However, no policy has been articulated that assigns responsibility for maintaining the completeness and accuracy of biometrics data during the transmission process. In addition, the Standards for Internal Control in the Federal Government state that (1) controls should be installed at an application's interfaces with other systems to ensure that all inputs are received and are valid, and that outputs are correct and properly distributed; and (2) key duties and responsibilities are divided among different people to reduce the risk of error. As shown in figure 4, the warfighter has responsibility for the biometrics data from collection to the point of submission into the Biometrics Automated Toolset system, and the Biometrics Identity Management Agency assumes responsibility for the biometrics data once they are received by ABIS. The Project Manager for DOD Biometrics has responsibility for the physical infrastructure of the Biometrics Automated Toolset system. DOD officials we spoke with were unable to identify who has responsibility for the completeness and accuracy of the biometrics data during the transmission process.
Specifically, officials from Central Command stated that the command owns the biometrics data, but the Project Manager for DOD Biometrics is responsible for their completeness and accuracy. Officials from the Project Manager for DOD Biometrics told us that their office is not responsible for the completeness and accuracy of the biometrics data. In some cases, issues affecting the completeness and accuracy of the biometrics data have surfaced during the transmission process. Specifically, data synchronization issues led Central Command to issue an urgent requirement in September 2009 to improve data synchronization so as not to further hinder DOD's ability to transfer biometrics data; however, after more than a year of inaction in which no improvements were made, the urgent needs statement was rescinded so that funding could be reallocated to the Last Tactical Mile pilot project. This issue has continued to impact the completeness and accuracy of biometrics data. For example, during the Last Tactical Mile pilot project in summer 2011, Army officials found that of the more than 33,000 people on the Afghanistan biometrically enabled watchlist, approximately 4,000 biometrics records collected from 2004 to 2008 had become separated from their associated identities, and 1,800 remained separated as of October 2011. Officials stated that the separated data were most likely due in part to synchronization issues during the data transmission process. This decoupling of an individual from his or her associated biometrics data undermines the utility of biometrics by increasing the likelihood of enemy combatants going undetected within Afghanistan and across borders since the separated biometrics data cannot be used for identification purposes.
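The decoupling problem described above is, in essence, a referential-integrity failure: biometric records exist with no corresponding identity record. A minimal sketch of how such orphaned records could be detected is shown below; the record layout and identifiers are hypothetical, not DOD's actual schema.

```python
# Minimal sketch of detecting biometrics records that have become
# separated from their associated identities. The record layout and
# identifiers are hypothetical illustrations only.

def find_separated_records(biometric_records, identity_records):
    """Return IDs of biometric records with no matching identity record."""
    known_identities = {rec["person_id"] for rec in identity_records}
    return [
        rec["record_id"]
        for rec in biometric_records
        if rec["person_id"] not in known_identities
    ]

biometrics = [
    {"record_id": "B-001", "person_id": "P-10"},
    {"record_id": "B-002", "person_id": "P-11"},  # identity lost in transit
    {"record_id": "B-003", "person_id": "P-12"},
]
identities = [{"person_id": "P-10"}, {"person_id": "P-12"}]

print(find_separated_records(biometrics, identities))  # -> ['B-002']
```

Running such a check at each hop of the transmission path would surface separated records as they occur, rather than years later during a pilot project.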
Although DOD officials said they are aware of this and other synchronization issues, the absence of clearly defined responsibility during the biometrics data transmission process has contributed, in part, to DOD's inability to expeditiously correct data transmission issues as they arise, such as instances in which biometrics data collected in Afghanistan have been separated from their identities. Several factors in transmitting biometrics data from Army and Marine Corps forces affect DOD's ability to identify and capture enemy combatants with biometrics in a timely manner. The transmission process for biometrics data involves a unit's submission of collected enrollments, matching in ABIS, and a match/no match response back to the unit. From the time data are submitted, the transmission process can take from less than 1 day to 15 days or more to complete, as shown in figure 5. However, the design specifications for the Biometrics Automated Toolset system for Afghanistan state that biometrics data should transmit from the point of data submission to ABIS within 4 hours. In contrast, according to officials from the Biometrics Identity Management Agency, once enrollments are received by ABIS, the time it takes to match the data and transmit a response to the National Ground Intelligence Center for intelligence analysis, and ultimately back to the unit that performed the biometrics enrollment in Afghanistan, averages 22 minutes. Multiple factors contribute to the time it takes to transmit biometrics data from the warfighter to ABIS, and back. These factors include:
* Biometrics architecture: The Biometrics Automated Toolset system's architecture constructed for use in Afghanistan requires biometrics submissions to be replicated sequentially across multiple computer servers before reaching ABIS. As noted in figure 5, biometrics data on the Biometrics Automated Toolset system's architecture can take more than 2 weeks to transmit from Afghanistan to ABIS. However, DOD is unclear on how the number of servers correlates to transmission timeliness.
* Geographic challenges to connectivity: The mountainous terrain in Afghanistan's northern regions highlights the limited ability of U.S. forces to transmit biometrics data within the country. For example, under optimal conditions (i.e., flat terrain), wireless transmission, such as that used in the Last Tactical Mile pilot project, is capable of transmitting biometrics data up to approximately 50 miles. However, wireless transmission requires line-of-sight from a handheld biometrics device to a wireless tower, which would necessitate acquiring and erecting many towers to cover a relatively small geographic area. DOD is still evaluating the viability of expanding the pilot project in Afghanistan.
* Multiple, competing demands for communications infrastructure: Multiple, competing demands for communications infrastructure by U.S. forces in Afghanistan limit bandwidth available to transmit biometrics data to ABIS, thus resulting in delayed submissions. According to DOD officials, available bandwidth is a continuing problem in Afghanistan, which limits the amount and speed of information transmitted within or outside of Afghanistan. DOD has increased bandwidth capacity in Afghanistan over the years, but new military capabilities add to the demand for additional bandwidth.
* Mission requirements: According to the Commander's Guide to Biometrics in Afghanistan, forces should submit enrollments to ABIS within 8 hours of completion of a mission; however, missions can keep units operating in remote areas away from biometrics transmission infrastructure for weeks at a time. While on missions, a unit's biometrics collection devices have a preloaded Afghanistan biometrically enabled watchlist and are typically updated weekly, but again, mission requirements can delay updating these devices with the most current watchlist.
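The cumulative effect of sequential replication across multiple servers can be illustrated with a back-of-the-envelope model: the end-to-end time is roughly the sum of per-hop delays plus the roughly 22-minute matching turnaround reported once data reach ABIS. The hop counts and per-hop delays below are illustrative assumptions, not measured DOD figures.

```python
# Back-of-the-envelope model of sequential replication delay. An
# enrollment replicates across several servers before reaching ABIS, so
# end-to-end time is roughly the sum of per-hop delays plus ABIS
# matching time. Hop counts and delays are illustrative assumptions.

def end_to_end_hours(per_hop_delays_hours, abis_match_minutes=22):
    """Total transmission time: replication hops plus ABIS matching."""
    return sum(per_hop_delays_hours) + abis_match_minutes / 60.0

# A well-connected unit: few hops, short delays per hop.
fast_path = [0.5, 0.5, 1.0]
# A remote unit: more hops, bandwidth contention, and mission delays.
slow_path = [8.0, 24.0, 48.0, 72.0, 96.0]

print(round(end_to_end_hours(fast_path), 1))       # within the 4-hour spec
print(round(end_to_end_hours(slow_path) / 24, 1))  # on the order of days
```

Even this crude model shows why the same architecture can meet the 4-hour design specification for some units while taking 15 days or more for others: total delay is dominated by the slowest hops, not by the matching step at ABIS.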
DOD has pursued two key efforts to reduce the time it takes to transmit biometrics data in Afghanistan outside of the Biometrics Automated Toolset system: communications satellites used by special operations forces and the Last Tactical Mile pilot project. Special operations forces upload their biometrics enrollments to a dedicated classified or unclassified Web-based portal using communications satellites. The Biometrics Identity Management Agency monitors the portal and accesses the enrollments therein to match against biometrics data stored in ABIS. In addition to fingerprint, iris, and facial biometrics, the Web-based portal supports the cataloguing and analysis of other biometric and nonbiometric evidence such as DNA, documents, and cell phone data. Match/no match responses are provided to the warfighter via the portal within 2 to 7 minutes, assuming satellite or other Internet connectivity is available. Additionally, special operations forces can match individuals against a preloaded biometrically enabled watchlist on the handheld biometrics collection devices. Central Command was responsible for conducting the Last Tactical Mile pilot project during 2011 to provide the warfighter with a rapid response time on biometrics data submissions. In the pilot project, matching is initially against a biometrically enabled watchlist stored on the warfighter's handheld device before searching against data stored in a stand-alone computer server in Afghanistan prior to transmission to ABIS--the authoritative database. The Last Tactical Mile pilot project originated as a joint urgent operational need to utilize wireless infrastructure to transmit biometrics enrollments from a handheld biometrics collection device to a wireless communications tower. A goal of the pilot project was to receive a match/no match response in 2 to 5 minutes against the biometrics data stored on the computer server in Afghanistan, including possible latent fingerprint matches. Army officials told us that expanding the Last Tactical Mile pilot project across all of Afghanistan would cost approximately $300 million, in large part due to the number of wireless communications towers that would be necessary to provide connectivity across the mountainous terrain in the northern part of the country. DOD had not completed its evaluation of the Last Tactical Mile pilot project at the time of our review, and had not documented plans to utilize wireless infrastructure for biometrics in Afghanistan beyond the continued operation of the pilot project. Figure 6 highlights the differences between the Biometrics Automated Toolset system's architecture used by the Army and the Marine Corps, the Web-based architecture used by Special Operations Command, and the architecture in the Last Tactical Mile pilot project. Although DOD is tracking biometrics data transmission time in Afghanistan to facilitate timely responses to the warfighter, it has not assessed several of the factors that contribute to transmission delays. Officials from the Office of the Secretary of Defense told us that the Project Manager for DOD Biometrics had conducted a limited analysis of some known factors affecting transmission delays in Afghanistan, and found that the warfighter was largely responsible for submission delays. However, this analysis did not evaluate technical and geographic factors that can contribute to extended transmission times. According to the biometrics directive, the Assistant Secretary of Defense for Research and Engineering--within the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics--is responsible for periodically assessing biometrics activities for continued effectiveness in satisfying end-user requirements. However, no comprehensive assessment of factors contributing to transmission timeliness has been conducted by this office.
In addition, DOD's draft biometrics instruction states that testing and evaluation expertise must be employed to understand the strengths and weaknesses of the system, with a goal of early identification of deficiencies so that they can be corrected before problems occur. For example, it is unclear whether the benefits of additional communications satellite access, expansion of the Last Tactical Mile pilot project's technology, or an alternative approach outweigh their associated costs. Factors contributing to transmission delays can lead to enemy combatants going undetected and subsequently being released back into the general population because their identities could not be confirmed with biometrics data in a timely manner. If a watchlist stored on a biometrics collection device does not lead to a confirmed match to an enemy combatant, it may be months or years before the individual is stopped again by U.S. forces at a roadside checkpoint, border crossing, or during a patrol or another mission, if ever. Lessons learned from U.S. forces' experiences with biometrics in Afghanistan are collected by and used within each of the military services and Special Operations Command, but those lessons are not disseminated across DOD. Army and Marine Corps guidance both emphasize using lessons learned to sustain, enhance, and increase preparedness to conduct current and future operations. The Army, Marine Corps, and Special Operations Command each rely on their respective existing processes to collect lessons learned pertaining to biometrics to facilitate knowledge sharing. To collect lessons learned, the military services and Special Operations Command draw from a variety of sources, including through surveys administered to students and instructors during training, and through interviews with personnel who have recently returned from a deployment. 
These lessons learned are analyzed to identify opportunities to improve existing practices within the military services and Special Operations Command. For example, Army officials said that about 10 to 15 percent of the lessons learned the Army collects are subsequently identified as either best practices or issues that require further action to resolve. DOD also uses informal processes to capture biometrics training-related lessons learned. For example, monthly teleconferences are held by and open to training representatives from Central Command's Task Force Biometrics and the Army to discuss biometrics training-related issues and experiences. However, this information is not disseminated across the department. Army biometrics officials told us that it would be advantageous to share biometrics lessons learned across the military services and combatant commands. Currently, DOD has no requirement to disseminate biometrics lessons learned across the department. However, the unpublished DOD implementing instruction for biometrics that was drafted by the Office of the Assistant Secretary of Defense for Research and Engineering includes a provision that would require DOD organizations to provide feedback and biometrics lessons learned to the Biometrics Identity Management Agency in its role as the Executive Manager for DOD Biometrics. In this role, the Biometrics Identity Management Agency could disseminate biometrics lessons learned collected by the various military services and combatant commands to inform relevant policies and practices. Biometrics Identity Management Agency officials told us that while they have established a process to receive Army lessons learned for biometrics, the agency does not plan to assume the additional responsibility of collecting the other military services' and combatant commands' lessons learned for biometrics issues and disseminating them across DOD without an explicit requirement to do so.
By not disseminating biometrics lessons learned from existing military service and combatant command lessons learned systems across the department, DOD misses an opportunity to fully leverage its investment in biometrics. U.S. military forces have used biometrics as a nonlethal weapon in counterinsurgency operations in Afghanistan to remove the anonymity sought by enemy combatants. However, issues such as minimal biometrics training for leaders; challenges with ensuring the complete, accurate, and timely transmission of biometrics data; and the absence of a requirement to disseminate biometrics lessons learned across DOD persist. As a result, these issues limit the effectiveness of biometrics as an intelligence tool and may allow enemy combatants to move more freely within and across borders. We recommend that the Secretary of Defense take the following seven actions:
* To better ensure that training supports warfighter use of biometrics, direct the military services and Special Operations Command to expand biometrics training for leaders to include the effective use of biometrics in combat operations, the importance of selecting appropriate candidates for training, and the importance of tracking who has completed biometrics training prior to deployment to help ensure appropriate assignments of biometrics collection responsibilities.
* To better ensure the completeness and accuracy of transmitted biometrics data, direct the Assistant Secretary of Defense for Research and Engineering, through the Under Secretary of Defense for Acquisition, Technology, and Logistics, and in coordination with the military services, Special Operations Command, and Central Command, to identify and assign responsibility for biometrics data throughout the transmission process, regardless of the pathway the data travels, to include the time period between when warfighters submit their data from the biometrics collection device until the biometrics data reach ABIS.
* To determine the viability and cost-effectiveness of reducing transmission times for biometrics data, direct the Assistant Secretary of Defense for Research and Engineering, through the Under Secretary of Defense for Acquisition, Technology, and Logistics, to comprehensively assess and then address, as appropriate, the factors that contribute to transmission time for biometrics data.
* To more fully leverage DOD's investment in biometrics, direct the Assistant Secretary of Defense for Research and Engineering, through the Under Secretary of Defense for Acquisition, Technology, and Logistics, to assess the value of disseminating biometrics lessons learned from existing military service and combatant command lessons learned systems across DOD to inform relevant policies and practices, and implement a lessons learned dissemination process, as appropriate.
We requested comments from DOD on the draft report, but none were provided. DOD did provide us with technical comments that we incorporated, as appropriate. We are sending copies of this report to other interested congressional parties; the Secretary of Defense; the Chairman, Joint Chiefs of Staff; the Secretaries of the U.S. Army, the U.S. Navy, and the U.S. Air Force; the Commandant of the U.S. Marine Corps; and the Director, Office of Management and Budget. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.
To address our audit objectives, we reviewed relevant Office of the Secretary of Defense, military service, and combatant command policies and guidance, such as the Department of Defense's (DOD) biometrics directive and accompanying draft instruction, and the Army's Commander's Guide to Biometrics in Afghanistan. We obtained these and other relevant documents, and we interviewed officials from the DOD organizations identified in table 2. To determine the extent to which DOD's biometrics training supports warfighter use of biometrics in Afghanistan, we reviewed relevant Army, Marine Corps, and Central Command policies and assessments pertaining to biometrics training for warfighters and leaders to determine the biometrics training requirements for U.S. forces operating in Afghanistan. To understand the frequency and types of biometrics training offered by the Army, Marine Corps, and Special Operations Command, we reviewed training schedules and observed the Army's Soldier Field Service Engineer Course 2nd Pilot and Biometrics Operations Specialist/Master Gunner Training at Fort Drum, N.Y.; the Marine Corps' Biometrics Automated Toolset Basic Operator's Course at Camp Pendleton, Calif.; and the Special Operations Command's Technical Exploitation Course I and the Sensitive Site Exploitation Operator Advanced Courses at Fort Bragg, N.C. In addition, we discussed training with military officials in Afghanistan from the organizations listed in table 2. To determine the extent to which DOD is effectively collecting and transmitting biometrics data, we obtained, reviewed, and analyzed relevant Central Command-issued Joint Urgent Operational Need Statements. In addition, we reviewed documents on biometrics collections and transmissions and spoke with Office of the Secretary of Defense, Army, Marine Corps, Central Command, and Special Operations Command officials. We reviewed DOD biometrics submission latency data to understand data transmission over time.
We assessed the reliability of the data by reviewing related documentation and interviewing knowledgeable officials. Although we found the data sufficiently reliable to provide descriptive and summary statistics, we identified problems with the completeness and accuracy of the data due to external factors, such as inaccurate time/date stamps on biometrics collection devices. As a result, we developed a recommendation to assign responsibility for biometrics data throughout the transmission process, including the time period from when warfighters submit their data into the Biometrics Automated Toolset system until the biometrics data reach ABIS, to better ensure the completeness and accuracy of biometrics data during the transmission process. We also reviewed the Standards for Internal Control in the Federal Government for information on data completeness and accuracy assurance. We conducted site visits to four military installations in Afghanistan to ascertain how biometrics are being collected, utilized, and transmitted. Specifically, we visited Bagram Air Field, Kandahar Air Field, Marine Corps Base Camp Leatherneck, and Forward Operating Base Pasab to meet with military officials responsible for leading and performing biometrics collection, analysis, and transmission activities in Afghanistan and for operating the Last Tactical Mile pilot project. To determine the extent to which DOD has developed a process to collect and disseminate biometrics lessons learned, we analyzed relevant Office of the Secretary of Defense, Army, and Marine Corps guidance and policies, and we met with officials from each of these organizations to discuss current practices. We conducted this performance audit from May 2011 through April 2012 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Marc Schwartz, Assistant Director; Grace Coleman; Mary Coyle; Davi M. D'Agostino, Director (retired); Bethann E. Ritter; Amie Steele; and Spencer Tacktill made key contributions to this report. Ashley Alley, Timothy Persons, Terry Richardson, John Van Schaik, and Michael Willems provided technical assistance. Defense Biometrics: DOD Can Better Conform to Standards and Share Biometric Information with Federal Agencies. GAO-11-276. Washington, D.C.: March 31, 2011. Defense Management: DOD Can Establish More Guidance for Biometrics Collection and Explore Broader Data Sharing. GAO-09-49. Washington, D.C.: October 15, 2008. Defense Management: DOD Needs to Establish Clear Goals and Objectives, Guidance, and a Designated Budget to Manage its Biometrics Activities. GAO-08-1065. Washington, D.C.: September 26, 2008.
The collection of biometrics data, including fingerprints and iris patterns, enables U.S. counterinsurgency operations to identify enemy combatants and link individuals to events such as improvised explosive device detonations. GAO was asked to examine the extent to which (1) DOD's biometrics training supports warfighter use of biometrics, (2) DOD is effectively collecting and transmitting biometrics data, and (3) DOD has developed a process to collect and disseminate biometrics lessons learned. To address these objectives, GAO focused on the Army and to a lesser extent on the Marine Corps and U.S. Special Operations Command, since the Army collected about 86 percent of the biometrics enrollments in Afghanistan. GAO visited training sites in the United States, observed biometrics collection and transmission operations at locations in Afghanistan, reviewed relevant policies and guidance, and interviewed knowledgeable officials. The Department of Defense (DOD) has trained thousands of personnel on the use of biometrics since 2004, but biometrics training for leaders does not provide detailed instructions on how to effectively use and manage biometrics collection tools. The Office of the Secretary of Defense, the military services, and U.S. Central Command each has emphasized in key documents the importance of training. Additionally, the Army, Marine Corps, and U.S. Special Operations Command have trained personnel prior to deployment to Afghanistan in addition to offering training resources in Afghanistan. DOD's draft instruction for biometrics emphasizes the importance of training leaders in the effective employment of biometrics collection, but existing training does not instruct military leaders on (1) the effective use of biometrics, (2) selecting the appropriate personnel for biometrics collection training, and (3) tracking personnel who have been trained in biometrics collection to effectively staff biometrics operations. 
Absent this training, military personnel are limited in their ability to collect high-quality biometrics data to better confirm the identity of enemy combatants. Several factors during the transmission process limit the use of biometrics in Afghanistan. Among them is unclear responsibility for the completeness and accuracy of biometrics data during their transmission. As a result, DOD cannot expeditiously correct data transmission issues as they arise, such as the approximately 4,000 biometrics collected from 2004 to 2008 that were separated from their associated identities. Such decoupling renders the data useless and increases the likelihood of enemy combatants going undetected within Afghanistan and across borders. Factors affecting the timely transmission of biometrics data include the biometrics architecture with multiple servers, mountainous terrain, and mission requirements in remote areas. These factors can prevent units from accessing transmission infrastructure for hours to weeks at a time. The DOD biometrics directive calls for periodic assessments, and DOD is tracking biometrics data transmission time in Afghanistan, but DOD has not determined the viability and cost-effectiveness of reducing transmission time. Lessons learned from U.S. military forces' experiences with biometrics in Afghanistan are collected and used by each of the military services and U.S. Special Operations Command. Military services emphasize the importance of using lessons learned to sustain, enhance, and increase preparedness to conduct future operations, but no requirements exist for DOD to disseminate existing biometrics lessons learned across the department. 
GAO recommends that DOD take several actions to: expand leadership training to improve employment of biometrics collection, help ensure the completeness and accuracy of transmitted biometrics data, determine the viability and cost-effectiveness of reducing transmission times, and assess the merits of disseminating biometrics lessons learned across DOD for the purposes of informing relevant policies and practices. GAO requested comments from DOD on the draft report, but none were provided.
The tens of thousands of individuals who responded to the September 11, 2001, attack on the WTC experienced the emotional trauma of the disaster and were exposed to a noxious mixture of dust, debris, smoke, and potentially toxic contaminants, such as pulverized concrete, fibrous glass, particulate matter, and asbestos. A wide variety of health effects have been experienced by responders to the WTC attack, and several federally funded programs have been created to address the health needs of these individuals. Numerous studies have documented the physical and mental health effects of the WTC attacks. Physical health effects included injuries and respiratory conditions, such as sinusitis, asthma, and a new syndrome called WTC cough, which consists of persistent coughing accompanied by severe respiratory symptoms. Almost all firefighters who responded to the attack experienced respiratory effects, including WTC cough. One study suggested that exposed firefighters on average experienced a decline in lung function equivalent to that which would be produced by 12 years of aging. A recently published study found a significantly higher risk of newly diagnosed asthma among responders that was associated with increased exposure to the WTC disaster site. Commonly reported mental health effects among responders and other affected individuals included symptoms associated with post-traumatic stress disorder (PTSD), depression, and anxiety. Behavioral health effects such as alcohol and tobacco use have also been reported. Some health effects experienced by responders have persisted or worsened over time, leading many responders to begin seeking treatment years after September 11, 2001. Clinicians involved in screening, monitoring, and treating responders have found that many responders' conditions--both physical and psychological--have not resolved and have developed into chronic disorders that require long-term monitoring. 
For example, findings from a study conducted by clinicians at the NY/NJ WTC Consortium show that at the time of examination, up to 2.5 years after the start of the rescue and recovery effort, 59 percent of responders enrolled in the program were still experiencing new or worsened respiratory symptoms. Experts studying the mental health of responders found that about 2 years after the WTC attack, responders had higher rates of PTSD and other psychological conditions compared to others in similar jobs who were not WTC responders and others in the general population. Clinicians also anticipate that other health effects, such as immunological disorders and cancers, may emerge over time. There are six key programs that currently receive federal funding to provide voluntary health screening, monitoring, or treatment at no cost to responders. The six WTC health programs, shown in table 1, are (1) the FDNY WTC Medical Monitoring and Treatment Program; (2) the NY/NJ WTC Consortium, which comprises five clinical centers in the NY/NJ area; (3) the WTC Federal Responder Screening Program; (4) the WTC Health Registry; (5) Project COPE; and (6) the Police Organization Providing Peer Assistance (POPPA) program. The programs vary in aspects such as the HHS administering agency or component responsible for administering the funding; the implementing agency, component, or organization responsible for providing program services; eligibility requirements; and services. The WTC health programs that are providing screening and monitoring are tracking thousands of individuals who were affected by the WTC disaster. As of June 2007, the FDNY WTC program had screened about 14,500 responders and had conducted follow-up examinations for about 13,500 of these responders, while the NY/NJ WTC Consortium had screened about 20,000 responders and had conducted follow-up examinations for about 8,000 of these responders. 
Some of the responders include nonfederal responders residing outside the NYC metropolitan area. As of June 2007, the WTC Federal Responder Screening Program had screened 1,305 federal responders and referred 281 responders for employee assistance program services or specialty diagnostic services. In addition, the WTC Health Registry, a monitoring program that consists of periodic surveys of self-reported health status and related studies but does not provide in-person screening or monitoring, collected baseline health data from over 71,000 people who enrolled in the Registry. In the winter of 2006, the Registry began its first adult follow-up survey, and as of June 2007 over 36,000 individuals had completed the follow-up survey. In addition to providing medical examinations, FDNY's WTC program and the NY/NJ WTC Consortium have collected information for use in scientific research to better understand the health effects of the WTC attack and other disasters. The WTC Health Registry is also collecting information to assess the long-term public health consequences of the disaster. Beginning in October 2001 and continuing through 2003, FDNY's WTC program, the NY/NJ WTC Consortium, the WTC Federal Responder Screening Program, and the WTC Health Registry received federal funding to provide services to responders. This funding primarily came from appropriations to the Department of Homeland Security's Federal Emergency Management Agency (FEMA), as part of the approximately $8.8 billion that the Congress appropriated to FEMA for response and recovery activities after the WTC disaster. FEMA entered into interagency agreements with HHS agencies to distribute the funding to the programs. For example, FEMA entered into an agreement with NIOSH to distribute $90 million appropriated in 2003 that was available for monitoring. FEMA also entered into an agreement with ASPR under which ASPR would administer the WTC Federal Responder Screening Program.
A $75 million appropriation to CDC in fiscal year 2006 for purposes related to the WTC attack resulted in additional funding for the monitoring activities of the FDNY WTC program, NY/NJ WTC Consortium, and the Registry. The $75 million appropriation to CDC in fiscal year 2006 also provided funds that were awarded to the FDNY WTC program, the NY/NJ WTC Consortium, Project COPE, and the POPPA program for treatment services for responders. An emergency supplemental appropriation to CDC in May 2007 included an additional $50 million to carry out the same activities provided for in the $75 million appropriation made in fiscal year 2006. The President's proposed fiscal year 2008 budget for HHS includes $25 million for treatment of WTC-related illnesses for responders. In February 2006, the Secretary of HHS designated the Director of NIOSH to take the lead in ensuring that the WTC health programs are well coordinated, and in September 2006 the Secretary established a WTC Task Force to advise him on federal policies and funding issues related to responders' health conditions. The chair of the task force is HHS's Assistant Secretary for Health, and the vice chair is the Director of NIOSH. The task force reported to the Secretary of HHS in early April 2007. HHS's WTC Federal Responder Screening Program has had difficulties ensuring the uninterrupted availability of services for federal responders. First, the provision of screening examinations has been intermittent. (See fig. 1.) After resuming screening examinations in December 2005 and conducting them for about a year, HHS again placed the program on hold and suspended scheduling of screening examinations for responders from January 2007 to May 2007. This interruption in service occurred because there was a change in the administration of the WTC Federal Responder Screening Program, and certain interagency agreements were not established in time to keep the program fully operational. 
In late December 2006, ASPR and NIOSH signed an interagency agreement giving NIOSH $2.1 million to administer the WTC Federal Responder Screening Program. Subsequently, NIOSH and FOH needed to sign a new interagency agreement to allow FOH to continue to be reimbursed for providing screening examinations. It took several months for the agreement between NIOSH and FOH to be negotiated and approved, and scheduling of screening examinations did not resume until May 2007. Second, the program's provision of specialty diagnostic services has also been intermittent. After initial screening examinations, responders often need further diagnostic services by ear, nose, and throat doctors; cardiologists; and pulmonologists; and FOH had been referring responders to these specialists and paying for the services. However, the program stopped scheduling and paying for these specialty diagnostic services in April 2006 because the program's contract with a new provider network did not cover these services. In March 2007, FOH modified its contract with the provider network and resumed scheduling and paying for specialty diagnostic services for federal responders. In July 2007 we reported that NIOSH was considering expanding the WTC Federal Responder Screening Program to include monitoring examinations--follow-up physical and mental health examinations--and was assessing options for funding and delivering these services. If federal responders do not receive this type of monitoring, health conditions that arise later may not be diagnosed and treated, and knowledge of the health effects of the WTC disaster may be incomplete. In February 2007, NIOSH sent a letter to FEMA, which provides the funding for the program, asking whether the funding could be used to support monitoring in addition to the onetime screening currently offered. A NIOSH official told us that as of August 2007 the agency had not received a response from FEMA. 
NIOSH officials told us that if FEMA did not agree to pay for monitoring of federal responders, NIOSH would consider using other funding. According to a NIOSH official, if FEMA or NIOSH agrees to pay for monitoring of federal responders, this service would be provided by FOH or one of the other WTC health programs. NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area, although it recently took steps toward expanding the availability of these services. Initially, NIOSH made two efforts to provide screening and monitoring services for these responders, the exact number of which is unknown. The first effort began in late 2002 when NIOSH awarded a contract for about $306,000 to the Mount Sinai School of Medicine to provide screening services for nonfederal responders residing outside the NYC metropolitan area and directed it to establish a subcontract with AOEC. AOEC then subcontracted with 32 of its member clinics across the country to provide screening services. From February 2003 to July 2004, the 32 AOEC member clinics screened 588 nonfederal responders nationwide. AOEC experienced challenges in providing these screening services. For example, many nonfederal responders did not enroll in the program because they did not live near an AOEC clinic, and the administration of the program required substantial coordination among AOEC, AOEC member clinics, and Mount Sinai. Mount Sinai's subcontract with AOEC ended in July 2004, and from August 2004 until June 2005 NIOSH did not fund any organization to provide services to nonfederal responders outside the NYC metropolitan area. During this period, NIOSH focused on providing screening and monitoring services for nonfederal responders in the NYC metropolitan area. 
In June 2005, NIOSH began its second effort by awarding $776,000 to the Mount Sinai School of Medicine Data and Coordination Center (DCC) to provide both screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. In June 2006, NIOSH awarded an additional $788,000 to DCC to provide screening and monitoring services for these responders. NIOSH officials told us that they assigned DCC the task of providing screening and monitoring services to nonfederal responders outside the NYC metropolitan area because the task was consistent with DCC's responsibilities for the NY/NJ WTC Consortium, which include data monitoring and coordination. DCC, however, had difficulty establishing a network of providers that could serve nonfederal responders residing throughout the country--ultimately contracting with only 10 clinics in seven states to provide screening and monitoring services. DCC officials said that as of June 2007 the 10 clinics were monitoring 180 responders. In early 2006, NIOSH began exploring how to establish a national program that would expand the network of providers to provide screening and monitoring services, as well as treatment services, for nonfederal responders residing outside the NYC metropolitan area. According to NIOSH, there have been several challenges involved in expanding a network of providers to screen and monitor nonfederal responders nationwide. These include establishing contracts with clinics that have the occupational health expertise to provide services nationwide, establishing patient data transfer systems that comply with applicable privacy laws, navigating the institutional review board process for a large provider network, and establishing payment systems with clinics participating in a national network of providers. 
On March 15, 2007, NIOSH issued a formal request for information from organizations that have an interest in and the capability of developing a national program for responders residing outside the NYC metropolitan area. In this request, NIOSH described the scope of a national program as offering screening, monitoring, and treatment services to about 3,000 nonfederal responders through a national network of occupational health facilities. NIOSH also specified that the program's facilities should be located within reasonable driving distance to responders and that participating facilities must provide copies of examination records to DCC. In May 2007, NIOSH approved a request from DCC to redirect about $125,000 from the June 2006 award to establish a contract with a company to provide screening and monitoring services for nonfederal responders residing outside the NYC metropolitan area. Subsequently, DCC contracted with QTC Management, Inc., one of the four organizations that had responded to NIOSH's request for information. DCC's contract with QTC does not include treatment services, and NIOSH officials are still exploring how to provide and pay for treatment services for nonfederal responders residing outside the NYC metropolitan area. QTC has a network of providers in all 50 states and the District of Columbia and can use internal medicine and occupational medicine doctors in its network to provide these services. In addition, DCC and QTC have agreed that QTC will identify and subcontract with providers outside of its network to screen and monitor nonfederal responders who do not reside within 25 miles of a QTC provider. In June 2007, NIOSH awarded $800,600 to DCC for coordinating the provision of screening and monitoring examinations, and QTC will receive a portion of this award from DCC to provide about 1,000 screening and monitoring examinations through May 2008. 
According to a NIOSH official, QTC's providers have begun conducting screening examinations, and by the end of August 2007, 18 nonfederal responders had completed screening examinations, and 33 others had been scheduled. In fall 2006, NIOSH awarded and set aside funds totaling $51 million from its $75 million appropriation for four WTC health programs in the NYC metropolitan area to provide treatment services to responders enrolled in these programs. Of the $51 million, NIOSH awarded about $44 million for outpatient services to the FDNY WTC program, the NY/NJ WTC Consortium, Project COPE, and the POPPA program. NIOSH made the largest awards to the two programs from which almost all responders receive medical services, the FDNY WTC program and NY/NJ WTC Consortium (see table 2). In July 2007 we reported that officials from the FDNY WTC program and the NY/NJ WTC Consortium expected that their awards for outpatient treatment would be spent by the end of fiscal year 2007. In addition to the $44 million it awarded for outpatient services, NIOSH set aside about $7 million for the FDNY WTC program and NY/NJ WTC Consortium to pay for responders' WTC-related inpatient hospital care as needed. The FDNY WTC program and NY/NJ WTC Consortium used their awards from NIOSH to continue providing treatment services to responders and to expand the scope of available treatment services. Before NIOSH made its awards for treatment services, the treatment services provided by the two programs were supported by funding from private philanthropies and other organizations. According to officials of the NY/NJ WTC Consortium, this funding was sufficient to provide only outpatient care and partial coverage for prescription medications. The two programs used NIOSH's awards to continue to provide outpatient services to responders, such as treatment for gastrointestinal reflux disease, upper and lower respiratory disorders, and mental health conditions. 
They also expanded the scope of their programs by offering responders full coverage for their prescription medications for the first time. A NIOSH official told us that some of the commonly experienced WTC conditions, such as upper airway conditions, gastrointestinal disorders, and mental health disorders, are frequently treated with medications that can be costly and may be prescribed for an extended period of time. According to an FDNY WTC program official, prescription medications are now the largest component of the program's treatment budget. The FDNY WTC program and NY/NJ WTC Consortium also expanded the scope of their programs by paying for inpatient hospital care for the first time, using funds from the $7 million that NIOSH had set aside for this purpose. According to a NIOSH official, NIOSH pays for hospitalizations that have been approved by the medical directors of the FDNY WTC program and NY/NJ WTC Consortium through awards to the programs from the funds NIOSH set aside for this purpose. By August 31, 2007, federal funds had been used to support 34 hospitalizations of responders, 28 of which were referred by the NY/NJ WTC Consortium's Mount Sinai clinic, 5 by the FDNY WTC program, and 1 by the NY/NJ WTC Consortium's CUNY Queens College program. Responders have received inpatient hospital care to treat, for example, asthma, pulmonary fibrosis, and severe cases of depression or PTSD. According to a NIOSH official, one responder is now a candidate for lung transplantation, and if this procedure is performed, it will be covered by federal funds. If funds set aside for hospital care are not completely used by the end of fiscal year 2007, he said they could be carried over into fiscal year 2008 for this purpose or used for outpatient services. After receiving NIOSH's funding for treatment services in fall 2006, the NY/NJ WTC Consortium ended its efforts to obtain reimbursement from health insurance held by responders with coverage.
Consortium officials told us that efforts to bill insurance companies involved a heavy administrative burden and were frequently unsuccessful, in part because the insurance carriers typically denied coverage for work-related health conditions on the grounds that such conditions should be covered by state workers' compensation programs. However, according to officials from the NY/NJ WTC Consortium, responders trying to obtain workers' compensation coverage routinely experienced administrative hurdles and significant delays, some lasting several years. Moreover, according to these program officials, the majority of responders enrolled in the program either had limited or no health insurance coverage. According to a labor official, responders who carried out cleanup services after the WTC attack often did not have health insurance, and responders who were construction workers often lost their health insurance when they became too ill to work the number of days each quarter or year required to maintain eligibility for insurance coverage. According to a NIOSH official, although the agency had not received authorization as of August 30, 2007, to use the $50 million emergency supplemental appropriation made to CDC in May 2007, NIOSH was formulating plans for use of these funds to support the WTC treatment programs in fiscal year 2008. Screening and monitoring the health of the people who responded to the September 11, 2001, attack on the World Trade Center are critical for identifying health effects already experienced by responders or those that may emerge in the future. In addition, collecting and analyzing information produced by screening and monitoring responders can give health care providers information that could help them better diagnose and treat responders and others who experience similar health effects. 
While some groups of responders are eligible for screening and follow-up physical and mental health examinations through the federally funded WTC health programs, other groups of responders are not eligible for comparable services or may not always find these services available. Federal responders have been eligible only for the initial screening examination provided through the WTC Federal Responder Screening Program. NIOSH, the administrator of the program, has been considering expanding the program to include monitoring but has not done so. In addition, many responders who reside outside the NYC metropolitan area have not been able to obtain screening and monitoring services because available services are too distant. Moreover, HHS has repeatedly interrupted the programs it established for federal responders and nonfederal responders outside of NYC, resulting in periods when no services were available to them. HHS continues to fund and coordinate the WTC health programs and has key federal responsibility for ensuring the availability of services to responders. HHS and its agencies have recently taken steps to move toward providing screening and monitoring services to federal responders and to nonfederal responders living outside of the NYC area. However, these efforts are not complete, and the stop-and-start history of the department's efforts to serve these groups does not provide assurance that the latest efforts to extend screening and monitoring services to these responders will be successful and will be sustained over time. Therefore we recommended in July 2007 that the Secretary of HHS take expeditious action to ensure that health screening and monitoring services are available to all people who responded to the attack on the WTC, regardless of who their employer was or where they reside. As of early September 2007 the department has not responded to this recommendation. Mr. Chairman, this completes my prepared remarks. 
I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact Cynthia A. Bascetta at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Helene F. Toiv, Assistant Director; Hernan Bozzolo; Frederick Caison; Anne Dievler; and Roseanne Price made key contributions to this statement.

September 11: HHS Needs to Ensure the Availability of Health Screening and Monitoring for All Responders. GAO-07-892. Washington, D.C.: July 23, 2007.

September 11: HHS Has Screened Additional Federal Responders for World Trade Center Health Effects, but Plans for Awarding Funds for Treatment Are Incomplete. GAO-06-1092T. Washington, D.C.: September 8, 2006.

September 11: Monitoring of World Trade Center Health Effects Has Progressed, but Program for Federal Responders Lags Behind. GAO-06-481T. Washington, D.C.: February 28, 2006.

September 11: Monitoring of World Trade Center Health Effects Has Progressed, but Not for Federal Responders. GAO-05-1020T. Washington, D.C.: September 10, 2005.

September 11: Health Effects in the Aftermath of the World Trade Center Attack. GAO-04-1068T. Washington, D.C.: September 8, 2004.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Six years after the attack on the World Trade Center (WTC), concerns persist about health effects experienced by WTC responders and the availability of health care services for those affected. Several federally funded programs provide screening, monitoring, or treatment services to responders. GAO has previously reported on the progress made and implementation problems faced by these WTC health programs. This testimony is based on and updates GAO's report, September 11: HHS Needs to Ensure the Availability of Health Screening and Monitoring for All Responders (GAO-07-892, July 23, 2007). In this testimony, GAO discusses the status of (1) services provided by the Department of Health and Human Services' (HHS) WTC Federal Responder Screening Program, (2) efforts by the Centers for Disease Control and Prevention's National Institute for Occupational Safety and Health (NIOSH) to provide services for nonfederal responders residing outside the New York City (NYC) area, and (3) NIOSH's awards to WTC health program grantees for treatment services. For the July 2007 report, GAO reviewed program documents and interviewed HHS officials, grantees, and others. In August and September 2007, GAO updated selected information in preparing this testimony. In July 2007, following a re-examination of the status of the WTC health programs, GAO recommended that the Secretary of HHS take expeditious action to ensure that health screening and monitoring services are available to all people who responded to the WTC attack, regardless of who their employer was or where they reside. As of early September 2007, the department had not responded to this recommendation. As GAO reported in July 2007, HHS's WTC Federal Responder Screening Program has had difficulties ensuring the uninterrupted availability of screening services for federal responders.
From January 2007 to May 2007, the program stopped scheduling screening examinations because there was a change in the program's administration and certain interagency agreements were not established in time to keep the program fully operational. From April 2006 to March 2007, the program stopped scheduling and paying for specialty diagnostic services associated with screening. NIOSH, the administrator of the program, has been considering expanding the program to include monitoring, that is, follow-up physical and mental health examinations, but has not done so. If federal responders do not receive monitoring, health conditions that arise later may not be diagnosed and treated, and knowledge of the health effects of the WTC disaster may be incomplete. NIOSH has not ensured the availability of screening and monitoring services for nonfederal responders residing outside the NYC area, although it recently took steps toward expanding the availability of these services. In late 2002, NIOSH arranged for a network of occupational health clinics to provide screening services. This effort ended in July 2004, and until June 2005 NIOSH did not fund screening or monitoring services for nonfederal responders outside the NYC area. In June 2005, NIOSH funded the Mount Sinai School of Medicine Data and Coordination Center (DCC) to provide screening and monitoring services; however, DCC had difficulty establishing a nationwide network of providers and contracted with only 10 clinics in seven states. In 2006, NIOSH began to explore other options for providing these services, and in May 2007 it took steps toward expanding the provider network. NIOSH has awarded treatment funds to four WTC health programs in the NYC area. In fall 2006, NIOSH awarded $44 million for outpatient treatment and set aside $7 million for hospital care. 
The New York/New Jersey WTC Consortium and the New York City Fire Department WTC program, which received the largest awards, used NIOSH's funding to continue outpatient services, offer full coverage for prescriptions, and cover hospital care.
All Army and Marine Corps forces are required to annually complete individual training requirements, such as weapons qualification; sexual assault prevention and response; and chemical, biological, radiological, and nuclear defense training. Congress, the Department of Defense, and the Army and Marine Corps all have the authority to establish training requirements. Service policies do not specify where annual training should be completed, and commanders can prioritize this training to align it with other training the units are conducting to develop units' combat capabilities. As a result of this flexibility, units conduct annual training throughout the year at home stations and even while deployed. In addition to annual training, forces that deploy conduct both individual and collective predeployment training. Army and Marine Corps predeployment training, which can be conducted at home station or other locations, begins with individual and small unit training and progresses to larger scale collective training exercises that are designed to build proficiency in the skills required for deployment and the culminating training event. The requirements for this training come from a variety of sources. The Commander of U.S. Central Command has established baseline individual and collective training requirements for units deploying to Iraq and Afghanistan. Required individual training requirements include, but are not limited to, basic marksmanship, high-mobility multipurpose wheeled vehicle and mine resistant ambush protected vehicle egress assistance training, and first aid. Each service secretary is responsible for training their forces to execute the current and future operational requirements of the combatant commands. Accordingly, U.S. Army Forces Command, as the Army's force provider, and the Commandant of the Marine Corps have also issued training requirements for forces deploying in support of missions in Iraq and Afghanistan. 
Other Army and Marine Corps commands at various levels have also imposed predeployment training requirements and increased the required number of repetitions for certain training tasks. Unit training requirements may differ based on various factors, such as unit type--for example, combat arms and combat support forces--the units' mission, or deployment location. Training requirements may have several associated tasks. For example, depending on the mission, Army soldiers and units are required to conduct counter-improvised explosive device training, which may consist of up to 8 individual and 11 collective tasks, including reacting to, and preparing for, a possible counter-improvised explosive device attack. Likewise, Marines are required to conduct language and culture training, which depending on the mission, may include 2 to 5 individual and 4 collective training tasks. The Army's Force Generation model (ARFORGEN) is a cyclical model designed to build the readiness of units as they move through three phases termed RESET, Train/Ready, and Available. The Army uses these phases to synchronize training with the arrival of unit personnel and equipment. The initial phase of ARFORGEN is RESET, which begins when a unit returns from deployment or exits the Available phase. Units in RESET perform limited individual, team, and/or crew training tasks. As units exit RESET, they enter the Train/Ready phase, where they build readiness through further individual and collective training tasks. As units exit Train/Ready, they enter the Available phase, when they may be deployed. During this phase, units focus on sustainment training. Together, figures 1 and 2 show how training opportunities are expected to change as deployment-to-dwell ratios--the amount of time spent deployed compared to the amount not deployed--change. As forces draw down in Iraq, the length of the Train/Ready phase is expected to increase. 
In addition, the types of training conducted during this phase will change. The figures are not meant to show the exact amount of time devoted to training--for assigned missions, such as the current counterinsurgency missions, or for a fuller range of missions--but they do illustrate the current and expected future trends. Figure 1 shows how training has generally occurred within the ARFORGEN process in recent years, when much of the active Army was experiencing 1:1 deployment-to-dwell ratios. Figure 2 shows how training is expected to change as requirements for ongoing operations in Iraq decline. Marine Corps Force Generation is a four-block process designed to synchronize manning, equipping, and training to build a total force capable of responding to combatant commander requirements. As shown in table 1, Marine Corps predeployment training is planned and executed in accordance with a standardized system of "building blocks," which progresses from individual to collective training. Training in block one is individual training and is divided into baseline requirements (Block 1A) and theater-specific training requirements (Block 1B). At the Army's and Marine Corps' combat training centers, units are able to execute large-scale, highly realistic and stressful advanced training, including live-fire training, which they may not be able to conduct at their home stations. Each training rotation affords units and their leaders the opportunity to face a well-trained opposing force, focus training on higher unit-level tasks, develop proficiency under increasingly difficult conditions, and receive in-depth analyses of performance from training experts. In addition, training at the combat training centers is tailored to bring units to the proficiency level needed to execute their missions. 
The Army maintains two combat training centers in the continental United States: the National Training Center, Fort Irwin, California and the Joint Readiness Training Center, Fort Polk, Louisiana. These centers focus on training brigade combat teams--approximately 5,000 servicemembers--during rotations that last between 18 and 25 days. The Marine Corps has a single combat training center, the Air Ground Combat Center at Twentynine Palms, California. At this combat training center, multiple battalion-sized units preparing to deploy to Afghanistan participate in a 28-day exercise. Each exercise includes two infantry battalions, a combat logistics battalion, and an aviation combat element. These exercises prepare marines for the tactics and procedures they are expected to employ in Afghanistan. Units are not required to complete a specific level of training prior to the culminating training events that are held at the combat training centers. However, service policies identify training goals for units to complete. For example, in October 2010, Forces Command established a goal for active component units to achieve company-level proficiency at home station. In addition, in 2010, U.S. Army Forces Command identified a goal for training to be completed at the combat training centers--brigade-level, live-fire exercises. Similarly, an April 2010 Marine Corps policy stated that units should conduct battalion level training prior to conducting a culminating training event. The Army and Marine Corps are developing and implementing systems to assist units in tracking training proficiency and completion throughout the service force generation cycles. While deployable combat arms and combat support forces in the Army and Marine Corps conduct extensive predeployment training, they are not always able to complete all desired training prior to the culminating training event. 
Based on our unit visits, 7 of 13 Army and Marine Corps units conducting a culminating training event at a combat training center were not able to complete all of the desired individual and collective training (e.g., company-level, live-fire training) prior to their arrival at the combat training centers. During our discussions with unit and training command officials, we found that units do not always reach the desired level of proficiency prior to their culminating training events due to several factors--such as the current focus on training on counterinsurgency skills that are needed in Iraq and Afghanistan, the large number of requirements, limited training time between deployments, and availability of necessary equipment. Unit officials from both services identified training that they were unable to complete prior to arriving at the combat training centers. The following are examples of the types of desired training that some Army and Marine Corps units that we visited were not able to complete prior to arriving at the combat training centers. Due to the extensive licensing and certification requirements for the different types of vehicles, which are currently being used in Iraq and Afghanistan, units were not always able to license and certify all necessary drivers prior to arriving at the combat training centers. Aviation units, which balance aviation requirements and ground requirements, were not always able to complete all ground training requirements, such as all language and culture training. Marine Corps units often waived the first two levels of weapons qualifications. Given limited theater-specific equipment at home station, units were not always able to complete convoy training using mine resistant ambush protected vehicles. Biometrics training and training on communications equipment were often not completed prior to arriving at the combat training centers. 
Given limited systems at home station, units were often unable to integrate unmanned aerial systems into training prior to arriving at the combat training centers. Due to land constraints, units were often unable to complete company-level, live-fire attack prior to arriving at the combat training centers. Further, officials from all of the Army and Marine Corps units we spoke with stated that they planned to delay certain training until they were at the combat training centers since resources--such as theater-specific equipment like mine resistant ambush protected vehicles--were more readily available there. In addition, due to land constraints in the Pacific, Hawaii units are unable to conduct heavy artillery training prior to arriving at the combat training centers. Furthermore, we found that some units had to train to improve proficiency levels at the combat training centers prior to beginning the culminating training events, and therefore were not always able to take full advantage of the training opportunities available to them at the combat training centers to conduct complex, higher-level training. In the past, units used the initial week at the combat training centers to replicate their arrival in theater and prepare to commence combat operations by conducting tasks such as receiving and organizing equipment; however, over the past decade, units have had to incorporate other types of training into this first week. For example, training officials at the National Training Center stated that it was necessary for soldiers that were new to the units to complete individual weapons qualifications during the first 5 days of the combat training center rotation because these soldiers often arrived after their unit's home station ranges were completed, failed to qualify on their weapon, or were not available on the day their unit was at the range. 
Army and Marine Corps officials, including trainers at the combat training centers, reported that while units arrive at the combat training centers with varying levels of proficiency, all units leave with at least the platoon level proficiency required to execute counterinsurgency missions for the current operations in Iraq and Afghanistan. In addition, Army and Marine Corps guidance places responsibility on unit commanders to certify that their units have completed all required training and are prepared to deploy. Once certified, the Commanding General of Army Forces Command and the Marine Expeditionary Forces Commanding Generals validate completion of training for all Army and Marine Corps units, respectively, prior to deploying. While leaders are responsible for the training of their units, the pace of operations over the past decade has led to reduced training time frames, and as a result, the services have shifted training management responsibilities from junior leaders to their higher headquarters. However, changing conditions--such as the increased competition for training resources in an increasingly constrained fiscal environment and the return to training for a broader range of missions--highlight the importance of solid training management skills for all leaders. While the Army and Marine Corps are developing initiatives to restore and develop the capabilities of leaders to plan, prepare, execute, and assess training, neither service has established results-oriented performance measures to evaluate the impact of these initiatives. Effective training, which can be best accomplished when founded on solid training management, is critical to overall mission readiness, but the pace of current operations has resulted in fewer opportunities for junior leaders to focus on training management. As noted in Army policy, leaders manage training to ensure effective unit preparation and successful mission execution. 
Similarly, Marine Corps guidance notes that training management allows for maximized results when executing training. To train effectively, leaders at all levels must possess a thorough understanding of training management--the process of planning, preparing, executing, and assessing training--and continually practice these skills. Training management skills are especially important for junior leaders, as it is these leaders who focus the priorities of their units--squads, platoons, and companies--to achieve training goals, maximize training, and reach the greatest level of readiness and proficiency prior to and during the culminating training event. Traditionally, leaders have gained these skills through training and education in formal schools, the learning and experience gained while assigned to operational and training organizations, and individuals' own self-development. Continuous deployments to evolving theaters have, over the past decade, led to shorter time frames during which units can accomplish training. Given these shorter time frames, much of the responsibility for training management has been assumed by senior leaders, leaving some junior leaders with limited opportunities to perform or observe training management. As a result, junior leaders have focused more on training execution, and their higher headquarters have assumed much of the responsibility for planning and preparing unit training. According to Army and Marine Corps unit officials, while junior leaders are capable of executing live-fire training and combat scenarios, many of these leaders have not had experience in preparing the ranges for such training exercises. Further, the U.S. Army Forces Command Training and Leader Development Guidance for Fiscal Year 2011-2012 states that training meetings have not always been conducted to standard over the last nine years.
These training meetings--which are essential to training management--are conducted by unit leaders and are meant to provide feedback on the completion of training requirements, task proficiency, and the quality of the training conducted. With the decline in operational requirements in Iraq, more units are at home for longer periods, resulting in increased competition for training resources--such as training ranges, centrally managed equipment, and simulators. At the same time, these units are facing an increasingly constrained fiscal environment in which the services are seeking to achieve greater efficiencies in training and potential savings. In this environment, junior leaders will be expected to learn the fundamentals of planning and conducting individual and small unit collective training, including obtaining resources, identifying critical requirements, and integrating individual and collective training events. During our visit, officials at Joint Base Lewis-McChord noted that 2010-2011 was the first time since the start of operations in Iraq in 2003 that the installation's nine brigades were on base at the same time. With the large number of units at the base, installation officials, in coordination with corps and brigade training officers, identified strict time frames during which individual units would have priority over training resources and assisted junior leaders in planning for the use of training ranges and other resources. Likewise, Marine Corps officials noted that their units in the Pacific, which rely on Army installations across Hawaii to conduct a significant portion of their live-fire training and large-scale collective training exercises, would experience increased competition for the use of training ranges as time at home station begins to increase.
The ability of junior leaders to effectively manage expanded training requirements will be a key to meeting the Army's recently established goal for active component units to achieve company-level proficiency at home station prior to the culminating training event. Further, the services are seeking to address the atrophy of some critical skills by shifting their training focus from counterinsurgency operations to a fuller range of missions. For example, while some Marine Corps units have retained the capability to conduct amphibious operations, this critical skill has not been exercised by all units since the start of operations in Iraq. However, as the Marine Corps returns to training for its full range of missions, junior leaders will be expected to plan and manage additional individual and collective training requirements to prepare units to execute this mission. Training management will also become more complex as the services return to conducting more joint, combined, and multinational exercises. For example, units are supposed to prepare for exercises with partner nations, but some units have recently been unable to train for or participate in such exercises. With an increase in dwell time, and fewer units deploying to Iraq, more time will be available for units to focus on training and preparing for these exercises. The Army and Marine Corps recognize the need to renew emphasis on the training management skills that enable leaders to plan and resource training, optimize installation resources, track individual qualifications and proficiencies, and assess training readiness. As a result, the services have been proactive in developing initiatives that are designed to restore training management skills in some leaders, and develop these skills in junior leaders. 
Specifically, the Army and Marine Corps have developed online resources and demonstration videos to refresh leaders' training management skills and serve as instructional tools until leaders can attend formal instruction on these skills. For example, the Army has developed online videos that show leaders how to conduct training meetings. Likewise, Marine Corps officials stated that they are currently revising one of their online training management courses and plan to release an 8-hour computer-based course designed to assist leaders in developing training management skills. Further, the services are developing and implementing automated training management systems. According to Army guidance, the Army's Digital Training Management System, an automated system for tracking and managing both individual and collective training, is the key to establishing training management amongst its leaders. The system allows unit leaders--including junior leaders--to develop their mission-essential task list, establish calendars for their training plans, and track the completion of training requirements and exercises. Similarly, according to officials, the Marine Corps' Training Information Management System, once fully implemented, will allow leaders to track and manage individual marine and collective unit capabilities and assist leaders in developing training plans and calendars. According to Army and Marine Corps officials, in the future, the automated training management systems will interface with their readiness reporting systems and allow leaders to have a more objective view of unit training readiness. In addition to the online training and automated systems, both services are revamping their professional military education courses to emphasize training management skills. Specifically, the Army is currently working to standardize and update the training management content within its leadership courses, starting with the Captains' Career Course. 
Officials stated that they expect to test the revised course content by September 2011 and are also looking to identify and standardize the training management content taught in other career courses, such as those designed for non-commissioned officers. In January 2009, the Marine Corps began conducting the Unit Readiness Planning Course, a comprehensive, 5-day training management course that is available to leaders in the ranks of corporal to colonel. The service has also added a training management component to many of its professional military education courses for junior leaders, such as the Commander's Symposium and the Expeditionary Warfare School. The Army's and Marine Corps' initiatives are a solid start to the development of training management skills in their junior leaders, but neither service has developed results-oriented performance metrics to gauge the effectiveness of their efforts to restore training management skills. Our prior work has shown that it is important for agencies to incorporate performance metrics to demonstrate the contributions that training programs make to improve results. Incorporating valid measures of effectiveness into training and development programs enables an agency to better ensure that desired changes will occur in trainees' skills, knowledge, and abilities. When developing results-oriented performance metrics, organizations should consider the frequency of evaluation and the indicators that will be used to evaluate the performance of initiatives. For example, the services could measure the ability of junior leaders to plan, prepare, and assess training that will be expected of them, or the amount and types of on-the-job training required for junior leaders to perform required training tasks after those leaders have attended identified courses or participated in on-the-job training. By establishing metrics, the services can identify approaches that may not be working and adjust training as needed.
In addition, given the variety of ways to provide training, such as classroom, e-learning, and on-the-job training, results-oriented performance metrics can help target training investments and provide the services with credible information on how their initiatives are impacting performance. Training can prepare Army and Marine Corps forces to execute a wide range of missions. However, the pace of operations over the past decade has limited training time and reduced the services' abilities to focus on developing training management skills in their junior leaders. At the same time, the Army and Marine Corps have focused their limited training time on training personnel in the skills needed to carry out their counterinsurgency missions in Iraq and Afghanistan. With the drawdown of forces in Iraq and a commitment to resume training for a fuller range of missions, both services have recognized the need and opportunity to restore and develop leaders' abilities to plan, prepare, execute, and assess the wider range of needed training. While the Army and Marine Corps have initiatives to restore and develop leaders' training management skills, neither service has developed results-oriented performance metrics that would allow them to determine the effectiveness of their initiatives and adjust when necessary. Ensuring that these training management skills are restored and developed is an essential step in maximizing training effectiveness, especially as forces spend more time at home station and face increased competition for installation training resources. However, without a means of measuring the effectiveness of their efforts to restore and develop leaders' training management skills, the Army and Marine Corps lack the information they need to assess the extent to which their leaders are prepared to plan, prepare, and assess required training. 
As the Army and Marine Corps continue to develop and implement programs to restore and develop leaders' training management skills, we recommend that the Secretary of Defense direct the Secretary of the Army and the Commandant of the Marine Corps to develop results-oriented performance metrics that can be used to evaluate the effectiveness of these training management initiatives and support any adjustments that may be needed. In written comments on a draft of this report, DOD concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army and the Commandant of the Marine Corps to develop results-oriented performance metrics that can be used to evaluate the effectiveness of training management initiatives and support any adjustments that may be needed. DOD noted that for the Army, results-oriented performance metrics could help provide an objective view to support the subjective assessment of training readiness. DOD further stated that as the Marine Corps redeploys and resets the force, the service will ensure doctrinal unit training management practices are emphasized as a means to most effectively plan and meet training readiness requirements. In addition, the Marine Corps will continue to develop and refine performance metrics and tools that support the commander's ability to assess individual and unit training readiness. The full text of DOD's written comments is reprinted in appendix II. We are also sending copies of this report to the Secretary of Defense, the Secretary of the Army, the Commandant of the Marine Corps, and appropriate congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions concerning this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix III. To determine the extent to which Army and Marine Corps combat arms and combat support forces are completing training prior to the culminating training event, we first reviewed Office of the Secretary of Defense, Joint Staff, combatant command, Army, and Marine Corps training requirements and guidance, including U.S. Central Command Theater Entry Requirements, U.S. Pacific Command Fiscal Year 11-14 Pacific Joint Training Strategy, U.S. Army Forces Command Pre-deployment Training Guidance in Support of Combatant Commands, Army Regulation 350-1, Army Training and Leader Development, and Marine Corps Order 3502.6, Marine Corps Force Generation Process, to determine the nature of training requirements. We also interviewed officials from these offices to discuss these documents. In addition, we interviewed trainers from the Army's two maneuver combat training centers in the continental United States at Fort Irwin, California, and Fort Polk, Louisiana, and the Marine Corps' single combat training center at Twentynine Palms, California, to discuss the desired training, if any, that units could not complete prior to the culminating training event. We also reviewed service training guidance such as U.S. Army Forces Command Regulation 350-50-1, Training at the National Training Center, and U.S. Army Forces Command Regulation 350-50-2, Training at the Joint Readiness Training Center, to identify the extent to which the guidance established requirements for training to be completed prior to the culminating training events and interviewed trainers from the combat training centers to discuss this guidance. 
Further, we reviewed unit training documents and interviewed officials from 19 Army and 10 Marine Corps units to discuss training information such as: (1) the training that units were completing, (2) any training that units were unable to complete prior to the culminating training events, (3) any factors that impacted units' abilities to complete training prior to the culminating training events, and (4) the impact that not completing training prior to the final culminating training event might have on those events. For the Army, we used readiness information from the Defense Readiness Reporting System-Army from November 2010 to identify the universe of all deployable brigade-sized units, since these units may conduct their culminating training event at a combat training center. We then selected the installations with the largest number of combat arms and combat support brigades present during our site visit timeframes. We found this data to be sufficiently reliable for the purpose of site selection. Based on the data, we selected Fort Bragg, North Carolina; Fort Hood, Texas; Joint Base Lewis-McChord, Washington; and Schofield Barracks, Hawaii, where we held discussions with 10 Army brigade combat teams and 9 Army support brigades. For the Marine Corps, we focused on battalion-sized combat arms and combat support units; these units conduct their culminating training events at the service's combat training center at Twentynine Palms, California. Specifically, we identified those units that would be conducting their culminating training events at the combat training center between November 2010 and February 2011. We held discussions with 5 Marine Corps ground combat units and 5 Marine Corps support units from Camp Lejeune, North Carolina; Camp Pendleton, California; Twentynine Palms, California; and Marine Corps Base Hawaii. Findings from the Army and Marine Corps site visits are not generalizable to all units. 
We also spoke with Army and Marine Corps officials from Fort Shafter, Hawaii, and Okinawa, Japan, respectively, to discuss any factors that impacted units' abilities to complete training prior to the culminating training events. To assess the extent to which leaders are positioned to plan and manage training as forces resume training for a fuller range of missions, we reviewed service policy and guidance that provided information on the return to training for a fuller range of missions, such as the U.S. Army Forces Command Training and Leader Development Guidance for Fiscal Year 2011-2012, Army Field Manual 7-0, Training Units and Developing Leaders for Full Spectrum Operations, the Marine Corps' Commandant Planning Guidance, and the Marine Corps Posture Statement for 2011. We interviewed service and unit officials to discuss these documents and how training for a fuller range of missions might be impacted by changing conditions, such as the drawdown of forces from Iraq. We interviewed installation management officials from both Army and Marine Corps installations to discuss challenges that may exist for units as more units are stationed at home for longer periods of time, and reviewed installation policies and plans regarding the scheduling of home station resources, such as ranges, centrally managed equipment, and simulators. We also examined service plans and strategies to develop and restore training management skills among Army and Marine Corps leaders, and discussed these plans with service officials. For example, we reviewed the U.S. Army Forces Command Inspector General's Office Training Management Assessment, Army Field Manual 7-0, Marine Corps MCRP 3-0A, Unit Training Management Guide, the Marine Corps Posture Statement, and the Marine Corps Task 9 Vision and Strategy 2025. 
We also discussed current and future initiatives to restore and develop training management skills with officials from the Army's Training and Doctrine Command and the Marine Corps' Training and Education Command. Furthermore, we participated in an online demonstration of the Army's Digital Training Management System and reviewed the online training courses available through the Army Training Network. Table 2 outlines all of the organizations we contacted and interviewed during the course of our review. We conducted this performance audit from July 2010 to July 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributors to this report were Michael Ferren (Assistant Director), Jerome Brown, Kenya Jones, Ashley Lipton, Lonnie McAllister, Terry Richardson, and Erik Wilkins-McKee.
Over the past decade, Army and Marine Corps forces have deployed repeatedly with limited time between deployments. At their home stations, combat training centers, and other locations, units have focused their limited training time on training for counterinsurgency operations. Prior to deploying, units also conduct a large-scale exercise referred to as a culminating training event. With the drawdown of forces in Iraq, the services have begun to resume training for a fuller range of offensive, defensive, and stability missions. The House report to the National Defense Authorization Act for Fiscal Year 2011 directed GAO to report on the Army's and Marine Corps' abilities to complete training requirements. GAO assessed the extent to which the services' (1) active component forces are completing training prior to the culminating training event and (2) leaders are positioned to plan and manage training as forces resume training for a fuller range of missions. GAO analyzed training requirements and unit training documentation, and interviewed headquarters and unit personnel during site visits between July 2010 and July 2011. Deploying Army and Marine Corps units conduct extensive predeployment training--both individual and collective, to include a large-scale culminating training event--at their home stations, combat training centers, and other locations. However, several factors, such as limited training time between deployments, the large number of training requirements, and the current focus on training for counterinsurgency operations, have been preventing units from completing all desired training prior to the culminating training event. For example, based on GAO's site visits, 7 of 13 units were not able to complete all of the desired individual and collective training (e.g., company-level live fire training) prior to arriving at the combat training centers. 
Further, officials from all of the units GAO spoke with stated that they planned to delay certain training until they were at the combat training centers since resources--such as theater-specific equipment like mine resistant ambush protected vehicles--were more readily available there. GAO found that some units had to train to improve proficiency levels at the combat training centers prior to beginning the culminating training events, and therefore were not always able to take full advantage of the training opportunities available to them at the combat training centers to conduct complex, higher-level training. Still, according to trainers at the combat training centers, while units arrive with varying levels of proficiency, all forces leave with at least the platoon level proficiency required to execute the counterinsurgency missions required for ongoing operations in Iraq and Afghanistan. Over the past decade, continuous overseas deployments have reduced training timeframes and resulted in senior leaders assuming training management responsibilities from junior leaders. Specifically, leaders at higher headquarters have taken responsibility for much of the training management function--planning, preparing, and assessing training--while junior leaders have focused primarily on training execution. However, changing conditions, such as increased competition for resources in a constrained fiscal environment, increased time at home station, and a return to training for a fuller range of missions, make it imperative that all leaders possess a strong foundation in training management. The services are developing various initiatives to restore and develop training management skills in their leaders, but neither service has developed results-oriented performance metrics to gauge the effectiveness of their efforts to restore these skills. 
As GAO has previously reported, establishing metrics can help federal agencies target training investments and assess the contributions that training programs make to improving results. Without a means of measuring the effectiveness of their efforts, the Army and Marine Corps will not have the information they need to assess the extent to which their leaders have the training management skills needed to plan, prepare, and assess required training. GAO recommends that the services develop results-oriented performance metrics that can be used to evaluate the effectiveness of their training management initiatives and support any adjustments that the services may need to make to these initiatives. DOD concurred with this recommendation.
AIG is a holding company that, through its subsidiaries, is engaged in a broad range of insurance and insurance-related activities in the United States and abroad, including general insurance, life insurance and retirement services, financial services, and asset management. The AIG organization includes the largest domestic life insurer and the second largest domestic property/casualty insurer, and it has a large foreign general insurance business. It also has a financial products division, which has been a key source of AIG's financial difficulties, particularly AIGFP, which engaged in a wide variety of financial transactions, including standard and customized financial products. From July 2008 to August 2008, ongoing concerns about AIG's securities lending program and continuing declines in the value of super senior collateralized debt obligations (CDO) protected by AIGFP's super senior credit default swap (CDS) portfolio, along with ratings downgrades of the CDOs, resulted in AIGFP having to post additional cash collateral, which raised liquidity issues. By early September, collateral postings and securities lending requirements were placing increased pressure on the AIG parent company's liquidity. AIG attempted to raise additional capital in September but was unsuccessful. It was also unable to secure a bridge loan through a syndicated secured lending facility. On September 15, 2008, the rating agencies downgraded AIG's debt rating three notches, resulting in the need for an additional $20 billion to fund its additional collateral demands and transaction termination payments. As AIG's share price continued to fall following the credit rating downgrade, counterparties withheld payments and refused to transact with AIG. 
Also around this time, the insurance regulators no longer allowed AIG's insurance subsidiaries to lend funds to the parent under a revolving credit facility that AIG maintained and demanded that any outstanding loans be repaid and that the facility be terminated. Ongoing instability in global credit markets and other issues have resulted in over $182 billion in federal assistance being made available to AIG. First, in September 2008, the Federal Reserve created the Revolving Credit Facility, which was intended to stabilize AIG by providing it with sufficient liquidity and enabling AIG to dispose of certain assets in an orderly manner while avoiding undue disruption to the economy and financial markets (see table 2). The original amount available under the facility was up to $85 billion. While the amount borrowed reached $82 billion, the debt was reduced by the proceeds from AIG's sale of preferred shares to Treasury as well as repayments from the Fed Securities Lending Agreement and the Commercial Paper Facility. As of February 18, 2009, AIG had $38.8 billion in debt outstanding under this facility. Second, in November 2008, the Federal Reserve and Treasury announced additional assistance to AIG and restructured its original assistance. On November 9, 2008, the Treasury announced plans to use its Systemically Significant Failing Institutions (SSFI) Program, under TARP, to purchase $40 billion in AIG preferred shares. This purchase allowed AIG to reduce its debt outstanding to the Federal Reserve and enabled the Federal Reserve to reduce the amount available under the Revolving Credit Facility from $85 billion to $60 billion. On November 10, 2008, the FRBNY announced plans to lend up to $22.5 billion to Maiden Lane II LLC, a facility formed to purchase residential mortgage-backed securities (RMBS) from the U.S. securities lending investment portfolio of AIG subsidiaries. 
When this facility was established, it replaced an interim securities lending agreement with the Federal Reserve. Also on November 10, FRBNY announced plans to lend up to $30 billion to Maiden Lane III LLC, a FRBNY facility formed to purchase multi-sector CDOs on which AIGFP had written CDS protection. In connection with the purchase of the CDOs, AIG's CDS counterparties agreed to terminate the CDS contracts. Most recently, on March 2, 2009, the U.S. Treasury and FRBNY announced plans to further restructure the terms of the assistance. Consistent with earlier assistance, this was also designed to enhance the company's capital and liquidity in order to facilitate orderly restructuring of the company. The restructuring of the assistance would, among other things, provide the government with interests in two AIG foreign life insurance companies, as well as certain cash flows from certain domestic insurance companies, each in exchange for reducing AIG's Revolving Credit Facility balance. The assistance also would include a new Treasury equity capital facility that would allow AIG to draw down up to $30 billion as needed over time in exchange for newly issued non-cumulative preferred stock to the U.S. Treasury. Treasury and FRBNY would also exchange the previously issued Series D preferred stock for Series E preferred stock that would more closely resemble common stock and provide for non-cumulative dividends. To date, AIG has not drawn against this facility. As noted above, some federal assistance was designated for specific purposes, such as reducing the loan outstanding to the Federal Reserve or for purchasing specific assets, such as CDOs and RMBS. Other assistance, such as that available through the Federal Reserve Revolving Credit Facility, is available to meet the general financial needs of the parent company and its subsidiaries. 
Some of the assistance also places restrictions on actions that AIG can take while it has loans outstanding to the federal government or as long as the federal government has an ownership interest in AIG assets, as well as restrictions on executive compensation. Executive compensation restrictions for TARP recipients were also included in the American Recovery and Reinvestment Act of 2009, which was enacted on February 17, 2009. In general, the restrictions prohibit bonus and incentive compensation payments to certain employees, depending on the amount of TARP assistance received; golden parachutes; and compensation plans that encourage risk-taking. See appendix I for a detailed chronology of events. Federal assistance to AIG has been focused on preventing systemic risk from a potential AIG failure and monitoring its progress, but AIG faces challenges in repaying the assistance. Federal Reserve and Treasury officials have said that a failure of AIG, potentially triggered by further credit downgrades or additional collateral calls, would result in liquidity concerns for other financial market participants. A disorderly failure of AIG would not only create difficulties for AIG's counterparties as described, but could further erode confidence in and uncertainty about the viability of other financial institutions. This, in turn, would further constrict the flow of credit to households and businesses, potentially deepening and lengthening the current recession. If the ultimate goal is avoiding the failure of AIG, the Federal Reserve and Treasury have achieved that goal in the short-term. However, maintaining solvency has required federal assistance beyond that provided in September and November 2008, and rating companies have stated that their current ratings are contingent on continued federal support for AIG. AIG and federal regulators acknowledge that there may be a need for further assistance given the significant challenges AIG continues to face. 
Therefore, more time is required to determine whether the goal will be fully achieved in the long term. We asked Treasury and the Federal Reserve how they were monitoring AIG's progress toward reaching the goals of the federal financial assistance and AIG's compliance with the restrictions placed upon it as a condition of receiving the assistance. According to Treasury and Federal Reserve officials, the agencies are working together to monitor AIG's solvency by reviewing the reports required by the terms of the financial assistance, and the Federal Reserve is in contact daily with AIG officials regarding AIG's liquidity needs and their efforts to sell the company's assets. AIG regularly files several reports with FRBNY, including daily cash flow reports, reports identifying risk areas within the company, and daily liquidity requests/cash flow forecasts, allowing the Federal Reserve to monitor AIG's liquidity. Also, AIG has a divestiture team that meets at least weekly with the Federal Reserve to discuss potential sales deals, including bids from potential buyers, financing, and other terms of sales agreements, so that the Federal Reserve can monitor AIG's efforts to sell its assets. The Federal Reserve and Treasury said that they are monitoring the various federal agreements with AIG, and these agreements place restrictions on AIG's use of the funds. For example, the Federal Reserve monitors restrictions on the Revolving Credit Facility, including whether AIG has inappropriately paid dividends or financed extraordinary corporate actions like acquisitions. According to Treasury officials, the department is in the process of finalizing new executive compensation requirements based on the American Recovery and Reinvestment Act of 2009 and will begin monitoring AIG's compliance with those regulations once they are in place. This is an area we will continue to monitor as part of our broader TARP oversight. 
State insurance regulators are responsible for monitoring the solvency of insurance companies generally, as well as for approving transactions regarding those companies, such as changes in control or significant transactions with the parent company or other subsidiaries. For example, regulators told us that AIG's insurance companies, like all insurance companies, file quarterly reports with them. Since AIG began receiving federal assistance in September 2008, regulators also said that AIG's insurance companies have been submitting additional reports on their liquidity, investment income, and statistics on surrender and renewal of policies, sometimes on a daily or weekly basis. The various regulators also coordinate their monitoring of the companies' insurance lines. State regulators also evaluate potential sales of AIG's domestic insurance companies. NAIC formed a working group designed to expedite any regulatory approvals required for asset sales, with a goal of completing the approvals within 45 days of filing for a sale. AIG's restructuring has hinged on efforts in three areas: (1) terminating its CDS portfolio, (2) terminating its securities lending program, and (3) selling assets. Federal assistance was targeted to the first two areas that posed a significant risk to AIG's solvency--AIGFP's CDS portfolio and the securities lending program--and the risks from both activities appear to have been reduced, but some risks remain. One arrangement, Maiden Lane III--the FRBNY facility created to purchase CDOs--has purchased approximately $24.3 billion in multi-sector CDOs (with a par value of approximately $62 billion), which were the assets underlying the CDS protection that AIG sold. Concurrent with the purchase of the underlying CDOs, AIGFP counterparties agreed to cancel the CDS written on the CDOs, thus unwinding significant portions of AIGFP's CDS portfolio. 
According to AIG, some arrangements did not qualify for sale to the facility, generally either because the counterparties did not own the instruments on which CDS were written or because they were in denominations other than U.S. dollars. As of February 18, 2009, approximately $12.2 billion in notional amounts of CDS remained with AIG. According to AIG, these remaining CDS continue to present a risk to AIG, as further losses from these assets could require additional funding. A second FRBNY facility--Maiden Lane II--purchased approximately $19.5 billion in RMBS and other assets related to the securities lending program. Both the Maiden Lane II and Maiden Lane III facilities allow AIG to participate in the residual proceeds after the FRBNY loan has been repaid. However, AIG also faces potential losses from other investments. The federal assistance has allowed AIG to undertake restructuring efforts, which continue. As of September 2008, AIG was to wind down the operations of AIGFP and sell certain businesses. In October 2008, the company announced plans to sell some of its life insurance operations and other businesses. AIG is continuing to wind down AIGFP but expects the process to take at least several years in order to avoid further losses given the current market conditions. AIG has been unable to sell its insurance assets for prices it deems acceptable given the general state of the global economy. As a result, the plan has been modified, and the federal government will now assume an ownership interest in some of AIG's life insurance companies. The federal government's ownership stake will be a percentage of the fair market value of these companies based on valuations acceptable to the Federal Reserve. In addition, AIG plans to consolidate its commercial property/casualty insurance operations in a free-standing entity and potentially offer an equity interest in part of this new entity to public investors. 
Asset sales have been difficult, not only because tight credit markets are limiting buyers' ability to obtain the capital needed to purchase the companies, but also because of challenges faced by AIG in retaining key employees, who contribute to the value of the company. In addition, the timely sale of CDOs and RMBS held by the Federal Reserve facilities will be challenging, not only because it may be difficult to value those assets, but because many are tied to home values, which have been in decline. AIG's ongoing financial problems have resulted in additional assistance and restructuring of the terms of the original assistance, and AIG faces numerous, significant challenges to its ability to repay federal assistance in the future. AIG's ability to repay the federal government hinges on its remaining solvent and effectively restructuring the organization, including the sale of subsidiaries. Whether the federal government recoups its assistance also depends in part on FRBNY being able to obtain a satisfactory return on the sale of the CDO- and RMBS-related assets purchased by Maiden Lane II and III. Making interest and dividend payments has been and may continue to be a challenge for AIG because its ability to make those payments depends on the profitability of AIG's operations, which face a number of hurdles. As of December 31, 2008, AIG insurance subsidiaries had statutory capital levels that exceeded the minimum requirements. However, damage to AIG's reputation has made it difficult for its insurance companies to maintain current business and write new business. In addition, profitability is also dependent on the overall state of the economy--many of AIG's insurance premium sources are tied to economic activity, such as payroll--and its insurers, especially its life insurers, depend on strong investment returns. To the extent the overall economy is experiencing difficulty, it will present challenges to the profitable operations of AIG's insurance companies. 
While recent federal assistance has been restructured to reduce AIG's interest and dividend payment requirements, it is too soon to tell whether further assistance or further restructuring will be needed in the future. We are examining the potential effect of federal assistance to AIG on the insurance market, particularly AIG's pricing practices within the commercial property/casualty market. Market participants (actuaries, regulators, brokers, customers, and insurance companies) we talked with indicated that, foremost, insurance premium rates follow an insurance underwriting cycle that is generally characterized by a long period of "soft market" conditions, where premium rates are relatively low and underwriting standards are less stringent, followed by a much shorter period of "hard market" conditions, where premium rates flatten or increase and underwriting standards are more stringent. They explained that starting with the September 11, 2001, terrorist attacks and continuing until late 2003 or early 2004, the commercial property/casualty market was in a hard market, but since this time the markets have softened and premium rates have been declining. For example, according to the Council of Independent Agents and Brokers (CIAB) surveys, quarterly changes in commercial property/casualty premium rates have been negative (falling) for all commercial line accounts since the second quarter of 2004 (except for catastrophe-exposed property lines in early 2006), and while the magnitude of the changes leveled off in the last quarter of 2008, the average quarterly premium rate change was still negative in that period. Industry participants also said that premiums charged by commercial property/casualty insurers for a given coverage are influenced by several factors that could allow one insurer to price lower than another on a given risk and that AIG Commercial Insurance historically had been able to take advantage of several of these factors. 
Such factors include a long history of experience with complex risks, a lower operating expense ratio relative to competitors, global operations that allow offsetting risks, and the ability to leverage the size and the financial strength of the parent company to write larger coverage amounts than competitors, in some cases without the need to purchase reinsurance. It is not yet clear to what extent the current financial difficulties of the AIG parent company may have diminished these advantages for AIG Commercial Insurance. Some insurers we spoke with said that they had observed instances, in some cases numerous instances, where AIG had sold commercial property/casualty coverage for a price that these insurers believed was inadequate for the risk involved. They cited examples where AIG Commercial Insurance's prices had decreased significantly from the prior year's price, when circumstances appeared to indicate that higher prices were warranted. Some insurers said that they had brought several of these instances to the attention of the relevant state insurance regulator. Insurers expressed concern that while current market conditions would dictate increased prices in most commercial property/casualty lines of insurance, they believe that AIG Commercial Insurance has decreased its prices. They added that when such pricing activity is combined with AIG Commercial Insurance's market power, AIG Commercial Insurance can prevent prices from increasing and thus hurt other insurers' ability to price insurance at a cost adequate to cover the risk involved. The insurers said they believed that AIG Commercial Insurance's recent pricing behavior is the result of its desire to retain existing business in the face of concerns over the financial health of its parent company, and some suggested that the federal financial assistance is providing them the means to do this. 
For example, some suggested that AIG Commercial Insurance officials know that the federal government will not let them fail, so they can charge very low prices without fear of the consequences when the premiums collected turn out to be less than the losses those premiums were meant to cover. Some also suggested that buyers in the market are choosing to stay with AIG Commercial Insurance because they also believe that the insurance company is now backed by the federal government and that their losses will ultimately be covered. AIG told us that AIG Commercial Insurance has the biggest policyholder surplus in the industry and that they are solvent and financially sound. They maintained that they are charging prices adequate for the risk being covered and that their commercial insurance rates have been mirroring the overall trends in the current soft market. That is, they indicated that their rates have been declining at an increasingly slower pace since the fourth quarter of 2008, and in some cases have increased. They also cited other factors that they said would indicate that they were not pricing inadequately or taking market share from other companies. First, AIG Commercial Insurance told us that they have actually been losing market share because the financial situation of the parent company had impacted the reputation of the AIG commercial insurance companies. In addition, they cited instances where competitors were using the AIG parent company's financial problems as a way to discourage customers from buying AIG commercial insurance coverage. Finally, AIG Commercial Insurance provided us with examples of recent contracts that they have lost to competitor bids that were below their own. However, AIG Commercial Insurance acknowledges that these examples reflect the nature of the business, not necessarily inappropriate pricing by the competitors. 
State insurance regulators, insurance brokers, and insurance buyers that we have spoken to said that they have seen no indications that AIG's commercial property/casualty insurers are selling coverage at prices inadequate to cover the risk involved: State insurance regulators we spoke to said that they generally do not closely watch commercial insurance rates because these rates have been largely deregulated by the states, as well as because of the highly negotiated nature and complexity of many commercial lines of insurance. However, they said that they investigate complaints about pricing activities and monitor insurer solvency measures that would indicate inadequate pricing--although in some lines the consequences of such pricing may not show up in these measures for several years. State regulators indicated that complaints of pricing inadequate for the risk involved would need to be numerous enough to indicate a potential systemic problem or would need to prove an intentional predatory strategy on the part of a particular company. Based on what they have reviewed, the regulators we spoke with said they have seen no indications of inadequate pricing by AIG's commercial property/casualty insurers. Insurance brokers we spoke with said that when helping a customer obtain coverage, they see all of the prices and conditions offered by each insurer placing a bid on that coverage. They also indicated that commercial property/casualty insurance is competitive, and that in several lines of commercial insurance, especially where large coverage amounts are involved, prices offered by insurers can vary significantly on the same risk. For example, one broker said that insurers' bids on large policies regularly vary by as much as 20 percent below and above the median bid. 
Several brokers told us that AIG Commercial Insurance has historically priced aggressively in some lines, and that while in some instances in the past several months AIG Commercial Insurance may have priced more aggressively in order to retain certain customers, it did not appear to be a widespread practice and was viewed as an expected response given the reputational hit the company has taken. They also cited instances where AIG Commercial Insurance has lost business because other insurers' prices were lower than theirs. Insurance buyers, who also see all of the prices and conditions offered by each insurer bidding on their coverage, said that AIG Commercial Insurance is known to be competitive in some lines and that they have not seen any indications of a widespread change in pricing by AIG's commercial insurers. They also said that they would recognize, and be concerned about, an insurer charging suspiciously low rates for the coverage because it would create a risk that the insurer would be unable to pay the policyholder's claim. However, according to insurance regulators and other industry participants, for many lines of commercial insurance, determining whether prices charged by a commercial property/casualty insurer are adequate for the risk involved poses a number of challenges: In many lines of commercial insurance, in the case of very large risks as opposed to routine policies, the terms of coverage, in addition to the price, are often negotiated, resulting in unique policies. For example, the amount of a claim the policyholder would be responsible for, and the collateral the policyholder would be required to post to guarantee payment of this amount, would be negotiated. Without knowing all the terms of an individual policy, it could be difficult to determine the extent to which that policy was priced adequately for the risk involved. 
Insurers price policies based on predictions of future losses, which contain a number of subjective assumptions about risk, interest rates, litigation costs, and other costs. Underwriters may price a given risk differently and still be able to defend the reasoning behind their calculations. The most concrete indication of systematic inadequate pricing comes several years later, depending on how far into the future the losses associated with the policies in question are realized. However, a company may ultimately end up with higher-than-expected losses even if it charged actuarially determined premiums using reasonable assumptions at the time the policies were written. In closing, the extent to which the assistance provided by the government will achieve its goal of preventing systemic risk continues to unfold and will be largely influenced by AIG's success in meeting its ongoing challenges in trying to restructure its operations. Likewise, it is too soon to tell whether AIG will be able to repay its outstanding debt to the federal government, which in large part depends on the stability of the overall financial system. While we have found no evidence that federal assistance has been provided directly to AIG's property/casualty insurers, as has been the case for AIG life insurers, AIG's insurance companies have likely received some indirect benefit to the extent that the property/casualty insurers would have been adversely affected by a credit downgrade or failure of the AIG parent. While we are continuing to complete our work in the area, some of AIG's competitors claim that AIG's commercial insurance pricing is out of line with its risks, but other insurance industry participants and observers disagree. At this time, we have not drawn any final conclusions about how the assistance has impacted the overall competitiveness of the commercial property/casualty market. Mr. Chairman, this completes my prepared statement. 
I would be pleased to answer any questions that you or Members of the Subcommittee may have. For further information about this testimony, please contact Orice M. Williams at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Patrick Ward (Assistant Director), Joe Applebaum (Chief Actuary), Susan Offutt (Chief Economist), Silvia Arbelaez-Ellis, Tania Calhoun, John Forrester, Dana Hopings, Jennifer Schwartz, and Melvin Thomas. July 2008 to August 31, 2008: The super senior collateralized debt obligation (CDO) securities protected by American International Group Financial Products' (AIGFP) super senior credit default swap (CDS) portfolio continued to decline in value and ratings of CDO securities were downgraded, resulting in AIGFP posting an additional $5.9 billion in collateral. AIG was conducting a strategic review of its businesses and reviewing measures to address the liquidity concerns in AIG's securities lending portfolio and to address the ongoing collateral calls regarding AIGFP's super senior multi-sector CDS portfolio, which as of July 31, 2008, totaled $16.1 billion. Early September 2008: These collateral postings and securities lending requirements were placing increasing stress on the AIG parent company's liquidity. September 8 to September 12, 2008: AIG's common stock price declined from $22.76 to $12.14, making it unlikely that AIG would be able to raise the large amounts of capital that would be necessary if AIG's long-term debt ratings were downgraded. September 11 or 12, 2008: AIG approached the Federal Reserve with two concerns: AIG had significant losses in the first two quarters of calendar year 2008, primarily attributable to AIGFP and decreasing values in their securities, requiring AIG to post large amounts of cash collateral. 
AIG's investments in mortgage-backed securities (MBS) were very illiquid. Consequently, AIG would not be able to liquidate its assets to meet the demands of counterparties. Because AIG is not regulated by the Federal Reserve, the agency was not aware of the company's financial problems. Also, because AIG was facing a downgrade in its credit rating the next week, it needed immediate liquidity help. Over the weekend, the Federal Reserve was examining AIG to determine if it was systemically important, meaning that its failure would have a broader effect on the economy. This was the same weekend that Lehman Brothers went into bankruptcy. September 12, 2008: Standard & Poor's (S&P) placed AIG on CreditWatch with negative implications and noted that upon completion of its review, the agency could affirm the AIG parent company's current rating of AA- or lower the rating by one to three notches. AIG's subsidiaries, International Lease Finance Corporation (ILFC) and American General Finance, Inc. (AGF), were unable to replace all of their maturing commercial paper with new issuances of commercial paper. As a result, AIG advanced loans to these subsidiaries to meet their commercial paper obligations. September 13 and 14, 2008: AIG accelerated the process of attempting to raise additional capital and discussed potential capital injections and other liquidity measures with private equity firms, sovereign wealth funds, and other potential investors. AIG also met with Blackstone Advisory Services LP to discuss possible options. September 15, 2008: AIG was again unable to access the commercial paper market for its primary commercial paper programs, AIG Funding, ILFC, and AGF. AIG advanced loans to ILFC and AGF to meet their funding obligations. AIG met with representatives of Goldman, Sachs & Co., J.P. Morgan, and the Federal Reserve Bank of New York (FRBNY) to discuss the creation of a $75 billion secured lending facility. 
S&P, Moody's, and Fitch Ratings (Fitch) downgraded AIG's long-term debt rating. As a result, AIGFP estimated that it needed in excess of $20 billion to fund additional collateral demands and transaction termination payments in a short period of time. September 15, 2008: AIG's common stock price fell to $4.76 per share. September 16, 2008: AIG's strategy to obtain private financing failed. Goldman, Sachs & Co. and J.P. Morgan were unable to syndicate a lending facility. Consequently, counterparties were withholding payments from AIG, and AIG was unable to borrow in the short-term lending markets. To provide liquidity, both ILFC and AGF drew down on their existing revolving credit facilities, resulting in borrowings of approximately $6.5 billion and $4.6 billion, respectively. AIG was notified by its insurance regulators that it would no longer be permitted to borrow funds from its insurance company subsidiaries under a revolving credit facility that AIG maintained with certain of its insurance subsidiaries acting as lenders. Subsequently, the insurance regulators required AIG to repay any outstanding loans under that facility and to terminate it. AIG had no viable private sector solution to its liquidity issues. To prevent a systemic failure, the Federal Reserve agreed to extend a credit facility to AIG, and AIG received the terms of a secured lending agreement that FRBNY was prepared to provide. AIG estimated that it had an immediate need for cash in excess of its available liquid resources. That night, AIG's Board of Directors approved borrowing from FRBNY based on a term sheet that set forth the terms of the secured credit agreement and related equity participation. September 22, 2008: The inter-company facility was terminated effective September 22, 2008. AIG entered into the Fed Credit Agreement in the form of a two-year secured loan. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The Board of Governors of the Federal Reserve System (Federal Reserve) and the Department of the Treasury (Treasury) have made available over $182 billion in assistance to American International Group (AIG) to prevent its failure. However, questions have been raised about the goals of the assistance and how it is being monitored. Also, because AIG is generally known for its insurance operations, questions exist about the effect of the assistance on certain insurance markets. This statement provides preliminary findings on (1) the goals and monitoring of federal assistance to AIG and challenges to AIG's repayment of the assistance; and (2) the potential effects of the federal assistance on the U.S. commercial property/casualty insurance market. GAO's work on these issues is ongoing. To date, we have reviewed relevant documents on the assistance and ongoing operations of AIG, as well as documents issued by the Federal Reserve and Treasury. We also interviewed officials from these organizations as well as industry participants (competitors, brokers, and customers) and insurance regulators, among others. Federal financial assistance to AIG, both from the Federal Reserve and Federal Reserve Bank of New York through their authority to lend funds to critical nonbank institutions and from Treasury's Troubled Asset Relief Program (TARP), has focused on preventing systemic risk that could result from a rating downgrade or failure of AIG. The goal of the assistance and subsequent restructurings was to prevent systemic risk from the failure of AIG by allowing AIG to sell assets and restructure its operations in an orderly manner. The Federal Reserve has been monitoring AIG's operations since September, and Treasury has begun to more actively monitor AIG's operations as well. 
Although the ongoing federal assistance has prevented further downgrades in AIG's credit rating, AIG has had mixed success in fulfilling its other restructuring plans, such as terminating its securities lending program, selling assets, and unwinding its AIG Financial Products portfolio. For example, AIG has made efforts to sell certain business units and has begun an overall restructuring, but market and other conditions have prevented significant asset sales, and most restructuring efforts are still under way. AIG faces ongoing challenges from the continued overall economic deterioration and tight credit markets. AIG's ability to repay its obligations to the federal government has also been impaired by its deteriorating operations, its inability to sell assets, and further declines in asset values. All of these issues will continue to adversely impact AIG's ability to repay its government assistance. As part of GAO's ongoing work related to the federal assistance provided to AIG, GAO is reviewing the potential impact of the assistance on the commercial property/casualty insurance market. Specifically, GAO is reviewing potential effects of the assistance on AIG's pricing practices. According to some of AIG's competitors, federal assistance to AIG has allowed AIG's commercial property/casualty insurance companies to offer coverage at prices that are inadequate for the risk involved. Conversely, state insurance regulators, insurance brokers, and insurance buyers said that while AIG may be pricing somewhat more aggressively than in the past in order to retain business in light of damage to the parent company's reputation, they did not see indications that this pricing was inadequate or out of line with previous AIG pricing practices. Moreover, some have noted that AIG has lost business because of the problems encountered by its parent company. 
As GAO evaluates these issues, it faces a number of challenges associated with determining the adequacy of commercial property/casualty premium rates, especially in the short term. These challenges include the unique, negotiated nature of many commercial insurance policies, the subjective assumptions involved in determining premiums, and the fact that for some lines of commercial insurance it can take several years to determine if premiums charged were adequate for the related losses.
OPM is the central management agency of the federal government charged with administering and enforcing federal civil service laws, regulations, and rules and aiding the President in carrying out his responsibilities for managing the federal workforce. OPM has policy responsibilities related to hiring, managing, compensating, and separating federal employees. Moreover, OPM endeavors to ensure compliance with civil service policies through a program of overseeing the personnel activities of covered federal agencies. OPM helps federal program managers in their personnel responsibilities through a range of programs, such as training and performance management, designed to increase the effectiveness of federal employees. In addition to these responsibilities, OPM also promulgates regulations related to federal employee benefits, including retirement, health, and life insurance benefits. OPM directly administers all or major portions of these benefit programs, which serve millions of current and former federal employees. Top OPM officials said they envision OPM as providing human resource management (HRM) leadership for the federal government. Through that leadership, OPM officials say they intend to ensure that the merit principles that are the basis for the federal civil service system are followed throughout the government and that human resource management is effective. The Results Act is intended to improve the efficiency and effectiveness of federal programs by establishing a system to set goals for program performance and to measure results. Specifically, the Act requires executive agencies to prepare multiyear strategic plans, annual performance plans, and annual performance reports. OPM and other agencies submitted their first cycle of agency multiyear strategic plans to OMB and Congress in September 1997. Like other agencies, OPM also submitted its first draft annual performance plan to OMB in the fall of 1997. 
The Results Act requires each performance plan to identify annual performance goals that cover all of the program activities in the agency's budget. OMB Circular A-11 specifies that the annual performance goals reflect the agency's strategic goals and mission. OMB used these draft performance plans to develop and submit the first federal governmentwide performance plan to Congress in February 1998 with the President's fiscal year 1999 budget. OPM and other agencies submitted their final performance plans to Congress after the submission of the President's budget. OPM's annual performance plan clearly specifies its goals--generally expressed as planned activities--for fiscal year 1998 and how those planned activities relate to the goals in its published strategic plan and to program activity accounts in its proposed fiscal year 1999 budget. OPM's plan specifies over 100 performance goals, with each OPM unit linking its planned activities and processes to OPM's five strategic goals and to program activities in its budget request. Consistent with congressional suggestions and OMB guidance, the plan also describes the means OPM intends to use to validate performance and discusses its coordination with other agencies on crosscutting activities. In this sense, the annual performance plan provides a picture of OPM's intended performance. However, this picture is incomplete because the annual performance plan often does not give a sense of how those activities will help OPM achieve a desired end result. Rather, OPM's performance plan often would enable policymakers to determine whether OPM has completed a set of actions, but not whether those actions made any difference in such things as the management of the federal workforce or whether the actions would cause that workforce to be more or less able to effectively and efficiently carry out its responsibilities. 
The performance goals in OPM's plan are generally measurable and linked to the agency's strategic goals and objectives; however, they are typically more activity- or output-oriented rather than results-oriented as envisioned by the Results Act. The lack of a results focus likely would impede policymakers in determining whether OPM's efforts have "made a difference" in how well the federal government's human resources are actually managed. Generally, OPM's performance goals are expressed as activities to be completed or results to be achieved by the end of fiscal year 1999. For example, OPM's Employment Service says that in fiscal year 1999, it will complete a review of all governmentwide policies and programs that are its responsibility, and OPM's Workforce Compensation and Performance Service says it will lead a study of allowances, differentials, premium pay, and hours of duty as part of a 3-year comprehensive review of governmentwide compensation policies and programs. Both of these performance goals, like many others, commit OPM to undertake or complete a specific piece of work in fiscal year 1999 and thus, in a literal sense, define a minimal level of expected performance. OPM officials acknowledged that many of the annual performance goals are activity- or process-oriented, but said that, particularly with respect to policy development and implementation, successful accomplishment of several of its performance goals will require a sequence of steps from policy analysis and development through policy implementation to policy evaluation. In many cases, this sequence of steps will extend over several years. Consequently, OPM officials said it is impractical to specify a results-oriented goal in any year until the sequence of steps is complete and changes in policy have been made and implemented so that the new policies can actually effect a change in agencies' practices. 
OPM officials also noted that this circumstance is recognized in OMB's guidance on annual performance plans, which notes that outcome goals may only be achieved at certain points during the lifespan of a strategic plan and requires that an annual plan include outcome goals when their achievement is scheduled for the fiscal year covered by the annual performance plan. The OPM officials' observations highlight that results-oriented annual performance goals can be difficult to set on an annual basis in certain circumstances. However, a key intent of the Results Act was that agencies should focus their planning on what they are intending to achieve, the result that they are provided resources to accomplish, rather than on traditional measures of output like activities undertaken. We have previously reported that OPM's strategic plan goals do not provide a sense of the results OPM expects to achieve or how they might be measured. If neither the strategic goals nor the annual performance goals are results-oriented, policymakers likely will have an inadequate basis on which to judge whether agencies are making meaningful progress toward an overall desired outcome. OPM officials also told us that they were obligated to develop an annual performance plan that presented annual performance goals that would carry out their existing strategic plan's goals. Although the officials did not necessarily agree that the OPM strategic goals were inadequately results-oriented, they said that their annual performance goals could not be inconsistent with the strategic plan. OMB guidance does advise agencies that their annual performance plans should be specifically linked to their strategic plans and that, for example, performance goals and indicators in the annual plan should be based on the general goals and objectives in the agency's strategic plan. 
Accordingly, OPM may have been somewhat constrained in developing annual goals that were results-oriented given that, in our judgment, the strategic goals did not give a clear sense of the results OPM was intending to achieve. Other agencies have recognized that their strategic plans did not communicate their desired results adequately and have initiated efforts to revise those plans. For example, the Department of Labor has consolidated the six strategic goals outlined in its September 1997 strategic plan into the three strategic goals contained in its annual performance plan. According to Labor, this revision fosters greater cohesion within the Department and also responds to concerns raised by external reviewers that the agency's strategic plan did not adequately reflect the integration and crosscutting nature of Labor's programs. A results-oriented goal in OPM's annual performance plan illustrates how such goals can provide a better basis for OPM, Congress, and the public to determine if the agency is achieving the intended impact or results with the resources that it is provided. OPM's Employment Service has a goal that states, in part, that agency-delegated examining units (offices within agencies that assess whether job applicants meet the requirements of jobs being advertised) "will operate according to merit principles." This is directly related to OPM's mission of ensuring that merit system requirements are followed in federal human resources management. This results-oriented goal is included even though the rest of the goal stresses activities to be undertaken, that is, to complete the first 3-year cycle of recertification for all delegated examining units by the end of fiscal year 1999. However, the results-oriented goal provides a framework for OPM and Congress to use to determine whether the activities lead to an improved result. 
That is, OPM and Congress can track the number of instances in which delegated examining units do or do not operate in accordance with the merit principles specified in statute. This example also shows that even if a results-oriented annual performance goal cannot be set in any given year, tracking data related to a desired result or outcome can nevertheless occur and be useful. Measures that track yearly results can be useful in establishing a baseline performance level to use in establishing future results-oriented performance goals and in determining whether specific activities are moving the agency closer to the desired end result. OPM's plan has some measures that are related to achieving results. For example, the Office of Merit Systems Oversight and Effectiveness (OMSOE) has a fiscal year 1999 goal to promote the growth of merit principle awareness and understanding governmentwide. OPM has statutory responsibility for overseeing compliance with the merit principles specified in title 5 of the U.S. Code. One measure, or target, for OMSOE's performance goal is an increase from 39 to 41 percent in the proportion of employees who say they know what the merit system principles and prohibited personnel practices are as measured by an employee survey. OPM's annual performance plan could be more useful if additional results-oriented performance measures were identified. For example, the Employment Service's performance goal of reviewing all governmentwide human resource management policies and programs during fiscal year 1999 is in support of OPM's strategic goal of providing leadership to recruit and retain the federal workforce required for the 21st century. 
Policymakers could reasonably expect OPM to define the characteristics of the workforce that is needed--in essence, the result being sought in part through the improved human resource management policies OPM hopes to develop--and to track the extent to which the federal government is being more or less successful in recruiting and retaining that workforce. OPM has no such measure in its fiscal year 1999 annual performance plan and had not proposed such a measure in its strategic plan. OPM's annual performance plan also does not appear to have cost-based performance measures, as intended by Congress and encouraged in OMB guidance, that would show how efficiently it performed certain business-like operations (e.g., the administration of health and retirement programs). Relevant measures might include the cost of doing business per unit of output, such as the cost to process civil service retirement payments made either by electronic funds transfer or check. Cost-based efficiency measures could be useful to managers as they attempt to improve their operations. Such measures could serve as benchmarks for determining whether private firms might be able to perform certain services more cost-effectively than OPM can with federal civilian employees. If such cost-based measures were developed, however, it would be important for OPM's salaries and expenses and revolving funds to have accurate financial and cost data. The reliability of these data is not currently determinable since OPM's Inspector General (IG) has been unable to express an unqualified opinion on these funds' financial statements because of inadequate or nonexistent internal controls and standard accounting policies, procedures, and records. OPM's annual performance plan clearly connects its performance goals to the agency's mission, strategic goals, and program activities in its fiscal year 1999 budget request. For nearly all of its program activities, OPM's plan lists strategic and annual goals. 
The plan also provides the total budgetary resources proposed for the program activity and a breakdown of how much of the program activity will be used for each of OPM's five strategic goals. For example, OPM's fiscal year 1999 annual goal to assist agencies to raise the levels of underrepresented groups in key federal occupations and at key grade levels by 2 percent over fiscal year 1998 levels supports OPM's strategic goal to provide policy direction and leadership to recruit and retain the federal workforce required for the 21st century and is 1 of 11 major performance goals expected to use almost $12 million from the Employment Service program activity. The portions of OPM's plan that provide fiscal year 1999 budgetary information for its mandatory spending program activities related to federal health, life, and retirement programs do not include annual performance goals and do not show linkage to OPM's strategic goals. Although goals and linkages are not included in these specific portions of the plan, OPM does have annual performance goals related to these activities listed under the Transfers from Trust Funds section of the Salaries and Expenses Account portion of its plan. OPM officials believe that it is more appropriate to discuss the goals and linkages in the Transfers section because this is the budgetary account that funds the activities that are expected to achieve OPM's goals. For example, OPM set a goal to maintain, at fiscal year 1998 levels, customer satisfaction, processing times, and accuracy rates pertinent to processing new claims for annuity and survivor benefits and shows baseline data on processing these claims. OPM also set a goal to develop a proposal, expected to be completed in fiscal year 1998, to implement the design, financing, and service delivery of federal earned benefits recommended by its benefits vision study. 
Providing a reference to these goals in the relevant presentation of the mandatory spending program activities would be a useful guide to quickly steer users of the plan to goals and measures associated with these program activities. OPM's specific goals related to its information technology (IT) program are also linked to its strategic goals. This is a useful linkage that is consistent with recent legislation that emphasizes that IT investments should be made in direct support of the mission-related activities of agencies. In addition, OPM's performance plan includes goals for dealing with Clinger-Cohen Act requirements, Year 2000 computer conversion efforts, and information security; specifies the means for achieving the goals; and includes performance indicators for measuring results. Given the importance of these issues, their focused presentation in the annual performance plan appears to be appropriate. OPM could further strengthen its performance indicators by including information on (1) how it plans to deal with its other systems that may not be mission-critical but may have some impact on its operations in 2000, and (2) contingency plans in place in the event that Year 2000 corrections are not successful or systems fail to operate. OPM's performance plan partially addresses the need to coordinate with other agencies and individuals having an interest in OPM's mission and services. As a central management agency, OPM must work with or through other federal agencies to ensure that federal personnel policies are appropriate and are followed properly. Thus, OPM's core responsibilities do, in some sense, cut across a large portion of the federal government. OPM's performance goals reflect the crosscutting nature of its activities. In many cases, the plan discusses OPM's planned efforts to coordinate its crosscutting functions with the federal community. These discussions are consistent with Results Act requirements. 
However, in some cases, a more explicit discussion of OPM's intended coordination with other agencies would be helpful. For example, OPM has a performance goal to seek improvement in adjudicatory processes that address conflicts in the workplace and to work to make them more understandable, timely, and less costly. The means, or strategy, OPM proposes to achieve this goal implicitly recognizes that OPM has limited authority to set or influence policy regarding adjudicatory processes. It states that OPM will "promote and provide active participation in response to governmentwide efforts to improve the adjudicatory process." Meaningful participation by OPM would require ongoing coordination with the adjudicatory agencies, such as the Equal Employment Opportunity Commission and the Merit Systems Protection Board, but such coordination is not discussed in the plan. OPM's relationship with the adjudicatory agencies and its approach to coordination could be described more fully to portray the status of OPM's involvement in this issue and the extent to which it intends to participate in interagency efforts to improve the adjudicatory process. OPM's performance plan could more fully discuss the strategies and resources the agency will use to achieve its performance goals. Because many of OPM's annual performance goals are not results-oriented, it would be difficult for policymakers to judge from the plan, itself, how the strategies associated with these performance goals would add up to achieving a significant result related to OPM's mission. Nevertheless, the plan specifies strategies for achieving each of its performance goals. But in many cases, the plan does not provide a rationale for how the strategy will contribute to accomplishing the expected level of performance. OPM's performance plan could also be enhanced by discussing external factors that could significantly affect performance. 
We found that OPM's strategies are connected to its performance goals, but because many of the performance goals are not results-oriented, it is unclear how the strategies will contribute to achieving an intended result related to OPM's mission. For example, OPM's performance goal to improve recognition of OPM as a leading source for effective, efficient technical assistance in a broad range of employment programs does not readily indicate what result this would help OPM to achieve. Consequently, it is also difficult to determine whether its corresponding strategy to monitor current and emerging issues, trends, and stakeholder interests will contribute to achieving a results-oriented change, such as improving the effectiveness of federal employees. In other cases, it was unclear how a strategy related to its associated performance goal. For example, OPM has a goal to complete a plan for central personnel data file (CPDF) modernization in fiscal year 1999 in coordination with the Human Resources Technology Council. That performance goal has an associated strategy to "use electronic media to collect and disseminate information widely and cost-effectively." While this strategy may be useful for improving the collection and dissemination of CPDF information, it is not clear how this strategy is related to getting the CPDF modernization plan, itself, done. OPM's plan discusses the actions it plans to take to use information technology and capital investments to improve performance and help achieve performance goals in terms of (1) reducing costs, (2) increasing productivity, (3) decreasing cycle or processing time, (4) improving service quality, and (5) increasing customer satisfaction. 
For example, OPM has established a goal that places responsibility with its Chief Information Officer for providing independent oversight of major OPM information technology initiatives and investments. The aim of this goal is to ensure that OPM's core functions can meet their business goals and objectives through the prudent application of technology and improved use of IT, consistent with the requirements of the Clinger-Cohen Act. OPM also plans to oversee major IT initiatives, including modernization of the retirement program's service delivery systems and the earned benefit financial systems, modernization of the CPDF system, and development and integration of OPM's employment information systems; implement a sound and integrated IT architecture; manage OPM's IT capital planning and investment control process and implement a performance-based IT management system; and implement an agencywide systems development life-cycle methodology and train staff in its use to support OPM's achievement of Software Engineering Institute Capability Maturity Model level 3 for systems development. One area that is unclear from OPM's discussion in its plan of Clinger-Cohen Act implementation is whether OPM has established, or plans to establish, a separate Investment Review Board to ensure that senior executives are involved in information management decisions. The Clinger-Cohen Act calls for agencies to establish such boards to help improve performance and meet strategic goals. Although not stated in the plan, OPM officials have told us they plan to establish an Investment Review Board. In its September 1997 strategic plan, OPM identified several external factors that could affect achievement of its goals and objectives, which it organized by the following categories: (1) governmentwide issues, (2) relationships with other federal agencies, and (3) the personnel community. OPM's performance plan does not explicitly discuss these factors or their impact on achieving the performance goals. 
While not required by the Results Act, we believe that a discussion of these external factors would provide additional context regarding anticipated performance. For example, several large agencies recently have been granted, or are seeking to be granted, wide flexibility to deviate from standard provisions of title 5 of the U.S. Code. These include the Internal Revenue Service, the Federal Aviation Administration, and the Department of Defense (civilian workforce). Although these changes could significantly affect OPM's role as the central personnel agency, the plan has little discussion of how such changes were taken into account in setting performance goals. OPM's performance plan partially discusses the resources it will use to achieve the performance goals. OPM's plan does not consistently describe the capital, human, information, and other resources the agency will use to achieve its performance goals. For example, the plan explains that OPM will spend approximately $2.6 million in fiscal year 1999 on implementing an action plan to develop a governmentwide electronic personnel recordkeeping system that will support its goal of helping the Human Resources Technology Council design an electronic official personnel folder to replace paper records. In contrast, the plan generally does not mention specific training or workforce skills that will be needed to achieve OPM's performance goals. We found that OPM's performance plan could better provide confidence that its performance information will be credible. OPM's annual performance plan material for each program activity includes a verification and validation section. The material in those sections generally describes various assessments and measures that OPM intends to use in gauging progress toward the performance goals and how they will be audited, benchmarked, and validated. These sections sometimes do not provide a clear view of the current problems OPM faces with data verification and validation. 
We also found that the plan does not discuss or identify any significant data limitations and their implications for assessing the achievement of performance goals. OPM's performance plan partially discusses how the agency will ensure that its performance information is sufficiently complete, accurate, and consistent. Specifically, the plan highlights the importance of having credible data and generally meets the intent of the Results Act by identifying actions that OPM believes will identify data problems. These actions include audits of its financial statements by an independent accounting firm. The plan also includes specific actions or goals that could contribute to improved reliability of data, such as installing a new financial management system. However, it does not include plans for audits of nonfinancial data, which were one technique for ensuring data integrity as envisioned by Congress. Although the performance plan provides proposed indicators for each performance goal, it is not clear that data exist for all of the indicators or that the specific data OPM proposes to use would be a valid measure for assessing progress toward achieving its associated performance goal. For instance, for its goal of supporting OPM leadership of the Human Resources Technology Council, OMSOE proposes to use as an indicator "improved HRM operations as measured by 10-year efficiency and quality indicators, e.g., improved ratios of personnel operations staff to employees covered." However, the plan does not indicate what data OPM would use to measure the quality of HRM operations. Further, the proposed efficiency measure, the ratio of personnelists to other employees, while a potentially useful measure, can be imprecise when agencies have staff performing personnel-related duties who are not specifically in job classifications normally considered to be "personnelist" occupations. 
OPM's performance plan does not discuss a number of known data limitations that may affect the validity of many performance measures OPM plans to use. OPM lacks the timely, accurate, and reliable program data needed to effectively manage and oversee some of its various activities and programs. For example, OPM's December 1997 report on the agency's management controls required by the Federal Managers' Financial Integrity Act noted that there are a number of key areas where controls and reconciliations are either weak or not implemented. This report noted that OPM does not have an effective system in place to ensure the accuracy of claims paid by experience-rated carriers participating in the Federal Employees Health Benefits Program (FEHBP). The report also noted that a significant opportunity exists for fraudulent claims to persist undetected owing to the lengthy audit cycle of FEHBP carriers, which was 15 years in 1992--longer than the requirement for carriers to retain auditable records (3 to 5 years). Similarly, in his October 31, 1997, semiannual report to Congress, OPM's Inspector General expressed concern with the infrequency of IG audits of FEHBP insurance carriers and with the consolidation of unaudited data from experience-rated carriers with agency data, which contributed to the disclaimer of opinion on OPM's health benefit program financial statements. The annual performance plan section on the Inspector General's Office requests five additional staff to meet the goal of a shorter audit cycle. OPM's plan states that in addition to providing increased FEHBP oversight, reducing the audit cycle to 5 years would result in considerable financial recoveries. Finally, the independent audit of OPM's 1996 and 1997 financial statements noted internal control weaknesses in a number of areas for OPM's retirement, health benefits, and life insurance programs. 
For example, OPM has prescribed minimum records, documentation, and reconciliation requirements to the employer agencies, but it does not monitor the effectiveness of employer agencies' controls or their degree of compliance with controls. As a result, OPM does not have a basis for relying on other agencies' internal controls as they relate to contributions recorded in its accounting records and other data received, which support amounts recorded in the financial statements. The independent accountant also noted in the 1997 report that OPM's financial management system does not support all program decisionmaking because the system does not produce cost reports or other types of reports at meaningful levels. Despite such evidence that suggests that internal controls over data reliability are still a major problem area, the performance plan deals with these problems only on a very broad level in those portions of the plan that alert readers to the limitations associated with data that OPM intends to use to gauge its performance against planned goals. Although OPM's fiscal year 1997 retirement and life insurance program financial statements received unqualified opinions, the independent auditor disclaimed an opinion on the health benefits program financial statements for reasons related to inadequate controls. At a minimum, it would have been helpful if the plan had an explicit discussion of specific current program performance data problems and how OPM plans to address them. We provided OPM with copies of a draft of our observations on its annual performance plan. On April 10, 1998, we met with OPM's Chief of Staff and other officials to discuss the draft. In an April 13, 1998, letter, the OPM Director raised a number of concerns about the draft observations, which we addressed in a revised draft. In an April 30, 1998, letter, the OPM Director provided written comments on the revised draft (see app. I). 
OPM said that it found the meeting with us to be particularly helpful as OPM further develops and refines its plan--which OPM views as an evolutionary process that will enable it to continually improve and articulate its focus on improving federal human resource management. OPM also said that it was especially pleased to see that the revised draft included changes on some of the points discussed in the meeting. OPM also said that the revised draft contains an inappropriate "imbalance in its overall negative tone," which may lead readers to conclude that the OPM plan is substantially weaker than it is strong. OPM described our discussions of the plan's weaknesses as "lengthy" and said that they overwhelm our "relatively short" statements regarding the plan's strengths. We agree that the Results Act planning process is evolutionary and assessed OPM's annual performance plan from the standpoint of how well it can, as currently written, assist Congress and OPM as they work to realize the potential of a results-focused planning process. We believe that our assessment recognizes strengths in OPM's annual performance plan while also providing a sufficiently in-depth discussion to adequately describe areas in which further improvement is warranted. Thus, it was not our intention to create an unduly negative tone, and we have made changes to avoid such an impression. OPM made additional comments that, for example, provided an explanation of its intentions in developing its annual performance plan and suggested additional context concerning some of our observations. We made changes where appropriate to reflect these comments. Appendix I includes OPM's letter and our additional comments. We are sending copies of this report to the Chairmen and Ranking Minority Members of interested congressional committees; the Director, Office of Personnel Management; and other interested parties. Upon request, we will also make copies available to others. 
Major contributors to this report are listed in appendix II. Please contact me on (202) 512-8676 if you or your staff have any questions concerning this report. The following are GAO's comments on the Office of Personnel Management's letter dated April 30, 1998. 1. OPM stated that in several cases where we suggested its annual performance plan could be improved, the underlying problem seemed to be a continuing disagreement between us and OPM on the strategic goals, objectives, and measures included in its Results Act strategic plan. OPM further said it was required by law to develop an annual performance plan that presented annual performance goals for fiscal year 1999 that it determined to be necessary to achieve that strategic plan's goals and outcomes. In a previous analysis of OPM's strategic plan, we did find that the goals in OPM's strategic plan tended to be process or activity goals as opposed to results-oriented goals. This may contribute to the annual plan goals' also focusing on processes or activities, which is one of the key areas in which we believe the annual performance plan could be improved. Nevertheless, even with a set of strategic goals that are process- or activity-focused, annual performance goals can to some extent be results-oriented. This is demonstrated in part by OPM's performance plan itself, which does include some results-oriented goals. Further, even when actual results-oriented goals are not established, identifying and tracking results-oriented performance measures can be useful to establish performance baselines and to lead to more informed goal-setting in the future. We have revised the report to make these points more clearly. In addition, although the Results Act requires that strategic plans be updated at least every 3 years, it does not prohibit more frequent revisions. 
More frequent revisions might be appropriate in these early years of implementing the Act as all parties gain experience with the challenges and benefits of results-oriented planning. At least two agencies began revising their strategic plans even as they were developing their first annual performance plans. Thus, if OPM believes that its current strategic plan inhibits its ability to achieve a results orientation in its annual performance plans, it could reconsider its strategic plan. 2. OPM said that it continues to believe that the Transfers from the Trust Funds section of the Salaries and Expenses Account portion of its performance plan is the proper location for its annual performance goals for its mandatory spending program activities related to federal health, life, and retirement programs. Nevertheless, OPM said that its annual performance plans for fiscal year 2000 and beyond will include appropriate statements that direct readers to the Transfers from Trust Funds section for goals that would pertain to the mandatory spending program activities. We agree that providing a reference to the relevant goals in OPM's presentation of its mandatory spending accounts would appropriately guide users of the plan to the goals and measures associated with the accounts. OPM also stated that our report implies that, because of the method OPM used to establish and communicate relevant annual performance goals for its mandatory spending program activities, OPM's performance plan is not consistent with its strategic plan and is, consequently, deficient. It was not our intention to imply that OPM's plan was inconsistent with its strategic plan. We have revised the appropriate section of the report to more accurately reflect our observations. 3. OPM also disagreed with our assessment that its performance plan deals with certain internal and management control weaknesses in the earned benefits programs only on a very broad level. 
OPM said that its plan contains five specific annual performance goals in the Transfers from Trust Funds section and an additional two such goals in the Office of Inspector General section that deal specifically with these problems. According to OPM, more important than the breadth of its description of how it approaches a matter is the fact that OPM has made a commitment to overcome a problem, solve an issue, or otherwise deal with an important matter affecting the government's Human Resource Management Program. We think it is commendable that OPM is committed to overcoming its internal control problems. However, our comment about OPM's dealing with these problems only on a very broad level was made in the context of pointing out that these internal control problems affect the reliability of the performance measures OPM proposes to use to gauge progress toward achieving its goals. Our report, Agencies' Annual Performance Plans Under the Results Act: An Assessment Guide to Facilitate Congressional Decision Making (GAO/GGD/AIMD-10.1.18, p. 23), states that explaining the limitations of performance information can provide Congress with a context for understanding and assessing agencies' performance and the costs and challenges agencies face in gathering, processing, and analyzing needed data. Thus, we believe a more specific discussion of internal control problems and their effect on data limitations would be desirable. We made clarifying changes to the report on this matter. 4. OPM expressed concern that we cited one of its performance goals as one of "several" other performance goals using almost $12 million from the Employment Service program activity rather than state that the particular performance goal is 1 of "11" major performance goals in the program activity. We have revised the report to reflect this fact. 5. 
In reference to our statement that OPM's plan does not mention specific training or workforce skills that will be needed to achieve its performance goals, OPM referenced the statement in its plan that states that OPM has a major initiative underway to ensure that gaps in core competencies are addressed. Our position on this issue remains unchanged since OPM's plan does not specify the training or skills needed nor does it link these needs to specific performance goals. This information is needed for policymakers to make informed judgments concerning whether OPM's staffing will in fact be adequate to successfully execute its plan. Alan N. Belkin, Assistant General Counsel
Pursuant to a congressional request, GAO reviewed the Office of Personnel Management's (OPM) annual performance plan for fiscal year (FY) 1999, focusing on whether OPM's plan complies with the statutory requirements and congressional intent as contained in the Government Performance and Results Act and related guidance. GAO noted that: (1) OPM's annual performance plan addresses the six program components required by the Results Act; (2) the plan has several performance goals and measures listed under each of its five strategic goals as identified in OPM's September 1997 strategic plan; (3) some of these goals and measures are objective and quantifiable, providing a way to judge whether the goal has been achieved; (4) the plan also lays out, very well, a clear linkage between the FY 1999 performance goals and OPM's mission and strategic goals and also between its goals and its specific program activities and related funding as presented in its 1999 budget; (5) the principal area in which the performance plan could be improved to better meet the purposes of the Results Act is in the statement of its goals; (6) OPM's annual performance plan goals, like those in its strategic plan, tend to be process or activity goals; (7) the Results Act, in contrast, envisions a much greater emphasis on outcome goals that state what overall end result the agency will achieve, such as increasing the effectiveness of the federal civilian workforce; (8) Congress sought this emphasis to help ensure that processes and activities that agencies undertake actually add up to a meaningful result that is commensurate with the resources expended; and (9) OPM's annual performance plan could also be improved by including more discussion on how its resources will be used to achieve its goals and adding a discussion of known data limitations that may affect the validity of various performance measures that OPM plans to use.
Since the enactment of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), AHRQ, an agency within the Department of Health and Human Services (HHS), has been one of several federal agencies responsible for supporting and disseminating the results of CER. The dissemination of CER refers to developing and distributing information derived from CER for target audiences, such as clinicians, consumers, or policymakers, in order to inform health care delivery or practice. This process involves translating research findings into terminology and materials that are appropriate for the target audience. Specifically, AHRQ has supported CER activities by awarding grants and contracts to research centers and academic organizations to carry out this work, which includes reviewing and synthesizing scientific evidence through research reviews; generating new scientific evidence and analytical tools in original research reports; compiling research findings; and communicating those findings to a variety of audiences. Under the American Recovery and Reinvestment Act of 2009 (Recovery Act), AHRQ received funding of $474 million to support and disseminate the results of CER--$300 million that was appropriated to AHRQ and $174 million that was appropriated to the HHS Office of the Secretary and allocated to AHRQ. The Recovery Act also required the Secretary of HHS to enter into a contract with the IOM to produce a report that included recommendations on research questions that should receive national priority for study with CER funds made available under the act. The result of this work included a list of 100 research questions prioritized for CER. In 2010, PPACA authorized the establishment of PCORI as a nonprofit corporation aimed at advancing the quality and relevance of evidence through CER to help patients, clinicians, purchasers, and policy-makers in making informed health care decisions. 
The law requires PCORI to perform a number of duties related to CER. (See table 1.) PCORI is governed by a 21-member Board of Governors and employs 153 staff, as of September 2014, including directors for its different program areas as well as staff for engagement, communications, contract management, finance, human resources, and information technology. PCORI's 17-member methodology committee defines methodological standards for its research. PCORI has developed five broad research priorities and developed a research agenda to identify how each priority will be addressed. The institute has established a multi-step merit review process to score and identify applications for funding and a process for monitoring contractors. PCORI is also developing a peer review process for primary research and a dissemination plan for completed research. In 2012, PCORI established five broad research priorities: (1) assessment of prevention, diagnosis, and treatment options; (2) improving health care systems; (3) communication and dissemination research; (4) addressing disparities; and (5) accelerating patient-centered outcomes research and methodological research. PCORI also developed a research agenda to identify how each priority would be addressed. The research agenda contains a set of more specific research areas within each priority. PCORI expects its research agenda to be updated and refined over time based on more specific analyses of gaps in research. (See table 2.) PCORI issued its research priorities and agenda in May 2012, following a process that began in July 2011. In the fall of 2011, PCORI formed two workgroups within its Board of Governors--the National Priorities for Research Workgroup and the Research Agenda Workgroup. Along with PCORI staff and members of the Methodology Committee, these workgroups examined the processes and products of other recent priority- and agenda-setting efforts, including those from AHRQ and IOM. 
They also engaged with and received input from stakeholder groups through a number of public presentations and other modes of communication, such as press releases, focus groups, and feedback through social media. PCORI posted on its website its draft research priorities and agenda for public comment from January 23, 2012 to March 15, 2012. In response, PCORI received and analyzed a total of 474 formal comments, and made changes to the draft priorities and agenda based on these comments. A final version of PCORI's research priorities and agenda were adopted by the Board of Governors and made publicly available in May 2012. PCORI established its research priorities and agenda consistent with PPACA requirements. Specifically, PPACA directed PCORI to establish priorities for research that take into account factors that include disease incidence, prevalence, and burden; gaps in evidence with regard to clinical outcomes; patient needs, outcomes, and preferences; and practice variations and health disparities in terms of delivery and outcomes of care, among other things. The act also directed PCORI to develop a research agenda for addressing these priorities. The act did not specify the content or form of the priorities or agenda. According to documentation on the process PCORI used to develop its research priorities and agenda, the workgroups reviewed and considered these requirements. In the process of developing the research agenda, the workgroups identified where, for example, certain agenda items addressed criteria such as gaps in knowledge, variation in care, and inclusiveness of different populations. According to PCORI officials, PCORI intended its research priorities to be broad in scope so that they could encompass a broad range of topics in need of study, including different disease areas, conditions, or health care system issues, thus making PCORI's research agenda more flexible than if it identified specific topics. 
PCORI noted in its research priorities and agenda document that PCORI did not want to exclude any disease from being studied. Some stakeholders we interviewed expressed concern that PCORI's research priorities are too broad and lack specificity. While PCORI officials acknowledged that many of the institute's initial funding announcements were broad in that they did not identify specific topics, they noted that they are in the process of increasing the proportion of funding that goes to specific research topics. For example, PCORI will soon begin funding awards for large pragmatic clinical studies to evaluate patient-centered outcomes. The funding announcement for this effort indicates a number of specific research topics of interest to PCORI. PCORI officials noted that applications which are responsive to these specific research topics will be given priority for funding. The specific research topics identified in this funding announcement were developed using, among other things, stakeholder recommendations, IOM's CER Priorities, and input from PCORI's advisory panels. To identify more specific research questions and topics for use in funding announcements, PCORI utilizes advisory panels, as authorized by PPACA. PPACA directed PCORI to establish advisory panels for rare diseases and clinical trials, which PCORI established in November 2013, and also authorized PCORI to establish other advisory panels as needed. In addition to establishing the two advisory panels required by PPACA, PCORI has also established five additional advisory panels: (1) Assessment of Prevention, Diagnosis, and Treatment Options; (2) Improving Healthcare Systems; (3) Addressing Disparities; (4) Patient Engagement; and (5) Communication and Dissemination Research. Four of these advisory panels align with PCORI's research priorities. To appoint members to its advisory panels, PCORI solicits applications via its website. 
Advisory panel applicants are reviewed based on common criteria established by PCORI and against the needs of the specific panel for which a position is being filled. The PCORI Board of Governors makes the final selection of advisory panel members. PCORI's advisory panels assist in the prioritization of research topics. PCORI receives suggested research topics through a number of sources, including stakeholders, social media, and workshops, as well as from AHRQ, NIH, and professional and advocacy groups. As part of the prioritization process, PCORI's advisory panels evaluate and rank suggested research topics using the following criteria: patient- centeredness, potential condition impact, assessment of current options, likelihood of implementation in practice, and durability of information. Topics identified by advisory panels as being of higher priority proceed for further evaluation while the lower-priority topics enter a pool of topics that may be reconsidered at a later date. Recommendations from PCORI's advisory panels are taken into consideration by PCORI's staff and Board of Governors and are used to refine and prioritize specific research topics and inform the development of PCORI funding announcements. PCORI may also use recommendations from advisory panels to commission reviews of previous and current research on recommended high-priority topics. According to PCORI, its processes for identifying research include mechanisms for avoiding unnecessary duplication and coordinating research efforts with NIH and AHRQ. For example, when establishing its broad research priorities, officials from both NIH and AHRQ participated in PCORI's workgroups to develop these priorities and the related research agenda. 
As PCORI has developed more specific research priorities for funding announcements, PCORI staff has consulted with relevant NIH staff on specific topics in an effort to obtain expertise and an understanding of what research is being funded on a particular topic. For example, PCORI officials reported and NIH confirmed that when developing a specific research topic related to cardiovascular disease, PCORI coordinated with staff at NIH's National Heart, Lung, and Blood Institute to ensure that PCORI's planned research in this area had not already been sufficiently addressed by prior research. NIH submitted to PCORI a list of potential topics it considered important but was not actively funding, according to PCORI officials. PCORI officials also stated that when a new topic is suggested to PCORI for inclusion in a funding announcement, technical briefs are prepared to document the existing work that has been completed on that topic. In addition, PCORI staff stated that they conduct searches in ClinicalTrials.gov, NIH's Research Portfolio Online Reporting Tools Expenditure and Results (RePORTER), and other research databases to determine if any similar research is in progress or has already been funded.proposed research topic that is unnecessarily duplicative is eliminated. PPACA directs PCORI to enter into contracts to carry out its research priorities. PCORI has established a multi-step research funding process, which includes merit review, that is designed to select high quality research that has the best potential to improve patient outcomes, according to PCORI officials. PCORI officials stated that their merit review process is modeled on the peer review processes established by AHRQ and NIH, which are described in law. Unlike AHRQ and NIH, however, PCORI's merit review process utilizes patients and stakeholders in the review and scoring of applications. 
Key steps in PCORI's research funding process include the development and posting of funding announcements, review and scoring of applications, and final approval by the Board of Governors. (See figure 1.) Submitted applications are assessed by reviewers recruited by PCORI and selected based on expertise or knowledge in a particular subject area. Reviewers may be patients, other stakeholders, or scientific reviewers. Prior to reviewing applications, reviewers undergo web-based Reviewers score applications using five standard criteria. (See training.table 3.) Applications are scored by reviewers through an online review, and a subset is scored by the full panel during an in-person meeting. For the online review, four reviewers are assigned to evaluate each application-- two scientists and two stakeholders (one of them a patient). Scientific reviewers focus on all 5 criteria, while the patient and stakeholder reviewers focus on the potential of the study to improve health care and outcomes, the extent to which the research is patient-centered, and the extent to which the proposed research includes patient and stakeholder engagement. Reviewers assign an overall score to the application and provide written comments on each application's specific strengths and weaknesses. Scientific reviewers also check to determine if the application adheres to PCORI's methodological standards.is also involved, working with reviewers to ensure that each reviewer understands how the applications should be assessed. Reviewers have one month to electronically submit both their initial scores and detailed written critiques to PCORI. Following the online review, applications advance to in-person merit review meetings where they are reviewed again in panels specific to the funding announcement. 
To determine which applications will advance, PCORI staff considers reviewers' average overall scores for each application, whether applications have scores that differ significantly among reviewers (and could thus benefit from further discussion), and whether applications received a good score for technical merit criteria and are therefore a strong candidate for funding. Reviewers discuss the applications' merits and weaknesses and, as a panel, provide a final overall application score. After applications are scored at the in-person meeting, PCORI staff determine the applications that will be submitted for review to a PCORI selection committee, composed of PCORI Board of Governors members and up to one member from PCORI's methodology committee. The selection committee proposes a list of applications to fund. According to PCORI staff, while the proposed applications generally consist of the top-scoring applications, some lower-scoring applications may be included to achieve program balance or fund research in critical areas. The full Board of Governors votes to approve the recommended applications. While the Board has the authority to make changes, PCORI staff stated that it has not made such changes to date. Once the recommended applications are approved, awards are announced to the public via PCORI's website, PCORI develops final contract requirements, and research contracts are executed. PCORI officials stated that each cycle of the research funding process takes 9 to 12 months to complete, from the time a funding announcement is posted to the time recommended applications are approved by the Board, of which the merit review process takes about 4 to 6 months. There are currently up to four funding cycles each year and PCORI staff noted that these cycles overlap. 
PCORI officials reported taking steps during the merit review and application selection process to ensure PCORI's funded research is not duplicative of other research within the federal government or private sector. For example, PCORI officials reported seeking input from NIH and AHRQ during the final selection of awards through each agency's involvement in the selection committee and membership on PCORI's Board of Governors. PCORI officials also noted that in some instances NIH and AHRQ staff assist PCORI in reviewing letters of intent submitted in response to PCORI funding announcements. Finally, before funding an application, PCORI program staff check databases that include ClinicalTrials.gov and NIH's RePORTER to identify any ongoing studies on a particular topic that may be duplicative. Once funded, PCORI contractors are monitored by PCORI staff. According to PCORI staff, following an initial kickoff call with the contractor, PCORI receives a progress report from each contractor every 6 months, and there are additional interactions via phone and through regularly scheduled meetings. Progress reports include updates on key project milestones, a progress statement for public use, a financial status update, and additional documents the contractor deems relevant to the project's progress during the reporting period. PCORI officials stated that if concerns arise regarding a contractor's performance, a site visit to the contractor could be conducted. All contractors are required to submit a final report covering their research at the conclusion of the project. The law also requires PCORI to develop a peer review process to assess the integrity of primary research funded by PCORI and its adherence to PCORI's methodological standards. PCORI officials stated that they are currently developing a peer review assessment process, as required by law, to review final reports submitted by contractors at the conclusion of a project. 
A draft of this peer review process was posted on PCORI's website for public comment in September 2014. The draft process requires contractors to submit a draft final report to PCORI within three months of the project's completion for review by peer reviewers, who will include researchers from outside of PCORI. These reviewers will consider whether the research presented in each final report has scientific integrity and adheres to PCORI's methodological standards. Following receipt of the draft final report, the peer reviewers may identify revisions, which the contractor is required to respond to within 45 days. After revisions are made and PCORI formally accepts the final report, its draft process states that the institute will create and post on its website a lay abstract intended for the general public, a medical abstract intended for researchers and clinicians, a stand-alone table that presents key findings, and ancillary information such as the identity of the contractor. The contractor also will be required to ensure that the study results are submitted to ClinicalTrials.gov and to include with that submission links to the abstracts posted on the PCORI website. PCORI is currently developing a plan for the dissemination of the research it funds, as required by PPACA, and expects to begin disseminating research results as early as 2015. Specifically, PCORI has entered into a contract for the development of a dissemination and implementation plan. A draft framework for this plan was provided by the contractor to PCORI for comment in July 2014 and PCORI anticipates that the contractor will submit the revised framework to PCORI in February 2015. The draft framework identifies five key elements as core components of a dissemination and implementation plan. (See table 4.) According to PCORI, its dissemination efforts will result in research findings being publicly available within 90 days of receiving final reports from researchers, as required by law. 
PCORI has determined that this 90-day period will follow the completion of PCORI's peer review process for completed research. Upon accepting the final report on the research conducted by the contractor, PCORI's 90-day period will begin, during which time PCORI will develop abstracts and other materials to post publicly on its website. At the conclusion of this 90-day period, PCORI anticipates posting abstracts, a results table that contains key findings, and ancillary information, such as investigators and conflict of interest information, on its website. The contractor will also be required to ensure that the results tables are submitted to ClinicalTrials.gov, which will link to the abstract posted on PCORI's website. According to PCORI, these dissemination efforts will take place at the end of the 29 to 79 month period for funding, conducting, and disseminating research, depending on the length of the contract. (See figure 2.) PPACA gave both PCORI and AHRQ responsibilities for disseminating CER results produced by PCORI. PPACA directs PCORI to make research findings available to clinicians, patients, and the general public within 90 days of the conduct or receipt of research findings. The act also directs AHRQ to disseminate PCORI's research findings as well as other government-funded research relevant to CER. PCORI officials stated that to coordinate AHRQ's and PCORI's dissemination responsibilities, PCORI has formed a workgroup specifically to address engagement, dissemination, and implementation activities, on which both PCORI and AHRQ officials serve. This workgroup currently meets monthly and discusses plans for disseminating CER results, among other things. A PCORI official stated that PCORI and AHRQ are working together to determine which entity will be best suited for disseminating different types of CER results. As of October 2014, PCORI has awarded 360 contracts to fund research projects across 12 funding areas. 
PCORI made a total of $670.8 million in commitments to fund these contracts. Most of PCORI's projects are funded for periods of between 2 and 3 years, with some larger studies funded for up to 5 years. In total, PCORI expects to make approximately $2.6 billion in commitments for contracts starting as late as 2019, with about $1.9 billion in commitments occurring between fiscal years 2015 and 2019. While commitments occur when PCORI makes awards to contractors, expenses occur when PCORI pays contractors and spends money for PCORI's operations. Expenses include not only money spent on research contracts, but also on research support activities--such as merit review and contractor monitoring--and administrative expenses. According to PCORI, from inception through fiscal year 2014, it has incurred a total of $235 million in expenses. fund research contracts, with the remaining amounts for research support activities and administrative expenses. Through fiscal year 2015, PCORI anticipates expending a total of $597 million. Overall, PCORI expects to receive an estimated total of $3.5 billion through fiscal year 2019 from the PCORTF to fund its work.$2.6 billion of its $3.5 billion on contracts for research and research infrastructure, with the remaining amounts spent on research support activities and administrative expenses. In fiscal year 2014, PCORI reported that it was experiencing a "delay in spending," that is, contractors were submitting invoices to PCORI and collecting money at a slower rate than initially expected. PCORI officials noted that they have undertaken additional efforts to analyze and better plan for such delays, which can result because of the relatively slow pace of spending that occurs during the beginning of research projects. 
PCORI has adjusted its spending expectations based on the patterns observed to date, and PCORI officials told us that they will monitor actual expenses as one indicator that funded work is progressing at the appropriate pace. Officials anticipate that PCORI's independent audit of fiscal year 2014 will be issued in early 2015. PCORI's total administrative expenses for fiscal years 2012 and 2013 accounted for 20 percent and 32.5 percent of PCORI's total operating budget, respectively. Examples of administrative expenses include salaries for PCORI staff, health benefits, rent for PCORI's headquarters office, and information technology costs. In its fiscal year 2013 financial audit report, issued by a private, independent auditor, PCORI noted that administrative costs for fiscal year 2013 remained high, but were not outside of expected levels given that fiscal year 2013 was a period of investment in systems and infrastructure that would not be sustained at current levels in future years. According to PCORI's financial projections, it expects that total administrative expenses will constitute 14 percent and 8.2 percent of its total expenses in fiscal years 2014 and 2015, respectively. (See figure 3.) Each of the projects PCORI has committed to funding has been awarded within one of twelve funding areas. (See figure 4.) Five of these funding areas closely correspond to PCORI's priority areas. Some of the other areas--such as reducing disparities in asthma--cover specific topics within a priority area. According to PCORI officials, approximately $106 million in commitments to date are for the PCORnet data research network, the aim of which is to improve the capacity for and speed of conducting CER. PCORI officials stated that they expect to spend a total of $271 million on PCORnet through 2019. PCORnet is a distributed data research network, which means that no central repository of data exists. 
Instead, multiple organizations, each with their own data, agree to allow users to query their data and combine it with data from the other organizations on a project-by-project basis. PCORnet consists of 29 separate health data networks called clinical data research networks (CDRN) and patient- powered research networks (PPRN), some of which existed prior to PCORnet and some of which were created with PCORnet funding. PCORnet is still undergoing development and testing. CDRNs and PPRNs are currently working to map their data to the PCORnet common data model. The common data model standardizes the definition, content, and format of data aggregated by CDRNs and PPRNs, which is necessary to allow researchers to use data from multiple CDRNs and PPRNs. An initial test query was conducted using PCORnet in September 2014. PCORnet is expected to be used to conduct an initial clinical research trial starting in 2015. Officials stated that limited amounts of data will be available through PCORnet for queries by researchers after September 2015, with the amount of available data increasing over time. While PCORI officials, stakeholders, and PCORnet contractors noted that PCORnet has the potential to significantly improve the ability to conduct CER, they also noted challenges that PCORI will face with regard to the establishment and future operation of PCORnet. For example, they expect that the process of mapping data to the common data model will be slow and resource intensive because of the lack of standardization among existing data maintained by CDRNs and PPRNs, such as data from electronic health records (EHR). PCORI officials recognize the challenges caused by the lack of standardization. They expect that both the common data model and a future requirement by PCORI for CDRNs to hire additional staff with expertise related to this work will help to address this challenge. 
PCORnet contractors also noted uncertainty regarding future costs and the sustainability of the network, particularly if the PCORTF is not reauthorized beyond fiscal year 2019. One stakeholder we interviewed noted that some core funding would always be required to maintain the central operation of the network. PCORI officials acknowledge that some future core funding could be needed, but they expect it to be lower than current funding levels. They also expect that future research projects funded by PCORI, NIH, or others such as the pharmaceutical and device industry will pay for the use of PCORnet's data. PCORI officials also noted that each CDRN and PPRN will be required to provide a sustainability plan for continuing operation once PCORI funds are no longer available. Another concern noted by CDRN officials is that their data does not cover all care received by patients due to the fragmented nature of the U.S. health care system, although such "complete" data would be preferred by PCORI. An official from one PPRN noted that the completeness of CDRN and PPRN data could be improved by having PPRNs and CDRNs share data with one another, which is not current practice. PCORI officials said that they are collaborating with other relevant organizations, including state Medicaid offices and private health insurers, to identify how CDRNs could link their EHR data to claims data, which would improve data completeness. In addition, PCORI officials stated they are requiring CDRNs to identify and implement links with other institutions that have additional patient data. PCORI's evaluation group--a body composed of members from its Board of Governors, methodology committee, advisory panel on patient engagement, external experts, and PCORI staff --has developed initial plans for evaluating PCORI's efforts against its three strategic goals, which are to increase information, speed implementation, and influence research. 
To do so, PCORI identified primary outcome measures for each of its strategic goals. In its strategic plan, PCORI notes that these are meant to be long-term measures because research typically requires several years to complete and additional years for the results to be disseminated and implemented. Therefore, since 2013, PCORI has been using early and intermediate process and output measures as a way to monitor its progress toward its strategic goals. PCORI anticipates having some early results related to its primary outcome measures starting in 2017 after the first CER studies are completed and their findings released, although full evaluation of the results of these outcome measures will not be possible until around 2020, after a large number of CER studies have been completed and a few years have elapsed, allowing time for study results to be taken up. (See table 5.) Officials stated that to collect information for monitoring PCORI's progress and for evaluating PCORI's impact on the dissemination and uptake of CER study results, PCORI will employ a variety of data collection methods, including surveys, focus groups, interviews, case studies, and document reviews, to collect data on how potential consumers of CER perceive PCORI's work and CER in general. For example, PCORI is conducting a number of surveys to collect both baseline and ongoing data to gauge changes in perceptions of CER over time. (See table 6.) PCORI officials stated that these baseline data will be compared against future similar data collection efforts in an attempt to see whether PCORI's work is contributing to improved understanding and increased use of CER. PCORI anticipates having preliminary baseline data by the end of 2014. PCORI has identified some limitations and challenges related to their evaluation methods. Specifically, PCORI's evaluation plans rely on survey development, focus groups and interviews, data extraction from PCORI databases, and expert panels. 
In its plans, PCORI identifies potential response bias and self-report bias as limitations to some of those data collection methods. Further, PCORI officials stated that measuring outcomes such as reducing practice variation and changing health care delivery can be challenging, particularly within a 5- or 10-year timeframe. PCORI staff stated that disseminating research results in an effort to improve health care is a long-term challenge, as past research suggests that it takes more than 17 years for research evidence to affect clinical practice settings. PCORI staff stated that they hope the inclusion of end users in their research process--such as patients and clinicians--will expedite the uptake of PCORI's research findings, but it is too soon to tell if this will be the case. Finally, officials stated that it will be difficult to know for certain whether any measured changes in health care delivery and practice are attributable to PCORI-funded research or due to other efforts. Therefore, officials stated that they will have to rely on the measures identified in PCORI's strategic plan for a small subset of PCORI's studies, to determine the extent to which these studies may influence reductions in practice variation or other changes in health care delivery. PCORI is also undertaking efforts to assess the extent to which its funded research addresses CER priority topics identified by the IOM in its 2009 PCORI issued a request report and AHRQ's Future Research Needs.for proposal to evaluate whether the research topics they funded address the CER priority topics identified by IOM and AHRQ. Contractors were selected and began conducting this work in June 2014, and it is anticipated that the work will continue through 2015, with the possibility of it extending into 2016. PCORI plans to use the results from this evaluation to inform additional funding announcements in the future. 
PCORI officials stated that preliminary analysis has shown that about half of the research studies PCORI has funded to date directly related to a CER priority topic identified in IOM's 2009 report. PCORI has conducted a preliminary effort to compare the extent to which mental health research PCORI has funded aligns with mental health topics in the IOM report, as well as some additional analyses to show the extent to which PCORI-funded research aligns with IOM's 100 CER priorities, according to officials. For example, an analysis in September 2013 showed that of PCORI's 55 projects at that time, 6 were closely related, 23 were somewhat related, and 26 were unrelated to the IOM's 100 CER priorities. PCORI officials noted that the IOM's CER priorities were developed in 2009 and that, given the amount of time that has passed, it is likely that some of the topics listed in the IOM report are no longer critical, while other CER topics have increased in importance. PCORI officials stated that, as a result, the IOM's CER priorities alone are not a good indicator of PCORI's progress related to CER. We provided a draft of this report to PCORI for review and comment. PCORI provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Executive Director of PCORI and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix I. In addition to the contact named above, Will Simerl, Assistant Director; Jennie Apter; LaSherri Bush; Ashley Dixon; Colbie Holderness; Andrea E. Richardson; and Jennifer Whitworth made key contributions to this report.
|
In 2010, PPACA authorized the establishment of PCORI as a federally funded, nonprofit corporation to improve the quality and relevance of CER. PCORI, which began operation in 2010, is required to identify research priorities, establish a research project agenda, fund research consistent with its research agenda, and disseminate research results, among other things. To fund PCORI, PPACA established the Patient-Centered Outcomes Research Trust Fund, through which the institute is expected to receive an estimated $3.5 billion from fiscal years 2010 through 2019. PPACA mandated that GAO review PCORI's activities by 2015 and 2018. This report examines (1) the extent to which PCORI established priorities and processes for funding and disseminating comparative clinical effectiveness research consistent with its legislative requirements; (2) the status of PCORI's efforts to fund comparative clinical effectiveness research; and (3) PCORI's plans, if any, to evaluate the effectiveness of its work. GAO reviewed relevant legislative requirements and PCORI documentation, including funding data, and interviewed PCORI officials. GAO also interviewed relevant stakeholders, including health policy experts and PCORI contractors. PCORI provided technical comments, which GAO incorporated as appropriate. The Patient-Centered Outcomes Research Institute (PCORI) has established priorities and processes for funding comparative clinical effectiveness research (CER)--which is research that evaluates and compares health outcomes and the clinical effectiveness, risks, and benefits of two or more medical treatments, services, or items such as health care interventions--and is developing dissemination plans, consistent with the legislative requirements of the Patient Protection and Affordable Care Act (PPACA). 
In 2012, PCORI established five broad research priorities: (1) assessment of prevention, diagnosis, and treatment options; (2) improving health care systems; (3) researching communication and dissemination strategies; (4) comparing interventions to reduce health disparities; and (5) accelerating patient-centered outcomes research and methodological research. PCORI also developed a research agenda to identify how each priority would be addressed. PCORI has established a multi-step research funding process designed to assess and select contract applications for funding. Funded contracts are monitored by PCORI staff. Per legislative requirement, PCORI is developing a peer review assessment process to review final reports submitted by contract awardees and is in the process of developing a plan for the dissemination of funded research potentially beginning in 2015, in coordination with the Agency for Healthcare Research and Quality. PCORI has started awarding contracts for research and plans to award additional contracts through 2019. As of October 2014, PCORI has awarded 360 contracts to fund research projects, committing a total of $670.8 million to them. PCORI expects to commit about $2.6 billion to research contracts, out of $3.5 billion in total estimated spending. Approximately $106 million in commitments to date are for PCORnet, a data research network aimed at improving the capacity for and speed of conducting CER. PCORI officials stated that they expect to spend a total of $271 million on PCORnet through fiscal year 2019. PCORI officials stated that limited amounts of data will be available through PCORnet for researchers to use after September 2015 with the amount of available data increasing over time. PCORI has established an evaluation plan and is developing efforts to measure outcomes. 
PCORI has developed initial plans for evaluating the institute's efforts against its three strategic goals, which are to increase information, speed implementation, and influence research. To do so, PCORI has developed primary outcome measures for assessing PCORI's progress related to these strategic goals. In its strategic plan, PCORI notes that these are meant to be long-term measures because research typically requires several years to complete and additional years for the results to be disseminated and implemented. Therefore, since 2013, PCORI has been using early and intermediate process and output measures--such as the number of people accessing or referencing PCORI information--as a way to monitor its progress toward its strategic goals. PCORI anticipates having some early results related to its primary outcome measures starting in 2017 after the first CER studies are completed and their findings released, although full evaluation of the results of these outcome measures will not be possible until around 2020.
| 6,878 | 892 |
The United States has approximately 360 commercial sea and river ports that handle more than $1.3 trillion in cargo annually. A wide variety of goods travels through these ports each day--including automobiles, grain, and millions of cargo containers. While no two ports are exactly alike, many share certain characteristics such as their size, proximity to a metropolitan area, the volume of cargo they process, and connections to complex transportation networks. These characteristics can make them vulnerable to physical security threats. Moreover, entities within the maritime port environment are vulnerable to cyber-based threats because they rely on various types of information and communications technologies to manage the movement of cargo throughout the ports. These technologies include * terminal operating systems, which are information systems used to, among other things, control container movements and storage; * industrial control systems, which facilitate the movement of goods using conveyor belts or pipelines to structures such as refineries, processing plants, and storage tanks; * business operations systems, such as e-mail and file servers, enterprise resource planning systems, networking equipment, phones, and fax machines, which support the business operations of the terminal; and * access control and monitoring systems, such as camera surveillance systems and electronically enabled physical access control devices, which support a port's physical security and protect sensitive areas. All of these systems are potentially vulnerable to cyber-based attacks and other threats, which could disrupt operations at a port. While port owners and operators are responsible for the cybersecurity of their operations, federal agencies have specific roles and responsibilities for supporting these efforts. 
The National Infrastructure Protection Plan (NIPP) establishes a risk management framework to address the risks posed by cyber, human, and physical elements of critical infrastructure. It details the roles and responsibilities of DHS in protecting the nation's critical infrastructures; identifies agencies that have lead responsibility for coordinating with federally designated critical infrastructure sectors (maritime is a component of one of these sectors--the transportation sector); and specifies how other federal, state, regional, local, tribal, territorial, and private-sector stakeholders should use risk management principles to prioritize protection activities within and across sectors. The NIPP establishes a framework for operating and sharing information across and between federal and nonfederal stakeholders within each sector. These coordination activities are carried out through sector coordinating councils and government coordinating councils. Further, under the NIPP, each critical infrastructure sector is to develop a sector-specific plan that details the application of the NIPP risk management framework to the sector. As the sector-specific agency for the maritime mode of the transportation sector, the Coast Guard is to coordinate protective programs and resilience strategies for the maritime environment. In addition, Executive Order 13636, issued in February 2013, calls for various actions to improve the cybersecurity of critical infrastructure. These include developing a cybersecurity framework; increasing the volume, timeliness, and quality of cyber threat information shared with the U.S. private sector; considering prioritized actions within each sector to promote cybersecurity; and identifying critical infrastructure for which a cyber incident could have a catastrophic impact. 
More recently, the Cybersecurity Enhancement Act of 2014 further refined public-private collaboration on critical infrastructure cybersecurity by authorizing the National Institute of Standards and Technology to facilitate and support the development of a voluntary set of standards, guidelines, methodologies, and procedures to cost-effectively reduce cyber risks to critical infrastructure. In addition to these cyber-related policies and law, there are laws and regulations governing maritime security. One of the primary laws is the Maritime Transportation Security Act of 2002 (MTSA), which, along with its implementing regulations developed by the Coast Guard, requires a wide range of security improvements for the nation's ports, waterways, and coastal areas. DHS is the lead agency for implementing the act's provisions, and DHS component agencies, including the Coast Guard and the Federal Emergency Management Agency (FEMA), have specific responsibilities for implementing the act. To carry out its responsibilities for the security of geographic areas around ports, the Coast Guard has designated a captain of the port within each of 43 geographically defined port areas. The captain of the port is responsible for overseeing the development of the security plans within each of these port areas. In addition, maritime security committees, made up of key stakeholders, are to identify critical port infrastructure and risks to the port areas, develop mitigation strategies for these risks, and communicate appropriate security information to port stakeholders. As part of their duties, these committees are to assist the Coast Guard in developing port area maritime security plans. 
The Coast Guard is to develop a risk-based security assessment during the development of the port area maritime security plans that considers, among other things, radio and telecommunications systems, including computer systems and networks that may, if damaged, pose a risk to people, infrastructure, or operations within the port. In addition, under MTSA, owners and operators of individual port facilities are required to develop facility security plans to prepare certain maritime facilities, such as container terminals and chemical processing plants, for deterring a transportation security incident. The implementing regulations for these facility security plans require written security assessment reports to be included with the plans that, among other things, contain an analysis that considers measures to protect radio and telecommunications equipment, including computer systems and networks. MTSA also codified the Port Security Grant Program, which is to help defray the costs of implementing security measures at domestic ports. Port areas use funding from this program to improve port-wide risk management, enhance maritime domain awareness, and improve port recovery and resilience efforts through developing security plans, purchasing security equipment, and providing security training to employees. FEMA is responsible for administering this program with input from Coast Guard subject matter experts. Like threats affecting other critical infrastructures, threats to the maritime IT infrastructure are evolving and growing and can come from a wide array of sources. Risks to cyber-based assets can originate from unintentional or intentional threats. Unintentional threats can be caused by, among other things, natural disasters, defective computer or network equipment, software coding errors, and careless or poorly trained employees. 
Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled insiders, foreign nations engaged in espionage and information warfare, and terrorists. These adversaries vary in terms of their capabilities, willingness to act, and motives, which can include seeking monetary gain or pursuing a political, economic, or military advantage. For example, adversaries possessing sophisticated levels of expertise and significant resources to pursue their objectives--sometimes referred to as "advanced persistent threats"--pose increasing risks. They make use of various techniques-- or exploits--that may adversely affect federal information, computers, software, networks, and operations, such as a denial of service, which prevents or impairs the authorized use of networks, systems, or applications. Reported incidents highlight the impact that cyber attacks could have on the maritime environment, and researchers have identified security vulnerabilities in systems aboard cargo vessels, such as global positioning systems and systems for viewing digital nautical charts, as well as on servers running on systems at various ports. In some cases, these vulnerabilities have reportedly allowed hackers to target ships and terminal systems. Such attacks can send ships off course or redirect shipping containers from their intended destinations. For example, according to Europol's European Cybercrime Center, a cyber incident was reported in 2013 (and corroborated by the FBI) in which malicious software was installed on a computer at a foreign port. The reported goal of the attack was to track the movement of shipping containers for smuggling purposes. A criminal group used hackers to break into the terminal operating system to gain access to security and location information that was leveraged to remove the containers from the port. 
In June 2014 we reported that DHS and the other stakeholders had taken limited steps with respect to maritime cybersecurity. In particular, risk assessments for the maritime mode did not address cyber-related risks; maritime-related security plans contained limited consideration of cybersecurity; information-sharing mechanisms shared cybersecurity information to varying degrees; and the guidance for the Port Security Grant Program did not take certain steps to ensure that cyber risks were addressed. In its 2012 National Maritime Strategic Risk Assessment, which was the most recent available at the time of our 2014 review, the Coast Guard did not address cyber-related risks to the maritime mode. As called for by the NIPP, the Coast Guard completes this assessment on a biennial basis, and it is to provide a description of the types of threats the Coast Guard expects to encounter within its areas of responsibility, such as ensuring the security of port facilities, over the next 5 to 8 years. The assessment is to be informed by numerous inputs, such as historical incident and performance data, the views of subject matter experts, and risk models, including the Maritime Security Risk Analysis Model, which is a tool that assesses risk in terms of threat, vulnerability, and consequences. However, we found that while the 2012 assessment contained information regarding threats, vulnerabilities, and the mitigation of potential risks in the maritime environment, none of the information addressed cyber-related risks or provided a thorough assessment of cyber-related threats, vulnerabilities, and potential consequences. Coast Guard officials attributed this gap to limited efforts to develop inputs related to cyber threats to inform the risk assessment. For example, the Maritime Security Risk Analysis Model did not contain information related to cyber threats. 
The officials noted that they planned to address this deficiency in the next iteration of the assessment, which was to be completed by September 2014, but did not provide details on how cybersecurity would be specifically addressed. We therefore recommended that DHS direct the Coast Guard to ensure that the next iteration of the maritime risk assessment include cyber-related threats, vulnerabilities, and potential consequences. DHS concurred with our recommendation, and the September 2014 version of the National Maritime Strategic Risk Assessment identifies cyber attacks as a threat vector for the maritime environment and assigns some impact values to these threats. However, the assessment does not identify vulnerabilities of cyber-related assets. Without fully addressing threats, vulnerabilities, and consequences of cyber incidents in its assessment, the Coast Guard and its sector partners will continue to be hindered in their ability to appropriately plan and allocate resources for protecting maritime-related critical infrastructure. As we reported in June 2014, maritime security plans required by MTSA did not fully address cyber-related threats, vulnerabilities, and other considerations. Specifically, three area maritime security plans we reviewed from three high-risk port areas contained very limited, if any, information about cyber threats and mitigation activities. For example, the three plans included information about the types of information and communications technology systems that would be used to communicate security information to prevent, manage, and respond to a transportation security incident; the types of information considered to be sensitive security information; and how to securely handle such information. 
They did not, however, identify or address any other potential cyber-related threats directed at or vulnerabilities in these systems or include cybersecurity measures that port-area stakeholders should take to prevent, manage, and respond to cyber-related threats and vulnerabilities. Similarly, nine facility security plans from the nonfederal organizations we met with during our 2014 review generally had very limited cybersecurity information. For example, two of the plans had generic references to potential cyber threats, but did not have any specific information on assets that were potentially vulnerable or associated mitigation strategies. Officials representing the Coast Guard and nonfederal entities acknowledged that their facility security plans at the time generally did not contain cybersecurity information. Coast Guard officials and other stakeholders stated that the area and facility-level security plans did not adequately address cybersecurity because the guidance for developing the plans did not require a cyber component. Officials further stated that guidance for the next iterations of the plans, which were to be developed in 2014, addressed cybersecurity. However, in the absence of a maritime risk environment that addressed cyber risk, we questioned whether the revised plans would appropriately address the cyber-related threats and vulnerabilities affecting the maritime environment. Accordingly, we recommended that DHS direct the Coast Guard to use the results of the next maritime risk assessment to inform guidance for incorporating cybersecurity considerations for port area and facility security plans. While DHS concurred with this recommendation, as noted above, the revised maritime risk assessment does not address vulnerabilities of systems supporting maritime port operations, and thus is limited as a tool for informing maritime cybersecurity planning. 
Further, it is unclear to what extent the updated port area and facility plans include cyber risks because the Coast Guard has not yet provided us with updated plans. Consistent with the public-private partnership model outlined in the NIPP, the Coast Guard helped establish various collaborative bodies for sharing security-related information in the maritime environment. For example, the Maritime Modal Government Coordinating Council was established to enable interagency coordination on maritime security issues, and members included representatives from DHS, as well as the Departments of Commerce, Defense, Justice, and Transportation. Meetings of this council discussed implications for the maritime mode of the President's executive order on improving critical infrastructure cybersecurity, among other topics. In addition, the Maritime Modal Sector Coordinating Council, consisting of owners, operators, and associations from within the sector, was established in 2007 to enable coordination and information sharing. However, this council disbanded in March 2011 and was no longer active when we conducted our 2014 review. Coast Guard officials stated that maritime stakeholders had viewed the sector coordinating council as duplicative of other bodies, such as area maritime security committees, and thus there was little interest in reconstituting the council. In our June 2014 report, we noted that in the absence of a sector coordinating council, the maritime mode lacked a body to facilitate national-level information sharing and coordination of security-related information. By contrast, maritime security committees are focused on specific geographic areas. We therefore recommended that DHS direct the Coast Guard to work with maritime stakeholders to determine if the sector coordinating council should be reestablished. DHS concurred with this recommendation, but has yet to take action on this. 
The absence of a national-level sector coordinating council increases the risk that critical infrastructure owners and operators will be unable to effectively share information concerning cyber threats and strategies to mitigate risks arising from them. In 2013 and 2014, FEMA identified enhancing cybersecurity capabilities as a funding priority for its Port Security Grant Program and provided guidance to grant applicants regarding the types of cybersecurity-related proposals eligible for funding. However, in our June 2014 report we noted that the agency's national review panel had not consulted with cybersecurity-related subject matter experts to inform its review of cyber-related grant proposals. This was partly because FEMA had downsized the expert panel that reviewed grants. In addition, because the Coast Guard's maritime risk assessment did not include cyber-related threats, grant applicants and reviewers were not able to use the results of such an assessment to inform grant proposals, project review, and risk-based funding decisions. Accordingly, we recommended that DHS direct FEMA to (1) develop procedures for grant proposal reviewers, at both the national and field level, to consult with cybersecurity subject matter experts from the Coast Guard when making funding decisions and (2) use information on cyber-related threats, vulnerabilities, and consequences identified in the revised maritime risk assessment to inform funding guidance for grant applicants and reviewers. Regarding the first recommendation, FEMA officials told us that since our 2014 review, they have consulted with the Coast Guard's Cyber Command on high-dollar-value cyber projects and that Cyber Command officials sat on the review panel for one day to review several other cyber projects. 
FEMA officials also provided examples of recent field review guidance sent to the captains of the port, including instructions to contact Coast Guard officials if they have any questions about the review process. However, FEMA did not provide written procedures at either the national level or the port area level for ensuring that grant reviews are informed by the appropriate level of cybersecurity expertise. FEMA officials stated the fiscal year 2016 Port Security Grant Program guidance will include specific instructions for both the field review and national review as part of the cyber project review. With respect to the second recommendation, since the Coast Guard's 2014 maritime risk assessment does not include information about cyber vulnerabilities, as discussed above, the risk assessment would be of limited value to FEMA in informing its guidance for grant applicants and reviewers. As a result, we continue to be concerned that port security grants may not be allocated to projects that will best contribute to the cybersecurity of the maritime environment. In summary, protecting the nation's ports from cyber-based threats is of increasing importance, not only because of the prevalence of such threats, but because of the ports' role as conduits of over a trillion dollars in cargo each year. Ports provide a tempting target for criminals seeking monetary gain, and successful attacks could potentially wreak havoc on the national economy. The increasing dependence of port activities on computerized information and communications systems makes them vulnerable to many of the same threats facing other cyber-reliant critical infrastructures, and federal agencies play a key role by working with port facility owners and operators to secure the maritime environment. 
While DHS, through the Coast Guard and FEMA, has taken steps to address cyber threats in this environment, they have been limited and more remains to be done to ensure that federal and nonfederal stakeholders are working together effectively to mitigate cyber-based threats to the ports. Until DHS fully implements our recommendations, the nation's maritime ports will remain susceptible to cyber risks. Chairman Miller, Ranking Member Vela, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions you may have at this time. If you or your staff have any questions about this testimony, please contact Gregory C. Wilshusen, Director, Information Security Issues at (202) 512-6244 or [email protected]. GAO staff who made key contributions to this testimony are Michael W. Gilmore, Assistant Director; Bradley W. Becker; Jennifer L. Bryant; Kush K. Malhotra; and Lee McCracken. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation's maritime ports handle more than $1.3 trillion in cargo each year: a disruption at one of these ports could have a significant economic impact. Increasingly, port operations rely on computerized information and communications technologies, which can be vulnerable to cyber-based attacks. Federal entities, including DHS's Coast Guard and FEMA, have responsibilities for protecting ports against cyber-related threats. GAO has designated the protection of federal information systems as a government-wide high-risk area since 1997, and in 2003 expanded this to include systems supporting the nation's critical infrastructure. This statement addresses (1) cyber-related threats facing the maritime port environment and (2) steps DHS has taken to address cybersecurity in that environment. In preparing this statement, GAO relied on work supporting its June 2014 report on cybersecurity at ports. (GAO-14-459) Similar to other critical infrastructures, the nation's ports face an evolving array of cyber-based threats. These can come from insiders, criminals, terrorists, or other hostile sources and may employ a variety of techniques or exploits, such as denial-of-service attacks and malicious software. By exploiting vulnerabilities in information and communications technologies supporting port operations, cyber-attacks can potentially disrupt the flow of commerce, endanger public safety, and facilitate the theft of valuable cargo. In its June 2014 report, GAO determined that the Department of Homeland Security (DHS) and other stakeholders had taken limited steps to address cybersecurity in the maritime environment. Specifically: DHS's Coast Guard had not included cyber-related risks in its biennial assessment of risks to the maritime environment, as called for by federal policy. Specifically, the inputs into the 2012 risk assessment did not include cyber-related threats and vulnerabilities. 
Officials stated that they planned to address this gap in the 2014 revision of the assessment. However, when GAO recently reviewed the updated risk assessment, it noted that the assessment did not identify vulnerabilities of cyber-related assets, although it identified some cyber threats and their potential impacts. The Coast Guard also did not address cyber-related risks in its guidance for developing port area and port facility security plans. As a result, port and facility security plans that GAO reviewed generally did not include cyber threats or vulnerabilities. While Coast Guard officials noted that they planned to update the security plan guidance to include cyber-related elements, without a comprehensive risk assessment for the maritime environment, the plans may not address all relevant cyber threats and vulnerabilities. The Coast Guard had helped to establish information-sharing mechanisms called for by federal policy, including a sector coordinating council, made up of private-sector stakeholders, and a government coordinating council, with representation from relevant federal agencies. However, these bodies shared cybersecurity-related information to a limited extent, and the sector coordinating council was disbanded in 2011. Thus, maritime stakeholders lacked a national-level forum for information sharing and coordination. DHS's Federal Emergency Management Agency (FEMA) identified enhancing cybersecurity capabilities as a priority for its port security grant program, which is to defray the costs of implementing security measures. However, FEMA's grant review process was not informed by Coast Guard cybersecurity subject matter expertise or a comprehensive assessment of cyber-related risks for the port environment. Consequently, there was an increased risk that grants were not allocated to projects that would most effectively enhance security at the nation's ports. 
GAO concluded that until DHS and other stakeholders take additional steps to address cybersecurity in the maritime environment--particularly by conducting a comprehensive risk assessment that includes cyber threats, vulnerabilities, and potential impacts--their efforts to help secure the maritime environment may be hindered. This in turn could increase the risk of a cyber-based disruption with potentially serious consequences. In its June 2014 report on port cybersecurity, GAO recommended that the Coast Guard include cyber-risks in its updated risk assessment for the maritime environment, address cyber-risks in its guidance for port security plans, and consider reestablishing the sector coordinating council. GAO also recommended that FEMA ensure funding decisions for its port security grant program are informed by subject matter expertise and a comprehensive risk assessment. DHS has partially addressed two of these recommendations since GAO's report was issued.
Collecting information is one way that federal agencies carry out their missions. For example, IRS needs to collect information from taxpayers and their employers to know the correct amount of taxes owed. The U.S. Census Bureau collects information used to apportion congressional representation and for many other purposes. When new circumstances or needs arise, agencies may need to collect new information. We recognize, therefore, that a large portion of federal paperwork is necessary and often serves a useful purpose. Nonetheless, besides ensuring that information collections have public benefit and utility, federal agencies are required by the PRA to minimize the paperwork burden that they impose. Among the act's provisions aimed at this purpose are detailed requirements, included in the 1995 amendments to the PRA, spelling out how agencies are to review information collections before submitting them to OMB for approval. According to these amendments, an agency official independent of those responsible for the information collections (that is, the program offices) is to evaluate whether information collections should be approved. This official is the agency's CIO, who is to review each collection of information to certify that the collection meets 10 standards (see table 1) and to provide support for these certifications. In addition, the original PRA of 1980 (section 3514(a)) requires OMB to keep Congress "fully and currently informed" of the major activities under the act and to submit a report to Congress at least annually on those activities. Under the 1995 amendments, this report must include, among other things, a list of any increases in burden. To satisfy this requirement, OMB prepares the annual PRA report, which reports on agency actions during the previous fiscal year, including changes in agencies' burden-hour estimates. 
In addition, the 1995 PRA amendments required OMB to set specific goals for reducing burden from the level it had reached in 1995: at least a 10 percent reduction in the governmentwide burden-hour estimate for each of fiscal years 1996 and 1997, a 5 percent governmentwide burden reduction goal in each of the next 4 fiscal years, and annual agency goals that reduce burden to the "maximum practicable opportunity." At the end of fiscal year 1995, federal agencies estimated that their information collections imposed about 7 billion burden hours on the public. Thus, for these reduction goals to be met, the burden-hour estimate would have had to decrease by about 35 percent, to about 4.6 billion hours, by September 30, 2001. In fact, on that date, the federal paperwork estimate had increased by about 9 percent, to 7.6 billion burden hours. For the most recent PRA report, the OMB Director sent a bulletin in September 2004 to the heads of executive departments and agencies requesting information to be used in preparing its report on actions during fiscal year 2004. In May 2005, OMB published this report, which shows changes in agencies' burden-hour estimates during fiscal year 2004. According to OMB's most recent PRA report to Congress, the estimated total burden hours imposed by government information collections in fiscal year 2004 was 7.971 billion hours; this is a decrease of 128 million burden hours (1.6 percent) from the previous year's total of about 8.099 billion hours. It is also about a billion hours larger than in 1995 and 3.4 billion larger than the PRA target for the end of fiscal year 2001 (4.6 billion burden hours). The reduction for fiscal year 2004 was a result of several types of changes, which OMB assigns to various categories. OMB classifies all changes--either increases or decreases--in agencies' burden- hour estimates as either "program changes" or "adjustments." 
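The compounding of these statutory targets can be checked with a short calculation. The following is a minimal sketch using the rounded figures cited above; the variable names are illustrative, not drawn from any OMB or GAO system:

```python
# Sketch of the PRA burden-reduction arithmetic described above.
# End of fiscal year 1995 estimate: about 7 billion burden hours.
start = 7.0e9

# Statutory goals: 10 percent reductions in FY 1996 and FY 1997,
# then 5 percent reductions in each of FY 1998 through FY 2001.
target = start
for cut in (0.10, 0.10, 0.05, 0.05, 0.05, 0.05):
    target *= 1 - cut

print(round(target / 1e9, 1))        # 4.6 billion hours -- the FY 2001 target
print(round(1 - target / start, 2))  # 0.34, roughly the 35 percent cut cited

# Actual estimate on September 30, 2001: about 7.6 billion hours,
# an increase of about 9 percent rather than a decrease.
actual = 7.6e9
print(round((actual - start) / start, 2))  # 0.09
```

The compounded goals come to roughly 4.6 billion hours, matching the target figure in the text, while the actual estimate moved about 9 percent in the opposite direction.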
* Program changes are the result of deliberate federal government action (e.g., the addition or deletion of questions on a form) and can occur as a result of new statutory requirements, agency-initiated actions, or the expiration or reinstatement of OMB-approved collections. 
* Adjustments do not result from federal burden-reduction activities but rather are caused by factors such as changes in the population responding to a requirement or agency reestimates of the burden associated with a collection of information. For example, if the economy declines and more people complete applications for food stamps, the resulting increase in the Department of Agriculture's paperwork estimate is considered an adjustment because it is not the result of deliberate federal action. 
Table 2 shows the changes in reported burden totals since the fiscal year 2003 PRA report. As table 2 shows, the change in the "adjustments" category was the largest factor in the decrease for fiscal year 2004. These results are similar to those for fiscal year 2003, in which adjustments of 181.7 million hours led to an overall decrease of 116.3 million hours (1.4 percent) in total burden estimated. The slight decreases that occurred in fiscal years 2004 and 2003 followed several years of increases, as shown in table 3. As table 3 also shows, if adjustments are disregarded, the federal government paperwork burden would have increased by about 28.5 million burden hours in fiscal year 2004 ("total program changes" in table 2). The largest percentage of governmentwide burden can be attributed to the IRS. In fiscal year 2004, IRS accounted for about 78 percent of governmentwide burden: about 6,210 million hours. No other agency's estimate approaches this level: As of September 30, 2004, only five agencies had burden-hour estimates of 100 million hours or more (the Departments of Health and Human Services, Labor, and Transportation; EPA; and the Securities and Exchange Commission). 
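The relationship between these categories in the fiscal year 2004 figures can be sketched the same way. This is an illustrative decomposition only, using the totals cited above in millions of hours:

```python
# How OMB's FY 2004 net burden change decomposes into program changes
# and adjustments, using the figures cited above (millions of hours).
fy2003_total = 8099.0    # about 8.099 billion hours at the end of FY 2003
net_change = -128.0      # reported net decrease for FY 2004
program_changes = 28.5   # net increase from deliberate federal action

# Adjustments account for whatever the program changes do not.
adjustments = net_change - program_changes
print(adjustments)                                # -156.5

# Net percentage change against the FY 2003 total.
print(round(net_change / fy2003_total * 100, 1))  # -1.6
```

That is, adjustments of roughly -156.5 million hours more than offset the 28.5 million-hour increase from program changes, producing the 1.6 percent net decrease reported by OMB.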
Thus, as we have previously reported, changes in paperwork burden experienced by the federal government have been largely attributable to changes associated with IRS. However, in interpreting these figures, it is important to keep in mind their limitations. First, as estimates, they are not precise; changes from year to year, particularly small ones, may not be meaningful. Second, burden-hour estimates are not a simple matter. The "burden hour" has been the principal unit of paperwork burden for more than 50 years and has been accepted by agencies and the public because it is a clear, easy-to-understand concept. However, it is challenging to estimate the amount of time it will take for a respondent to collect and provide the information or how many individuals an information collection will affect. (IRS is sufficiently concerned about the methodology it uses to develop burden estimates that it is in the process of developing and testing alternative means of measuring paperwork burden.) Because of these limitations, the degree to which agency burden-hour estimates reflect real burden is unclear, and so the significance of small changes in these estimates is also uncertain. Nonetheless, these estimates are the best indicators of paperwork burden available, and they can be useful as long as the limitations are clearly understood. Among the PRA provisions intended to help achieve the goals of minimizing burden while maximizing utility are the requirements for CIO review and certification of information collections. The 1995 amendments required agencies to establish centralized processes for reviewing proposed information collections within the CIO's office. Among other things, the CIO's office is to certify, for each collection, that the 10 standards in the act have been met, and the CIO is to provide a record supporting these certifications. 
The four agencies in our review all had written directives that implemented the review requirements in the act, including the requirement for CIOs to certify that the 10 standards in the act were met. The estimated certification rate ranged from 100 percent at IRS and HUD to 92 percent at VA. Governmentwide, agencies certified that the act's 10 standards had been met on an estimated 98 percent of the 8,211 collections. However, in the 12 case studies that we reviewed, this CIO certification occurred despite a lack of rigorous support that all standards were met. Specifically, the support for certification was missing or partial on 65 percent (66 of 101) of the certifications. Table 4 shows the result of our analysis of the case studies. For example, under the act, CIOs are required to certify that each information collection is not unnecessarily duplicative. According to OMB instructions, agencies are to (1) describe efforts to identify duplication and (2) show specifically why any similar information already available cannot be used or modified for the purpose described. The support in the case files, however, often consisted of statements such as the following:
"Program reviews were conducted to identify potential areas of duplication; however, none were found to exist. There is no known Department or Agency which maintains the necessary information, nor is it available from other sources within our Department."
In another case, the file stated that the service in question "is a new, nationwide service that does not duplicate any single existing service that attempts to match employers with providers who refer job candidates with disabilities. While similar job-referral services exist at the state level, and some nation-wide disability organizations offer similar services to people with certain disabilities, we are not aware of any existing survey that would duplicate the scope or content of the proposed data collection. Furthermore, because this information collection involves only providers and employers interested in participating in the EARN service, and because this is a new service, a duplicate data set does not exist."
While this example shows that the agency attempted to identify duplicative sources, it does not discuss why information from state and other disability organizations could not be aggregated and used, at least in part, to satisfy the needs of this collection. A third file stated simply: "We have attempted to eliminate duplication within the agency wherever possible." This assertion provides no information on what efforts were made to identify duplication or perspective on why similar information, if any, could not be used. Further, the files contained no evidence that the CIO reviewers challenged the adequacy of this support or provided support of their own to justify their certification. A second example is provided by the standard requiring each information collection to reduce burden on the public, including small entities, to the extent practicable and appropriate. OMB guidance emphasizes that agencies are to demonstrate that they have taken every reasonable step to ensure that the collection of information is the least burdensome necessary for the proper performance of agency functions. In addition, OMB instructions and guidance direct agencies to provide specific information and justifications: (1) estimates of the hour and cost burden of the collections and (2) justifications for any collection that requires respondents to report more often than quarterly, respond in fewer than 30 days, or provide more than an original and two copies of documentation. With regard to small entities, OMB guidance states that the standard emphasizes such entities because these often have limited resources to comply with information collections.
The act cites various techniques for reducing burden on these small entities, and the guidance includes techniques that might be used to simplify requirements for small entities, such as asking fewer questions, taking smaller samples than for larger entities, and requiring small entities to provide information less frequently. Our review of the case examples found that for the first part of the certification, which focuses on reducing burden on the public, the files generally contained the specific information and justifications called for in the guidance. However, none of the case examples contained support that addressed how the agency ensured that the collection was the least burdensome necessary. According to agency CIO officials, the primary cause for this absence of support is that OMB instructions and guidance do not direct agencies to provide this information explicitly as part of the approval package. For the part of the certification that focuses on small businesses, our governmentwide sample included examples of various agency activities that are consistent with this standard. For instance, Labor officials exempted 6 million small businesses from filing an annual report; telephoned small businesses and other small entities to assist them in completing a questionnaire; reduced the number of small businesses surveyed; and scheduled fewer compliance evaluations on small contractors. For four of our case studies, however, complete information that would support certification of this part of the standard was not available. Seven of the 12 case studies involved collections that were reported to impact businesses or other for-profit entities, but for 4 of the 7, the files did not explain either
* why small businesses were not affected or
* if such businesses were affected, why burden could or could not be reduced.
Referring to methods used to minimize burden on small business, the files included statements such as "not applicable."
These statements do not inform the reviewer whether any effort was made to reduce burden on small entities. When we asked agencies about these four cases, they indicated that the collections did, in fact, affect small business. OMB's instructions to agencies on this part of the certification require agencies to describe any methods used to reduce burden only if the collection of information has a "significant economic impact on a substantial number of small entities." This does not appropriately reflect the act's requirements concerning small business: the act requires that the CIO certify that the information collection reduces burden on small entities in general, to the extent practical and appropriate, and it provides no thresholds for the level of economic impact or the number of small entities affected. OMB officials acknowledged that their instruction is an "artifact" from a previous form and more properly focuses on rulemaking rather than the information collection process. The lack of support for these certifications appears to be influenced by a variety of factors. In some cases, as described above, OMB guidance and instructions are not comprehensive or entirely accurate. In the case of the duplication standard specifically, IRS officials said that the agency does not need to further justify that its collections are not duplicative because (1) tax data are not collected by other agencies, so there is no need for the agency to contact them about proposed collections, and (2) IRS has an effective internal process for coordinating proposed forms among the agency's various organizations that may have similar information. Nonetheless, the law and instructions require support for these assertions, which was not provided. In addition, agency reviewers told us that management assigns a relatively low priority and few resources to reviewing information collections.
Further, program offices have little knowledge of and appreciation for the requirements of the PRA. As a result of these conditions and a lack of detailed program knowledge, reviewers often have insufficient leverage with program offices to encourage them to improve their justifications. When support for the PRA certifications is missing or inadequate, OMB, the agency, and the public have reduced assurance that the standards in the act, such as those on avoiding duplication and minimizing burden, have been consistently met. IRS and EPA have supplemented the standard PRA review process with additional processes aimed at reducing burden while maximizing utility. These agencies' missions require them both to deal extensively with information collections, and their management has made reduction of burden a priority. In January 2002, the IRS Commissioner established an Office of Taxpayer Burden Reduction, which includes both permanently assigned staff and staff temporarily detailed from program offices that are responsible for particular information collections. This office chooses a few forms each year that are judged to have the greatest potential for burden reduction (these forms have already been reviewed and approved through the CIO process). The office evaluates and prioritizes burden reduction initiatives by
* determining the number of taxpayers impacted;
* quantifying the total time and out-of-pocket savings for taxpayers;
* evaluating any adverse impact on IRS's voluntary compliance;
* assessing the feasibility of the initiative, given IRS resources; and
* tying the initiative into IRS objectives.
Once the forms are chosen, the office performs highly detailed, in-depth analyses, including extensive outreach to the public affected, the users of the information within and outside the agency, and other stakeholders. This analysis includes an examination of the need for each data element requested. In addition, the office thoroughly reviews form design.
The office's Director heads a Taxpayer Burden Reduction Council, which serves as a forum for achieving taxpayer burden reduction throughout IRS. IRS reports that as many as 100 staff across IRS and other agencies can be involved in burden reduction initiatives, including other federal agencies, state agencies, tax practitioner groups, taxpayer advocacy panels, and groups representing the small business community. The council directs its efforts in five major areas:
* simplifying forms and publications;
* streamlining internal policies, processes, and procedures;
* promoting consideration of burden reductions in rulings, regulations, and laws;
* assisting in the development of burden reduction measurement; and
* partnering with internal and external stakeholders to identify areas of potential burden reduction.
IRS reports that this targeted, resource-intensive process has achieved significant reductions in burden: over 200 million burden hours since 2002. For example, it reports that about 95 million hours of taxpayer burden were reduced through increases in the income-reporting threshold on various IRS schedules. Another burden reduction initiative includes a review of the forms that 15 million taxpayers use to request an extension to the date for filing their tax returns. Similarly, EPA officials stated that they have established processes for reviewing information collections that supplement the standard PRA review process. These processes are highly detailed and evaluative, with a focus on burden reduction, avoiding duplication, and ensuring compliance with PRA. According to EPA officials, the impetus for establishing these processes was the high visibility of the agency's information collections and the recognition, among other things, that the success of EPA's enforcement mission depended on information collections being properly justified and approved: in the words of one official, information collections are the "life blood" of the agency.
According to these officials, the CIO staff are not generally closely involved in burden reduction initiatives, because they do not have sufficient technical program expertise and cannot devote the extensive time required. Instead, these officials said that the CIO staff's focus is on fostering high awareness within the agency of the requirements associated with information collections, educating and training the program office staff on the need to minimize burden and the impact on respondents, providing an agencywide perspective on information collections to help avoid duplication, managing the clearance process for agency information collections, and acting as liaison between program offices and OMB during the clearance process. To help program offices consider PRA requirements such as burden reduction and avoiding duplication as they are developing new information collections or working on reauthorizing existing collections, the CIO staff also developed a handbook to help program staff understand what they need to do to comply with PRA and gain OMB approval. In addition, program offices at EPA have taken on burden reduction initiatives that are highly detailed and lengthy (sometimes lasting years) and that involve extensive consultation with stakeholders (including entities that supply the information, citizens groups, information users and technical experts in the agency and elsewhere, and state and local governments). For example, EPA reports that it amended its regulations to reduce the paperwork burden imposed under the Resource Conservation and Recovery Act. One burden reduction method EPA used was to establish higher thresholds for small businesses to report information required under the act. EPA estimates that the initiative will reduce burden by 350,000 hours and save $22 million annually. Another EPA program office reports that it is proposing a significant reduction in burden for its Toxic Release Inventory program. 
Overall, EPA and IRS reported that they produced significant reductions in burden by making a commitment to this goal and dedicating resources to it. In contrast, for the 12 information collections we examined, the CIO review process resulted in no reduction in burden. Further, the Department of Labor reported that its PRA reviews of 175 proposed collections over nearly 2 years did not reduce burden. Similarly, both IRS and EPA addressed information collections that had undergone CIO review and received OMB approval and nonetheless found significant opportunities to reduce burden. In summary, government agencies often need to collect information to perform their missions. The PRA puts in place mechanisms to focus agency attention on the need to minimize the burden that these information collections impose--while maximizing the public benefit and utility of government information collections--but these mechanisms have not succeeded in achieving the ambitious reduction goals set forth in the 1995 amendments. Achieving real reductions in the paperwork burden is an elusive goal, as years of PRA reports attest. Among the mechanisms to fulfill the PRA's goals is the CIO review required by the act. However, as this process is currently implemented, it has limited effect on the quality of support provided for information collections. CIO reviews appear to be lacking the rigor that the Congress envisioned. Many factors have contributed to these conditions, including lack of management support, weaknesses in OMB guidance, and the CIO staff's lack of specific program expertise. As a result, OMB, federal agencies, and the public have reduced assurance that government information collections are necessary and that they appropriately balance the resulting burden with the benefits of using the information collected. The targeted approaches to burden reduction used by IRS and EPA suggest promising alternatives to the current process outlined in the PRA. 
However, the agencies' experience also suggests that making such an approach successful requires top-level executive commitment, extensive involvement of program office staff with appropriate expertise, and aggressive outreach to stakeholders. Indications are that such an approach would also be more resource-intensive than the current process. Moreover, such an approach may not be warranted at agencies that do not face paperwork issues on the scale of those at IRS and similar agencies. Consequently, it is critical that any efforts to expand the use of the IRS and EPA models consider these factors. In our report, we suggested options that the Congress may want to consider in its deliberations on reauthorizing the act, including mandating pilot projects to test and review alternative approaches to achieving PRA goals. We also made recommendations to the Director of OMB, including that the office alter its current guidance to clarify and emphasize issues raised in our review, and to the heads of the four agencies to improve agency compliance with the act's provisions. Madam Chairman, this completes my prepared statement. I would be pleased to answer any questions. For further information regarding this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6420, or [email protected]. Other individuals who made key contributions to this testimony were Barbara Collier, Alan Stapleton, Warren Smith, and Elizabeth Zhao.
Americans spend billions of hours each year providing information to federal agencies by filling out information collections (forms, surveys, or questionnaires). A major aim of the Paperwork Reduction Act (PRA) is to minimize the burden that these collections impose on the public, while maximizing their public benefit. Under the act, the Office of Management and Budget (OMB) is to approve all such collections and to report annually on the agencies' estimates of the associated burden. In addition, agency Chief Information Officers (CIO) are to review information collections before they are submitted to OMB for approval and certify that the collections meet certain standards set forth in the act. For its testimony, GAO was asked to comment on OMB's burden report for 2004 and to discuss its recent study of PRA implementation (GAO-05-424), concentrating on CIO review and certification processes and describing alternative processes that two agencies have used to minimize burden. For its study, GAO reviewed a governmentwide sample of collections, reviewed processes and collections at four agencies that account for a large proportion of burden, and performed case studies of 12 approved collections. The total paperwork burden imposed by federal information collections shrank slightly in fiscal year 2004, according to estimates provided in OMB's annual PRA report to Congress. The estimated total burden was 7.971 billion hours--a decrease of 1.6 percent (128 million burden hours) from the previous year's total of about 8.099 billion hours. Different types of changes contributed to the overall change in these estimates, according to OMB. For example, adjustments to the estimates (from such factors as changes in estimation methods and estimated number of respondents) accounted for a decrease of about 156 million hours (1.9 percent), and agency burden reduction efforts led to a decrease of about 97 million hours (1.2 percent). 
These decreases were partially offset by increases in other categories, primarily an increase of 119 million hours (1.5 percent) arising from new statutes. However, because of limitations in the accuracy of burden estimates, the significance of small changes in these estimates is unclear. Nonetheless, as the best indicators of paperwork burden available, these estimates can be useful as long as the limitations are clearly understood. Among the PRA provisions aimed at helping to achieve the goals of minimizing burden while maximizing utility is the requirement for CIO review and certification of information collections. GAO's review of 12 case studies showed that CIOs provided these certifications despite often missing or inadequate support from the program offices sponsoring the collections. Further, although the law requires CIOs to provide support for certifications, agency files contained little evidence that CIO reviewers had made efforts to improve the support offered by program offices. Numerous factors have contributed to these problems, including a lack of management support and weaknesses in OMB guidance. Because these reviews were not rigorous, OMB, the agency, and the public had reduced assurance that the standards in the act--such as minimizing burden--were consistently met. In contrast, the Internal Revenue Service (IRS) and the Environmental Protection Agency (EPA) have set up processes outside the CIO review process that are specifically focused on reducing burden. These agencies, whose missions involve numerous information collections, have devoted significant resources to targeted burden reduction efforts that involve extensive outreach to stakeholders. According to the two agencies, these efforts led to significant reductions in burden on the public. In contrast, for the 12 case studies, the CIO review process did not reduce burden. 
In its report, GAO recommended that OMB and the agencies take steps to improve review processes and compliance with the act. GAO also suggested that the Congress may wish to consider mandating pilot projects to target some collections for rigorous analysis along the lines of the IRS and EPA approaches. OMB and the agencies agreed with most of the recommendations, but disagreed with aspects of GAO's characterization of agencies' compliance with the act's requirements.
The Coalition Provisional Authority (CPA), established in May 2003, was the U.N.-recognized coalition authority led by the United States and the United Kingdom that was responsible for the temporary governance of Iraq. In May 2003, the CPA dissolved the military organizations of the former regime and began the process of creating or reestablishing new Iraqi security forces, including the police and new Iraqi army. Over time, multinational force commanders assumed responsibility for recruiting and training some Iraqi defense and police forces in their areas of responsibility. On June 28, 2004, the CPA transferred power to a sovereign Iraqi interim government, the CPA officially dissolved, and Iraq's transitional period began. Under Iraq's transitional law, the transitional period covers the interim government phase and the transitional government period, which is scheduled to end by December 31, 2005. The multinational force (MNF-I) has the authority to take all necessary measures to contribute to security and stability in Iraq during this process, working in partnership with the Iraqi government to reach agreement on security and policy issues. A May 2004 national security presidential directive required the U.S. Central Command (CENTCOM) to direct all U.S. government efforts to organize, equip, and train Iraqi security forces. The Multi-National Security Transition Command-Iraq, which operates under MNF-I, now leads coalition efforts to train, equip, and organize Iraqi security forces. In October 2003, the multinational force outlined a four-phased plan for transferring security missions to Iraqi security forces. 
The four phases were (1) mutual support, where the multinational force establishes conditions for transferring security responsibilities to Iraqi forces; (2) transition to local control, where Iraqi forces in a local area assume responsibility for security; (3) transition to regional control, where Iraqi forces are responsible for larger regions; and (4) transition to strategic overwatch, where Iraqi forces on a national level are capable of maintaining a secure environment against internal and external threats, with broad monitoring from the multinational force. The plan's objective was to allow a gradual drawdown of coalition forces, first in conjunction with the neutralization of Iraq's insurgency and second with the development of Iraqi forces capable of securing their country. Citing the growing capability of Iraqi security forces, MNF-I attempted to quickly shift responsibilities to them in February 2004 but did not succeed in this effort. In March 2004, Iraqi security forces numbered about 203,000, including about 76,000 police, 78,000 facilities protection officers, and about 38,000 in the civilian defense corps. Police and military units performed poorly during an escalation of insurgent attacks against the coalition in April 2004. According to a July 2004 executive branch report to Congress, many Iraqi security forces around the country collapsed during this uprising. Some Iraqi forces fought alongside coalition forces. Other units abandoned their posts and responsibilities and in some cases assisted the insurgency. A number of problems contributed to the collapse of Iraqi security forces. MNF-I identified problems in training and equipping them as among the reasons for their poor performance. Training of police and some defense forces was not uniform and varied widely across Iraq.
MNF-I's commanders had the leeway to institute their own versions of the transitional police curriculum, and the training for some defense forces did not prepare them to fight against well-armed insurgents. Further, according to the CPA Director of Police, when Iraqi police voluntarily returned to duty in May 2003, CPA initially provided limited training and did not thoroughly vet the personnel to get them on the streets quickly. Many police who were hired remain untrained and unvetted, according to Department of Defense (DOD) officials. MNF-I completed a campaign plan during summer 2004 that elaborated and refined the original strategy for transferring security responsibilities to Iraqi forces at the local, regional, and then national levels. Further details on this campaign plan are classified. On March 1, 2005, the CENTCOM Commander told the Senate Armed Services Committee that Iraqi security forces were growing in capability but were not yet ready to take on the insurgency without the presence, help, mentoring, and assistance of MNF-I. He cited a mixed performance record for the Iraqi security forces during the previous 11 months. The commander further testified that focused training and mentoring of Iraqi Intervention Forces, Iraqi Special Operations Forces, and National Guard forces contributed to successful coalition operations in places such as Najaf and Kufa during August 2004 and Fallujah during November 2004, and during the January 2005 elections. On the other hand, he also cited instances of poor performance by the police in western Baghdad from August through October 2004 and Mosul during November 2004. U.S. government data does not provide reliable information on the status of Iraqi military and police forces. According to a March 2005 State Department report, as of February 28, 2005, the Iraqi Ministry of Defense had 59,695 operational troops, or roughly two thirds of the total required. 
The Ministry of Interior had 82,072 trained and equipped officers on duty, or almost half of the total required. Table 1 shows the status of Iraqi forces under the Ministries of Defense and Interior. MNF-I's goal is to train and equip a total of about 271,000 Iraqi security forces by July 2006. However, the numbers of security forces, as reported in table 1, are limited in providing accurate and complete information on the status of Iraqi forces. Specifically:
* The reported number of security forces overstates the number actually serving. Ministry of Interior reports, for example, include police who are absent without leave in their totals; Ministry of Defense reports exclude absent military personnel from their totals. According to DOD officials, the number of absentees is probably in the tens of thousands.
* The reported number of Iraqi police is unreliable. According to a senior official from the U.S. embassy in Baghdad, MNF-I does not know how many Iraqi police are on duty at any given point because the Ministry of Interior does not receive consistent and accurate reporting from police stations across Iraq.
* The Departments of Defense and State do not provide additional information on the extent to which trained Iraqi security forces have their necessary equipment. As recently as September 2004, State issued unclassified reports with detailed information on the number of weapons, vehicles, communication equipment, and body armor required by each security force compared to the amount received. State had also provided weekly unclassified updates on the number of personnel trained in each unit.
* In addition, the total number of Iraqi security forces includes forces with varying missions and training levels. Not all units are designed to be capable of fighting the insurgency. For example, the police service, which numbers about 55,000 of Iraq's 141,000 personnel who have received training, has a civilian law enforcement function.
As of mid-December 2004, paramilitary training for a high-threat hostile environment was not part of the curriculum for new recruits. The missions of other units, such as the Ministry of Defense's commando battalion and the Ministry of Interior's Emergency Response Unit, focus on combating terrorism. Required training for both forces includes counterterrorism. Table 2 provides information on the types of military and police units, their missions, and their training. The multinational force's security transition plan depends on neutralizing the insurgent threat and increasing Iraqi security capability. The insurgent threat has increased since June 2003, as insurgent attacks have grown in number, sophistication, and complexity. At the same time, MNF-I and the Iraqi government confront difficulties in building Iraqi security forces that are capable of effectively combating the insurgency. These include programming effective support for a changing force structure, assessing progress in developing capable forces without a system for measuring their readiness, developing leadership and loyalty throughout the Iraqi chain of command, and developing police who abide by the rule of law in a hostile environment. According to senior military officials, the insurgency in Iraq--particularly the Sunni insurgency--has grown in number, complexity, and intensity over the past 18 months. On February 3, 2005, the Chairman of the Joint Chiefs of Staff told the Senate Armed Services Committee that the insurgency in Iraq had built up slowly during the first year, then became very intense from summer 2004 through January 2005. Figure 1 provides Defense Intelligence Agency (DIA) data showing these trends in enemy-initiated attacks against the coalition, its Iraqi partners, and infrastructure. Overall attacks peaked in August 2004 due to a rise in violence in Sunni-dominated regions and an uprising by the Mahdi Army, a Shi'a insurgent group led by radical Shi'a cleric Muqtada al-Sadr.
Although the November 2004 and January 2005 numbers were slightly lower than those for August, it is significant that almost all of the attacks in these 2 months took place in Sunni-majority areas, whereas the August attacks took place countrywide. MNF-I is the primary target of the attacks, but the number of attacks against Iraqi civilians and security forces increased significantly during January 2005. On March 1, 2005, the CENTCOM Commander told the Senate Armed Services Committee that more Iraqi security forces than Americans have died in action against insurgents since June 2004. Insurgents have demonstrated their ability to increase attacks around key events, according to the DIA Director's February 2005 statement before the Senate Select Committee on Intelligence. For example, attacks spiked in April and May 2004, the months before the transfer of power to the Iraqi interim government; in November 2004, due to a rise in violence in Sunni-dominated areas during Ramadan and MNF-I's operation against insurgents in Fallujah; and in January 2005, before the Iraqi elections. The DIA Director testified that attacks on Iraq's election day reached about 300, double the previous 1-day high of about 150 during last year's Ramadan. About 80 percent of all attacks occurred in Sunni-dominated central Iraq, with the Kurdish north and Shi'a south remaining relatively calm. In February and March 2005, the DIA Director and CENTCOM Commander presented their views of the nature of the insurgency to the Senate Select Committee on Intelligence and the Senate Armed Services Committee, respectively. According to these officials, the core of the insurgency consists of Sunni Arabs, dominated by Ba'athist and former regime elements. Shi'a militant groups, such as those associated with the radical Shi'a cleric Muqtada al-Sadr, remain a threat to the political process.
Following the latest round of fighting last August and September, DIA concluded that al-Sadr's forces were re-arming, re-organizing, and training, with al-Sadr keeping his options open to employ his forces. Jihadists have been responsible for many high-profile attacks that have a disproportionate impact, although their activity accounts for only a fraction of the overall violence. Foreign fighters comprise a small component of the insurgency and a very small percentage of all detainees. DIA believes that insurgents' infiltration and subversion of emerging government institutions, security, and intelligence services will be a major problem for the new government. In late October 2004, according to a CENTCOM document, MNF-I estimated the overall size of active enemy forces at about 20,000. The estimate consisted of about 10,000 former regime members; about 3,000 members of al-Sadr's forces; about 1,000 in the al-Zarqawi terrorist network; and about 5,000 criminals, religious extremists, and their supporters. In February and March 2005, the Chairman of the Joint Chiefs of Staff and the CENTCOM Commander told the Senate Armed Services Committee that it is difficult to develop an accurate estimate of the number of insurgents. The CENTCOM commander explained that the number of insurgent fighters, supporters, and sympathizers can rise and fall depending on the politics, problems, and major offensive operations in a given area. He also acknowledged that gaps exist in the intelligence concerning the broader insurgency, particularly in the area of human intelligence. The CENTCOM commander and MNF-I commanding general recently cited Iraq's January 2005 elections as an important step toward Iraqi sovereignty and security but cautioned against possible violence in the future. In March 2005, the MNF-I commanding general stated that the insurgency has sufficient ammunition, weapons, money, and people to maintain about 50 to 60 attacks per day in the Sunni areas. 
The CENTCOM Commander told the Senate Armed Services Committee that the upcoming processes of writing an Iraqi constitution and forming a new government could trigger more violence, as the former regime elements in the insurgency seek a return to power. The MNF-I commanding general stated that a combination of political, military, economic, and communications efforts will ultimately defeat the insurgency. On March 1, 2005, the CENTCOM Commander told the Senate Armed Services Committee that Iraqi security forces are not yet ready to take on the insurgency without the presence, help, mentoring, and assistance of MNF-I. MNF-I has faced four key challenges in helping Iraq develop security forces capable of combating the insurgency or conducting law enforcement duties in a hostile environment. These key challenges are (1) training, equipping, and sustaining a changing force structure; (2) determining progress in developing capable forces without a system for measuring their readiness; (3) developing loyalty and leadership throughout the Iraqi chain of command; and (4) developing police capable of democratic law enforcement in a hostile environment. The Iraqi security force structure has constantly changed in response to the growing insurgency. This makes it difficult to provide effective support--the training, equipping, and sustaining of Iraqi forces. DOD defines force structure as the numbers, size, and composition of units that comprise defense forces. Some changes to the Iraqi force structure have resulted from a Multi-National Security Transition Command-Iraq analysis of needed Iraqi security capabilities, conducted during summer 2004 and reported in October 2004. The Iraqi government has made other changes to forces under the Ministries of Defense and Interior to allow them to better respond to the increased threat. According to a February 2005 DOD budget document, MNF-I and the Iraqi government plan to increase the force structure over the next year. 
According to the October report, a number of enhancements in Iraqi force capabilities and infrastructure were critically needed to meet the current threat environment. Based on this review, the MNF-I Commander decided to increase the size of the Iraqi Police Service from 90,000 to 135,000 personnel; the Iraqi National Guard by 20 battalions to 62 battalions; and the Department of Border Enforcement from 16,000 to 32,000 border officers. The review also supported the creation of the Civil Intervention Force, which consists of nine specialized Public Order Battalions and two Special Police Regiments under the Ministry of Interior. This force is designed to provide a national level, high-end, rapid response capability to counter large-scale civil disobedience and insurgency activities. Over the past year, the Iraqi government has created, merged, and expanded Iraqi security forces under the Ministries of Defense and Interior. For example, according to a DOD official, the Iraqi Army Chief of Staff created the Iraqi Intervention Force in April 2004 in response to the unwillingness of a regular Army battalion to fight Iraqi insurgents in Fallujah. This intervention force will comprise nine battalions and is the counterinsurgency wing of the Iraqi Army. According to Iraq's national security strategy, the Iraqi government decided to increase the Iraqi Army from 100,000 soldiers to 150,000 personnel by the end of this year and extend the time required to complete their training from July 2005 to December 2005. The government planned to form this larger army by including the Iraqi National Guard and accelerating the training and recruitment of new troops. In addition, in late 2004, the Ministry of Interior added the Mechanized Police Brigade, a paramilitary, counterinsurgency unit that will consist of three battalions that will deploy to high-risk areas. It also created the paramilitary, army-type Special Police Commando brigades. 
According to a DOD document supporting the February 2005 supplemental request, the Iraqi government planned to add a number of additional military elements, primarily support units, to the force structure over the next year. These include logistics units at the division level and below, a mechanized division, and a brigade each for signals, military police, engineering, and logistics. MNF-I officials stated that, as of March 2005, MNF-I and the Iraqi government do not yet have a system in place to assess the readiness of Iraq's various security forces to accomplish their assigned missions and tasks. However, in early 2005, the commanding general of the Multi-National Security Transition Command-Iraq said that MNF-I had begun work on a system to assess Iraqi capabilities. MNF-I plans to develop a rating system along the lines of the U.S. military readiness reporting system. According to the commanding general of the Multi-National Security Transition Command-Iraq, this system most likely would have Iraqi brigade commanders evaluating such things as the training readiness of their units, their personnel levels, and their equipping levels. They also would provide a subjective judgment of the units' readiness. The commanding general said that this rating system would take time to implement. It is unclear at this time whether the system under development would provide adequate measures for determining the capability of Iraqi police. Because the police have a civilian law enforcement function rather than a military or paramilitary role in combating the insurgency, MNF-I may have to develop a separate system for determining police readiness. On March 1, 2005, the CENTCOM Commander told the Senate Armed Services Committee that the establishment of an effective Iraqi chain of command is a critical factor in determining when Iraqi security forces will be capable of taking the lead in fighting the insurgency. 
The CENTCOM Commander added that the Iraqi chain of command must be loyal and capable, take orders from the Iraqi head of state through the lawful chain of command, and fight to serve the Iraqi people. MNF-I faces several challenges in helping to develop an effective chain of command, including questionable loyalty among some Iraqi security forces, poor leadership in Iraqi units, and the destabilizing influence of militias outside the control of the Iraqi government. The executive branch reported in July 2004 that some Iraqi security forces had turned to fight with insurgents during the spring uprising. In October 2004, in response to questions we submitted, CENTCOM officials indicated that it is difficult to determine with any certainty the true level of insurgent infiltration within Iraqi security forces. Recent reports indicate that some Iraqi security personnel continue to cooperate with insurgents. For example, a February 2005 report cited instances of insurgent infiltration of Iraqi police forces. Police manning a checkpoint in one area were reporting convoy movements by mobile telephone to local terrorists. Police in another area were infiltrated by former regime elements. In February 2005 press briefings, the Secretary of Defense and the commanding general of the Multi-National Security Transition Command-Iraq cited the leadership of Iraqi security forces as a critical element in developing Iraqi forces capable of combating insurgents. MNF-I officials indicated that they plan to expand the use of military transition teams to support Iraqi units. These teams would help train the units and headquarters and accompany them into combat. On March 1, 2005, the CENTCOM Commander told the Senate Armed Services Committee that there is broad, general agreement that MNF-I must do more to train, advise, mentor, and help Iraqi security forces. 
CENTCOM has requested an additional 1,487 troops to support these efforts and must have the continued support of the new Iraqi government. The continued existence of militias outside the control of Iraq's central government also presents a major challenge to developing an effective chain of command. In late May 2004, the CPA developed a transition and reintegration strategy for disbanding or controlling militias that existed prior to the transfer of power to the Iraqi interim government. Detailed information on the current status of militias in Iraq is classified. However, the CENTCOM Commander acknowledged the continued existence of older militias and the recent creation of new militias. He said that their presence will ultimately be destabilizing unless they are strictly controlled, come under government supervision, and are not allowed to operate independently. MNF-I's efforts to develop a police force that abides by and upholds the rule of law while operating in a hostile environment have been difficult. U.S. police trainers in Jordan told us in mid-December 2004 that Iraqi police were trained and equipped to do community policing in a permissive security environment. Thus, Iraqi police were not prepared to withstand the insurgent attacks that they have faced over the past year and a half. According to the State Department's Country Report on Human Rights Practices for 2004, more than 1,500 Iraqi police were killed between April 2003 and December 2004. To address this weakness, MNF-I and the Iraqi government report taking steps to better prepare some police to operate during an insurgency. In a December 2004 press briefing, the MNF-I Commander stated that MNF-I was moving to add paramilitary-type skills to the police training program to improve some units' ability to operate in a counterinsurgency environment. U.S. police trainers in Jordan told us that the curriculum was being revised to provide police paramilitary capabilities. 
In addition, according to Iraq's national security strategy, the Iraqi government is in the process of upgrading security measures at police stations throughout the country. According to State's 2004 human rights report, police have operated in a hostile environment. Attacks by insurgents and foreign terrorists have resulted in killings, kidnappings, violence, and torture. Bombings, executions, killings of government officials, shootings, and intimidation were a daily occurrence throughout all regions and sectors of society. The report also states that members of the Ministry of Interior's security forces committed numerous, serious human rights abuses. For example, in early December 2004, the Basrah police reported that the Internal Affairs Unit was involved in the killings of 10 members of the Baath Party and the killings of a mother and daughter accused of prostitution. The report further states that, according to Human Rights Watch, torture and ill treatment of detainees by the police was commonplace. Additionally, the report states that corruption continued to be a problem. The Iraq Commission for Public Integrity was investigating cases of police abuse involving unlawful arrests, beatings, and theft of valuables from the homes of persons detained. The multinational force has been working to transfer full security responsibilities for the country to the Iraqi military and police. However, the multinational force and Iraq face the challenges of an intense insurgency, a changing Iraqi force structure, the lack of a system to measure military and police readiness, an Iraqi leadership and chain of command in its infancy, and a police force that finds it difficult to uphold the rule of law in a hostile environment. MNF-I recognizes these challenges and is moving to address them so it can begin to reduce its presence in Iraq and draw down its troops. 
Of particular note is MNF-I's effort to develop a system to assess unit readiness and to embed MNF-I transition teams into units to mentor Iraqis. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions you or the other Subcommittee members may have. For further information, please contact Joseph A. Christoff on (202) 512-8979. Individuals who made key contributions to this testimony were Lynn Cothern, Mattias Fenton, Laura Helm, Judy McCloskey, Tet Miyabara, Michael Rohrback, and Audrey Solis. We provided preliminary observations on (1) the strategy for transferring security responsibilities to Iraqi military and police forces, (2) the data on the status of the forces, and (3) challenges the Multi-National Force in Iraq (MNF-I) faces in transferring security missions to these forces. We conducted our review for this statement during February and March 2005 in accordance with generally accepted government auditing standards. We used only unclassified information for this statement. To examine the strategy for transferring security responsibilities to Iraqi forces, we focused on the 2003 security transition concept plan. We obtained and reviewed the transition plan and related documents and interviewed officials from the Coalition Provisional Authority and the Departments of State and Defense. Our work on this issue is described in a June 2004 GAO report entitled Rebuilding Iraq: Resource, Security, Governance, Essential Services, and Oversight Issues (GAO-04-902R). To update information on the transition concept, we reviewed statements for the record from the U.S. Central Command (CENTCOM) Commander and the MNF-I commanding general on the campaign plan and on the capability and recent performance of Iraqi security forces. These statements focused on Iraqi security forces' ability to perform against the insurgency, as well as the training and mentoring of forces that contributed to successful operations. 
To determine the data on Iraqi security forces, we reviewed unclassified Department of State status reports from June 2004 to March 2005 that provided information about the number of troops under the Ministries of Defense and Interior. We interviewed State and Department of Defense (DOD) officials about the number of Iraqi police on duty and the structure of the Iraqi police forces. To identify the type of training the Iraqi security forces receive, we reviewed and organized data and information from the Multi-National Security Transition Command-Iraq. We also visited the Jordan International Police Training Center in Amman, Jordan, to determine the training security forces receive. This approach allowed us to verify that Iraqi security forces have varying missions and training levels and not all are designed to be capable of fighting the insurgency. To discuss the insurgency in Iraq, we reviewed statements for the record from the Chairman of the Joint Chiefs of Staff, the Director of the Defense Intelligence Agency (DIA), and the CENTCOM Commander on the status of the insurgency. We obtained data and reports from DIA on the number of reported incidents from June 2003 through February 2005. We obtained written responses from CENTCOM on the strength and composition of the insurgency. To address the challenges to increasing the capability of Iraqi security forces, we reviewed statements for the record by the CENTCOM Commander, the MNF-I commanding general, and DOD officials. We also examined the Iraqi National Security Strategy, funding documents from the Office of Management and Budget and State Department, and the fiscal year 2005 Supplemental Request of the President. We obtained and reviewed further breakdowns of briefings on the supplemental request. To identify challenges in developing the Iraqi police force, we interviewed police trainers in Jordan and reviewed the State Department's Country Report on Human Rights Practices for 2004. 
We obtained comments on a draft of this statement from State and DOD, including CENTCOM. All generally agreed with our statement and provided technical comments that we have incorporated as appropriate. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the fall of the former Iraq regime in April 2003, the multinational force has been working to develop Iraqi military and police forces capable of maintaining security. To support this effort, the United States provided about $5.8 billion in 2003-04 to develop Iraq's security capability. In February 2005, the President requested a supplemental appropriation that included an additional $5.7 billion to accelerate the development of Iraqi military and police forces. GAO provides preliminary observations on (1) the strategy for transferring security responsibilities to Iraqi military and police forces; (2) the data on the status of the forces; and (3) challenges that the Multi-National Force in Iraq faces in transferring security missions to these forces. To prepare this statement, GAO used unclassified reports, status updates, security plans, and other documents from the Departments of Defense and State. GAO also used testimonies and other statements for the record from officials such as the Secretary of Defense. In addition, GAO visited the Iraqi police training facility in Jordan. The Multinational Force in Iraq has developed and begun to implement a strategy to transfer security responsibilities to the Iraqi military and police forces. This strategy would allow a gradual drawdown of its forces based on the multinational force neutralizing the insurgency and developing Iraqi military and police services that can independently maintain security. U.S. government agencies do not report reliable data on the extent to which Iraqi security forces are trained and equipped. As of March 2005, the State Department reported that about 82,000 police forces under the Iraqi Ministry of Interior and about 62,000 military forces under the Iraqi Ministry of Defense have been trained and equipped. However, the reported number of Iraqi police is unreliable because the Ministry of Interior does not receive consistent and accurate reporting from the police forces around the country. 
The data do not exclude police absent from duty. Further, the departments of State and Defense no longer report on the extent to which Iraqi security forces are equipped with their required weapons, vehicles, communications equipment, and body armor. The insurgency in Iraq has intensified since June 2003, making it difficult to transfer security responsibilities to Iraqi forces. From that time through January 2005, insurgent attacks grew in number, complexity, and intensity. At the same time, the multinational force has faced four key challenges in increasing the capability of Iraqi forces: (1) training, equipping, and sustaining a changing force structure; (2) developing a system for measuring the readiness and capability of Iraqi forces; (3) building loyalty and leadership throughout the Iraqi chain of command; and (4) developing a police force that upholds the rule of law in a hostile environment. The multinational force is taking steps to address these challenges, such as developing a system to assess unit readiness and embedding U.S. forces within Iraqi units. However, without reliable reporting data, a more capable Iraqi force, and stronger Iraqi leadership, the Department of Defense faces difficulties in implementing its strategy to draw down U.S. forces from Iraq.
RAND and the University of California at Los Angeles (UCLA) School of Medicine developed a process for determining the appropriateness of a specific health care service for a particular patient, known as the RAND/UCLA appropriateness method. An appropriate service was defined as one in which the expected health benefit exceeds the expected negative consequences by a sufficiently wide margin that the procedure is worth doing, exclusive of cost. In selecting services for AUC development, the RAND/UCLA authors outlined the following factors for consideration. A service should be frequently used, be associated with a substantial amount of morbidity and/or mortality, exhibit wide variations among geographic areas in rates of use, or be controversial. Numerous groups--including provider-led entities, such as medical specialty societies, and government or non-profit entities, such as the Centers for Disease Control and Prevention or the American Cancer Society--have developed AUC to assist providers in making the most suitable treatment decision for a particular patient. From October 2011 through September 2013, CMS conducted the Medicare imaging demonstration to estimate the impact applying AUC would have on provider ordering practices and utilization. AUC were programmed into clinical decision support mechanisms (CDSM)-- electronic tools in which providers enter patient characteristics, such as symptoms, diagnoses, prior test results, and demographic information. From the CDSM, providers received a rating on the degree of appropriateness of their imaging order (appropriate, equivocal or uncertain, or inappropriate). If the order could not be linked to any criteria in the CDSM, no rating would be assigned. Instead, the CDSM would notify the provider that the order was not covered by the guidelines. The RAND evaluation of the demonstration found an increase in the percentage of orders that were rated as appropriate over the course of the demonstration. 
However, the authors noted that, due to the large proportion of orders that could not be linked to any criteria, the results do not necessarily indicate an improvement in the rate of appropriate ordering. For this reason, among others, the evaluation of the demonstration recommended against expanding the use of AUC for imaging services to a broader population of Medicare beneficiaries. PAMA stipulated that, as of January 2017, providers ordering imaging services (including primary care and specialty care providers) generally will be required to consult AUC through a qualified CDSM. The results of the AUC consultation generally must be documented on the claim submitted by providers furnishing imaging services (typically radiologists) in order to be paid by Medicare. (See fig. 1.) To fully implement the imaging AUC program, CMS must complete several components over the next 5 years, as outlined in PAMA: Specify applicable AUC by November 15, 2015. Through rulemaking, and in consultation with stakeholders, CMS must specify one or more AUC. The AUC may only be developed or endorsed by national professional medical specialty societies or other provider-led entities, not by CMS. The agency must take into account whether the AUC have stakeholder consensus, are scientifically valid and evidence-based, and are derived from studies that are published and reviewable by stakeholders. CMS must annually review AUC to determine the need for any updates or revisions. Publish a list of qualified CDSMs by April 1, 2016. With stakeholder input, CMS must specify one or more qualified CDSMs that may be used by ordering providers for AUC consultation. Mechanisms may include modules in certified electronic health record technology, other private sector CDSMs, or mechanisms established by CMS. The CDSMs must be able to execute certain functions, such as generating and providing a certification or documentation that the CDSM was used by the ordering provider. 
The list must be updated periodically and include one or more CDSMs per imaging service that are available free of charge. Roll out imaging AUC program by January 1, 2017. Ordering providers generally must provide to the furnishing provider certification that they have consulted with specified AUC using a qualified CDSM. Furnishing providers generally will only be paid if their Medicare claims for imaging services indicate which CDSM was used, whether the order would or would not adhere to any applicable AUC, and the ordering provider identification number. Begin the process for identifying outlier ordering providers on January 1, 2017. For services furnished beginning in 2017, CMS must annually determine up to 5 percent of all ordering providers who are outliers based on their low adherence to appropriate ordering. In making these determinations, CMS must use 2 years of data starting from January 1, 2017 and consult with stakeholders in order to develop methods to identify outlier ordering providers. CMS must also establish a process for determining when a provider's outlier designation should be removed. Implement prior authorization beginning January 1, 2020. Imaging services ordered by an outlier provider will be subject to prior authorization from CMS. In applying prior authorization, the agency may only use specified AUC. In its July 2015 notice of proposed rulemaking, CMS outlined its initial plan and timeframes for implementing the imaging AUC program. While CMS's initial plan touches on each implementation component of the program, it focused largely on the process for specifying applicable AUC and establishing priority clinical areas for future prior authorization policy. Given the need to consider stakeholder comments and that progress on early components affects the timing of subsequent components, CMS's initial plans to implement certain components of the program extend beyond the dates outlined in PAMA. 
To respond to the PAMA requirement of specifying applicable AUC, CMS is proposing to qualify provider-led entities such that all AUC developed, endorsed, or modified by these entities may be eligible for use in the imaging program. The agency does not intend to evaluate and select AUC itself because of the volume of those potentially available, according to CMS officials. Once an entity is qualified by CMS, all applicable AUC developed, endorsed, or modified by that entity would become specified applicable AUC under the program. In addition, CMS is proposing recertification every 6 years. The agency's proposed definition for a provider-led entity is a national professional medical specialty society or an organization that is comprised primarily of providers and is actively engaged in the practice and delivery of health care, such as hospitals and health systems. CMS has proposed that individual AUC must link a specific clinical condition, one or more imaging services, and an assessment of the appropriateness of the service(s). To respond to PAMA's requirement for AUC development, the agency proposed that, to become qualified, provider-led entities must demonstrate that their process for developing, endorsing, or modifying AUC includes certain elements, such as a rigorous evidentiary review process whereby key decision points within each criterion are graded according to the strength of evidence using a formal, published, and widely recognized methodology; a multidisciplinary team with autonomous governance to lead the AUC development process; a publicly transparent process for identifying and disclosing potential conflicts of interest; public postings of their AUC and their AUC development process on the entity's website; and a transparent process for the timely and continual updating of each AUC. Under PAMA, CMS is required to specify applicable AUC by November 15, 2015. However, CMS does not anticipate posting its initial set of qualified entities on its website until the summer of 2016. 
As a result, the agency does not expect to specify applicable AUC until at least 7 months after the PAMA-specified timeframe. To respond to PAMA's requirement that prior authorization be applied to ordering providers with low adherence to appropriate ordering, CMS plans to establish priority clinical areas and limit its identification of outlier ordering providers to these areas. In developing the priority clinical areas, CMS may consider incidence and prevalence of diseases, volume and variability of utilization, strength of evidence for imaging services, and applicability to a variety of care settings and to the Medicare population. According to agency officials, given the variety of clinical scenarios for which imaging services may be ordered, the aim of establishing priority clinical areas is to narrow the potential scope of prior authorization. They also stated that low back pain, nontrauma headache, and acute chest pain are examples of potential priority clinical areas. CMS expects to identify the first set of priority clinical areas in the rulemaking cycle that begins in 2016 in consultation with stakeholders and to further develop its policy to identify outlier ordering providers. Additional priority clinical areas may be added during each rulemaking cycle. In addition, the agency is proposing a process by which it will be made aware of potentially nonevidence-based AUC associated within the established priority clinical areas. CMS is planning to have a standing request for public comments in all future AUC-related rulemaking notices so the public has ongoing opportunities to assist the agency in identifying AUC that may not be sufficiently evidence-based. CMS may consult the Medicare Evidence Development and Coverage Advisory Committee in reviewing any potentially nonevidence-based AUC. 
If, through this process, AUC are determined to be insufficiently evidence-based, and the provider-led entity that produced the criteria does not make a good faith attempt to correct the issue, this information could be considered when the provider-led entity applies for requalification. According to CMS officials, the proposed process does not include review of potentially nonevidence-based AUC outside of the established priority clinical areas. To respond to PAMA's requirement that specified applicable AUC be reviewed each year, they stated that, in addition to accepting public comments regularly on potentially nonevidence-based AUC associated with the established priority clinical areas, they will also review the requirements and process for AUC development as a part of CMS's annual rulemaking. Under PAMA, CMS is required to publish a list of qualified CDSMs by April 1, 2016. To do so, CMS must determine which CDSMs are suitable for use in the program. The agency's July 2015 notice of proposed rulemaking did not contain specifics on this implementation component. The agency plans to provide clarifications, develop definitions, and establish the process by which it will specify qualified CDSMs through the rulemaking process in 2016. The agency stated that it does not plan to publish a list of qualified CDSMs until after November 1, 2016, at least 7 months after the PAMA-specified timeframe. Medical specialty societies and health care researchers have undertaken efforts to identify services that are of questionable or low value under certain circumstances and therefore have the potential to be used inappropriately. Based on our examination of the AHRQ National Guideline Clearinghouse, we found that provider-led entities--as defined in CMS's notice of proposed rulemaking--have developed AUC associated with a number of these services. 
These questionable- and low-value services with associated AUC are potential candidate services if the AUC program were to expand beyond imaging services. Since 2012, as a part of the Choosing Wisely® initiative, national medical specialty societies have identified health care services of questionable value. Among the hundreds of services included in the Choosing Wisely® initiative, we reviewed 17 radiation therapy and clinical pathology services of questionable value under certain circumstances. The following are among those we reviewed: The American Society for Radiation Oncology surveyed its members to collect a list of potential services, convened a work group to select key services from the initial list, conducted literature reviews, and received input from its board of directors to inform its final selection. For example, the group recommended against routine follow-up mammography more often than annually for women who have had radiotherapy following breast conserving surgery. In addition, it recommended that providers not routinely prescribe proton beam therapy over other forms of definitive radiation therapy for prostate cancer. The American Society for Clinical Pathology convened a review panel of pathology and laboratory medicine experts to evaluate the literature and identify services that are frequently performed; that are of no benefit or harmful; that are costly and do not provide higher quality care; and where eliminating the service or alternatives are within the control of the provider. Among the services identified, the group recommended against prescribing testosterone therapy without laboratory evidence of testosterone deficiency. The group also recommended avoiding routine preoperative testing for low-risk elective surgeries without a clinical indication. In addition, researchers at Harvard Medical School compiled a list of 26 low-value Medicare-covered services, of which we reviewed 19 nonimaging services. 
The researchers deemed a service to be of low value if, on average, it provided little to no clinical benefit, either in general or in specific clinical scenarios. They developed their set of low-value services from the Choosing Wisely® initiative, the U.S. Preventive Services Task Force ratings of services with a "D" grade, the National Institute for Health and Care Excellence "do not do" recommendations, the Canadian Agency for Drugs and Technologies in Health technology assessments, and peer-reviewed literature. The 19 nonimaging services fell into 5 categories: cardiovascular testing and procedures, cancer screening, diagnostic and preventive testing, preoperative testing, and other surgery. In addition, the Harvard study estimated the proportion of Medicare beneficiaries receiving the low-value services and total spending devoted to these services. We found that provider-led entities have developed AUC for some, but not all, of the questionable- or low-value services we reviewed. Our analysis of AHRQ's National Guideline Clearinghouse indicated that provider-led entities have developed AUC for more than half of the 36 questionable- or low-value services included in our review. Specifically, 23 services had at least 1 associated AUC developed by a provider-led entity. For the remaining 13 services, we did not find any associated AUC in the National Guideline Clearinghouse. Among the 17 radiation therapy and clinical pathology questionable-value services identified by the respective medical specialty societies, 12 services had an associated AUC developed by a provider-led entity. (See table 1.) For example, the American College of Radiology has developed an AUC for appropriate radiation therapy following hysterectomy for endometrial cancer patients. In other cases, we found multiple associated AUC for a single questionable service. 
For instance, the American College of Obstetricians and Gynecologists and the American Society for Colposcopy and Cervical Pathology have each developed criteria for the appropriate use of human papillomavirus testing for low-risk abnormal pap smears. In addition, we found associated AUC developed by provider-led entities for 11 of the 19 low-value services identified in the Harvard study. (See table 2.) For example, the American College of Physicians has developed AUC regarding the appropriate use of stress testing for those with stable coronary disease. In addition, the American College of Radiology has developed criteria outlining the appropriate conditions under which inferior vena cava filters may be used to prevent pulmonary embolism. As indicated in the RAND/UCLA report, the selection of services for an AUC program generally takes into account the service's frequency of use and resources consumed, among other factors. The Harvard researchers used claims and enrollment data--such as procedural codes, beneficiary diagnoses, and age--to determine the extent to which the low-value services were used and the associated expenditures. The researchers reported their results as a range using two approaches. The narrower approach was based on higher specificity and was less likely to misclassify appropriate use as inappropriate. The broader approach focused on higher sensitivity, capturing more inappropriate use but also some potentially appropriate use. For example, in 2009, estimated inappropriate spending for colorectal cancer screening ranged from $7 million for beneficiaries 85 years and older to $573 million for those 75 years and older and affected 0.9 to 7.7 percent of beneficiaries of each group, respectively. The wide range in inappropriate service use, as measured by the two approaches, indicates how difficult it may be to select services for program expansion that have the most potential for improving health care quality and reducing wasteful spending. 
We identified several issues that are key to the effective implementation of the imaging AUC program or any future expansion of the program, including the utility of the CDSM and provider confidence in the applicable criteria. Such issues surfaced during the Medicare imaging demonstration and, according to the RAND evaluation of the demonstration, may have limited its success. Because they are not specific to imaging services, they likely would apply to services considered for program expansion as well. Effectively mapping clinical indications to applicable AUC. Programming an adequate variety of patient characteristics into CDSMs would allow the mapping of clinical indications to available AUC to be sufficiently robust. During the demonstration, almost two-thirds of orders placed by providers could not be linked to any criteria in CDSMs; therefore, providers did not receive an appropriateness rating (appropriate, uncertain or equivocal, or inappropriate) for these orders. Providers reported issues with locating a diagnosis or clinical scenario relevant to their patient or that the clinical information they did successfully enter about their patients into the CDSM did not result in any matches to AUC. The evaluators of the demonstration cited technical issues with mapping clinical indications to the distinguishing features of each AUC programmed into the CDSMs. Enhancing clinical decision making. Only well-developed CDSMs can adequately assist providers with meaningful clinical decisions. Many providers in the demonstration found the CDSM feedback presented to them--that is, the appropriateness rating or the links to the AUC, if applicable--was not specific enough to assist with decisions for specific patients or situations. Providers said that they expected detailed, actionable feedback about their orders to guide them in their decision making. 
In addition, providers reported that the CDSMs would have been more helpful if they had provided feedback before the order was placed, rather than after; specifically, some said they would have preferred entering the clinical indication or other patient information first in order to receive guidance on what to order. Designing CDSMs for ease of use. Whether CDSMs are integrated with electronic health record systems or web-based or stand-alone software applications, providers prefer those that can be used quickly and efficiently. In the demonstration, providers using web-based or stand-alone software applications experienced frustration with the lack of integration between the CDSM and their electronic health record system. For example, providers had to click out of their electronic health record system and go through an entirely new platform to order imaging services. Additionally, providers wanting to change orders would have to start from the beginning and go through the entire process again. This process caused workflow inefficiencies for busy providers. Ensuring provider confidence in appropriateness ratings and their underlying evidence. To secure provider buy-in, it is important that ratings not be based on outdated evidence or conflict with local guidelines or other best practice guidelines. Providers who participated in the demonstration were not always comfortable with the appropriateness ratings that were assigned to their orders and wanted more transparency than was available about how the ratings were assigned, especially when evidence was known to be limited. Providers wanted more information regarding the quality of evidence used to generate AUC and more detail about the level of agreement associated with appropriateness ratings. Allowing sufficient preparation time for implementation. Due to the complex and wide scope of changes associated with implementing AUC, allowing adequate preparation time for stakeholders is critical. 
Providers who participated in the demonstration noted that all phases of the demonstration were too short to address the large number of challenges related to successfully engaging providers and staff, aligning existing and new workflow patterns, and introducing providers and staff to the CDSM software and guidelines. Providers reported inadequate time for setup, planning, pilot testing, implementation, internal evaluation, and change. Efforts to move forward rapidly during the demonstration were confounded by CDSM software challenges beyond the control of participants and their practices, as well as escalating frustrations and disengagement by providers. The Department of Health and Human Services reviewed a draft of this report and provided technical comments, which we incorporated where appropriate. We are sending copies of this report to appropriate congressional committees and the Administrator of CMS. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the contact named above, Rosamond Katz, Assistant Director; Kye Briesath; Stella Chiang; Maria A. Maguire; Vikki L. Porter; and Jennifer M. Whitworth made key contributions to this report. Medicare: Higher Use of Costly Prostate Cancer Treatment by Providers Who Self-Refer Warrants Scrutiny. GAO-13-525. Washington, D.C.: July 19, 2013. Medicare: Action Needed to Address Higher Use of Anatomic Pathology Services by Providers Who Self-Refer. GAO-13-445. Washington, D.C.: June 24, 2013. Medicare: Higher Use of Advanced Imaging Services by Providers Who Self-Refer Costing Medicare Millions. GAO-12-966. Washington, D.C.: September 28, 2012. 
Medicare: Use of Preventive Services Could Be Better Aligned with Clinical Recommendations. GAO-12-81. Washington, D.C.: January 18, 2012. Medicare Part B Imaging Services: Rapid Spending and Shift to Physician Offices Indicate Need for CMS to Consider Additional Management Practices. GAO-08-452. Washington, D.C.: June 13, 2008.
PAMA required the establishment of a Medicare AUC program for advanced diagnostic imaging services. The Act also included a provision for GAO to report on the extent to which AUC could be used for other Medicare services, such as radiation therapy and clinical diagnostic laboratory services. In this report, GAO describes (1) CMS's plans for implementing the imaging AUC program and (2) examples of questionable- or low-value nonimaging services where provider-led entities have developed AUC, among other objectives. GAO reviewed CMS's July 2015 Federal Register notice of proposed rulemaking outlining its initial plans for implementing components of the imaging AUC program and also interviewed CMS and AHRQ officials. To identify services for potential AUC program expansion, GAO focused on 36 nonimaging services deemed to be of questionable or low value as identified by the American Society for Radiation Oncology, the American Society for Clinical Pathology, and a 2014 study by researchers at Harvard Medical School. GAO also examined AHRQ's National Guideline Clearinghouse to determine whether AUC developed by provider-led entities were associated with those 36 services. GAO did not evaluate the extent to which the associated AUC are suitable for program implementation. Also, the resulting set of services is illustrative and not a comprehensive list of candidates for potential AUC program expansion. HHS provided technical comments on a draft of this report, which were incorporated where appropriate. The Centers for Medicare & Medicaid Services (CMS)--an agency within the Department of Health and Human Services (HHS)--has proposed initial plans and timeframes for implementing the Medicare appropriate use criteria (AUC) program for advanced diagnostic imaging services, such as computed tomography, magnetic resonance imaging, and positron emission tomography. 
AUC are a type of clinical practice guideline intended to provide guidance on whether it is appropriate to perform a specific service for a given patient. Under the Protecting Access to Medicare Act of 2014 (PAMA), a health care provider ordering advanced diagnostic imaging services generally must consult AUC as a condition of Medicare payment for providers who furnish imaging services. Consulting AUC involves entering patient clinical data into an electronic decision tool to obtain information on the appropriateness of the service. The agency's July 2015 notice of proposed rulemaking focused largely on the process for specifying applicable AUC to be used in the program and a policy for identifying providers who must obtain authorization from CMS before ordering imaging services due to their low adherence to appropriate ordering. CMS has proposed to qualify provider-led entities--such as national professional medical specialty societies--such that all AUC developed, endorsed, or modified by these entities would be eligible for use in the imaging program. To become a qualified source of AUC, provider-led entities must adhere to CMS standards for AUC development. The agency does not plan to evaluate and select imaging AUC itself because of the volume of those potentially available, according to CMS officials. CMS plans to establish priority clinical areas, and providers with low adherence to appropriate ordering--as determined by the AUC--in those areas will be subject to prior authorization. The agency intends to establish a number of priority clinical areas--potentially including low back pain, nontrauma headache, or acute chest pain--through rulemaking beginning in 2016. CMS officials stated that, given the variety of clinical scenarios for which imaging services may be ordered, the aim of establishing priority clinical areas is to narrow the potential scope of prior authorization. 
Medicare services with associated AUC developed by provider-led entities represent potential candidates for AUC program expansion. Medical specialty societies and health care researchers--including the American Society for Radiation Oncology, the American Society for Clinical Pathology, and researchers at Harvard Medical School--have compiled lists of services considered to be of questionable or low value in certain clinical circumstances. GAO reviewed 36 of these services and found that provider-led entities have developed associated AUC for more than half of them, according to a database of clinical practice guidelines maintained by the Agency for Healthcare Research and Quality (AHRQ). Specifically, GAO found associated AUC across several service categories, including radiation therapy, clinical pathology, cardiovascular testing and procedures, cancer screenings, diagnostic and preventive testing, and preoperative testing.
Each of the 50 states and the District of Columbia has a workers' compensation program. These programs vary by state, as each is authorized by its own state law. There are three federal workers' compensation programs, including the one authorized by FECA, which covers federal employees. Federal and most state workers' compensation programs were initially enacted in the first part of the 20th century. VA's disability compensation program, which was based to some extent on state workers' compensation programs, was substantially overhauled with the enactment of the War Risk Insurance Act of 1917. The act directed VA to establish a schedule to compensate veterans for the average impairment in earning capacity resulting from injuries or diseases incurred during military service. The FECA program and other workers' compensation programs differ in purpose and design from VA's disability compensation program (see table 1). Cash benefits under FECA and other workers' compensation programs serve different purposes from those provided under VA's disability compensation program. FECA and other workers' compensation programs primarily focus on compensating workers for their actual lost wages and permanent impairments resulting from occupational diseases or injuries. In contrast, VA's disability program focuses on compensating veterans for the average reduction in earning capacity they are expected to experience as a result of service-connected conditions. An overall goal of workers' compensation programs is to return injured employees to work, and in some workers' compensation programs, benefits are terminated if the worker does not participate in vocational rehabilitation. VA's disability compensation program focuses on providing monetary compensation for service-connected conditions. VA also provides other services, such as vocational rehabilitation, to eligible veterans. 
However, veterans are not required to participate in vocational rehabilitation in order to receive compensation. Workers' compensation programs attempt to provide adequate benefits to injured workers while limiting employers' liabilities strictly to workers' compensation benefits. These programs provide cash benefits to employees for wage loss and permanent impairments. For selected permanent impairments, the programs use a schedule to determine monetary benefits, which are referred to as "schedule awards." Workers' compensation also provides medical care benefits and vocational rehabilitation to help employees return to work. In some programs, workers who refuse vocational rehabilitation services when these services are necessary for the person to be employed again may forfeit their right to receive wage-loss benefits. In addition, if an employee dies from a job-related injury or illness, the employee's dependents can receive survivors' benefits. The compensation benefits that are paid to workers depend on the nature and extent of their injuries and the ability of injured employees to earn their usual wages. Employees whose injuries are not serious may only receive reimbursement for medical care for work-related injuries or illnesses. Many workers' compensation claims result only in payment for medical care rather than in monetary awards for lost wages or permanent impairments. Employees whose injuries or illnesses result in lost wages may be entitled to receive wage-loss benefits. Employees whose injuries or illnesses result in the permanent loss or loss of use of certain body parts or functions are entitled to compensation for permanent impairments. The majority of the monetary benefits that are paid to workers are for wages lost as opposed to permanent impairment. Often wage-loss benefits are paid for temporary disabilities, which may last a relatively short period of time. 
For example, according to the Department of Labor, for the claims submitted under FECA in fiscal year 1994 for which wage-loss benefits were paid during the first year after submission, the median amount paid was about $4,000. The median time period for which benefits were paid was about 70 days. No two workers' compensation programs are exactly alike. Despite the various differences among state, FECA, and the District of Columbia programs, to be eligible for benefits under any of these laws, workers' injuries or occupational diseases must arise out of and occur in the course of employment. These criteria, though used somewhat differently among jurisdictions, generally mean that a worker's illness or injury must occur in the course of performing his or her job in order to be compensable. To be eligible for monetary benefits under workers' compensation, workers must experience an actual loss in wages or a permanent impairment. Most states and the District of Columbia require that employees be out of work during a waiting period ranging from 3 to 7 days before benefits for lost wages can be paid. FECA requires a 3-day waiting period that begins after the expiration of any "continuation of pay" to which the worker may be entitled. Continuation of pay is a unique feature of FECA, under which federal agencies are authorized to continue paying employees who are absent from work because of work-related traumatic injuries their regular salaries for up to 45 days before the 3-day waiting period begins. Continuation-of-pay benefits are not payable in occupational disease cases. Under state workers' compensation programs, if an employee is absent from work continuously for 5 to 42 days after the date of injury, he or she is entitled to wage-loss benefits retroactive to the date of injury. Under FECA, an employee absent for 14 days after continuation of pay ends is eligible for restoration of benefits withheld during the 3-day waiting period. 
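The interaction between FECA's 3-day waiting period and the 14-day restoration rule described above can be sketched as a small calculation. This is an illustrative simplification only (the function name and the reduction of the rule to day counts are ours, and the actual statute contains additional conditions):

```python
def feca_days_payable(days_absent_after_cop: int) -> int:
    """Illustrative sketch of FECA's waiting-period rule: the first
    3 days of absence after continuation of pay (COP) ends are unpaid,
    but once the absence reaches 14 days, benefits withheld during the
    waiting period are restored.  Simplified; not a full statement of
    the statute.
    """
    WAITING_PERIOD = 3       # days unpaid after COP expires
    RESTORATION_THRESHOLD = 14  # absence length that restores withheld days
    if days_absent_after_cop >= RESTORATION_THRESHOLD:
        return days_absent_after_cop  # waiting-period days restored
    return max(0, days_absent_after_cop - WAITING_PERIOD)
```

Under this sketch, a 10-day absence after continuation of pay yields 7 payable days, while a 14-day absence yields all 14.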
While workers' compensation programs attempt to protect the interests of both workers and employers, VA's disability compensation program focuses on economic support for veterans. It is designed to provide cash benefits to veterans for physical or mental service-connected conditions. The benefits paid to veterans depend on the degree to which their specific conditions or injuries are believed to reduce the average earning capacity of veterans in general with that condition. VA's disability compensation program, like workers' compensation programs, provides survivors' benefits and vocational rehabilitation. Unlike most workers' compensation programs, vocational rehabilitation is optional under VA. Both programs cover medical care, but VA generally provides care for veterans through the Veterans Health Administration. VA's program, however, does provide for additional compensation for dependents; special needs related to a veteran's condition; and special monthly compensation--statutory awards--for the loss or loss of use of certain body parts or functions, or procreative organs. VA also provides stipends to veterans in its vocational rehabilitation program in addition to monthly disability compensation. To be eligible for compensation under VA's disability compensation program, veterans must incur or aggravate injuries or diseases in the line of duty or during a period of active military service. Such illnesses and injuries are considered service-connected. Unlike workers' compensation programs, a veteran's injury or illness need not occur because of or in the course of actually performing his or her military-related duty to be compensable. Members of the military are covered 24 hours a day with respect to diseases or injuries they incur, because they are considered on duty 24 hours a day. A veteran does not have to experience an actual reduction in earning capacity or loss in wages to be eligible for disability compensation. 
Thus, like workers' compensation for permanent impairments, veterans can receive compensation even if they are working and regardless of the amount they earn. Workers' compensation programs base compensation on workers' wages prior to their injury or the onset of occupational disease. Workers are paid a percentage of their wages or the wages they lose as a result of their work-related injury or disease, depending on whether they are being compensated for lost wages or permanent impairments. For selected permanent impairments, workers' compensation programs use a schedule to determine monetary benefits, or "schedule awards." For permanent impairments that do not appear on a schedule, programs may use one or more of three methods to determine compensation. In contrast, VA uses its Schedule for Rating Disabilities to determine the degree of impairment in earning capacity presumed to be associated with specific temporary and permanent conditions. This degree of impairment determines the basic amount of compensation veterans are eligible for. An individual veteran's actual earnings lost as a result of a condition have no bearing on the compensation amount. If employees are unable to earn their usual wages because of work-related disabilities, they are compensated for their lost wages. The amount of monetary compensation a worker receives under workers' compensation is calculated by taking a specified percentage--in many jurisdictions, 66-2/3 percent--of the worker's actual wage loss. Thus, the amount of compensation different employees receive for the same disability varies on the basis of their particular wage loss. Most states and the FECA program, however, place minimum and maximum limits on the weekly compensation amounts an individual can receive. Most programs compensate for lost wages for the duration of the wage loss. Workers who become permanently and totally disabled generally receive benefits for life. 
An employee in Arizona, for example, who earns $600 a week and loses wages as a result of a work-related injury should theoretically receive $400 (66-2/3 percent x $600) a week in compensation. However, this employee would actually receive $323.10 a week because this is the maximum weekly benefit amount Arizona allows for wage loss. Under the FECA program, the same employee also would receive 66-2/3 percent (75 percent if the employee has dependents) of his or her lost wages. However, the employee would receive the full $400 because the weekly maximum FECA allows for wage loss is $1,299.38. This same worker would also receive the full $400 in the District of Columbia, which limits maximum weekly benefits to $723.34. For certain permanent impairments, workers' compensation programs use a schedule to determine the maximum amount of compensation that can be awarded. As of January 1995, the FECA, District of Columbia, and most state workers' compensation programs each maintained a schedule that specified the maximum number of weeks of compensation a worker under their jurisdiction could receive for specific permanent impairments that result in the total loss or loss of use of certain members (such as a hand, arm, or foot), organs, and functions of the body. Benefits for these permanent impairments are called "schedule awards." Permanent partial loss or loss of use of members, organs, or body functions listed on federal or state schedules is also compensable, but for less than the maximum number of weeks. The maximum amount of money and period of time during which compensation may be paid for specific functional losses are authorized by statute and vary by program. Table 2 shows the variation in the maximum number of weeks of compensation and dollars payable for certain permanent impairments among selected states and the FECA program. The maximum amount and weeks for the states we selected cover the range of these maximum limits across all states. 
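The wage-loss arithmetic in the Arizona, FECA, and District of Columbia examples above reduces to a percentage of wages capped at a statutory weekly maximum. The sketch below uses the weekly maximums cited in the text (the function name and dictionary are ours, for illustration only):

```python
def weekly_wage_loss_benefit(weekly_wage: float, rate: float,
                             weekly_max: float) -> float:
    """Weekly wage-loss compensation: a set percentage of lost wages,
    capped at the jurisdiction's statutory weekly maximum."""
    return min(weekly_wage * rate, weekly_max)

TWO_THIRDS = 2 / 3  # the 66-2/3 percent rate used by many jurisdictions

# Weekly maximums cited in the report for the period reviewed:
WEEKLY_MAX = {
    "Arizona": 323.10,
    "FECA": 1299.38,
    "District of Columbia": 723.34,
}
```

For a worker earning $600 a week, 2/3 of wages is $400; Arizona's cap reduces the benefit to $323.10, while the FECA and District of Columbia maximums leave the full $400 payable, matching the example in the text.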
However, the actual amount employees are paid is based on their salaries, so many employees receive less than the maximum. For the types of impairments included on a workers' compensation schedule, whether total or partial, the injured or disabled worker is awarded a percentage of his or her usual wages for no greater than the number of weeks specified on the schedule. For example, under FECA, an employee may receive 66-2/3 percent of his or her salary or wages if there are no dependents (75 percent if there are) for up to a maximum of 312 weeks for the total loss or loss of use of an arm. Accordingly, the loss or loss of use of an arm provides the same number of weeks of benefits to all injured federal workers, but workers with lower wages will receive a lesser amount of monetary benefits. If a worker loses 50-percent use of an arm, he or she would likely receive 66-2/3 percent of his or her salary or wages for 156 weeks, which is 50 percent of the maximum number of weeks allowed. Workers with permanent partial impairments continue to receive compensation for their loss until they receive the full amount they are eligible for, even if they are working. If the worker is unable to return to work or earn his or her usual wages after receiving the maximum amount allowable, the worker may apply for wage-loss benefits. Under FECA, workers cannot collect compensation for wage loss and permanent impairment concurrently. For permanent impairments that are not included in the schedule, commonly called "nonscheduled" or "unscheduled" injuries, one or more of three methods may be used to determine the compensation amount. Nonscheduled injuries are normally injuries to the trunk, internal organs, nervous system, and other body systems that are not included in the list of injuries found in District of Columbia or state statutes that address workers' compensation. FECA does not provide schedule awards for permanent impairments that do not appear on its schedule. 
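The schedule-award proration described above (total loss of use pays the scheduled maximum number of weeks; partial loss pays a proportional share) is a single multiplication. A minimal sketch, using the FECA arm figure from the text (the names are ours):

```python
def schedule_award_weeks(schedule_max_weeks: int,
                         fraction_of_loss: float) -> float:
    """Weeks of schedule-award benefits: the statutory maximum number
    of weeks for the body member, prorated by the rated fraction of
    loss of use."""
    return schedule_max_weeks * fraction_of_loss

FECA_ARM_MAX_WEEKS = 312  # FECA schedule maximum for total loss of an arm
```

Total loss of use of an arm under FECA thus yields 312 weeks of benefits, and a 50-percent loss of use yields 156 weeks, as in the example above.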
The states and the District of Columbia may base awards for nonscheduled injuries on (1) functional limitation or impairment, (2) wage loss, (3) loss of wage-earning capacity, or (4) some combination of these three. The impairment approach compares the worker's condition with that of a person with no impairment or with that of someone at the opposite end of the scale--a "totally disabled" person--and produces a rating of impairment as a percentage. Thus, it is basically an extension of the schedule applied to injuries not listed. For example, in a state that equates a person with no impairments to 500 weeks of benefits, a worker determined to have a 25-percent impairment may receive a percentage of his or her usual wages for 125 weeks (25 percent x 500). Under this method, workers with the same permanent impairment receive the same number of weeks of compensation, even if the impairment is more or less disabling for different individuals. The same type of back injury, for example, would provide the same number of weeks of benefits to a professor and a carpenter, even though the professor is able to return to work and suffers only a temporary loss of wages, while, by contrast, the carpenter may be unable to return to work and suffers a permanent loss in wages. However, when a worker suffers a permanent loss of wages, he or she may apply for wage-loss benefits under workers' compensation. Under the wage-loss method, workers are compensated for a percentage of the actual loss of earnings that stems from their work-related illness or injury. The amount of benefits paid to workers depends upon the extent to which postinjury earnings are affected by the impairment or condition. The degree of functional impairment alone has little or no bearing on the amount of benefits paid. This approach compensates the worker for the anticipated or projected loss of earning capacity as a result of his or her permanent disability. This is similar to what VA's schedule conceptually does. 
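The impairment approach, like the loss-of-earning-capacity approach, prorates a state's total-disability week count by the worker's rated percentage. A minimal sketch of that arithmetic, using the illustrative figures from the text (the function name is ours):

```python
def prorated_weeks(total_disability_weeks: int,
                   rated_percent: float) -> float:
    """Proration used for nonscheduled injuries: the number of weeks
    the state equates with a 100-percent rating, multiplied by the
    worker's rated percentage of impairment or lost earning capacity."""
    return total_disability_weeks * rated_percent

# Impairment approach: 500 weeks equated with total disability,
# 25-percent impairment -> 125 weeks of benefits.
# Loss-of-earning-capacity approach: 600 weeks equated with total
# loss, 20-percent loss -> 120 weeks of benefits.
```

The same formula serves both methods; what differs is how the percentage itself is derived (medical impairment rating versus an estimate of lost earning capacity).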
However, VA's schedule reflects the impact a condition is presumed to have on the average earning capacity among all veterans with that condition as opposed to its impact on each individual veteran's earnings. States using this approach assess the seriousness of the worker's medical condition; consider such factors as prior education, work experience, and other personal characteristics that affect one's ability to obtain and retain employment; and estimate the worker's loss in earning capacity in percentage terms. States normally express total earning capacity in terms of a specified number of weeks. For example, in a state that equates total loss of earning capacity with 600 weeks of benefits, a worker with a 20-percent loss of earning capacity would receive a percentage of his or her projected earning capacity for 120 weeks (20 percent x 600). States can also use some combination of the three basic methods. For example, Texas pays workers benefits for permanent disabilities on the basis of their functional impairment for a limited number of weeks. However, workers can also receive additional benefits for loss of wages after their impairment benefits are exhausted. Only workers whose functional impairment is rated at least at 15 percent or higher are eligible for these supplemental benefits. Wisconsin also pays impairment benefits on the basis of loss of functional capacity for permanent disabilities for a limited number of weeks. However, if the worker does not return to work by the end of this period at the preinjury earnings level, additional benefits are based on loss of wage-earning capacity. The amount of compensation veterans are awarded for their service-connected conditions is based on a percentage evaluation, commonly called the disability rating, which VA's Schedule for Rating Disabilities assigns to a veteran's specific condition. The veteran receives the specific benefit amount the law sets for that disability rating level. 
Unlike workers' compensation, VA does not base compensation on each individual veteran's salary or wage loss, nor does it base compensation on how each veteran's earning capacity is actually affected by his or her service-connected condition. Veterans with the same condition at the same level of severity usually receive the same basic cash benefit. The rating schedule contains medical criteria and disability ratings. The medical criteria consist of a list of diagnoses, organized by body system, and a number of levels of medical severity specified for each diagnosis. The schedule assigns a disability rating to each level of severity associated with a diagnosis. These disability ratings, which are supposed to reflect the average impairment in earning capacity associated with each level of severity, range from 10 percent to 100 percent and correspond to specific dollar amounts of compensation set by law. For example, VA has determined that the loss of a foot during military service results in a 40-percent impairment in earning capacity, on average, among all veterans with this injury. These veterans, therefore, are entitled to a 40-percent disability rating whether this injury actually reduces their earning capacity by more than 40 percent or does not reduce it at all. In 1996, all veterans having conditions with a disability rating of 40 percent received basic compensation of $380 per month. In 1996, under VA's disability program, a veteran received basic compensation of $91 per month, or $1,092 annually, for a condition rated at 10 percent, and up to $1,870 per month, or $22,440 annually, for a condition rated at 100 percent (see table 3). In contrast to many workers' compensation programs, VA's program does not place any limits on the total amount of monetary benefits veterans can receive or the time period for which they can receive these benefits. Veterans can receive disability compensation for their service-connected conditions for life. 
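The statutory amounts cited in the text can be captured in a small lookup. Only the three 1996 rating levels the report quotes are included here; the full 10-to-100-percent schedule appears in the report's table 3:

```python
# 1996 basic monthly VA compensation for the rating levels cited in
# the text (the complete schedule is in table 3 of the report).
MONTHLY_1996 = {10: 91, 40: 380, 100: 1_870}

def annual_compensation(rating: int) -> int:
    """Annual VA compensation: the statutory monthly amount for the
    rating level, paid every month with no time limit or cap."""
    return MONTHLY_1996[rating] * 12

print(annual_compensation(10))   # 1092 per year, as cited in the text
print(annual_compensation(100))  # 22440 per year
```

Because benefits are set by rating level rather than by wages, two veterans with the same rating receive the same amount regardless of their actual earnings.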
Monthly benefits for workers with higher earnings under FECA for selected scheduled permanent impairments will likely be higher than VA's monthly disability compensation because schedule award amounts for permanent impairments under the workers' compensation programs are based on workers' wages. FECA monthly cash benefits for workers with lower salaries may not be higher, however, than VA cash benefits. For example, for the total loss or loss of use of an arm, a federal employee without dependents at a GS-12, step 1, salary level would receive about $2,341 a month for a maximum of 72 months (312 weeks) under FECA. Under VA's disability compensation program, this same person would receive $1,124 a month. However, a federal employee at the GS-5, step 1, salary level would receive about $1,065 a month (see table 4). Comparisons of initial monthly benefit payments alone under FECA and VA do not provide a complete picture of the differences in compensation amounts, for several reasons. FECA payments for schedule awards are limited to 6 years or less, whereas VA disability payments are made to disabled veterans for the duration of the impairment, so the payment of several decades' worth of benefits is not uncommon. FECA payments and VA disability payments can be--and usually are--increased over time. A dollar paid to a recipient today is worth more than a dollar paid at some future date, in terms of both value to the recipient and cost to the government, because a dollar received today can be invested to provide more than a dollar's benefit in the future. For these reasons, a comparison of the present value, also known as the lump-sum equivalent, of potential total payment under each program provides a better indication of the relative value of benefits under these two programs (see app. II for a detailed description of benefit calculations). 
In the long run, at the GS-12, step 1, salary level and below, compensation under VA's disability program is generally higher than FECA compensation for selected permanent impairments. For example, the schedule award benefit for the total loss or loss of use of an arm for an employee at the GS-12, step 1, salary level is limited to a maximum of $186,203 ($540 x 312 weeks, adjusted for cost-of-living allowance) under FECA. The present value, or lump-sum equivalent, of this amount would be $158,561. Assuming that a veteran is compensated for the loss or loss of use of an arm for 30 years, using the 1996 compensation levels and adjusting for future increases in benefits, the veteran would receive $756,474, and the present value would be $289,365. In general, for selected permanent impairments, the maximum amount of benefits a federal worker at the GS-15, step 10, salary level would receive also would be lower than the benefits received under VA. However, the present value of benefits at this salary level would be higher under FECA for some of the impairments we looked at (see tables 5 and 6). In commenting on a draft of this report, VA stated that comparisons of VA's disability compensation program with other workers' compensation programs are not meaningful because the programs are so dissimilar. VA also stated that its compensation program is the best method for providing monetary benefits to people disabled during military service. Throughout this report we have recognized the differences and, in some cases, the similarities among the VA disability and workers' compensation programs' objectives, components, and characteristics. VA's comments are included as appendix III. VA also provided technical comments on the draft report, which we incorporated as appropriate. The Department of Labor also reviewed a draft of this report and provided technical comments, which we have also included as appropriate. 
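The present-value comparison above can be approximated using the assumptions stated in appendix II (benefits growing at a 4-percent annual cost-of-living rate, discounted at the 6.6-percent 30-year Treasury rate). This sketch models payments as annual, beginning-of-year amounts, which is a simplification of the report's actual payment timing:

```python
def total_and_present_value(first_year_amount, years,
                            growth=0.04, discount=0.066):
    """Sum of annual benefit payments that grow with a 4-percent COLA,
    and their present value at a 6.6-percent discount rate.
    Payments are modeled as beginning-of-year annual amounts."""
    total = pv = 0.0
    for t in range(years):
        payment = first_year_amount * (1 + growth) ** t
        total += payment
        pv += payment / (1 + discount) ** t
    return total, pv

# FECA schedule award for loss of an arm: $540/week for 312 weeks
# (about 6 years), modeled here as six annual payments.
feca_total, feca_pv = total_and_present_value(540 * 52, 6)

# VA: $1,124/month, assuming 30 years of payments as in the report.
va_total, va_pv = total_and_present_value(1_124 * 12, 30)
```

With these conventions the sketch lands within a fraction of a percent of the report's figures ($186,203 total and $158,561 present value for FECA; $756,474 and $289,365 for VA); the small residuals come from the report's exact payment-timing and rounding rules, which are not spelled out in the text.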
We are sending copies of this report to the Subcommittee's Ranking Minority Member, other interested congressional committees and subcommittees, and the Secretaries of Veterans Affairs and Labor. Copies will also be made available to others upon request. This report was prepared under the direction of Clarita A. Mrena, Assistant Director. Other GAO contacts and staff acknowledgments are listed in appendix IV. If you have any questions about this report, please contact me on (202) 512-7101 or Ms. Mrena on (202) 512-6812.

This appendix presents the specific assumptions that we adopted to estimate the present value, or lump-sum equivalent, benefits of FECA and VA disability compensation for this report. The monthly amounts of benefits paid under FECA and VA are adjusted periodically to compensate partially or fully for the effects of inflation. A VA official told us that recent increases in VA benefits have been consistent with Social Security Administration cost-of-living increases. Accordingly, we assumed that VA disability benefits will increase at an annual rate of 4 percent, consistent with the Social Security Administration's long-range projections of annual increases in the Consumer Price Index. We also assumed that FECA benefits would increase at the same annual rate. Consistent with standard practice, we used a discount rate of 6.6 percent, which represents the cost of borrowing to the federal government (that is, the 30-year Treasury rate) as of December 1996.

Connie D. Wilson, Senior Evaluator, collected a major portion of the evidence presented, and Timothy J. Carr, Senior Economist, provided the present value calculations of FECA and VA benefit payments. Ed Tasca, Senior Evaluator, provided technical assistance on workers' compensation programs.
Pursuant to a congressional request, GAO compared the: (1) criteria used by the Department of Veterans Affairs (VA) disability compensation program and federal and state workers' compensation programs to determine compensation; and (2) compensation individuals with selected work-related injuries and diseases would receive under VA's disability program and what they would receive for the same impairments under the Federal Employees' Compensation Act (FECA). GAO found that: (1) the VA disability compensation program and workers' compensation programs, including FECA, differ with respect to program goals, types of benefits provided, and eligibility requirements for benefits; (2) most workers' compensation programs provide separate cash payments for wages lost and permanent impairment, while VA provides compensation only for service-connected conditions, which need not be permanent; (3) unlike the VA program, workers' compensation programs emphasize returning employees to work while limiting employers' liability, and the vast majority who receive workers' compensation receive only medical benefits, not cash awards; (4) to be eligible for wage loss benefits under workers' compensation programs, workers must actually lose all or a portion of their wages for a specified minimum period of time, then they receive a portion, usually 66 and two-thirds percent, of their actual lost wages for the duration of the period that wages are lost; (5) to collect compensation for permanent impairments, workers must sustain permanent loss or loss of use of a body part or function, but they need not lose wages to receive compensation for their permanent impairments; (6) unlike workers' compensation programs, the amount of basic compensation veterans may receive is established by statute and is not based on their individual wage loss or usual wages, but it is based on the rating VA's Schedule for Rating Disabilities assigns to that veteran's specific condition; (7) all veterans whose 
conditions are assigned the same rating receive the same basic benefits amount; (8) unlike workers' compensation for permanent impairments, there is no limit on the length of time veterans can receive benefits or the total amount they can receive for permanent conditions; (9) the monthly cash benefits for permanent impairments under FECA for employees at the GS-12, step 1, salary level tend to be higher than the benefits under VA's disability program for the same types of conditions; (10) this is likely to be the case for those at higher, but not lower, salary levels under FECA because workers' compensation is based on workers' usual wages, whereas veterans' benefits are not; and (11) unless workers' compensation continues under the wage loss provision after the cash awards for permanent impairment end, the amount and present value of VA compensation could be higher than FECA's over the long term.
Egypt is currently among the largest recipients of U.S. foreign assistance, along with Israel, Afghanistan, and Iraq. Since 1979, Egypt has received an annual average of more than $2 billion in economic and military aid. Egypt has generally received about $1.3 billion each year in foreign military financing assistance in the form of grants and loans. From 1982 to 1988, the United States forgave Egypt's FMF debt and, beginning in 1989, provided military assistance solely in the form of grants with no repayment requirement. State and DOD planning documents describe FMF as one of several U.S. security assistance programs, which are a subset of U.S. security cooperation efforts designed to build relationships that support specified U.S. government interests. These interests include building friendly nations' capabilities for self-defense and coalition operations, strengthening military support for containing transnational threats, protecting democratically elected governments, and fostering closer military ties between U.S. and recipient nations. According to State, the objectives of the FMF program worldwide include:

* assisting friendly foreign militaries in procuring U.S. defense articles and services for their countries' self-defense and other security needs;
* promoting coalition efforts in regional conflicts and the global war on terrorism;
* improving capabilities of friendly foreign militaries to assist in international crisis response operations;
* contributing to the professionalism of military forces;
* enhancing rationalization, standardization, and interoperability of friendly foreign military forces;
* maintaining support for democratically elected governments; and
* supporting the U.S. industrial base by promoting the export of U.S. defense-related goods and services.

Generally, FMF provides financial assistance in the form of credits or guarantees to U.S. allies to purchase military equipment, services, and training from the United States.
Recipient countries can use the assistance to purchase items from the U.S. military departments through the Foreign Military Sales (FMS) process or directly from private U.S. companies through direct commercial sales. State is responsible for the continuous supervision and general direction of security assistance programs, including FMF, in coordination with DOD. DSCA leads the day-to-day program implementation for each FMF recipient country in coordination with other DOD entities at the unified combatant commands and in the recipient countries. CENTCOM's responsibilities include developing and implementing security cooperation plans for Egypt and other countries in the Middle East, as well as coordinating with other government entities on major Egyptian equipment requests. (See appendix II for a description of the FMS process for purchasing FMF-funded cases and appendix III for a description of the roles and responsibilities of the entities involved in the program.) Members of Congress have periodically sought to alter the balance of economic and military assistance to Egypt. In 1998, the United States and Egypt agreed to a 10-year assistance phase-down in conjunction with a similar package for Israel. The package for Egypt reduced economic assistance by $40 million each year but did not increase FMF assistance to Egypt. U.S. economic assistance to Israel was reduced by $120 million each year, and the amount of U.S. military assistance was increased by $60 million per year. In 2004 and 2005, amendments to the Consolidated Appropriations bill for fiscal year 2005 and the Foreign Relations Authorization bill for fiscal years 2006 and 2007 proposed converting some military assistance to economic assistance to Egypt. The 2004 amendment was not adopted and did not become law. Furthermore, as of March 2006, the 2005 amendment has not been enacted.
Additionally, a conference report attached to the fiscal year 2006 Foreign Operations Appropriations bill requires State to report to Congress on the balance between economic and military assistance provided to Egypt, including whether maintaining the current level of military assistance in relation to economic assistance is appropriate in light of the political and economic conditions in Egypt and in the region. Although this requirement was not stipulated in law, it conveys congressional intent to have this information provided to the Congress. Over the past decade, Congress and the executive branch laid out a statutory and managerial framework that provides the foundation for strengthening government performance and accountability, with the Government Performance and Results Act (GPRA) as its centerpiece. GPRA is designed to inform congressional and executive decision making by providing objective information on the relative effectiveness and efficiency of federal programs and spending. A key purpose of the act is to create closer and clearer links between the process of allocating resources and the results expected to be achieved with those resources. Program evaluations are objective studies that answer questions about program performance and results, and explore ways to improve them. In 2002, OMB implemented the Program Assessment Rating Tool (PART) to assess federal programs. PART assesses federal programs in four areas: purpose and design, strategic planning, management, and results and accountability. Another assessment tool, which we have discussed in previous reports, is a logic model. This tool can be used to describe a program's components and desired results, while explaining the strategy by which the program is expected to achieve its goals. A logic model is a representation of the relationship between the various components of a program, typically including, at a minimum, inputs, activities, outputs, and outcomes.
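A logic model of the kind described can be sketched as a simple data structure. The FMF-related entries below are illustrative placeholders of our own, not the evaluation design the report discusses:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic model: the chain of program components that links
    resources to intended results (inputs -> activities -> outputs -> outcomes)."""
    inputs: list = field(default_factory=list)
    activities: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)

    def chain(self):
        """The program theory, step by step, for evaluators to review."""
        return {"inputs": self.inputs, "activities": self.activities,
                "outputs": self.outputs, "outcomes": self.outcomes}

# Hypothetical entries for an FMF-style program.
fmf = LogicModel(
    inputs=["annual FMF appropriation"],
    activities=["finance purchases of U.S. defense articles and training"],
    outputs=["equipment delivered", "personnel trained"],
    outcomes=["modernized forces", "interoperability with U.S. forces"],
)
```

Laying the components out this way makes it easier to attach a measure to each step, which is the property that lets evaluators track progress toward ultimate goals.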
By specifying the program's theory of what is expected at each step, a logic model can help evaluators define measures of the program's progress toward its ultimate goals. (See appendix IV for details on the logic model.) Since 1979, Egypt has received about $34 billion in FMF assistance which the United States has generally appropriated in annual amounts of approximately $1.3 billion. In fiscal year 2005, Egypt received nearly $1.3 billion in FMF grants, more than 25 percent of the total amount of FMF assistance provided worldwide. FMF assistance to Egypt accounts for 80 percent of Egypt's military procurement budget and has served to replace some of Egypt's Soviet-supplied equipment with modern U.S. equipment. Egyptian officials stated that 52 percent of their military inventory is U.S. equipment as of August 2005. Over the life of the FMF program, Egypt has purchased 36 Apache helicopters, 220 F-16 aircraft, 880 M1A1 tanks, and the accompanying training and maintenance to support these systems, among other items (see fig. 1). According to U.S. and Egyptian officials, the Egyptian military is better equipped to defend its territory and participate in operations in the region. For example, the Egyptian military has participated in peacekeeping missions in East Timor, Bosnia, and Somalia. In addition, the Egyptian military participates with the United States in Operation Bright Star, a biannual military exercise involving forces from other coalition countries, including Germany, Jordan, Kuwait, and the United Kingdom. The purpose of the exercise is to conduct field training to enhance military cooperation among U.S. and coalition partners and strengthen relationships between the United States and Egypt, as well as other participating partners. From 1999 to 2005, the United States provided a total of about $7.8 billion to Egypt in FMF funds. 
Egypt spent almost half of its FMF funds from 1999 to 2005 (about $3.8 billion) on major equipment such as aircraft, missiles, ships, and vehicles (see fig. 2). For example, Egypt spent 8 percent of its FMF funds on missiles, including 822 ground-launched Stinger missiles, 459 air-launched Hellfire missiles, and 33 sea-launched Harpoon missiles. Egypt also spent 14 percent on aircraft, including 3 cargo airplanes; 10 percent on communications and support equipment, including 42 radar systems and 8 switchboards; and 9 percent on supplies and supply operations, including 1,452 masks to protect against chemical and biological agents. Egypt spent the remaining amount of its FMF funds--about $2.5 billion-- on maintenance, weapons and ammunition, and other requirements. DSCA adheres to a total package approach when working with Egypt to procure items through the FMF program, which ensures that the costs of support articles and services for new equipment are included in the total price of the item. In addition to the equipment, support items include training, technical assistance, initial support, and follow-on support. Egyptian officials stated that approximately one-third of their FMF funds are dedicated to follow-on support; one-third to upgrade U.S.-supplied equipment; and nearly one-third to new procurements. The United States permits Egypt to finance its military purchases using a statutory cash flow financing arrangement that allows Egypt to make purchases in one year and pay for them over succeeding years using grants made from future FMF appropriations. The arrangement allows the United States to enter into contracts in advance of--and in excess of--current FMF appropriations for Egypt. Specifically, Egypt is not required to pay the full amount of the LOA up front. Cash flow financing allows Egypt to pay only the amount that signed LOAs require in a given year for specified defense articles and services. 
The cash flow financing arrangement benefits Egypt in that it can receive more defense goods and services than it could under other financing arrangements. However, the program accumulated undisbursed funds because DSCA refrained from making as many new commitments for goods and services as the annual appropriation would have allowed, according to DSCA officials. The cash flow financing arrangement allows significant commitments to be made based on anticipated appropriations. Egypt and Israel are currently the only FMF recipients that may receive defense goods worth more than the annual FMF appropriation and pay for them over multiple years. Cash flow financing enables Egypt to purchase more defense goods and services than under other financing arrangements and to better plan its military purchases over a number of years. For example, Egypt may begin the process of purchasing an F-16 in one year and make installment payments for the item over the life of the contract. Traditional financing options for FMF programs permit countries to make purchases equal to the amount of the particular appropriation in any given year or to save appropriations over multiple years. For example, a country using traditional financing would have to plan its purchases by saving its FMF funds over a period of years to accumulate sufficient funds to make the full payment for an item. All other countries that receive FMF assistance are required to make their FMF purchases in this manner. By 1998, more than $2 billion in undisbursed funds had accumulated in Egypt's FMF account because DSCA did not have a high enough level of commitments to require disbursements in an amount equal to Egypt's entire annual FMF appropriation. DSCA officials stated that previous FMF program managers did not have adequate tools to track Egypt's current FMF commitments against future FMF disbursement requirements.
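The difference between the two financing arrangements can be shown numerically. The package price and appropriation figures below are hypothetical, chosen only to illustrate the mechanics:

```python
def first_purchase_year_traditional(item_price, annual_appropriation):
    """Traditional FMF financing: save each year's appropriation until
    the accumulated funds cover the full price, then sign the purchase."""
    saved, year = 0, 0
    while saved < item_price:
        year += 1
        saved += annual_appropriation
    return year

def installment_schedule_cash_flow(item_price, years):
    """Cash flow financing: sign the LOA in year 1 and pay equal
    installments out of succeeding years' anticipated appropriations."""
    return [round(item_price / years, 2)] * years

# Hypothetical: a $3.9 billion package against a $1.3 billion annual grant
# (figures in millions).
print(first_purchase_year_traditional(3_900, 1_300))  # purchase signed in year 3
print(installment_schedule_cash_flow(3_900, 3))       # committed in year 1, paid over 3 years
```

The sketch also shows the risk the text goes on to describe: under cash flow financing the year-1 commitment assumes that the later appropriations will actually arrive.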
In August 1998, DSCA established a system to project estimated commitments and payments by fiscal year to obtain better control over the cash flow financing process. DSCA developed and is now implementing a plan to disburse the accumulated funds by fiscal year 2007. According to DSCA, OMB officials, and congressional staff, in 1998, members of Congress and OMB became concerned about the large balance in Egypt's FMF account and consulted with DSCA to eliminate it. As a result, DSCA coordinated with OMB and subsequently developed and implemented a plan in 2002, to disburse $300 million of the undisbursed balances every year, in addition to the amounts appropriated annually for Egypt's FMF program, until the undisbursed balances are eliminated in 2007 (see fig. 3). According to DSCA, an estimated $130 million of the undisbursed balances will be held in reserve to cover unexpected costs. Cash flow financing also permits Egypt to order defense articles and services that may be paid for with future appropriations or country funds. As of March 22, 2006, the value of LOAs anticipating future funding totaled approximately $2 billion, some of which are not due for full payment until 2011. Due to the nature of cash flow financing, this number can vary daily because contracts are signed, completed, or modified daily. For example, from 1997 to 2005, the dollar value of these commitments at the end of each fiscal year has varied from $1.3 billion to $3.6 billion, whereas the average amount was $2.6 billion (see fig. 4). These commitments are expected to be paid for with future appropriations. If future appropriations are not available, Egypt will be responsible under the LOA to pay these commitments with other sources. DSCA officials stated that, if there were a change in the anticipated appropriations, the United States would seek funding from Egypt to satisfy the LOAs. If Egypt is unable to pay for the LOAs with its own funds, the U.S. 
government would be liable for the payments due on the underlying contracts executed on Egypt's behalf. To manage payment if expected funding is reduced, DSCA officials stated that DOD would consider a range of steps including reducing the scope of the existing contracts, and stopping new orders, among other things. Additionally, defense articles and services that have not been delivered would not be provided to Egypt, if payment had not been received. As a result, DOD also may use FMF funds held in reserve to pay companies' costs associated with closing down their production lines and terminating the contracts. However, DSCA officials stated that contract termination would be considered as a last resort. Absent the availability of U.S. funds to pay the entire balance of existing contracts, important implications for the achievement of the program goals and U.S. relations with Egypt may arise. For example, if the United States had to terminate multiple contracts on Egypt's behalf because of a reduction in FMF program funding and Egypt's inability to provide funding, the U.S. ability to achieve FMF goals such as military modernization would be affected. In addition, U.S. and Egyptian officials stated that a shift in funding may affect some elements of the U.S.-Egyptian relationship. State and DOD have not conducted an assessment to identify the impacts of a potential reduction in FMF funding below the levels that are planned to be requested. According to applicable internal control standards for the federal government, an organization should identify risks--such as a reduction in funding--and decide upon the internal control activities required to mitigate those risks and achieve efficient and effective operations, reliable financial reporting, and compliance with laws and regulations.Management should then plan a course of action for mitigating risks, developing mechanisms to anticipate, identify, and react to change. U.S. 
officials and several experts we consulted assert that FMF assistance to Egypt has supported U.S. strategic goals such as regional stability, the war on terrorism, and Egyptian-Israeli peace. Furthermore, U.S. and Egyptian officials state that FMF has promoted a modern Egyptian military by replacing 52 percent of its aging Soviet-era military equipment with U.S. equipment, and improved U.S.-Egyptian interoperability through joint military exercises. U.S. officials also stated that the U.S.-Egyptian relationship resulted in expedited access through the Suez Canal and the right to fly over Egyptian territory. Although DOD and State can describe the qualitative benefits the United States receives from Egypt, the departments have conducted no systematic, outcome-based assessment of how the FMF program furthers U.S. goals. GPRA and PART establish the expectation that federal programs determine whether they are meeting agency and program goals--annual and long-term--and how performance can be improved to achieve better results. Officials and several experts assert that Egypt supports the U.S. goals of the FMF program, which are found in State's annual Mission Performance Plan for Egypt and its Congressional Budget Justification. Specific goals include (1) modernizing and training Egypt's military; (2) facilitating Egypt's participation as a coalition partner; (3) providing force protection to the U.S. military in the region; and (4) helping guarantee U.S. access to the Suez Canal and overflight routes. Another key goal of the program is to enhance Egypt's interoperability with U.S. forces. DOD officials stated that broader security cooperation and assistance goals found in DOD's regional Theater Security Cooperation Plan also apply to Egypt's FMF program, which we found to be consistent with State's goals for the program. Egyptian and U.S. officials cited several examples of Egypt's support for U.S. goals. 
For example, Egypt:

* deployed about 800 military personnel to the Darfur region of the Sudan;
* trained 250 Iraqi police and 25 Iraqi diplomats in 2004;
* deployed a military hospital and medical staff to Bagram Air Base in Afghanistan from 2003 to 2005, where nearly 100,000 patients received treatment;
* provided over-flight permission to 36,553 U.S. military aircraft through Egyptian airspace from 2001 to 2005; and
* granted expedited transit of 861 U.S. naval ships through the Suez Canal during the same period and provided all security support for those ship transits.

State and DOD have not systematically evaluated how the FMF program specifically contributes to achieving U.S. goals, particularly modernization and interoperability. DOD currently conducts assessments of security assistance activities in the region and regularly reviews selected FMF-funded purchases at the country level. However, these assessments do not provide information on specific FMF goals for Egypt or progress made in achieving them. DOD rates the collective effectiveness of a mix of programs on a regional basis, including FMF, International Military Education and Training, military-to-military contacts, and others. At the country level, DOD and Egyptian officials regularly review the status of selected FMF-funded purchases through financial management, program management, and in-progress reviews. In addition, a Military Coordination Committee, comprised of senior DOD and Egyptian military officials, meets annually to discuss specific FMF purchases and types of equipment that have been or may be procured. These efforts reflect DOD's attention to assessing broad activities and certain financial and management aspects of FMF to Egypt, but they do not provide a comprehensive assessment of how FMF contributes to achieving U.S. goals.
We have reported that, although it can be difficult to isolate one program's effect from another's or to assess a program's impact or benefit, such assessments can help decision makers make more informed choices when faced with limited resources and competing priorities. While some U.S. foreign policy and security goals, such as regional stability or maintaining a strong U.S.-Egyptian relationship, may be difficult to measure quantitatively, key FMF program goals--such as interoperability and modernization--better lend themselves to measurement. DOD has not defined the degree of interoperability that it seeks to achieve with the Egyptian military, nor has it determined how to measure progress towards this goal. According to DOD doctrine, interoperability is the ability of communications and other systems, units, or forces to provide services to each other so that forces can operate effectively together and information can be exchanged directly and satisfactorily. The doctrine also states that the degree of interoperability should be defined in specific cases. Achieving interoperability in Egypt is complicated by both the lack of a common definition of interoperability and limitations on some types of sensitive equipment transfers. CENTCOM officials also stated that they would prefer to operate with Egyptian forces according to the interoperability standard used by the United States. They noted, however, that the Egyptian military's definition of interoperability is limited to participation in joint exercises, such as Operation Bright Star. Additionally, Egypt and the U.S. use interim short-term solutions to minimize limitations with respect to interoperability. For example, U.S. officials stated they have established temporary communications installations on certain equipment and have flown alongside Egyptian C-130s to facilitate Egypt's participation in a joint exercise. Egypt lacks specific equipment that limits its interoperability with U.S. 
forces, but DOD has not formally assessed this limitation or its implications for interoperability. According to DOD policy, the desired level of interoperability cannot be ascertained within a general statement of policy but is dependent on factors unique to certain areas--such as compatible doctrine, tactics, techniques, and procedures. U.S. CENTCOM officials acknowledged that how interoperability in Egypt is measured would vary greatly depending on the operation conducted, the type and size of systems used, and the timing of events. State officials acknowledged that it is possible to measure levels of interoperability through specific capabilities demonstrated by Egyptian forces participating in specific operations. For example, it would be possible to measure the capabilities of Egyptian forces participating in peacekeeping operations. Similarly, DOD has not defined how it will determine the extent to which FMF assistance contributes to the modernization of Egypt's armed forces. Currently, the Egyptian benchmark is based on a percentage of U.S.-versus-Soviet equipment in Egypt's inventory, as reported by the Egyptian military. According to Egyptian military officials, 52 percent of Egypt's current military inventory is U.S. equipment. Egypt's goal is to increase this share to 66 percent by 2020. DOD officials stated that they believe Egypt's ratio of U.S.-to-Soviet equipment is accurate but acknowledged that they do not maintain their own data to support the statistics. Nonetheless, other factors may be useful indicators to measure progress toward modernization, such as the technical sophistication of Egypt's units, weapons systems, and equipment to provide humanitarian assistance; the readiness of Egyptian troops to deploy to a peacekeeping mission; or the degree to which Egypt's troops are capable of maintaining a desired level of operational activity during Operation Bright Star.
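The inventory-share benchmark described above reduces to simple arithmetic. The sketch below is a hypothetical illustration: the 52 percent baseline and the 66 percent goal for 2020 come from the report, but the equipment counts and the progress measure are invented for the example.

```python
# Hypothetical sketch of the modernization benchmark: the share of
# U.S.-origin equipment in Egypt's inventory, and progress from the
# reported 52 percent baseline toward the 66 percent goal for 2020.
# The equipment counts below are invented for illustration.

def us_share(us_items: int, soviet_items: int) -> float:
    """Percentage of the inventory that is U.S.-origin equipment."""
    return 100.0 * us_items / (us_items + soviet_items)

def progress_toward_goal(current_pct: float,
                         baseline_pct: float = 52.0,
                         goal_pct: float = 66.0) -> float:
    """Fraction of the baseline-to-goal gap that has been closed."""
    return (current_pct - baseline_pct) / (goal_pct - baseline_pct)

current = us_share(590, 410)                    # hypothetical counts -> 59.0
print(round(current, 1))                        # 59.0
print(round(progress_toward_goal(current), 2))  # 0.5 -> halfway to the goal
```

As the report notes, a share of inventory is only one possible indicator; readiness or demonstrated capabilities would require separate measures.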
Developing these and other indicators would help DOD measure the degree of modernization and, in turn, leave it better positioned to determine whether Egypt's goals are reasonable. While measuring goals in these areas presents some difficulties, legislation and administration initiatives have recognized the need to do so. GPRA emphasized the importance of evaluating federal programs. Program evaluations help policymakers address whether program activities contributed to their stated goals and can help improve programs and target resources more effectively. In addition, OMB recently implemented PART to assess and improve program performance so that federal agencies can achieve better results. A PART review is intended to assess aspects of the program in order to form conclusions about program benefits by looking at the program's purpose and design, strategic planning, management, and results--that is, whether the program is meeting its annual and long-term goals. To date, OMB has not conducted a review of the FMF program in the Middle East region. For the past 27 years, the United States has provided Egypt with more than $34 billion in FMF assistance to support U.S. strategic goals in the Middle East. Most of the FMF assistance has been in the form of cash grants that Egypt has used to purchase U.S. military goods and services. Like Israel, and unlike all other recipients of U.S. FMF assistance, Egypt can use the prospects of future congressional appropriations to contract for defense goods and services that it wants to procure in a given year through the FMF program. Until 1998, DSCA limited the value of new commitments to less than the annual appropriation, thereby allowing more than $2 billion in undisbursed funds to accumulate. If the plan to eliminate the undisbursed funds for the Egypt FMF program is realized, these funds will be depleted by the end of fiscal year 2007.
As Congress debates the appropriate mix between military and economic assistance to Egypt, the inherent risks of such flexible financing warrant careful attention and assessment by State and DOD. Similarly, both State and DOD could do a better job assessing and documenting the achievement of goals as a result of the $34 billion in past U.S. FMF assistance and the $1.3 billion in annual appropriations planned to be requested. Periodic program assessments that are documented and based on established benchmarks and targets for goals would help Congress and key decision makers make informed decisions. We agree that expedited transit in the Suez Canal; support for humanitarian efforts in Darfur, Sudan, and elsewhere; and continuing offers to train Iraqi security forces are important benefits that the United States derives from its strategic relationship with Egypt. However, without a common definition of interoperability for systems, units, or forces, it is difficult to measure the extent of current and desired levels of interoperability, and it is not clear how the Egyptian military has been or could be transformed into the modern, interoperable force articulated in the U.S. goals for the Egypt FMF program. Given the longevity of the FMF program, its relatively high appropriation levels, the strategic importance of Egypt in the Middle East, and congressional interest in assessing the balance between economic and military assistance provided to Egypt, we recommend that the Secretaries of State and Defense take the following two actions:
* conduct an assessment of the impact of potential shifts in future appropriations on the Egypt FMF program. This would include identifying risks, planning a course of action for mitigating those risks, and developing mechanisms to anticipate, identify, and react to change; and
* conduct periodic program-level evaluations of the FMF program to Egypt.
The United States should define specific objectives for the goals and identify appropriate indicators that would demonstrate progress toward achieving those objectives. Specifically, we recommend that the agencies define the current and desired levels of modernization and interoperability the United States would like to achieve. This should include establishing benchmarks and targets for these and other goals. We provided a draft of this report to the Secretaries of Defense and State for their review and comment. DOD and State provided written responses that are reprinted in appendixes V and VI. Both departments also provided us with technical comments, which we incorporated in the report as appropriate. In commenting on our draft report, DOD concurred with our recommendations but stated that we should direct the recommendations primarily to the Secretary of State. DOD and State are joint partners in the FMF program for Egypt--State sets the broad goals for the program, while DOD works closely with Egypt's military to implement the program. Therefore, the recommendations are appropriately addressed to both departments. State did not indicate whether it concurred with our recommendations. With regard to our first recommendation, State emphasized that steps to mitigate risks are already in place, such as maintaining reserves to pay costs associated with terminating FMF contracts. However, contract termination reserves are last-resort measures that do not represent a comprehensive approach to reducing the risk associated with possible fluctuations in the resources of the FMF program for Egypt. As we specify in the report, a risk assessment should include other measures such as reducing the scope of existing contracts, stopping new orders, or selling undelivered defense goods. An assessment that identifies the risks, including a plan to mitigate and anticipate these risks, would be appropriate and consistent with federal government internal control standards.
On our second recommendation, State noted that it will work with DOD to better define measures for assessing Egypt's modernization goals but stated that defining a level of interoperability would be speculative. Improving Egypt's ability to operate with the United States and coalition partners has been a critical yet unmeasured goal of the program. At a minimum, DOD and State can begin to measure Egyptian forces' capabilities to operate with allied countries in military exercises or peacekeeping operations. Evaluating the degree to which the program meets its goals would be important information for congressional oversight, particularly as Congress assesses the balance between economic and military assistance to Egypt, as well as the impact on U.S. foreign policy interests. State commented that our report found that cash flow financing caused the accumulation of undisbursed balances in the FMF program for Egypt. DOD made the same comment in its technical comments. We modified the language in our report to clarify that the flexibilities of cash flow financing as managed by DSCA in the past allowed for the accumulation of large undisbursed balances. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and to the Secretaries of Defense and State and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site, http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-8979 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII.
To describe the types and amounts of Foreign Military Financing (FMF) assistance to Egypt, we examined government and private sector documents, databases, and reports; we also interviewed U.S. government officials. Specifically, we interviewed officials from the U.S. Department of Defense (DOD) and the Defense Security Cooperation Agency (DSCA). We examined DSCA data from the Defense Security Assistance Management System (DSAMS) database for the period 1999 to 2005. We sorted and categorized this data by type of procurement, year, military service, and cost to determine the composition of purchases made by funds provided under the Egypt FMF program. The broad categories of equipment and services were then examined for specific content and type of equipment, training, support, or service. In addition, we conducted multiple interviews with database administrators and information technology specialists to assess the reliability of the data in this system. We determined that the DSAMS database is reliable for the purposes of this report. We also interviewed officials and reviewed documentation from the U.S. Office of Military Cooperation in Cairo (OMC), along with U.S. Embassy officials, to better understand the nature of the program and the types of equipment and services procured through this program. In addition, we interviewed Egyptian military officials in Cairo and Ministry of Foreign Affairs officials at the Egyptian embassy in Washington, D.C. To assess the financing arrangements used to provide FMF assistance to Egypt and to determine how undisbursed balances accumulated in the Egypt FMF program accounts, we examined data from DSCA's Credit System Database and interviewed officials from the DSCA Middle East and South Asia Division and Comptroller's Office, as well as the Defense Finance and Accounting Service.
To identify the amounts of accumulated undisbursed balances, we examined fiscal data by annual appropriation, total amount of accumulated undisbursed balances, and amount of funds that had been disbursed by fiscal year. We analyzed this data by fiscal year and interviewed the database administrator and information technology specialists responsible for this database. We determined that the Credit System Database is reliable for the purposes of this report. To assess the manner in which the undisbursed balances were being eliminated, we also examined three DSCA databases used to manage the financing arrangement for the Egypt FMF program: (1) a cash-flow tracking database that monitors letters of offer and acceptance (LOA) and the amount of funds needed in each fiscal year, (2) a fiscal year database that monitors the time needed to execute a procurement request, and (3) Egypt's Five Year Defense Plan. We interviewed the custodians of these databases in DSCA's Middle East and South Asia Division to develop an understanding of how they are used to manage the cash flow financing arrangement and the program more generally. We also met with the Office of Management and Budget (OMB) to gain an understanding of the plan to eliminate the accumulated undisbursed balances. We did not examine or assess U.S. economic assistance to Egypt. To evaluate how the United States assesses FMF assistance to Egypt and its contribution to the advancement of U.S. foreign policy and security goals, we examined multiple U.S. and Egyptian government documents and interviewed U.S. and Egyptian government officials and foreign policy specialists. Specifically, we obtained and analyzed the State Department's mission and bureau performance plans to understand U.S. foreign policy and security goals and priorities, and how the executive branch evaluates those goals.
Similarly, we obtained DOD theater and country security cooperation plans and compared their goals and priorities to understand how DOD would measure results against them. We then examined U.S. Central Command's (CENTCOM) evaluation tools to understand what metrics it used to evaluate program results. In addition to U.S. and Egyptian government officials, we spoke with foreign policy experts from the Center for Strategic and International Studies, Georgetown University, the Council on Foreign Relations, the Heritage Foundation, the U.S. Institute of Peace, the Middle East Institute, the National Defense University, the Carnegie Endowment for International Peace, and the Brookings Institution. To assess DOD's evaluations of security assistance goals, we reviewed various assessments and identified key components that are inherent in all of these assessments. We also researched other potential models that may assist in program evaluation. We interviewed officials from State and DOD in Washington, D.C., who are responsible for administering and implementing the FMF program to Egypt. We also met with Egyptian government officials in Washington, D.C. We traveled to Cairo and met with State and DOD officials at the U.S. Embassy and the OMC. In addition, we interviewed CENTCOM officials responsible for the FMF program to Egypt as well as Egyptian Ministry of Defense officials in Cairo. We performed our work from May 2005 through March 2006 in accordance with generally accepted government auditing standards. The review and approval process for FMF-funded purchases begins with the Egyptian military requesting the purchase of certain defense articles or services, and ends with a signed letter of offer and acceptance for those goods or services. Figure 5 below depicts the review and approval process. The relevant Egyptian military department sends a letter of request (LOR) to the Egyptian Armament Authority, which then forwards it to the U.S. OMC in Cairo to be processed.
If approved, the LOR is sent back to the Egyptian Armament Authority and then to the Egyptian Procurement Office, which forwards it to DSCA and the appropriate U.S. military department. The relevant U.S. military departments and agencies--including the Army, Navy, Air Force, the National Security Agency, and the Defense Logistics Agency--generate a Letter of Offer and Acceptance (LOA) and send it to DSCA to coordinate with the State Department and notify Congress, if required. Once endorsed by DSCA or the relevant military department or agency, the LOA is sent to Egypt for acceptance and signature. After acceptance, LOAs are sent to DSCA, DFAS, and the relevant military department or agency. The country program director for Egypt registers the LOA in various databases that track the program. The principal U.S. entities responsible for administering and implementing the FMF program are State and DOD. The table below further describes their roles and responsibilities. The logic model we provide below is a foundation and first step for organizing the elements of a program. It is a tool that may help program managers identify the necessary elements for an evaluation--but it is not a complete evaluation itself. This model can also be used to communicate how program funds are used to achieve program goals. Figure 6 depicts how FMF dollars (inputs), training and procurement (activities), and the resulting equipped and trained military (outputs) can be linked to enhanced modernization and interoperability (outcomes). We are not prescribing this or any other specific model, and the figure below provides a high-level example in aggregate that is meant to be illustrative and does not define all of the exact inputs, activities, and outputs of the FMF program for Egypt. A program evaluation would typically include a breakdown of these aggregated elements in further detail and would include definitions of standards, benchmarks, and targets for each program goal. Ms.
Muriel Forster, Assistant Director. In addition, Nanette J. Barton, Stephanie Robinson, Ann M. Ulrich, Lynn Cothern, Martin De Alteriis, Grace Lui, and Christine Bonham made significant contributions to this report.
Since 1979, Egypt has received about $60 billion in military and economic assistance with about $34 billion in the form of foreign military financing (FMF) grants that enable Egypt to purchase U.S.-manufactured military goods and services. In this report, GAO (1) describes the types and amounts of FMF assistance provided to Egypt; (2) assesses the financing arrangements used to provide FMF assistance to Egypt; and (3) evaluates how the U.S. assesses the program's contribution to U.S. foreign policy and security goals. Egypt is currently among the largest recipients of U.S. foreign assistance, along with Israel, Afghanistan, and Iraq. Egypt has received about $1.3 billion annually in U.S. foreign military financing (FMF) assistance and has purchased a variety of U.S.-manufactured military goods and services such as Apache helicopters, F-16 aircraft, and M1A1 tanks, as well as the training and maintenance to support these systems. The United States has provided Egypt with FMF assistance through a statutory cash flow financing arrangement that permits flexibility in how Egypt acquires defense goods and services from the United States. In the past, the Defense Security Cooperation Agency (DSCA) accumulated large undisbursed balances in this program. Because the flexibilities of cash flow financing permit Egypt to pay for its purchases over time, Egypt currently has agreements for U.S. defense articles and services worth over $2 billion--some of which are not due for full payment until 2011. The Departments of State (State) and Defense (DOD) have not conducted an assessment to identify the risks and impacts of a potential shift in FMF funding. Officials and many experts assert that the FMF program to Egypt supports U.S. foreign policy and security goals; however, State and DOD do not assess how the program specifically contributes to these goals. U.S. and Egyptian officials cited examples of Egypt's support for U.S. 
interests, such as maintaining Egyptian-Israeli peace and providing access to the Suez Canal and Egyptian airspace. DOD has not determined how it will measure progress in achieving key goals such as interoperability and modernizing Egypt's military. For example, the U.S. Central Command, the responsible military authority, defines modernization as the ratio of U.S.-to-Soviet equipment in Egypt's inventory and does not include other potentially relevant factors, such as readiness or military capabilities. Achieving interoperability in Egypt is complicated by the lack of a common definition of interoperability and limitations on some types of sensitive equipment transfers. Given the longevity and magnitude of FMF assistance to Egypt, evaluating the degree to which the program meets its goals would be important information for congressional oversight, particularly as Congress assesses the balance between economic and military assistance to Egypt as well as the impact on U.S. foreign policy interests.
In 1991, a group of for-profit and nonprofit public and private funders started NCDI, currently known as Living Cities, to revitalize urban communities. NCDI is composed of 17 major corporations, foundations, and the federal government--HUD and the Office of Community Services of the Department of Health and Human Services. In its first decade of operation, NCDI assembled a community development system composed of two of the largest national community-building organizations, LISC and Enterprise, which administer the initiative; 300 CDCs in 23 cities; and local operating support collaboratives--local foundations, banks, corporations, and local governments--that identify local technical expertise and governmental and economic resources and draw on them to sustain and enhance the capacity of CDCs. As of September 1, 2001, NCDI had provided $234.8 million to its 23 cities. Of this amount, about three-quarters was for project funding; the balance, about $60 million, supported capacity building through operating grants and training. LISC, founded in 1979 and headquartered in New York City, is the largest community-building organization. LISC's mission, involving hundreds of CDCs, is to rebuild whole communities by supporting these groups. LISC operates local programs in 38 urban program areas and 70 rural communities. According to LISC, it has raised more than $4 billion from over 2,200 investors, lenders, and donors, which has leveraged an additional $6 billion in public and private sector funds. In addition, according to LISC, it has helped 2,200 CDCs build or rehabilitate more than 110,000 affordable homes, created over 14 million square feet of commercial and community space, and helped generate 40,000 jobs. Enterprise was founded in 1982 as a vehicle for helping low-income people revitalize their communities. Headquartered in Columbia, Maryland, Enterprise has offices in 18 communities across the nation.
Enterprise works with a network of 2,200 nonprofit organizations, public housing authorities, and Native American tribes in 800 locations, including more than 100 CDCs. The Enterprise Foundation provides these organizations with technical assistance, training, short- and long-term loans, equity investments, and grants. According to Enterprise, it has raised nearly $430 million to support community-based development that has helped produce 17,000 affordable homes and assisted 20,000 low-income individuals in finding employment. HFHI, founded in 1976 and headquartered in Americus, Georgia, is a nonprofit ecumenical Christian housing ministry (faith-based organization) seeking to eliminate substandard housing. HFHI builds and rehabilitates houses with the help of homeowner (partner) families, volunteer labor, and donations of money and materials. Work is done at the local community level by affiliates that coordinate all aspects of home building, including fund-raising, building site selection, partner family selection and support, construction, and mortgage servicing. HFHI provides its affiliates with information, training, and a variety of other support services. Affiliates are primarily volunteer driven, though some have their own staff. Affiliates are monitored and supported by HFHI staff across the country. HFHI currently has over 1,669 affiliates, and in 27 years has built over 150,000 houses worldwide, including more than 40,000 homes in the United States. Figure 2 shows the 526 cities where HFHI affiliates have directly received Section 4 funds. YBUSA was founded in 1990 and is headquartered in Somerville, Massachusetts. It is a national nonprofit organization that provides capacity-building grants on a competitive basis to support the efforts of organizations that are planning or operating Youthbuild programs in their communities, many of which are funded by the HUD Youthbuild grant program.
A Youthbuild program is a comprehensive youth and community development program as well as an alternative school. Youthbuild programs, which offer job training, education, counseling, and leadership development opportunities through the construction and rehabilitation of affordable housing, serve young adults ages 16 to 24 in their own communities. Participants split their time between the construction site and the classroom, where they earn GEDs or high school diplomas and prepare for jobs or college. The buildings that are constructed or rehabilitated during the program are primarily low-income housing. YouthBuild USA serves as the national intermediary and support center for over 200 Youthbuild programs. Over half of the Youthbuild programs are members of YBUSA's affiliated network. As shown in figure 3, YBUSA affiliates located in 106 cities have received Section 4 funds from 1997 to 2001. The scope of eligible activities funded by Section 4 of the HUD Demonstration Act of 1993 has changed over the years. Originally, the act focused on providing funding for capacity building in 23 urban areas. Currently, it provides funding to groups and activities in urban, rural, and tribal areas nationwide. Section 4 authorized HUD to join corporations and foundations as an equal partner in NCDI to develop the capacity and ability of CDCs in 23 cities. In 1997, Section 4 was expanded to include two more grantees, HFHI and YBUSA, as well as more cities and rural and tribal areas. The grantees' organizational structures and missions vary, as do their strategies for awarding Section 4 funds and the types of activities they authorize. Each grantee has initiatives in rural and tribal areas. Additional federal funding, such as Community Development Block Grants, is also available to grantees for capacity building and technical assistance. NCDI started in 1991 with seven large national foundations and a major insurance company and was administered by LISC and Enterprise.
This consortium of funders believed CDCs could achieve greater and more lasting success if they could count on a significant, reliable commitment of multiyear operating support, project financing, technical assistance, and training. To date, NCDI has had four phases (rounds) of funding. In the first phase (1991-93), NCDI funders pledged $62.9 million (see table 1). With the enactment of Section 4, HUD joined phase II of NCDI as an equal partner; this phase also included 12 private foundations and financial institutions. Congress' goal in authorizing HUD to participate in NCDI was to develop the capacity and ability of CDCs to undertake community development and affordable housing projects and programs. HUD's involvement resulted in some changes to the way funds were disbursed. While the foundations provided funding through Living Cities (NCDI), which in turn distributed grant funds to LISC and Enterprise, HUD distributed its funding directly to LISC and Enterprise. In addition, unlike other NCDI funders, HUD provided funding only after expenses were incurred, monitored funding more closely, and restricted uses to capacity-building activities. In 2001, 17 foundations and corporations committed another 10 years to the initiative. Congress did not appropriate funds for HFHI and YBUSA until 1997 (see table 2). At that time, LISC and Enterprise were given the option of using Section 4 funding to continue NCDI activities in the original 23 cities or to undertake new non-NCDI activities in other cities, which expanded the geographical dispersion of Section 4 funding. In addition, Congress required the grantees to set aside a portion of Section 4 funding for rural and tribal areas. Unlike the NCDI activities, whose funding objectives were determined by the responsible funders, LISC and Enterprise worked directly with HUD in creating the objectives for non-NCDI cities.
LISC and Enterprise are national organizations that use local program offices to provide financial and technical support to CDCs. The staff at the local program offices work with CDCs to achieve community-driven goals. For example, through its Boston local office, LISC provided several Section 4 grants to the Madison Park Development Corporation. A $78,000 grant was used to help the CDC improve the Dudley Square Business district in the Roxbury neighborhood of Boston. The Cleveland Enterprise office provided Section 4 funds to the Cleveland Neighborhood Partnership Program, a local support collaborative that provides organizational and real estate development and neighborhood planning for Cleveland CDCs. According to HFHI and YBUSA officials, these organizations provide direct grants to affiliates but operate somewhat differently. HFHI has provided grants to affiliates on a 3-year diminishing basis to hire new staff or establish warehouse facilities, with an expectation of increasing house production by at least 15 percent. In addition, HFHI has established regional support centers to bring technical assistance closer to affiliates. YBUSA uses Section 4 funds to provide a variety of grants to its affiliated network, such as operating grants, program enhancement grants, special assistance grants, and scholarships to staff and students. In addition, YBUSA has used Section 4 funds to build its capacity to serve as a national support center and to provide technical assistance and training. LISC and Enterprise consider the subrecipient's stage of development when making Section 4 funding decisions. For example, a new organization might receive Section 4 funds to pay for a portion of the salary of the executive director, whereas more established CDCs might receive funding to upgrade their financial management software. All grantees stressed that because capacity building takes time, they provide multiyear support to subrecipients. 
However, three of the four grantees indicated that they generally fund subrecipients in ways that encourage the organizations to become financially independent. Officials from LISC and Enterprise explained that although some subrecipients receive multiple grants for several years, the grants are small enough to keep subrecipients from becoming dependent on Section 4 funds for daily operations. As noted earlier, HFHI's grants, which are provided to hire new staff, diminish over a 3-year period. According to HFHI, the affiliates' gradual absorption of staff costs leads to independence from--rather than dependence on--federal funding. YBUSA, however, has provided Section 4 funding to affiliates to pay for general operations during years when they had not received funding under HUD's Youthbuild program. Generally, Section 4 funds are used to pay for staff salaries, training, technology, and office supplies and equipment and to fund the operating support collaboratives. For example, with its 1997 funds HFHI provided direct grants to 60 affiliates to pay for staff salaries (usually an executive director). The YouthBuild Boston affiliate used Section 4 funds to hire an administrative coordinator and enhance its technological capabilities. The Washington, D.C., LISC office provided Section 4 funding to a local CDC to pay for some staff training and to purchase equipment and other supplies to outfit a homebuyer's training center. Enterprise has used Section 4 funds to develop on-line tools, such as a best practices database, and to bring current technology to CDCs. For example, Enterprise awarded one nonprofit organization, Citizen's Housing and Planning Association (CHAPA) in Boston, Section 4 funds to administer the NET-Works program, a program to enhance the technological capacity of CDCs in the New England region. As a result, 36 CDCs received computer equipment, Internet access, and assistance in developing websites. 
Figure 4 illustrates the broad impact that Section 4 funding, channeled through this nonprofit organization, had on other CDCs. Congress did not require grantees to set aside Section 4 funding for rural and tribal areas until 1997. All four grantees currently have initiatives that focus on these areas. For example, LISC has a rural office that supports both a national program and a program in the Mississippi River Delta Region of the United States covering 56 counties and parishes. In fiscal years 1997 through 2002, LISC awarded Section 4 grants totaling approximately $9 million to rural CDCs. Enterprise has awarded $6.2 million in Section 4 grants to rural CDCs. Unlike LISC, Enterprise does not have a rural office. Enterprise services its rural and tribal subrecipients through partnerships with other state and regional rural agencies and the Housing Assistance Council, which administers Enterprise's Rural Capacity Building Initiative, and through its regional and local office structure. Although 218 of the 1,003 LISC and Enterprise CDCs provide services to rural and tribal areas, many of them cover large geographical areas. For example, 57 of the 72 rural CDCs that are funded by LISC operate in more than one county, and 64 of the 146 rural CDCs that are funded by Enterprise operate in more than one county. Figure 5 shows the cities where LISC and Enterprise subrecipients who work in rural areas are located and the multiple counties they serve. HFHI makes an effort to reserve at least one-third of its Section 4 funding for its rural affiliates. HFHI awarded $4.6 million of its fiscal year 1997 Section 4 funds to 60 affiliates, of which 33 were rural. According to YBUSA officials, meeting the required set-aside has been a challenge. YBUSA's outreach efforts have included encouraging rural affiliates to apply for planning, operating, and program enhancement grants and for specialized technical assistance. 
According to a YBUSA official, over the course of the 1997 grant, about $2.5 million of YBUSA's $7.6 million allocation was focused on rural and tribal programs and partly rural and tribal programs. Of this amount, about $1.3 million was for direct grants to sites and about $1.2 million was for services to sites. A YBUSA official told us that as of July 2003, 84 of the 203 operating Youthbuild programs were rural or partly rural. LISC, Enterprise, HFHI, and YBUSA also receive capacity-building and technical assistance funds from other HUD programs (table 3). The primary difference between Section 4 funding and other federal funding is that the other federal funding for capacity-building and technical assistance is generally awarded competitively, while Section 4 funding is noncompetitive. Several federal programs offer capacity-building funds: CDBG, HOME, and Housing Opportunities for Persons with AIDS (HOPWA). All grantees' Section 4 capacity-building funds exceed those received from other federal programs. While it was difficult to demonstrate empirically that Section 4 directly influenced private sector involvement in community development activities, funders and grantees said that federal involvement served as a catalyst for private fund-raising and provided credibility to subrecipients in terms of their ability to comply with the requirements that are associated with federal funding. Some local funders of CDCs and affiliates were not aware of the specific Section 4 funding the subrecipients received but indicated that both federal funding and diverse funding streams are important. Since matching funds can be raised nationally, locally, or both, each grantee employs its own matching policy and raises funds from foundations, corporations, banks, individual donors, and other nongovernmental sources. Since the creation of Section 4, grantees have raised nearly $800 million from the private sector, in matching and other cash and in-kind contributions. 
The grantees and nearly all of the private lenders and foundations we contacted stressed the importance of federal funding in leveraging funds from the private sector. For example, officials from LISC, Enterprise, and Living Cities indicated that private funding and lending have increased since HUD's involvement. In addition, Enterprise officials indicated that the private sector believes that federal funding provides an incentive to work in areas and projects that would be less likely to receive funding without federal involvement. HFHI officials said that federal funding is imperative because it is the only way for all-volunteer organizations to transition into staff-managed, volunteer-based organizations. YBUSA officials said that federal funding, especially funding that leverages private funding, has enabled YBUSA to be proactive in assisting Native American and rural programs. NCDI lenders and funders indicated that Section 4 funding had both a psychological and a real impact on private sector involvement in the initiative. For example, one senior executive from a major lending institution indicated that federal participation in NCDI provided funders with a symbolic and financial incentive to join the NCDI consortium. Symbolically, federal funding provides a sense of credibility to NCDI, as funders see federal participation as a sign of good housekeeping and reduced risk. Financially, federal participation adds more money to NCDI capacity-building initiatives, in turn enabling subrecipients to raise more private funding. Another lender said that HUD's participation in a CDC through Section 4 funding served as an indication of good management and internal controls. An insurance company also noted that Section 4 funding showed that the federal government was strongly committed to a coordinated effort to build CDC capacity, and a foundation told us that the federal presence legitimized NCDI as the CDC capacity-building vehicle with the greatest payoff. 
Furthermore, nearly all of the YBUSA and HFHI private funders that we interviewed said that federal funding was an incentive for their participation in the program. For example, one funder said that federal support was like a "seal of approval." Another funder said that Section 4 funding created a positive incentive because the availability of invaluable hard-to-get federal funding increased the viability of any project. Most funders and lenders that provide funding directly to CDCs and affiliates stressed that federal funding was beneficial, but some of those local funders were not aware that subrecipients received Section 4 funds. Some LISC and Enterprise subrecipient funders explained that federal funding and diverse funding streams were characteristics of a viable organization. One funder suggested that public funding was critical, since private philanthropy could only do so much. Another foundation indicated that it looked to organizations that had a diversified funding structure, since it could not provide sole support for an organization. The four funders we spoke with that provided funding directly to the YouthBuild Boston affiliate were split on whether federal participation was an incentive to their involvement. Two said that federal participation was an incentive; while the other two said their decision to provide funding was based solely on the affiliate's mission. Officials from most of the five organizations we spoke with that provided funding to an HFHI affiliate in Rhode Island indicated that federal participation was not an incentive, but two said that having other sources of funding encouraged them to participate. An official from one organization indicated that while federal funding indirectly provides an incentive for participation, the organization provided funding primarily based on the affiliate's reputation and mission. 
Section 4 calls for significant private sector participation in community development initiatives by requiring that grantees match each dollar awarded with three dollars in cash or in-kind contributions from private sources. Matching funds are raised nationally and locally and come from nongovernmental sources including private foundations, corporations, banks, and individual donors. Each grantee has its own matching policy and procedures for complying with the matching requirement. LISC and Enterprise generally meet their matching requirement at the national level but encourage CDCs to seek private contributions to aid in the match. However, LISC requires subrecipients in rural areas to raise at least $1 for each $1 they receive; the remainder of the match is raised nationally. Conversely, HFHI and YBUSA require their affiliates to raise at least $3 for every dollar of Section 4 funding they receive. Both HFHI and YBUSA impose this requirement on all of their affiliates, including those in rural and tribal areas; however, if YBUSA's rural and tribal affiliates cannot raise the 3-to-1 match, the national organization will provide the difference. Officials from the four grantees told us that raising the private matching funds had not been a problem. For example, for the 1997 grant, HFHI and its 60 affiliates that received Section 4 funding raised almost $155.6 million in private contributions. YBUSA and its affiliates raised $26.6 million in private contributions to match its $7.6 million grant. Since the four grantees became eligible for Section 4 funding, they have raised nearly $800 million from the private sector in matching funds, although it was difficult to demonstrate empirically that Section 4 funding influenced the grantees' fund-raising, owing to external factors such as economic trends and private sector interests. Between 1994 and 2001, LISC and Enterprise raised $457 million, and from 1997 to 2002, HFHI and YBUSA raised $341 million (see table 4). 
In addition to providing funding, the private sector has contributed in-kind services to CDCs, including managerial skills, mentoring, and volunteer labor. For example, representatives from the private sector serve on LISC's local advisory boards to help local program offices make funding decisions and are members of operating support collaboratives in several cities. HFHI's local affiliates use volunteers for office and construction work and for their boards of directors. HUD monitoring is limited to desk reviews of the grantees' compliance with their grant agreements. In general, the grant agreements require several kinds of reporting information, including work plans, semiannual or quarterly financial status reports, requests for grant payment vouchers, and final reports. However, HUD's involvement in reviewing grantee work plans differs for NCDI and non-NCDI activities. Since HUD does not directly monitor the subrecipients' capacity-building activities, it relies on the grantees to monitor and oversee them. The grantees have several mechanisms in place to ensure that subrecipients are complying with their individual grant agreements. However, in a subset of files we reviewed, we found that a grantee had funded an ineligible activity for one subrecipient. Also, HUD does not have specific impact measures in place for Section 4. HUD's efforts to monitor the grantees include desk reviews of work plans, annual performance reports, semiannual financial status reports, requests for grant payment vouchers, and final performance reports. According to HUD, the four grantees sign grant agreements that obligate them to comply with HUD and OMB requirements. For example, grantees must submit work plans that identify when and how federal funds and nonfederal matching resources will be used and present performance goals and objectives in enough detail to allow for HUD monitoring. 
In addition, the grant agreements require grantees to submit annual reports showing actual progress made in relation to the work plans, plus semiannual financial status reports that show private sector matches and grant expenditures to a certain date. Grantees are not permitted to begin activities or to draw down funds until HUD approves the work plans. Furthermore, the grant provisions require that in order to receive payment, grantees must submit a payment voucher with supporting invoices that provide enough information to allow HUD to determine whether the costs are reasonable in relation to the work plan's objectives. Finally, the grant agreement stipulates that within 90 days of completing the grant award, the grantee must submit a final report summarizing all the activities conducted under the award, including any significant program achievements and problems and the reasons for the program's success or failure. HUD officials told us that staffing constraints caused the agency to focus mostly on grantee work plans and payment vouchers. HUD reviews how the grantees select subrecipients, set benchmarks, and plan to build capacity. HUD uses different processes to review NCDI and non-NCDI work plans. As an equal partner, HUD reviews NCDI's work plans together with other funders and meets twice a year to discuss NCDI initiatives and goals for each city. However, HUD reviews and approves non-NCDI work plans by itself. A HUD official told us that HUD staff focus most of their attention on the funding aspects of the work plans. HUD officials told us that they check the semiannual financial status reports and accompanying narratives to determine whether the expended amounts are in line with the amounts stated in the work plans. Section 4 grant funds are provided to grantees after costs are incurred, so grantees must periodically submit vouchers and supporting documentation that detail expenditures by city or project in order to receive payment. 
HUD staff review the vouchers and supporting documentation to ensure that funds are used for the eligible activities stated in the work plans and that expenditures such as travel and indirect costs are within HUD guidelines and do not exceed available funding. HUD has denied payments for activities not contained in approved work plans or not supported by the required documentation. For example, in March 2003, HUD withheld over $650,000 in Section 4 funding because one grantee did not submit a final report, several financial reports, a work plan, and two annual plans. In June 2003, however, the grantee provided the necessary documents and HUD released the funds. In addition, grantees must submit financial status reports that show whether the organizations are meeting their matching requirements. However, HUD relies on the grantees to ensure that they and their subrecipients are matching funds correctly. Both LISC and Enterprise have a formal matching policy. LISC's policy explicitly states that counting the same funds as matching funds under more than one program is prohibited and requires its subrecipients to identify the sources and amounts of matching funding they have received twice a year. Enterprise's matching requirements are tracked on an ongoing basis and are certified by an Enterprise official. YBUSA requires its affiliates to submit documentation that supports the sources and amounts of matching funds committed before it will release Section 4 funding, and HFHI requires affiliates to report matching funds data quarterly. HUD does not directly monitor subrecipients' and affiliates' capacity- building activities but instead relies on the grantees for monitoring and oversight. Like HUD, grantees initiate grant agreements with their subrecipients and affiliates. 
These grant agreements generally include such things as the purpose of the grant, grant amount, time frame, disbursement conditions, causes for suspension and termination, restrictions on use of grant funds, and reporting and accounting requirements that describe how the grantee will monitor the grant. The grantees use the grant agreements as the basis for monitoring their subrecipients' performance. The grantees use several mechanisms to ensure that subrecipients are complying with their grant agreements. For example, LISC and Enterprise officials indicated that throughout the grant period, local offices communicate with their subrecipients by telephone or email or in person in order to follow their progress. Similarly, YBUSA staff told us that they monitor affiliates by telephone as well as through on-site technical assistance. LISC, Enterprise, and YBUSA require each subrecipient to submit a monthly activity report, semiannual project reports and narratives, and final reports. However, the grantees have different procedures, forms, and checklists that guide their monitoring activities. Operating support collaboratives aid LISC and Enterprise in their oversight through proposal reviews, organizational assessments, work plan reviews, on-site reviews, quarterly report reviews, and annual and 3-year evaluations. The LISC and Enterprise local offices use the collaboratives' monitoring information when making their Section 4 funding decisions. HFHI and its regional office personnel evaluate all affiliates every 3 years based on a "Standards of Excellence" program. The program has three elements: best practices, acceptable practices, and minimum standards. According to HFHI officials, continued failure to meet minimum standards will lead to probationary status and eventually disaffiliation. 
The program provides clear guidelines for affiliate self-assessments and HFHI evaluations as well as a systematic process for ensuring that Habitat affiliates are complying with the organization's basic principles. If HFHI national or regional staff are aware of illegal activities or violations of HFHI's minimum standards, immediate action can be taken to correct the problem. The evaluation covers internal controls and audits. All affiliates with an annual income of $250,000 or more, assets of $500,000 or more, or both are required to have an independent annual audit. Affiliates are also requested to submit their annual report to HFHI. While the grantees appear to have comprehensive processes to monitor and control their subrecipients, our review of seven subrecipients' grant files identified a subrecipient that suffered from organizational and financial problems that eventually led to its demise. This subrecipient was the grantee's second-largest in terms of Section 4 funding, receiving 10 grants that totaled almost $1 million over a 7-year period. One grant for $143,000 paid for several activities, one of which was a bad debt--an ineligible expenditure according to OMB Circular A-122. Since HUD officials do not receive and review subrecipient grant agreements and payment vouchers, HUD was not aware of the ineligible cost. The grantee has since taken several steps to ensure that similar problems do not occur, including having a staff member perform increased subrecipient monitoring to verify that sufficient management controls are in place to ensure that grant funds are used appropriately and effectively. This monitoring includes a full review of the grant request and award documents, followed by a review of supporting documentation to verify compliance with allowable expenses and consistency with the work plan. 
In addition, site visits are made to subrecipients that have received large amounts of funding, and a "watch report" is maintained to track all subrecipients that are late in responding to requests for information. HUD has not measured the impact of Section 4 funding on improving the capacity of its grantees and subrecipients. However, HUD requires its grantees to submit annual work plans that include specific details of how federal and private resources will be used and to identify performance goals and objectives that should be attained during the grant period. In addition, OMB is currently requiring HUD and the NCDI grantees to conduct a PART review. PART assessments are used for making budgeting decisions, supporting management, identifying design problems, and promoting performance measurement and accountability. The assessment includes questions on a program's purpose and design, strategic planning, management, and results. Furthermore, in response to a GAO report recommendation that HUD require program offices to determine the practicability of measuring the impact of technical assistance and establishing objective, quantifiable, and measurable performance goals, HUD is working with a group of national technical assistance providers to develop a framework to assess the effectiveness of its technical assistance programs. Living Cities has also contracted with a consultant to develop impact measurements for the 23 NCDI cities. Other evaluations have resulted in measures that gauge the capacity-building system in NCDI cities and categorize organizational capabilities into five different stages of growth--initiation, demonstration, professionalization, institutionalization, and maturation. While Section 4 funds must be used for capacity-building initiatives, grantees are afforded a great deal of discretion as to how they administer, use, and oversee these funds. 
HUD is responsible for ensuring that grantees are utilizing Section 4 funds according to federal law and regulations and has several controls in place to ensure that they do. However, HUD relies primarily on its grantees to make certain that this responsibility is carried out at the subrecipient level. We found that grantees generally had good management systems and controls in place to monitor their subrecipients and to ensure that they carried out their work plans, met their objectives, and used federal funds legally and responsibly. However, even with good controls, problems can still occur, as we found at one CDC. While HUD has overarching responsibility for detecting such internal control failures, the cost-effectiveness of adding additional federal controls at the subrecipient level must be weighed against the size of the program and the amount of federal funding involved. Given the relative size of the Section 4 program and the fact that similar problems should not recur if HUD and the grantees remain vigilant, we do not believe that additional controls are necessary at this time. Recommendation for Executive Action: We recommend that the Secretary of HUD take steps to recover the grant funds that one Section 4 grantee used to cover a bad debt. In an e-mail dated August 7, 2003, HUD provided technical comments, which we incorporated into this report as appropriate. To accomplish our objectives, we reviewed public laws, federal regulations, HUD directives, budget documents and other material that described the Section 4 program, grantees' missions and organizational structures, and authorized and appropriated funding. 
To determine how Section 4 funding has evolved and expanded over the years and how grantees use Section 4 funding, we interviewed HUD, Living Cities, LISC, Enterprise, YBUSA, and HFHI officials in national, local, and rural offices, and subrecipients in Americus, GA; Baltimore, MD; Boston, MA; Cleveland, OH; Frederick, MD; Hughesville, MD; Kingston, RI; and Washington, D.C. We collected data from LISC, Enterprise, and YouthBuild USA showing the number of multiple grants and amounts provided to CDCs or affiliates. We selected five CDCs/affiliates from three grantees. For LISC and Enterprise, we chose the CDCs that had received the greatest number of grants and analyzed the purpose of each grant. For YBUSA, we selected the affiliates that had received the highest dollar amounts. To create the maps of subrecipients and cities that received Section 4 funding, we obtained city data from NCDI, LISC, Enterprise, YBUSA, HFHI, and CHAPA and used geographic information system (GIS) software to create the maps. We used the same software to create the rural county maps with data obtained from LISC and Enterprise that listed each CDC categorized as rural and the counties it served. To determine the importance of Section 4 funding to private sector involvement in community development initiatives, we reviewed public laws, federal regulations, HUD directives, budget documents, and other materials. We obtained 1994 through 2001 private contribution data from LISC and Enterprise and 1997 through 2001 data from YBUSA and HFHI. We obtained matching policy information from HUD and the grantees and interviewed private funders that had provided either grants or loans to each of the grantees and subrecipients we visited in Boston, MA; Baltimore, MD; Frederick, MD; and Kingston, RI. We based our selections on the subrecipients' proximity to our offices in Washington, D.C., and Boston, MA, and the amount of Section 4 funding they received. 
To determine how HUD and Section 4 grantees controlled the management and measured the impact of Section 4 programs, we reviewed and analyzed HUD and grantee criteria, processes, and procedures for monitoring, controlling, and measuring performance and tested grantee monitoring and control procedures at seven subrecipients. In addition, we reviewed reports prepared by Living Cities and the Urban Institute that discussed NCDI's history and accomplishments. We conducted our work from September 2002 through April 2003 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time we will provide copies of this report to the Chairman and Ranking Minority Member, Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member, House Committee on Financial Services; and the Ranking Minority Members of its Subcommittees on Oversight and Investigations and Housing and Community Opportunity. We will also send copies to the Secretary of Housing and Urban Development and the Director of the Office of Management and Budget. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-8678 if you have any questions about this report. Key contacts and contributors are listed in appendix I. In addition, Emily Chalmers, Nadine Garrick, Diana Gilman, John McGrail, John Mingus, Frank J. Minore, and Marc Molino made key contributions to this report.
Congress recognized the importance of building the capacity of community development organizations by passing Section 4 of the HUD Demonstration Act of 1993. The act authorized the Department of Housing and Urban Development (HUD) to partner with several national nonprofit organizations that provide funding to these community groups for such things as training, staff salaries, office equipment and supplies, and management information systems. In 2002, HUD provided $31 million for capacity-building activities. To help Congress with its oversight of Section 4, we reviewed the evolution and use of Section 4 funding, the importance of Section 4 funding to private sector involvement, and the management controls and measurements that are in place to assess Section 4. We found that Section 4 has evolved from a narrowly targeted initiative that focused on providing funding for capacity building in 23 urban areas to a broader program that funds groups and activities in urban, rural, and tribal areas nationwide. The four organizations (grantees) use Section 4 funding to provide a variety of capacity-building support to their subrecipients. These subrecipients are nonprofit organizations that undertake locally targeted initiatives in areas such as economic development, low-income housing construction, and job training. The Section 4 funds that the grantees receive help leverage private sector funding and in-kind contributions such as land and equipment, pro bono legal services, office space, and voluntary labor. Since the four grantees became eligible for Section 4 funding, they have leveraged nearly $800 million in cash and in-kind contributions from the private sector. HUD is responsible for ensuring that Section 4 funds are used according to federal law and regulations and that grantees are utilizing funds efficiently and effectively. However, HUD relies on grantees to oversee their subrecipients. 
The grantees had far-reaching organizational structures and processes in place to monitor and control their subrecipients. But in testing monitoring and control procedures at seven subrecipients, we found that a grantee had reimbursed one subrecipient for an item that was prohibited by the Office of Management and Budget (OMB). While HUD has the overall responsibility to prevent such internal control failures, the cost-effectiveness of adding additional federal controls must be weighed against the amount of the federal dollars involved. We believe that as long as HUD and the grantees remain vigilant, additional controls are not necessary at this time. HUD is taking steps to develop a framework for assessing the effectiveness of its technical assistance programs and will take part in an OMB Program Assessment Rating Tool review.
Peer review is well established as a mechanism for assuring the quality, credibility, and acceptability of individual and institutional work products. This assurance is accomplished by having the products undergo an objective, critical review by independent reviewers. Peer review has long been used by academia, professional organizations, industry, and government. Within EPA, peer review has taken many different forms, depending upon the nature of the work product, the relevant statutory requirements, and office-specific practices and needs. In keeping with scientific custom and/or congressional mandates, several offices within EPA have used peer review for many years to enhance the quality of science within the agency. In response to a panel of outside academicians' recommendations in 1992, EPA issued a policy statement in 1993 calling for peer review of the major scientific and technical work products used to support the agency's rulemaking and other decisions. However, the Congress, GAO, and others subsequently raised concerns that the policy was not being implemented consistently across the agency. In response to these concerns, in 1994 EPA reaffirmed the central role that peer review plays in ensuring that the agency's decisions are based on sound science and credible data and revised its 1993 policy. The new policy, while retaining the essence of the prior one, was intended to expand and improve the use of peer review throughout EPA. The 1994 policy continued to stress that major products should normally be peer reviewed, but it also recognized that statutory and court-ordered deadlines, resource limitations, and other constraints might limit or even preclude the use of peer review. The policy applied to major work products that are primarily scientific or technical in nature and that may contribute to the basis for policy or regulatory decisions. 
In contrast, other products used in decision-making are not covered by the policy, nor are the ultimate decisions themselves. While peer review can take place at several different points along a product's development, such as during the planning stage, it should be applied to a relatively well-developed product. The 1994 policy also clarified that peer review is not the same thing as peer input, stakeholders' involvement, or public comment--mechanisms used by EPA to develop products, to obtain the views of interested and affected parties, and/or to build consensus among the regulated community. While each of these mechanisms serves a useful purpose, the policy points out that they are not a substitute for peer review because they do not necessarily solicit the same unbiased, expert views that are obtained through peer review. EPA's policy assigned responsibility to each Assistant and Regional Administrator to develop standard operating procedures and to ensure their use. To help facilitate consistent EPA-wide implementation, EPA's Science Policy Council--chaired by EPA's Deputy Administrator--was directed to help the offices and regions develop their procedures and identify products that should be peer reviewed. The Council was also given the responsibility for assessing agencywide progress and developing any needed changes to the policy. However, the ultimate responsibility for implementing the policy was placed with the Assistant and Regional Administrators. We found that--2 years after EPA established its peer review policy--implementation was still uneven. We concluded that EPA's uneven implementation was primarily due to (1) inadequate accountability and oversight to ensure that all products are properly peer reviewed by program and regional offices and (2) confusion among agency staff and management about what peer review is, what its significance and benefits are, and when and how it should be conducted.
According to the Executive Director of the Science Policy Council, the unevenness could be attributed to a number of factors. First, while some offices within EPA--such as the Office of Research and Development (ORD)--have historically used peer review for many years, other program offices and regions have had little prior experience. In addition, the Director and other EPA officials told us that statutory and court-ordered deadlines, budget constraints, and problems in finding and obtaining qualified, independent peer reviewers also contributed to the problem. EPA's oversight primarily consisted of a two-part reporting scheme that called for each office and region to annually list (1) the candidate products nominated for peer review during the upcoming year and (2) the status of the products previously nominated. If a candidate product was no longer scheduled for peer review, the list had to note this and explain why peer review was no longer planned. Although we found this to be an adequate oversight tool for tracking the status of previously nominated products, we pointed out that it does not provide upper-level managers with sufficient information to ensure that all products warranting peer review have been identified. This fact, together with the misperceptions about what peer review is and the deadlines and budget constraints that project officers often operate under, has meant that the peer review program to date has largely been one of self-identification, allowing some important work products to go unlisted. According to the Science Policy Council, reviewing officials would be much better positioned to determine if the peer review policy and procedures are being properly and consistently implemented if, instead, EPA's list contained all major products along with what peer review is planned and, if none, the reasons why not. 
We noted that the need for more comprehensive oversight is especially important given the policy's wide latitude in allowing peer review to be forgone in cases facing time and/or resource constraints. As explained by the Executive Director of EPA's Science Policy Council, because so much of the work that EPA performs is in response to either statutory or court-ordered mandates and the agency frequently faces budget uncertainties or limitations, an office under pressure might argue for nearly any given product that peer review is a luxury the office cannot afford in the circumstances. However, as the Executive Director of the Science Advisory Board (SAB) told us, not conducting peer review can sometimes be more costly to the agency in terms of time and resources. He told us of a recent Office of Solid Waste rulemaking concerning a new methodology for delisting hazardous wastes in which the Office's failure to have the methodology appropriately peer reviewed resulted in important omissions, errors, and flawed approaches in the methodology; these problems will now take from 1 to 2 years to correct. The SAB also noted that further peer review of the individual elements of the proposed methodology is essential before the scientific basis for this rulemaking can be established. Although EPA's policy and procedures provide substantial information about what peer review entails, we found that some EPA staff and managers had misperceptions about what peer review is, what its significance and benefits are, and when and how it should be conducted. Several cases we reviewed illustrate this lack of understanding about what peer review entails. Officials from EPA's Office of Mobile Sources (OMS) told the House Commerce Committee in August 1995 that they had not had any version of the mobile model peer reviewed.
Subsequently, in April 1996, OMS officials told us they recognize that external peer review is needed and that EPA planned to have the next iteration of the model so reviewed. We found similar misunderstandings in several other cases we reviewed. EPA regional officials who produced a technical product that assessed the environmental impacts of tributyl tin told us that the contractor-prepared product had been peer reviewed. While we found that the draft product did receive some internal review by EPA staff and external review by contributing authors, stakeholders, and the public, it was not reviewed by experts independent of the product itself or of its potential regulatory ramifications. When we pointed out that--according to EPA's policy and the region's own peer review procedures--these reviews are not a substitute for peer review, the project director said that she was not aware of these requirements. In two other cases we reviewed, there were misunderstandings about the components of a product that should be peer reviewed. For example, in the Great Waters study--an assessment of the impact of atmospheric pollutants in significant water bodies--the scientific data were subjected to external peer review, but the study's conclusions that were based on these data were not. Similarly, in the reassessment of dioxin--an examination of the health risks posed by dioxin--the final chapter summarizing and characterizing dioxin's risks was not as thoroughly peer reviewed. In both cases, the project officers did not have the conclusions peer reviewed because they believed that the development of conclusions is an inherently governmental function that should be performed exclusively by EPA staff. However, some EPA officials with expertise in conducting peer reviews disagreed, maintaining that it is important to have peer reviewers comment on whether or not EPA has properly interpreted the results of the underlying scientific and technical data. 
EPA's quality assurance requirements also state that conclusions should be peer reviewed. During our review, we found that EPA had recently taken a number of steps to improve the peer review process. Although we believed that these steps should prove helpful, we concluded that they did not fully address the previously discussed underlying problems and made some recommendations for improvement. EPA agreed with our findings and recommendations and has recently undertaken steps to implement them. While it is too early to gauge the effectiveness of these efforts, we are encouraged by the attention peer review is receiving from the agency's upper-level management. Near the completion of our review, in June 1996, EPA's Deputy Administrator directed the Science Policy Council's Peer Review Advisory Group and ORD's National Center for Environmental Research and Quality Assurance to develop an annual peer review self-assessment and verification process to be conducted by each office and region. The self-assessment was to include information on each peer review completed during the prior year as well as feedback on the effectiveness of the overall process. The verification would consist of the signature of headquarters, laboratory, or regional directors to certify that the peer reviews were conducted in accordance with the agency's policy and procedures. If the peer review did not fully conform to the policy, the division director or the line manager must explain significant variances and actions needed to limit future significant departures from the policy. The self-assessments and verifications were to be submitted and reviewed by the Peer Review Advisory Group to aid in its oversight responsibilities.
According to the Deputy Administrator, this expanded assessment and verification process would help build accountability and demonstrate EPA's commitment to the independent review of the scientific analyses underlying the agency's decisions to protect public health and the environment. During our review, we also found a number of efforts under way within individual offices and regions to improve their implementation of peer review. For example, the Office of Water drafted additional guidance to further clarify the need for, use of, and ways to conduct peer review. The Office of Solid Waste and Emergency Response formed a team to help strengthen the office's implementation of peer review by identifying ways to facilitate good peer review and addressing barriers to its successful use. Additionally, EPA's Region 10 formed a Peer Review Group with the responsibility for overseeing the region's reviews. We concluded that the above efforts should help address the problems we found. However, we also concluded that the efforts aimed at improving the oversight of peer review fell short by not ensuring that all relevant products had been considered for peer review and did not require documenting the reasons why products were not selected. Similarly, we noted that the efforts aimed at better informing staff about the benefits and use of peer review would be more effective if they were done consistently throughout the agency. EPA agreed with our findings and conclusions and has recently undertaken a number of steps to implement our recommendations. On November 5, 1996, the Deputy Administrator asked ORD's Assistant Administrator, in consultation with the other Assistant Administrators, to develop proposals to strengthen the peer review process. 
In response, ORD's Assistant Administrator proposed a three-pronged approach consisting of (1) audits of a select number of work products to determine how well the peer review policy was followed; (2) a series of interviews with office and regional staff involved with peer review to determine the processes used to implement the policy; and (3) training to educate and provide help to individuals to improve the implementation of the peer review policy. Significantly, the Deputy Administrator has echoed our message that EPA needs to improve its oversight to ensure that all appropriate products are peer reviewed. In a January 14, 1997, memorandum to the Assistant and Regional Administrators, the Deputy stated, "I want you to ensure that your lists of candidates for peer review are complete." To help accomplish this goal, each organization is directed to use, among other things, EPA's regulatory agenda and budget planning documents to help identify potential candidates for peer review. While we agree that this should prove to be a useful tool, we continue to encourage EPA to expand its existing candidate list to include all major work products, along with explanations of why individual products are not nominated for peer review. An all-inclusive list such as this will be extremely useful to those overseeing the peer review process to determine whether or not all products have been appropriately considered for peer review. In summary, peer review is critical for improving the quality of scientific and technical products and for enhancing the credibility and acceptability of EPA's decisions that are based on these products. We are encouraged by the renewed attention EPA is giving to improving the peer review process. Although it is too early for us to gauge the success of these efforts, the involvement of the agency's upper-level management should go a long way to ensure that the problems we identified are resolved. Mr. Chairman, this concludes my prepared statement. 
I will be happy to respond to your questions or the questions of Subcommittee members.
GAO discussed its recent report on the Environmental Protection Agency's (EPA) implementation of its peer review policy, focusing on EPA's: (1) progress in implementing its peer review policy; and (2) efforts to improve the peer review process. GAO noted that: (1) despite some recent progress, peer review continues to be implemented unevenly; (2) although GAO found some cases in which EPA's peer review policy was properly followed, it also found cases in which key aspects of the policy were not followed or in which peer review was not conducted at all; (3) GAO believes that two of the primary reasons for this uneven implementation are: (a) inadequate accountability and oversight to ensure that all relevant products are properly peer reviewed; and (b) confusion among EPA's staff and management about what peer review is, its importance and benefits, and how and when it should be conducted; (4) EPA officials readily acknowledge this uneven implementation and, during the course of GAO's work, had a number of efforts under way to improve the peer review process; (5) although GAO found these efforts to be steps in the right direction, it concluded that EPA was not addressing the underlying problems that GAO had identified; (6) accordingly, GAO recommended that EPA ensure that: (a) upper-level managers have the information they need to know whether or not all relevant products have been considered for peer review; and (b) staff and managers are educated about the need for and benefits of peer review and their specific responsibilities in implementing policy; (7) EPA agreed with GAO's recommendations and has several efforts under way to implement them; (8) for example, EPA plans to initiate a peer review training program for its managers and staff in June 1997; and (9) while it is still too early to be certain if these efforts will be fully successful, GAO is encouraged by the high-level attention being paid to this very important process.
According to an Army Human Resource official, the Army uses the Civilian Forecasting System (CIVFORS), a workforce-planning model, for human resources management. CIVFORS is a collection of software programs that anticipates future impacts on the workforce so that management can plan for changes instead of reacting to them. The model is used to evaluate a number of critical areas in civilian workforce planning, including projected recruitment of personnel, the impact of organizational realignments, and changes in workforce trends (such as aging, retention, and projected personnel shortfalls). It is a life-cycle modeling and projection tool that models the most significant events in the life-cycle path of personnel--accessions, promotions, reassignments, retirements, and voluntary and involuntary separations--over a 7-year period. Verification and validation of models are important steps in building credible models because they provide the foundation for the accreditation process to ensure the suitability of the models for their intended purposes, as stated in Army guidance, Management of Army Models and Simulations. The verification process evaluates the extent to which a model has been developed using sound and established software engineering techniques, and it establishes whether the model's computer code correctly performs the intended functions. Model verification includes data verification, model documentation, and testing of the information technology structure that supports the model; model verification is documented in such materials as the programmer's manual, installation manual, user's guide, analyst's manual, and trainer's manual.
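In rough terms, a life-cycle projection of the kind described above can be sketched as a state-transition model: counts of personnel flow between states according to historical rates, with accessions added each year and separations implied by the remainder. The Python sketch below is illustrative only; the states, rates, and figures are hypothetical and far simpler than CIVFORS's actual design.

```python
# Hypothetical life-cycle projection sketch. States, rates, and
# accession figures are invented for illustration.
STATES = ["junior", "mid", "senior"]

# TRANSITION_RATES[i][j]: fraction of state i moving to state j each year;
# row sums below 1.0 represent separations (retirements, resignations, etc.).
TRANSITION_RATES = [
    [0.80, 0.10, 0.00],  # junior: most stay, some promote, the rest separate
    [0.00, 0.82, 0.08],
    [0.00, 0.00, 0.88],
]
ACCESSIONS = [120, 20, 5]  # assumed new hires into each state per year


def project(counts, years=7):
    """Project state counts forward, one year at a time."""
    for _ in range(years):
        counts = [
            sum(counts[i] * TRANSITION_RATES[i][j] for i in range(len(STATES)))
            + ACCESSIONS[j]
            for j in range(len(STATES))
        ]
    return [round(c) for c in counts]


print(project([1000, 500, 200], years=1))  # → [920, 530, 221]
```

In a real model, the transition rates would themselves be estimated from the historical personnel data and broken out by grade, occupational series, and other dimensions.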
According to Army guidance, assessment of the correctness and forecasting capability of the model is also required, and it should be performed by a subject matter expert independent from the model developer; however, the developer is expected to conduct in-house verification and testing to assist in the overall model development process. Validation is the process of determining the extent to which the model adequately represents the real world. The Army has taken steps to ensure the reliability of the historical personnel data used by the model and the adequacy of its information technology structure used to support the model, but it has not provided documentation that it has sufficiently tested and reviewed the most critical aspect of the model--its forecasting capability and the appropriateness of its assumptions. As a result, the forecasting credibility of the current version of the model is not sufficiently validated or documented. Without proper documentation of the abilities of the model, there is a risk that the forecasts it produces may be inaccurate or misleading and the suitability for use by other organizations may be difficult to determine. The Army's review of the historical personnel data used to provide information for workforce planning was adequate to show that the data are sufficiently reliable for use in the workforce model. Data regarding personnel (such as date hired, education, age, grade level, and occupational series) are taken from the Army's Workforce Analysis Support System (WASS). CIVFORS uses the most recent 5 years of historical data to forecast the civilian workforce planning needs during the next 7 years. According to Army guidance, to ensure that data are sufficiently reliable for use in the Army model, support documents should contain information about the overall characteristics of the database. Furthermore, the documents should show the intended range of appropriate uses for the model as well as constraints on its use. 
They should also include concise statements of the condition of the database for the purpose of indicating its stability. The Army provided most, but not all, of the documents referred to in Army guidance; we believe that the documents provided are key ones and are adequate to show that WASS data are sufficiently reliable for use in CIVFORS. In addition, the Army program manager for the CIVFORS workforce-planning model stated that the workforce data are checked by reviewing the arithmetic in the numerical algorithms to verify that there is no unexplained change in the size of the civilian personnel workforce contained in the database. Further, edit checks include matching social security numbers for personnel from one time period to another to account for actual personnel and personnel transactions processed. In addition, CIVFORS has automated checks for inappropriate numbers or characters. Such steps help to assure that the data contained in WASS accurately and completely reflect critical personnel aspects and transactions. The Army's procedures for validating the information technology support structure (the software and hardware used to interface with and house the model) were also sufficient. For example, the Army (1) adequately documented the information technology structure to allow for continuity of operations, (2) tested its functionality, and (3) provided expertise for system modification and operation. Procedures used by the Army include documenting the model's system description and hardware and software requirements, providing system and user manuals, planning for configuration management, and conducting functionality tests to help ensure the system's usability and operability over time and to demonstrate the adequacy of the information technology structure to support use of the workforce model. 
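As an illustration, edit checks of the kind the program manager described--verifying that the workforce total has not changed unexpectedly, matching identifiers from one period to the next, and screening fields for inappropriate numbers or characters--might look like the following sketch. The field names, thresholds, and example records are hypothetical, not the actual WASS schema.

```python
# Hypothetical sketch of personnel-data edit checks; all field names and
# thresholds are illustrative assumptions.

def check_workforce_size(prev_count, curr_count, max_pct_change=0.10):
    """Flag an unexplained change in total workforce size between snapshots."""
    return abs(curr_count - prev_count) / prev_count <= max_pct_change


def match_records(prev_snapshot, curr_snapshot):
    """Match personnel IDs across periods so every record is explained.

    Returns (continuing, gains, losses) as sets of IDs, corresponding to
    retentions, accessions, and separations.
    """
    prev_ids, curr_ids = set(prev_snapshot), set(curr_snapshot)
    return prev_ids & curr_ids, curr_ids - prev_ids, prev_ids - curr_ids


def has_valid_fields(record):
    """Automated check for inappropriate numbers or characters in a record."""
    return (record["id"].isdigit()
            and 16 <= record["age"] <= 100
            and record["grade"].startswith("GS"))


prev = {"001": None, "002": None, "003": None}
curr = {"002": None, "003": None, "004": None}
continuing, gains, losses = match_records(prev, curr)
print(sorted(continuing), sorted(gains), sorted(losses))
# → ['002', '003'] ['004'] ['001']
```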
The Army's documentation cannot show that the forecasting ability of CIVFORS has been adequately evaluated and, therefore, we cannot fully assess the credibility of the model. According to Army guidance, validation is the process of determining the extent to which a model adequately represents the real world. According to the Army program manager, over a 7-year period, CIVFORS forecasts the anticipated impacts on the workforce based on the most significant events in the life-cycle path of personnel (to include accessions, promotions, reassignments, retirements, voluntary separations, and involuntary separations). Army guidance states that an independent, peer, and subject matter expert review of the model should be conducted. The Army guidance also suggests generally accepted methods, such as conducting a careful line-by-line examination of the model design and computer code and algorithms. The Army's program manager said this had been done for the original certification of CIVFORS in 1987. However, no formal document of the reviews has been prepared in the years since, even though the Army has undertaken several model improvements, such as (1) an expanded scope to include more dimensions in the modeling process; (2) a more integrated, streamlined process that involves fewer steps; and (3) greater flexibility, achieved by generalizing the formulas and parameters. In addition, there is insufficient documentation regarding tests performed, since 1987, in which CIVFORS's forecasts for prior years are compared against equivalent historical data (called an "out of sample" test) to measure the model's forecasting capability. Such testing, which is one method to validate a model's forecasting capability, would involve using the first 5 of the last 7 years of historical data to forecast the 2 subsequent years. The forecasts for the last 2 years could then be compared to the actual historical data. 
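An "out of sample" test of this kind can be sketched in a few lines: hold out the last 2 of the 7 years, fit on the first 5, forecast the held-out years, and measure the error against the actual history. The simple linear-trend forecaster and end-strength figures below are stand-ins for illustration; CIVFORS's actual life-cycle model is far more detailed.

```python
# Hypothetical out-of-sample (backtest) sketch; the forecaster and the
# workforce counts are illustrative assumptions.

def fit_linear_trend(values):
    """Least-squares slope and intercept over equally spaced years."""
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean


def out_of_sample_error(history, holdout=2):
    """Fit on all but the last `holdout` years; return the mean absolute
    percentage error of the forecasts against the held-out actuals."""
    train, actual = history[:-holdout], history[-holdout:]
    slope, intercept = fit_linear_trend(train)
    forecasts = [slope * (len(train) + i) + intercept for i in range(holdout)]
    return sum(abs(f - a) / a for f, a in zip(forecasts, actual)) / holdout


# Seven years of end-strength counts (illustrative numbers only).
history = [1000, 980, 965, 950, 940, 925, 915]
print(round(out_of_sample_error(history), 4))  # → 0.006
```

A documented validation would record the sample used, the error metric, and an acceptance threshold, so that the test could be replicated after each model change.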
The Army, however, performed tests comparing patterns of forecasts against historical data (called "in sample" tests), showing that forecasts reflect the same patterns as the historical data used to develop them for a sample of three Army major commands. However, the draft document provided to us was insufficient for us to fully assess the sampling used by the Army and the value of the tests. Finally, the Army could not provide adequate documentation of an independent or peer review of the model. The Army's CIVFORS program manager stated that the major commands served as peer reviewers by conducting a comparison of their workforce data to WASS and CIVFORS workforce data. We believe that such assessments by users provide important information but do not constitute a peer review as defined in Army guidance. Also, the results of these assessments were not available for us to review. The program manager also stated that an independent subject matter expert reviewed the functional design and the code in 1999, but a formal report of the activities performed and the specific changes or modifications implemented during the review was not produced. Documentation has often not been a priority, for several reasons. According to the Army's CIVFORS program manager, the lack of documentation is primarily due to limited funding, which was spent on implementing changes to CIVFORS and WASS rather than on the production of formal documents. Further, a shortage of staff (only one staff person--the program manager) and the loss of documents during the attack on the Pentagon on September 11, 2001, also affected the amount of documentation the Army could provide us. The program manager also stated that some documentation was not needed because CIVFORS's design is predicated on proven methods in other Army active-duty, military manpower forecasting models.
In addition, the program manager stated that the Army and contractors have primarily been adapting technology (upgrading from mainframe to personal computer to Web-based) to improve model functionality rather than creating new technology. However, without proper documentation of the abilities of the model, there exists a risk that the forecasts it produces may be inaccurate or misleading. Consequently, decisions about future workforce requirements may be questionable, and planning for the size, shape, and experience level of the future workforce may not adequately meet the Army's needs. These issues may extend beyond the Army. In April 2002, DOD published a strategic plan for civilian personnel, which includes a goal to obtain management systems to support workforce planning. According to a DOD official responsible for civilian workforce planning tools, components within DOD have been requesting a modeling tool to assist them with civilian workforce planning. As a result, DOD has decided to test the Army's civilian forecasting model. In October 2002, DOD purchased hardware, installed modified software, and provided training to a small number of personnel. Recently, DOD obtained a historical database of civilian personnel data from the Defense Management Data Center and provided the database to the contractor to load into the model. Two agencies have volunteered to test the model: the Defense Logistics Agency and the Washington Headquarters Service. DOD is working to develop a test for these organizations using their own civilian personnel data to test the model. At the end of the testing period, DOD will assess the model to obtain a better understanding of its logic and determine whether or not it should be implemented departmentwide. As DOD continues to transform and downsize its civilian workforce, it is imperative that the department properly shape and size the workforce. One tool that could assist in this effort is CIVFORS--the Army's workforce planning model. 
However, proper documentation of the verification and validation of CIVFORS is needed before expanding its use. The Army has taken adequate steps to ensure that the historical personnel data used in the model are sufficiently reliable and that the information technology structure appropriately supports the model; however, it has not fully documented that it has taken adequate steps to demonstrate the credibility of the model's forecasting capability. Further, a model should be fully scrutinized before each new application because a change in purpose, passage of time, or input data may invalidate some aspects of the existing model. Without sufficient documentation to demonstrate that adequate steps have been taken to ensure the credibility of the model's forecasting capabilities, decisions about the Army's future civilian workforce may be based on questionable data, and other potential users cannot determine with certainty the model's suitability for their use. To assure the reliability of Army civilian workforce projections, as well as the appropriateness of the model for use DOD-wide and by other federal agencies, we recommend that the Secretary of Defense direct the Secretary of the Army to appropriately document the forecasting capability of the Army's model. Although DOD stated, in written comments on a draft of this report, that it did not concur with our recommendation, the Army is taking actions that, in effect, implement it. DOD's written comments are contained in appendix I. Regarding our recommendation that the Secretary of Defense direct the Secretary of the Army to appropriately document the forecasting capability of the Army's model, DOD stated that the Army recognizes the need to fully document its verification and validation efforts.
Further, DOD stated the staff of the Assistant Secretary of the Army, Manpower and Reserve Affairs, has begun developing a verification and validation plan to enable outside parties to assess the suitability and adaptability of the model for their organizational use. This verification and validation process is scheduled for completion in September 2003. However, during our review, DOD did not provide information about the full scope of this verification and validation effort. We believe that as the Army undertakes its verification and validation effort, it should clearly document, as we recommended, its assumptions, procedures, and the results so that future users can replicate the tests to appropriately establish the model's validity for their purposes. DOD also did not concur with our finding that the forecasting ability of the model has not been fully established. DOD stated that the ultimate test of a system is performance and that CIVFORS has been consistently generating Army projections with high standards of accuracy. We did not independently evaluate the model's accuracy. As our report makes clear, our basic point is that the model's forecasting ability has not been documented in accordance with Army guidance. We continue to believe that without adequate documentation, the Army cannot show that it has taken sufficient steps to ensure the model's credibility in terms of its forecasting capability. DOD also provided technical comments, which we incorporated where appropriate. We did not independently evaluate the model or the application of the steps; rather, we reviewed the adequacy of the steps that the Army program manager stated were taken to ensure the credibility of the model. 
To determine the adequacy of the steps the Army has taken to ensure the credibility of its civilian workforce-forecasting model, we discussed CIVFORS with the Army's CIVFORS program manager in the Army G-1 office, Civilian Personnel Policy Directorate, who has overall responsibility for the workforce analysis and the forecasting system. In addition, Army contractor officials who are responsible for providing technical, analytic, and management support to operate, maintain, and enhance the planning tool and model participated in several of our discussions with the program manager. We reviewed the following CIVFORS documents regarding the information technology support structure: the Configuration Management Manual, the System's Specifications, the Design/Subsystem Documentation, the Operator's Manual, and the User's Manual. In addition, we reviewed the 1987 and draft 2002 test analysis reports on the Civilian Forecasting System and other documentation provided by the Army to obtain information on how the model operates according to model assumptions. We also reviewed the DOD Defense Modeling and Simulation Office guidance on verification and validation of models, the Army regulation and pamphlet pertaining to the management of Army models and simulations, and other literature regarding model credibility. We also interviewed DOD officials in the Civilian Personnel Management Service responsible for developing plans to adopt the Army's workforce forecasting model to discuss the status of their efforts. We conducted our review from September 2002 to June 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, and the Secretary of the Army. We will also make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5559. Key contributors to this report are listed in appendix II. In addition to the contact named above, David Dornisch, Barbara Johnson, Barbara Joyce, John Smale, Dale Wineholt, and Susan Woodward made significant contributions to this report.
Between fiscal years 1989 and 2002, the Department of Defense (DOD) reduced its civilian workforce by about 38 percent, with little attention to shaping or specifically sizing this workforce for the future. As a result, the civilian workforce is imbalanced in terms of the shape, skills, and experience needed by the department. DOD is taking steps to transform its civilian workforce. To assist with this transformation, the department is considering adopting an Army workforce-planning model, known as the Civilian Forecasting System (CIVFORS), which the Army uses to forecast its civilian workforce needs. Other federal agencies are also considering adopting this model. GAO was asked to review the adequacy of the steps the Army has taken to ensure the credibility of the model. The Army has taken adequate steps to ensure that the historical personnel data used in the model are sufficiently reliable and that the information technology structure adequately and appropriately supports the model. For example, the Army has established adequate control measures (e.g., edit checks, expert review, etc.) to ensure that the historical data that go into the model are sufficiently reliable. Moreover, it has taken adequate steps to ensure that the information technology support structure (i.e., the software and hardware used to interface with and house the model) would enable continuity of operations, functionality, and system modification and operations. However, the Army has not demonstrated that it has taken adequate steps to ensure that the model's forecasting capability provides the basis for making accurate forecasts of the Army's civilian workforce. The Army's original certification of CIVFORS in 1987 was based on a formal documented verification and validation of the model structure that has not been formally updated since that time even though the Army has undertaken several model improvements. 
According to the Army's CIVFORS program manager, the Army has taken several steps, including an independent review, peer reviews, and a comparison of forecasted data to actual data. However, documentation of these steps is incomplete and, therefore, does not provide adequate evidence to demonstrate the credibility of the forecast results. Without adequate documentation, the Army cannot show that it has taken sufficient steps to ensure the model's credibility in terms of its forecasting capability; consequently, there exists a risk that the forecasts it produces may be inaccurate or misleading. Furthermore, without documentation of CIVFORS's forecasting capability, it may be difficult for DOD and other federal organizations to accurately determine its suitability for their use.
The Medicaid drug rebate program, the 340B drug pricing program, and the Medicare Part D program help pay for or reduce the costs of prescription drugs for eligible individuals and entities. Medicaid is the joint federal-state program that finances medical services for certain low-income adults and children. CMS, an agency of the Department of Health and Human Services (HHS), administers and oversees the program. While some benefits are federally required, outpatient prescription drug coverage is an optional benefit that all states have elected to offer. State Medicaid programs, though varying in design, cover both brand and generic drugs. Retail pharmacies distribute drugs to Medicaid beneficiaries, then receive reimbursements from states for the acquisition cost of the drug and a dispensing fee. In 2004, Medicaid outpatient prescription drug spending reached $31 billion, of which $19 billion was paid by the federal government. To help control Medicaid drug spending, federal law requires manufacturers to pay rebates to states as a condition for the federal contribution toward covered outpatient prescription drugs. Rebates manufacturers must pay states for brand drugs under the Medicaid drug rebate program are based on two prices that drug manufacturers must report to CMS: the average manufacturer price (AMP) (the average price paid to a manufacturer by wholesalers for drugs distributed to the retail pharmacy class of trade) and best price (the lowest price available from the manufacturer to any purchaser with certain exceptions). Both amounts are to reflect certain financial concessions that are available to drug purchasers. The statute governing the program and the standard rebate agreement that CMS signs with each manufacturer define AMP and best price and specify how these prices are to be used to determine the rebates due to states. CMS provides additional guidance to manufacturers regarding the calculation of these amounts. 
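As a rough sketch of how the statute ties these reported prices to the rebate: for brand drugs in this period, the basic rebate per unit was the greater of AMP minus best price or a statutory minimum percentage of AMP, with an additional rebate owed when AMP rose faster than inflation. The 15.1 percent minimum and the sample prices below are illustrative assumptions drawn from the statute as it stood at the time, not figures from this testimony.

```python
def basic_rebate_per_unit(amp, best_price, minimum_pct=0.151):
    """Basic rebate per unit for a brand drug: the greater of AMP
    minus best price, or a statutory minimum percentage of AMP."""
    return max(amp - best_price, minimum_pct * amp)

def additional_rebate_per_unit(amp, baseline_amp, cpi_ratio):
    """Additional rebate owed when AMP has risen faster than inflation
    since the baseline quarter (cpi_ratio = current CPI-U / baseline)."""
    return max(amp - baseline_amp * cpi_ratio, 0.0)

# Hypothetical prices for illustration only.
amp, best_price = 100.00, 80.00
unit_rebate = basic_rebate_per_unit(amp, best_price)   # AMP - best price
unit_rebate += additional_rebate_per_unit(amp, 90.00, 1.05)
```

Because the rebate floor is a percentage of AMP and the basic rebate depends on best price, an error in either reported price flows directly into the rebates states receive, which is why the oversight issues discussed below matter.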
After manufacturers report the required price information to CMS, CMS uses it to calculate the rebate due for each unit of a brand drug and reports this to the states. The state Medicaid programs use the information to determine the amount of rebates to which they are entitled from the manufacturers based on the volume of drugs paid for by the programs. The 340B drug pricing program gives more than 12,000 eligible entities of various types--community health centers, disproportionate share hospitals, and AIDS Drug Assistance Programs (ADAP) among them-- access to discounted drug prices, called 340B prices. To access these prices, entities must enroll in the program, which is administered by HRSA. Drug manufacturers must offer covered drugs to enrolled entities at or below 340B prices in order to have their drugs covered by Medicaid. Enrolled entities may generally purchase drugs in two ways. They may choose the direct purchase option to receive the 340B prices up front, or they may choose the rebate option, typically purchasing drugs through a vendor and later receiving a rebate from the manufacturer covering any amount they paid above the 340B prices. Enrolled entities spent an estimated $3.4 billion on drugs in 2003. To determine the 340B prices, HRSA uses a statutory formula that relies on AMP and Medicaid rebate data that it receives from CMS. Manufacturers separately calculate the 340B prices for their drugs using the statutory formula, and use these calculations as the basis for the prices they charge eligible entities. HRSA does not share the 340B prices with the eligible entities due to the statutory provisions regarding the confidentiality of information used to determine them. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) created a voluntary outpatient prescription drug benefit effective January 1, 2006, as Part D of the Medicare program. 
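In outline, the statutory 340B formula sets the ceiling price at AMP minus the Medicaid unit rebate amount, and the rebate option described above refunds anything an entity paid over that ceiling. The sketch below assumes that simplified reading, with hypothetical prices; the actual calculation has packaging and unit-of-measure details not shown here.

```python
def ceiling_340b(amp, unit_rebate_amount):
    """Simplified 340B ceiling price: AMP minus the Medicaid unit
    rebate amount, floored at zero."""
    return max(amp - unit_rebate_amount, 0.0)

def rebate_due(price_paid, ceiling):
    """Under the rebate option, the manufacturer later refunds any
    amount the entity paid above the 340B ceiling price."""
    return max(price_paid - ceiling, 0.0)

# Hypothetical figures: a drug with a $100 AMP and a $25.50 unit
# rebate has a $74.50 ceiling; an entity that paid $80.00 through a
# vendor would be owed $5.50 back.
ceiling = ceiling_340b(100.00, 25.50)
owed = rebate_due(80.00, ceiling)
```

This dependence on AMP and the rebate amount is why, as discussed later, pricing errors under the Medicaid rebate program propagate into the 340B program.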
Under Part D, Medicare beneficiaries may choose a prescription drug plan (PDP) from multiple competing PDPs offered by private organizations, often private insurers, that sponsor the plans. PDP sponsors enter into contracts with CMS, the agency that administers Medicare. PDPs may differ in the drugs they cover, the pharmacies they use, and the prices they negotiate with drug manufacturers and pharmacies. PDP sponsors may use pharmacy benefit managers (PBM) to negotiate with drug manufacturers and retail pharmacies for the prices of the drugs that each PDP covers. PDP sponsors are required to report to CMS the price concessions they negotiate; these price concessions include discounts, rebates, direct or indirect subsidies, and direct or indirect remunerations. We and others have reported inadequacies in CMS's oversight of the price information reported by manufacturers under the Medicaid drug rebate program, including a lack of clarity in CMS's guidance to the manufacturers for calculating prices. We reported in 2005 that CMS conducted only limited checks for errors in manufacturer-reported drug prices and that it did not generally review the methods and underlying assumptions that manufacturers use to determine AMP and best prices. We also noted in that report that OIG found that CMS did not provide clear program guidance for manufacturers to follow when determining those prices--for example, how to treat sales to certain health maintenance organizations (HMO) and PBMs. OIG stated that its review efforts were hampered by unclear CMS guidance on how manufacturers were to determine AMP, a lack of manufacturer documentation, or both. Our review also examined the pricing methodologies of several large drug manufacturers and found considerable variation in the methods they used to determine AMP and best price, and some of these differences could have affected the accuracy of these prices and thereby reduced or increased rebates to state Medicaid programs. 
OIG similarly identified problems with manufacturers' price determination methods and their reported prices in four reports issued from 1992 to 2001. Recent litigation has highlighted the importance of the accuracy of prices manufacturers report to CMS and the rebates they pay to states. For example, two drug manufacturers agreed to pay about $88 million and $257 million, respectively, to states in 2003 to settle allegations that they failed to include in their best price determinations certain sales to an HMO. Another manufacturer agreed to pay $345 million to states in 2004 to settle several allegations, including that it did not account for drug discounts provided to two health care providers, resulting in an overstated best price for one of its top-selling drugs and reduced state rebates. CMS issued a proposed rule in December 2006 to, among other things, implement provisions of the Deficit Reduction Act of 2005 (DRA) related to prescription drugs under the Medicaid program. This rule is intended to provide more clarity to manufacturers in determining AMPs reported to CMS, by indicating which sales, discounts, rebates, and price concessions are to be included or excluded. For example, it specifies that sales to PBMs and mail-order pharmacies must be included in AMP. The proposed rule also specifies that best price must include sales to all purchasers, including HMOs, that are not explicitly excluded and specifies the prices that must be included or excluded from those sales. Recognizing the evolving marketplace for the sale of prescription drugs, the proposed rule states that CMS plans to issue future clarifications of AMP and best price in an expeditious manner. In its notice of proposed rulemaking, CMS also referred to the DRA requirement that CMS disclose AMP data to states and post these data on a public Web site. AMP data are currently not made public. 
The changes represented by this proposed rule would likely affect the prices that manufacturers report to the federal government. Only after these regulations are finalized and implemented will there be an opportunity to assess the extent to which they improve the accuracy of prices reported and rebates paid by manufacturers. We and others have reported inadequacies in HRSA's oversight of the 340B drug pricing program, problems related to the lack of transparency in the 340B prices, and overpayments to drug manufacturers. OIG recently reported that some of the 340B prices that HRSA calculated were inaccurate and that HRSA did not systematically compare the 340B prices with those that were separately calculated by drug manufacturers for consistency. In addition, we recently reported that HRSA did not routinely compare 340B prices with prices paid by certain eligible entities. We and OIG both found that many entities reviewed paid prices for drugs that were higher than the 340B prices. OIG estimated that 14 percent of total drug purchases made by entities in June 2005 exceeded the 340B prices, resulting in $3.9 million in overpayments. We also found that the prices of the eligible entities using the rebate option reported to HRSA did not reflect all rebates they later received from manufacturers, and thus we could not determine whether these entities paid prices that were at or below the ceiling established by the 340B prices. Because the 340B prices are not disclosed to eligible entities, the entities cannot know how the prices they pay compare with the 340B prices. Finally, because 340B prices are based on AMP and Medicaid drug rebate data, inaccuracies in those amounts affect the 340B drug pricing program. Recent legal settlements related to drug manufacturers' overstatement of best prices used in the Medicaid rebate program also led to settlements related to the 340B program. 
This was because overstated best prices could affect rebates and result in inaccurate 340B prices. HRSA has made changes to its oversight of the 340B drug pricing program that are intended to address some of the concerns we and OIG raised in our respective reports. For example, while manufacturers are not required to submit their calculated 340B prices to HRSA, the agency has requested that each manufacturer voluntarily submit its calculated 340B prices for comparison to the 340B prices calculated by HRSA. It has also indicated that it was planning to develop systems to allow eligible entities to check that the drug prices they are charged are appropriate while still maintaining the confidentiality of those prices. Because AMP is used to calculate 340B prices, the requirement under DRA that AMP become publicly available may enable HRSA to improve the transparency of these prices. However, the public reporting of AMP, which is only one element of the 340B price calculation, can only partially improve the transparency of 340B prices. The Medicare Part D program shares in common certain features with other federal programs that help pay for or reduce the cost of prescription drugs. Because these features presented oversight challenges with other programs, they may also present challenges for Part D. Some of the common features include the following: Under Medicare Part D, PDP sponsors are required to calculate and report to CMS aggregate price concessions they negotiate. Similarly, the Medicaid drug rebate program requires manufacturers to calculate and report certain price information to CMS and to include various price concessions in the calculations. Medicare Part D relies on PDP sponsors to pass on to beneficiaries the benefit of price concessions they negotiate with drug manufacturers. 
Similarly, the Medicaid drug rebate and 340B drug pricing programs rely on manufacturers to pass on to states or eligible entities the rebates or discounted prices to which they are entitled under the programs. Medicare Part D relies on CMS to audit PDP sponsors to ensure proper disclosure of price concessions negotiated with manufacturers. Similarly, the Medicaid drug rebate and 340B drug pricing programs rely on federal audits of manufacturers to ensure that the prices reported and charged are appropriate. Further, the Medicare Part D program shares in common with the Medicare prescription drug discount card program--which preceded Part D--features related to oversight inadequacies we identified with the discount card program. Under the discount card program, private sponsors negotiated drug discounts for beneficiaries and required the card sponsors to report price concessions they received for drugs and pass a share of these on to beneficiaries. We reported in 2005 that some card sponsors found that the guidance relating to the reporting of price concessions provided by CMS lacked clarity, and CMS reported that the quality of price concession data provided by card sponsors was questionable, with problems such as missing data. Two other features of the Medicare Part D program suggest potential oversight challenges. The first relates to the transition of the nearly 6 million typically high-cost individuals who qualify for both Medicaid and Medicare--referred to as dual eligibles--from Medicaid to Medicare Part D for prescription drug coverage. While the Medicaid drug rebate program is designed to help control prescription drug spending by requiring manufacturers to pay rebates to states, Medicare Part D relies on PDP sponsors to negotiate drug prices, including price concessions. Part D provides no assurance that the PDP sponsors will be able to negotiate price concessions that are as favorable as the rebates required under the Medicaid program. 
It is not yet known how the federal cost of prescription drug coverage for dual eligibles under Part D will compare with the costs incurred for these individuals under Medicaid. The second feature relates to the Part D program's reliance on contracts with private PDP sponsors. The PDP sponsors provide prescription drug coverage to beneficiaries through a complex set of relationships and transactions among insurers, PBMs, and drug manufacturers. These relationships have similarities to the Federal Employees Health Benefits Program (FEHBP), the health care program for federal employees, in which the federal government contracts with private organizations to provide drug benefits, and these organizations often contract with PBMs to negotiate with manufacturers and provide other administrative and clinical services. The relationships and transactions between PBMs and drug manufacturers within FEHBP and other federal programs have been the subject of litigation. For example, a large PBM agreed to pay about $138 million to the federal government in 2005, including about $55 million to the FEHBP, to settle allegations that it had received payments from drug manufacturers in exchange for marketing certain drugs made by those manufacturers to providers who are reimbursed by federal programs. Although actions taken by both CMS and HRSA may address some of the oversight inadequacies we and others have reported, it is too soon to know how effective these have been in improving program oversight. Thus, concerns about prescription drug pricing inaccuracies in the Medicaid drug rebate and 340B drug pricing programs and overpayments to drug manufacturers highlight the importance of federal oversight of prices reported by drug manufacturers under these programs. Because the new Medicare Part D program shares certain features in common with these programs, oversight of the price information reported under Part D is important as well. 
As the Committee develops its oversight agenda relating to federal programs that help pay for or lower the costs of prescription drugs, it may wish to consider the following areas.
* The extent to which federal agencies will take steps to systematically ensure the accuracy of price data associated with federal programs: specifically, the extent to which CMS will ensure that AMP and best prices reported by manufacturers under the Medicaid drug rebate program include all appropriate transactions and price concessions--particularly once the proposed rule is finalized; the extent to which HRSA will ensure the completeness and accuracy of the 340B prices it maintains, obtain final prices paid by all covered entities, and more systematically compare prices paid by entities with the 340B prices; and the measures CMS will take to ensure that the price information Part D sponsors report to CMS includes aggregate price concessions sponsors negotiate with PBMs and drug manufacturers.
* Recognizing the evolving nature of purchasers and sellers in the prescription drug market, the extent to which CMS will be effective in updating and revising Medicaid drug rebate program pricing guidance for manufacturers as circumstances warrant.
* The extent to which the transition of dual eligibles from Medicaid to Medicare Part D will affect federal spending.
* The extent to which cognizant federal agencies will monitor for and detect abuses in the reporting of drug price information that affects federal programs.
Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other Members of the Committee may have. For future contacts regarding this testimony, please contact John Dicken at (202) 512-7119 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. 
Randy DiRosa, Assistant Director; Gerardine Brennan; Martha Kelly; Stephen Ulrich; and Timothy Walker made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Several federal programs help pay for or reduce the costs of prescription drugs for eligible individuals and entities. Three examples are the Medicaid drug rebate program, part of the joint federal-state Medicaid program that finances medical services for certain low-income people; the 340B drug pricing program, which provides discounted drug prices to certain eligible entities such as community health centers; and the Medicare Part D program, which provides a Medicare drug benefit for the elderly and certain disabled people. The price information drug manufacturers report under these federal programs affects related federal spending. Spending is also affected by the extent to which federal oversight ensures the accuracy of this information. GAO was asked to provide information related to the oversight of prescription drug pricing practices that affect these federal programs. This testimony focuses on the oversight of drug pricing related to the three programs and the implications for future congressional oversight. This testimony is based on recent GAO reports examining these programs and related work by the Department of Health and Human Services Office of Inspector General and others. Regarding the Medicaid drug rebate program, GAO and others have reported inadequacies in the Centers for Medicare & Medicaid Services' (CMS) oversight of the prices manufacturers report to CMS to determine the statutorily required rebates owed to states. For example, GAO and others have reported a lack of clarity in CMS's guidance to manufacturers for calculating these prices. Several recent legal settlements under which manufacturers agreed to pay hundreds of millions of dollars to states because they were alleged to report inaccurate prices to CMS highlight the potential for abuse under the program. CMS recently issued a proposed rule intended to provide more clarity to manufacturers in determining the prices they report. 
GAO and others have reported inadequacies in the Health Resources and Services Administration's (HRSA) oversight of the 340B drug pricing program and problems related to the lack of transparency in the maximum prices, called 340B prices, charged to eligible entities. GAO reported that HRSA did not routinely compare the prices actually paid by certain eligible entities with the 340B prices and that many of these eligible entities paid prices higher than the 340B prices. Because these prices are not disclosed to the entities, the entities are unable to determine whether the prices they pay are at or below these prices. In addition, because 340B prices are based on information reported by drug manufacturers for the Medicaid drug rebate program, inaccuracies under that program affect these prices. HRSA has made changes to its oversight of the program intended to address some of these concerns. The Medicare Part D program shares in common with other federal programs certain features that led to federal agency oversight challenges. For example, Part D relies on multiple private organizations to report to CMS certain price concessions from manufacturers, similar to the Medicaid drug rebate program. Also, Part D relies on CMS's oversight to ensure that price information reported to it by private organizations is accurate, similar to the Medicaid drug rebate and 340B drug pricing programs. Other features of Part D, such as its reliance on contracts with private insurers to provide drug coverage to beneficiaries through a complex set of relationships and transactions with private entities, also suggest potential oversight challenges. Oversight inadequacies, inaccurate prices, lack of price transparency, and the potential for abuse suggest areas the Committee may wish to consider as it develops its oversight agenda. 
The Committee may wish to consider the extent to which CMS and HRSA will systematically take steps to ensure the accuracy of prices reported and charged by private organizations that participate in federal programs. The Committee may also wish to consider the extent to which federal agencies will effectively monitor for and detect abuses in the reporting of drug price information that affect these three federal programs.
While numerous military aircraft provide refueling services, the bulk of U.S. refueling capability lies in the Air Force fleet of 59 KC-10 and 543 KC-135 aircraft. These are large, long-range aircraft that have counterparts in the commercial airlines, but which have been modified to turn them into tankers. The KC-10 is based on the DC-10 aircraft, and the KC-135 is similar to the Boeing-707 airliner. Because of their large numbers, the KC-135s are the mainstay of the refueling fleet, and successfully carrying out the refueling mission depends on the continued performance of the KC-135s. Thus, recapitalizing this fleet of KC-135s will be crucial to maintaining aerial refueling capability, and it will be a very expensive undertaking. There are two basic versions of the KC-135 aircraft, designated the KC-135E and KC-135R. The R model aircraft have been re-fitted with modern engines and other upgrades that give them an advantage over the E models. The E-model aircraft on average are about 2 years older than the R models, and the R models provide more than 20 percent greater refueling capacity per aircraft. The E models are located in the Air National Guard and Air Force Reserve. Active forces have only R models. Over half the KC-135 fleet is located in the reserve components. The rest of the DOD refueling fleet consists of Air Force HC- and MC-130 aircraft used by special operations forces, Marine Corps KC-130 aircraft, and Navy F-18 and S-3 aircraft. However, the bulk of refueling for Marine and Navy aircraft comes from the Air Force KC-10s and KC-135s. These aircraft are capable of refueling Air Force and Navy/Marine aircraft, as well as some allied aircraft, although there are differences in the way the KC-10s and KC-135s are equipped to do this. The KC-10 aircraft are relatively young, averaging about 20 years in age. 
Consequently, much of the focus on modernization of the tanker fleet is centered on the KC-135s, which were built in the 1950s and 1960s, and now average about 43 years in age. While the KC-135 fleet averages more than 40 years in age, the aircraft have relatively low levels of flying hours. The Air Force projects that E and R models have lifetime flying hours limits of 36,000 and 39,000 hours, respectively. According to the Air Force, only a few KC-135s would reach these limits before 2040, but at that time some of the aircraft would be about 80 years old. Flying hours for the KC-135s averaged about 300 hours per year between 1995 and September 2001. Since then, utilization has averaged about 435 hours per year. According to Air Force data, the KC-135 fleet had a total operation and support cost in fiscal year 2001 of about $2.2 billion. The older E model aircraft averaged total costs of about $4.6 million per aircraft, while the R models averaged about $3.7 million per aircraft. Those costs include personnel, fuel, maintenance, modifications, and spare parts. The Air Force has a goal of an 85 percent mission capable rate. Mission capable rates measure the percent of time on average that an aircraft is available to perform its assigned mission. KC-135s in the active duty forces are generally meeting the 85 percent goal for mission capable rates. Data on the mission capable rates for the KC-135 fleet are shown in table 1. For comparison purposes, the KC-10 fleet is entirely in the active component, and the 59 KC-10s had an average mission capable rate during the same period of 81.2 percent. By most indications, the fleet has performed very well during the past few years of high operational tempo. Operations in Kosovo, Afghanistan, Iraq, and here in the United States in support of Operation Noble Eagle were demanding, but the current fleet was able to meet the mission requirements. 
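The flying-hour figures above support a back-of-the-envelope service-life check. The 36,000-hour E-model limit and the post-2001 utilization rate of 435 hours per year come from the Air Force figures cited; the current airframe hours used in the example are a hypothetical input, since per-aircraft totals are not given here.

```python
def years_to_limit(current_hours, lifetime_limit, hours_per_year):
    """Years of service remaining before an airframe reaches its
    lifetime flying-hour limit at a given utilization rate."""
    return (lifetime_limit - current_hours) / hours_per_year

# A KC-135E with, say, 17,000 hours already flown (hypothetical)
# against its 36,000-hour limit, flying 435 hours per year, would
# have several decades of structural life remaining on hours alone.
remaining = years_to_limit(17_000, 36_000, 435)
```

This is consistent with the report's point: the binding constraint on the fleet is age-driven corrosion and maintenance cost, not flying hours.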
Approximately 150 KC-135s were deployed to the combat theater for Operation Allied Force in Kosovo, about 60 for Operation Enduring Freedom in Afghanistan, and about 150 for Operation Iraqi Freedom. Additional aircraft provided "air bridge" support for movement of fighter and transport aircraft to the combat theater, for some long-range bomber operations from the United States, and, at the same time, to help maintain combat air patrols over major U.S. cities since September 11, 2001. Section 8159 of the Department of Defense Appropriations Act for fiscal year 2002, which authorized the Air Force to lease the KC-767A aircraft, also specified that the Air Force could not commence lease arrangements until 30 calendar days after submitting a report to the House and Senate Armed Services and Appropriations Committees (1) outlining implementation plans and (2) describing the terms and conditions of the lease and any expected savings. The Air Force has stated that it will not proceed with the lease until it receives approval from all of the committees of the New Start Notification. The Air Force also submitted the report of the proposed lease to the committees as required by section 8159. I will now summarize the key points that the Air Force made in this report to the committees: The Air Force pointed out that aerial refueling helps to support our nation's ability to respond quickly to operational demands anywhere around the world. This is possible because aerial refueling permits other aircraft to fly farther, stay aloft longer, and carry more weapons, equipment, or supplies. The Air Force indicated that KC-135 aircraft are aging and becoming increasingly costly to operate due to corrosion, the need for major structural repair, and increasing rates of inspection to ensure air safety. Moreover, the report indicates that the Air Force believes it is incurring a significant risk by having 90 percent of its aerial refueling capability in a single, aging airframe. 
The Air Force considered maintaining the current fleet until about 2040 but concluded that the risk of a "fleet-grounding" event made continued operation of the fleet unacceptable, unless it began its recapitalization immediately. The Air Force considered replacing the KC-135 (E model) engines with new engines but rejected this option since re-engining would not address the key concerns of aircraft corrosion and other age-related problems. The Air Force eventually plans to replace all 543 KC-135 aircraft over the next 30 years and considered lease and purchase alternatives to acquire the first 100 aircraft. The Air Force added traditional procurement funding to the fiscal year 2004-2009 Future Years Defense Program so that 100 tankers would be delivered between fiscal years 2009 and 2016. By contrast, the report states that under the lease option, all 100 aircraft could be delivered from fiscal years 2006 to 2011. To match that delivery schedule under a purchase option, the Air Force stated that it would have to reprogram billions of dollars already committed to other uses. Office of Management and Budget Circular A-94 directs a comparison of the present value of lease versus purchase before executing a lease. In its report, the Air Force estimated that purchasing would be about $150 million less than leasing on a net present value basis. The Air Force plans to award a contract to a special purpose entity created to issue bonds needed to raise sufficient capital to purchase the new aircraft from Boeing and to lease them to the Air Force. The lease will be a three-party contract between the government, Boeing, and the special purpose entity. The entity is to issue bonds on the commercial market based on the strength of the lease and not the creditworthiness of Boeing. 
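The Circular A-94 comparison the Air Force performed is, at bottom, a discounted cash-flow calculation: each option's payment stream is discounted to present value and the totals are compared. The discount rate and payment streams below are hypothetical placeholders for illustration, not the actual lease terms or the rate A-94 prescribes.

```python
def present_value(cash_flows, rate):
    """Discount a stream of annual payments (made at the end of
    years 1, 2, ...) back to today at the given annual rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# Hypothetical streams in billions of dollars: level lease payments
# versus a front-loaded purchase profile, discounted at an assumed
# 4 percent. Per A-94, the lower present value is the cheaper option.
lease_pv = present_value([2.1] * 6, 0.04)
purchase_pv = present_value([4.0, 4.0, 3.0, 1.0], 0.04)
cheaper = "purchase" if purchase_pv < lease_pv else "lease"
```

The point of discounting is that a dollar paid later costs less in present terms, which is why a lease that spreads payments out can still lose to a purchase once financing costs are folded in, as the Air Force's own $150 million estimate suggests.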
Office of Management and Budget Circular A-11 requires that an operating lease meet certain terms and conditions, including a prohibition on paying for more than 90 percent of the fair market value of the asset over the life of the lease at the time that the lease is initiated. The report to Congress states that the Defense Department believes the proposed lease meets those criteria. If Boeing sells comparable aircraft during the term of the contract to another customer for a lower price than that agreed to by the Air Force, the government would receive an "equitable adjustment." The report also states that Boeing has agreed to a return-on-sales cap of 15 percent and that an audit of its internal cost structure will be conducted in 2011, with any return on sales exceeding 15 percent reimbursed to the government. According to the report, if the government were to terminate the lease, it must do so for all of the delivered aircraft and may terminate any planned aircraft for which construction has not begun; it must give 12 months' advance notice prior to termination, return the aircraft, and pay an amount equal to 1 year's lease payment for each aircraft terminated. If termination occurs before all aircraft have been delivered, the price for the remaining aircraft would be increased to include unamortized costs incurred by the contractor that would have been amortized over the terminated aircraft and a reasonable profit on those costs. The government will pay for, and the contractor will obtain, commercial insurance to cover aircraft loss and third party liability as part of the lease agreement. Aircraft loss insurance is to be in the amount of $138.4 million per aircraft in calendar year 2002 dollars. Liability insurance will be in the amount of $1 billion per occurrence per aircraft.
If any claim is not covered by insurance, the Air Force will indemnify the special purpose entity for any claims from third parties arising out of the use, operation, or maintenance of the aircraft under the contract. At the expiration of the lease, the Air Force will return the aircraft to the special purpose entity after removing, at government expense, any Air Force unique configurations. The contractor will warrant that each aircraft will be free from defects in materials and workmanship; the warranty will be of 36 months' duration and will commence after construction of the commercial Boeing 767 aircraft, but before they have been converted into aerial refueling aircraft. Upon delivery to the Air Force, each KC-767A aircraft will carry a 6-month design warranty, a 12-month material and workmanship warranty on the tanker modification, and the remainder of the original warranty on the commercial components of the aircraft, estimated to be about 2 years. Because we have had the Air Force report for only a few days, we do not have any definitive analytical results. However, we do have a number of questions and observations about the report that we believe are important for the Congress to explore in reaching a decision on the Air Force proposal. 1. What is the full cost to acquire and field the KC-767A aircraft under the proposed lease (and assuming the exercise of an option to purchase at the conclusion of the lease)? While the report includes the cost of leasing, it does not include the costs of buying the tankers at the end of the lease. The report shows a present value of the lease payments of $11.4 billion and a present value of other costs, such as military construction and operation and support costs, of $5.8 billion. This totals $17.2 billion. If the option to purchase were exercised, the present value of those payments would be $2.7 billion.
Adding these costs to the present value of the lease payments and other costs, this would total $19.9 billion in present value terms. The costs of the leasing plan have also been presented as $131 million per plane for the purchase price, with $7.4 million in financing costs per plane, both amounts in calendar year 2002 dollars. If the option to purchase were exercised, the price paid would be $35.1 million per plane in calendar year 2002 dollars. Adding all of these costs together, the cost of leasing plus buying the planes at the end of the lease would total $173.5 million per plane in calendar 2002 dollars or $17.4 billion for the 100 aircraft. 2. How strong is the Air Force's case for the urgency of this proposal? As far back as our 1996 report, we said that the Air Force needed to start planning to replace the KC-135 fleet, but until the past year and a half, the Air Force had not placed high priority on replacement in its procurement budget. While the KC-135 fleet is old and is increasingly costly to maintain due mainly to age-related corrosion, there has been no indication that mission capable rates are falling or that the aircraft cannot be operated safely. By having 90 percent of its refueling fleet in one aircraft type, the Air Force for some years now has been accepting the risk of fleet wide problems that could ground the entire fleet; it is really a question of how much risk and how long the Air Force and the Congress are willing to accept that risk. 3. How will the special purpose entity work? Under the Air Force proposal, the 767 aircraft would be owned by a special purpose entity and leased to the Air Force. This is a new concept for the Air Force, and the details of the workings of this entity have not been presented in detail. It is important for the Congress to understand how this concept will work and how the government's interests are protected under such an arrangement. For example, what audit rights does the government have? 
Will financial records be available for public scrutiny? 4. What process did the Air Force follow to assure itself that it obtained a reasonable price? Because this aircraft is being acquired under the Federal Acquisition Regulations, the Air Force is required to assure itself through market analysis and other means that the price it is paying is reasonable and fair. To assess this issue, we would need to know how much of the $131 million purchase price represents the basic 767 commercial aircraft and how much represents the cost of modifications to convert it to a tanker. There is an ample market for commercial 767s, and the Air Force should have some basis for comparison to assess the reasonableness of that part of the price. The cost of the modifications is more difficult to assess, and the Air Force has not provided us the data to analyze this cost. It would be useful for the Congress to understand the process the Air Force followed. 5. Does the proposed lease comply with the OMB criteria for an operating lease? Office of Management and Budget Circular A-11 provides criteria that must be met for an operating lease. The Air Force report says that the proposal complies with the criteria, but the report points out that one of the criteria is troublesome for this lease. This criterion, in particular, provides that in order for an agreement to be considered an operating lease, the present value of the minimum lease payments over the life of the lease cannot exceed 90 percent of the fair market value of the asset at the inception of the lease. Depending on the fair market value used, the net present value of the lease payments in this case may exceed 90 percent of initial value. Specifically, if the fair market value is considered to include the cost of construction financing, then the lease payments would represent 89.9 percent.
If the fair market value were taken as $131 million per aircraft, which is the price the special purpose entity will pay to Boeing, then the lease payments would represent 93 percent. We do not have a position at this time on which is the more valid approach, but we believe the Air Force was forthright in presenting both figures in its report. Congress will need to consider whether this is an important issue and which figure is most appropriate for this operating lease. 6. Did the Air Force comply with OMB guidelines for lease versus purchase analysis in its report? A-94 specifies how lease versus purchase analysis should be conducted. Our preliminary analysis indicates that the Air Force followed the prescribed procedures, but we have not yet had time to validate the Air Force's analysis or the reasonableness of the assumptions. The Air Force reported that under all assumptions and scenarios considered, leasing is more expensive than purchasing, but by only about $150 million under its chosen assumptions. In a footnote, however, the report points out that if the comparison were to a multiyear procurement, the difference in net present value would be $1.9 billion favoring purchase. 7. Why does the proposal provide for as much as a 15 percent profit on the aircraft? The Air Force report indicates that Boeing could make up to 15 percent profit on the 767 aircraft. However, since this aircraft is basically a commercial 767 with modifications to make it a military tanker, a question arises about why the 15 percent profit should apply to the full cost. One financial analysis published recently said that Boeing's profit on commercial 767s is in the range of 6 percent. Did the Air Force consider having a lower profit margin on that portion of the cost, with the 15 percent profit applying to the military-specific portion? This could lower the cost by several million dollars per aircraft. 
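Two of the figures discussed in the questions above lend themselves to a quick numerical check: the combined per-aircraft cost in question 1 and the A-11 operating-lease ratio in question 5. The sketch below is a back-of-the-envelope illustration only; the variable and function names are ours, and the $100 fair market value in the second part is hypothetical, scaled purely to mirror the two percentages (89.9 and 93) that the Air Force report presents.

```python
# --- Question 1: combined cost per aircraft (calendar year 2002 $ millions) ---
purchase_price = 131.0   # price the special purpose entity pays Boeing per plane
financing_cost = 7.4     # per-plane financing cost under the lease
buyout_price = 35.1      # option-to-purchase price per plane at lease end

cost_per_plane = purchase_price + financing_cost + buyout_price  # $173.5M
fleet_cost_billions = cost_per_plane * 100 / 1000
# $17.35B for 100 aircraft, which the report rounds to $17.4 billion.

# --- Question 5: OMB Circular A-11 operating-lease test ---
def passes_operating_lease_test(pv_lease_payments, fair_market_value, cap=0.90):
    """Present value of minimum lease payments must not exceed 90 percent
    of the asset's fair market value at lease inception."""
    return pv_lease_payments / fair_market_value <= cap

# Hypothetical $100 fair market value, scaled to the report's two readings:
print(passes_operating_lease_test(89.9, 100.0))  # True  (financing included)
print(passes_operating_lease_test(93.0, 100.0))  # False ($131M Boeing price)
```

The two boolean results reproduce the report's own ambiguity: the lease passes or fails the 90 percent test depending on which fair-market-value figure is used.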
In addition to the questions and observations presented above on the Air Force report to the Congress, we believe there are a number of additional considerations that Congress may want to explore, including the following: What is the status of the lease negotiations? The Air Force has informed us that the lease is still in draft and under negotiation. We believe it is important for the Congress to have all details of the lease finalized and available to assure that there are no provisions that might be disadvantageous to the government. Just last Friday, the Air Force let us read the draft lease in the Pentagon but has not provided us with a copy of it, so we have not had time to review it in detail. What other costs are associated with this lease agreement? In addition to the lease payments, the Air Force has proposed about $600 million in military construction, and it has negotiated with Boeing for training costs and maintenance costs related to the lease agreement that could total about $6.8 billion over the course of the lease. In addition, Air Force documents indicate that there are other costs for things like insurance premiums (estimated to be about $266 million) and government contracting costs. Given the cost of the maintenance agreement, how has the Air Force assured itself that it received a good price? The Air Force estimates that the maintenance agreement with Boeing will cost between $5 billion and $5.7 billion during the lease period. It has negotiated an agreement with Boeing as part of the lease negotiations, covering all maintenance except flight-line maintenance to be done by Air Force mechanics. This represents an average of about $50 million per aircraft, with each aircraft being leased for 6 years, or over $8 million per year. We do not know how the Air Force determined that this was a reasonable price or whether competition might have yielded a better value.
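The per-aircraft maintenance figures above follow directly from the agreement's estimated total. A hedged check, using the low end of the estimate and variable names of our own choosing:

```python
# Maintenance agreement: estimated $5.0B-$5.7B over the lease period,
# covering 100 aircraft, each leased for 6 years (figures from the report).
maintenance_low_millions = 5_000                # low end of estimate, in $M
per_aircraft = maintenance_low_millions / 100   # about $50M per aircraft
per_year = per_aircraft / 6                     # over $8M per aircraft-year

print(f"${per_aircraft:.0f}M per aircraft, ${per_year:.1f}M per year")
```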
A number of commercial airlines and maintenance contractors already maintain the basic 767 commercial aircraft. What happens when the lease expires? At the end of each 6-year lease, the aircraft are supposed to be returned to the owner, the special purpose entity, or be purchased by the Air Force for their residual value, estimated at about $44 million each in then-year dollars. If the aircraft were returned, the Air Force tanker fleet would be reduced, and the Air Force would have to find some way to replace the lost capability even though lease payments would have paid almost the full cost of the aircraft. In addition, the Air Force would have to pay an additional estimated $778 million if the entire 100 aircraft were returned; this provision is intended to cover the cost of removing military-specific items. For these reasons, returning the aircraft would probably make little sense, and the Congress would almost certainly be asked to fund the purchase of the aircraft at their residual value when the leases expire. How is termination liability being handled? If the lease is terminated prematurely, the Air Force must pay Boeing 1 year's lease payment. Ordinarily, under budget scoring rules, the cost of the termination liability would have to be obligated when the lease is signed. Because this could amount to $1 billion to $2 billion for which the Air Force would have to have budget authority, this requirement was essentially waived by Section 8117 of the Fiscal Year 2003 Department of Defense Appropriation Act. This means that if the lease were terminated, the Air Force would have to find the money in its budget to pay the termination amount or come to Congress for the appropriation. If the purpose of the lease is to "kick-start" replacement of the KC-135 fleet--as the Air Force has stated--why are 100 aircraft necessary, as stipulated under this lease arrangement? 
The main advantage of the lease, as pointed out by the Air Force, is that it would provide aircraft earlier than purchasing the aircraft and without disrupting other budget priorities. It is not clear, however, why 100 aircraft is the right number to do this. Section 8159 authorized up to 100 aircraft to be leased for up to 10 years. The Air Force has negotiated a shorter lease period, but stayed with the full 100 aircraft to be acquired from fiscal years 2006 to 2011. The "kick-start" occurs in the early years, and by fiscal year 2008 the Air Force would have 40 new aircraft delivered. We do not know to what extent the Air Force (1) considered using the lease for some smaller number of aircraft and then (2) planned to use the intervening time to adjust its procurement budget to begin purchasing rather than leasing. Such an approach would provide a few years to conduct the Tanker Requirements Study and the analysis of alternatives that the Air Force has said it will begin soon.

In the coming weeks, we will continue to look into these questions in anticipation of future hearings by the Senate Armed Services Committee and the Senate Commerce Committee. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or Members of the Committee may have.

Contacts and Staff Acknowledgments

For future questions about this statement, please contact me at (757) 552-8111 or Brian J. Lepore at (202) 512-4523. Individuals making key contributions to this statement include Kenneth W. Newell, Tim F. Stone, Joseph J. Faley, Steve Marrin, Kenneth Patton, Charles W. Perdue, and Susan K. Woodward.

Military Aircraft: Information on Air Force Aerial Refueling Tankers. GAO-03-938T. Washington, D.C.: June 24, 2003.

Air Force Aircraft: Preliminary Information on Air Force Tanker Leasing. GAO-02-724R. Washington, D.C.: May 15, 2002.

U.S. Combat Air Power: Aging Refueling Aircraft Are Costly to Maintain and Operate. GAO/NSIAD-96-160. Washington, D.C.: Aug. 8, 1996.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the Air Force's report on the planned lease of 100 Boeing 767 aircraft modified for aerial refueling. These aircraft would be known by a new designation, KC-767A. Section 8159 of the Department of Defense Appropriations Act for fiscal year 2002 authorizes the Air Force to lease up to 100 KC-767A aircraft. We received the report required by section 8159 when it was sent to the Congress on July 10. We subsequently received a briefing from the Air Force and some of the data needed to review the draft lease and lease versus purchase analysis. However, we were permitted to read the lease for the first time on July 18 but were not allowed to make a copy and so have not had time to fully review and analyze the terms of the draft lease. As a result, this testimony today will be based on very preliminary work. It will (1) describe the condition of the current aerial refueling fleet, (2) summarize the proposed lease as presented in the Air Force's recent report, (3) present our preliminary observations on the Air Force lease report, and (4) identify related issues that we believe deserve further scrutiny. The KC-10 aircraft are relatively young, averaging about 20 years in age. Consequently, much of the focus on modernization of the tanker fleet is centered on the KC-135s, which were built in the 1950s and 1960s, and now average about 43 years in age. While the KC-135 fleet averages more than 40 years in age, the aircraft have relatively low levels of flying hours. The Air Force projects that E and R models have lifetime flying hours limits of 36,000 and 39,000 hours, respectively. According to the Air Force, only a few KC-135s would reach these limits before 2040, but at that time some of the aircraft would be about 80 years old. Flying hours for the KC-135s averaged about 300 hours per year between 1995 and September 2001. Since then, utilization is averaging about 435 hours per year. 
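The flying-hour figures above imply substantial remaining structural life for the KC-135s. The sketch below illustrates that arithmetic; the 17,000 airframe-hours starting point is hypothetical (the report gives the lifetime limits and utilization rates, but not current hours per aircraft).

```python
def years_to_limit(limit_hours, hours_flown, hours_per_year):
    """Years until an airframe reaches its lifetime flying-hour limit."""
    return (limit_hours - hours_flown) / hours_per_year

# E model limit of 36,000 hours, post-2001 utilization of about 435 hours
# per year, and a hypothetical 17,000 hours already flown: roughly four
# more decades of structural life, consistent with the report's point
# that few aircraft would reach their limits before 2040.
print(round(years_to_limit(36_000, 17_000, 435)))
```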
As a result of 150 years of changes to financial regulation in the United States, the regulatory system has become complex and fragmented. Today, responsibilities for overseeing the financial services industry are shared among almost a dozen federal banking, securities, futures, and other regulatory agencies, numerous self-regulatory organizations, and hundreds of state financial regulatory agencies. For example:

* Insured depository institutions are overseen by five federal agencies--the Federal Deposit Insurance Corporation (FDIC), the Board of Governors of the Federal Reserve System (Federal Reserve), the Office of the Comptroller of the Currency (OCC), the Office of Thrift Supervision (OTS), and the National Credit Union Administration (NCUA)--and states supervise state-chartered depository and certain other institutions.

* Securities activities and markets are overseen by the Securities and Exchange Commission (SEC), state government entities, and private sector organizations performing self-regulatory functions.

* Commodity futures markets and activities are overseen by the Commodity Futures Trading Commission (CFTC) and also by industry self-regulatory organizations.

* Insurance activities are primarily regulated at the state level with little federal involvement.

* Other federal regulators also play important roles in the financial regulatory system, such as the Federal Trade Commission, which acts as the primary federal agency responsible for enforcing compliance with federal consumer protection laws for financial institutions such as finance companies that are not overseen by another financial regulator.

Much of this structure has developed as the result of statutory and regulatory measures taken in response to financial crises or significant developments in the financial services sector.
For example, the Federal Reserve was created in 1913 in response to financial panics and instability around the turn of the century, and much of the remaining structure for bank and securities regulation was created as the result of the Great Depression turmoil of the 1920s and 1930s. Changes in the types of financial activities permitted for financial institutions and their affiliates have also shaped the financial regulatory system over time. For example, under the Glass-Steagall provisions of the Banking Act of 1933, financial institutions were prohibited from simultaneously offering commercial and investment banking services, but with the passage of the Gramm-Leach-Bliley Act of 1999, Congress permitted financial institutions to fully engage in both types of activities, under certain conditions. Several key developments in financial markets and products in the past few decades have significantly challenged the existing financial regulatory structure. (See fig. 1.) Regulators have struggled, and often failed, to identify the systemic risks posed by large and interconnected financial conglomerates, as well as new and complex products, and to adequately manage these risks. These firms' operations increasingly cross financial sectors, but no single regulator is tasked with assessing the risks such an institution might pose across the entire financial system. In addition, regulators have had to address problems in financial markets resulting from the activities of sometimes less-regulated and large market participants--such as nonbank mortgage lenders, hedge funds, and credit rating agencies--some of which play significant roles in today's financial markets. Further, the increasing prevalence of new and more complex financial products has challenged regulators and investors, and consumers have faced difficulty understanding new and increasingly complex retail mortgage and credit products.
Standard setters for accounting and financial regulators have also faced growing challenges in ensuring that accounting and audit standards appropriately respond to financial market developments. And despite the increasingly global aspects of financial markets, the current fragmented U.S. regulatory structure has complicated some efforts to coordinate internationally with other regulators. Because of this hearing's focus on prudential regulation of the banking industry, I would like to reinforce that our prior work has repeatedly identified limitations of the fragmented banking regulatory structure. For example: In 1996, we reported that the division of responsibilities among the four federal bank oversight agencies in the United States was not based on specific areas of expertise, functions or activities, either of the regulator or the banks for which they are responsible, but based on institution type and whether the banks were members of the Federal Reserve System. Despite their efforts to coordinate, this multiplicity of regulators was cited as resulting in inconsistent treatment of banking institutions in examinations, enforcement actions, and regulatory decisions. In a 2007 report we noted that having bank holding company affiliates supervised by multiple banking regulators increased the potential for conflicting information to be provided to the institution, such as when a large, complex banking organization initially received conflicting information from the Federal Reserve, its consolidated supervisor, and OCC, its primary bank supervisor, about the firm's business continuity provisions. In 2005, we reported that a difference in authority across the banking regulators could lead to problems in oversight. For example, FDIC's authority over the holding companies and affiliates of industrial loan corporations was not as extensive as the authority that the other supervisors have over the holding companies and affiliates of banks and thrifts. 
For example, FDIC's authority to examine an affiliate of an insured depository institution exists only to disclose the relationship between the depository institution and the affiliate and the effect of that relationship on the depository institution. Therefore, any reputation or other risk from an affiliate that has no relationship with the industrial loan corporation could go undetected. In a 2004 report, we noted cases in which interagency cooperation between bank regulators has been hindered when two or more agencies share responsibility for supervising a bank. For example, in the failure of Superior Bank of West Virginia, problems between OTS, Superior's primary supervisor, and FDIC hindered a coordinated supervisory approach, including OTS refusing to let FDIC participate in at least one examination. Similarly, disagreements between OCC and FDIC contributed to the 1999 failure of Keystone Bank. In a 2007 report, we expressed concerns over the appropriateness of having OTS oversee diverse global financial firms given the size of the agency relative to the institutions for which it was responsible. Our recent work has further revealed limitations in the current regulatory system, reinforcing the need for change and the need for an entity responsible for identifying existing and emerging systemic risks. In January 2009, we designated modernizing the outdated U.S. financial regulatory system as a new high-risk area to bring focus to the need for a broad-based systemwide transformation to address major economy, efficiency, and effectiveness challenges. We have found that: Having multiple regulators results in inconsistent oversight. Our February 2009 report on the Bank Secrecy Act found that multiple regulators are examining for compliance with the same laws across industries and, for some larger holding companies, within the same institution.
However, these regulators lack a mechanism for promoting greater consistency, reducing unnecessary regulatory burden, and identifying concerns across industries. In July 2009, we reported that many violations by independent mortgage lenders of the fair lending laws intended to prevent lending discrimination could go undetected because of the less comprehensive oversight provided by various regulators. Lack of oversight exists for derivatives products. In March 2009, we reported that the lack of a regulator with authority over all participants in the market for credit default swaps (CDS) has made it difficult to monitor and manage the potential systemic risk that these products can create. Gaps exist in the oversight of significant market participants. We reported in May 2009 on the issues and concerns related to hedge funds, which have grown into significant market participants with limited regulatory oversight. For example, under the existing regulatory structure, SEC's ability to directly oversee hedge fund advisers is limited to those that are required to register or voluntarily register with the SEC as an investment adviser. Further, multiple regulators (SEC, CFTC, and federal banking regulators) each oversee certain hedge fund-related activities and advisers. We concluded that given the recent experience with the financial crisis, regulators should have the information to monitor the activities of market participants that play a prominent role in the financial system, such as hedge funds, to protect investors and manage systemic risk. Lack of appropriate resolution authorities for financial market institutions.
We recently reported that one of the reasons that federal authorities provided financial assistance to at least one troubled institution--the insurance conglomerate AIG--in the crisis stemmed from concerns that a disorderly failure by this institution would have contributed to higher borrowing costs and additional failures, further destabilizing fragile financial markets. According to Federal Reserve officials, the lack of a centralized and orderly resolution mechanism presented the Federal Reserve and Treasury with few alternatives in this case. The lack of an appropriate resolution mechanism for non-banking institutions has resulted in the federal government providing assistance and having significant ongoing exposure to AIG. Lack of a focus on systemwide risk. In March 2009 we also reported on the results of work we conducted at some large, complex financial institutions that indicated that no existing U.S. financial regulator systematically looks across institutions to identify factors that could affect the overall financial system. While regulators periodically conducted horizontal examinations on stress testing, credit risk practices, and risk management, they did not consistently use the results to identify potential systemic risks and had only a limited view of institutions' risk management or their responsibilities. Our July 2009 report on approaches regulators used to restrict the use of financial leveraging--the use of debt or other products to purchase assets or create other financial exposures--by financial institutions also found that regulatory capital measures did not always fully capture certain risks and that none of the multiple regulators responsible for individual markets or institutions had clear responsibility to assess the potential effects of the buildup of systemwide leverage. Recognition of the need for regulatory reform extends beyond U.S. borders.
Various international organizations such as the G20, G30, Bank for International Settlements, and Committee on Capital Markets Regulation have all reported that weaknesses in regulation contributed to the financial crisis. Specifically, among other things, these reports pointed to the fragmented regulatory system, the lack of a systemwide view of risks, and the lack of transparency or oversight of all market participants as contributing to the crisis. Further, the reports noted that sound regulation and a systemwide focus were needed to prevent instability in the financial system, and that recent events have clearly demonstrated that regulatory failures had contributed to the current crisis. In response to consolidation in the financial services industry and past financial crises, other countries have previously made changes to their financial regulatory systems in the years before the most recent crisis. For the purposes of our study, we selected five countries--Australia, Canada, Sweden, the Netherlands, and the United Kingdom--that had sophisticated financial systems and different regulatory structures. Each of these countries restructured their regulatory systems within the last 20 years in response to market developments or financial crises (see table 1). The countries we reviewed chose one of two models--with some implementing an integrated approach, in which responsibilities for overseeing safety and soundness issues and business conduct issues are centralized and unified in usually a single regulator, and with others implementing what is commonly referred to as a "twin peaks" model, in which separate regulatory organizations are responsible for safety and soundness and business conduct regulation. 
A single regulator is viewed by some as advantageous because, with financial firms not being as specialized as they used to be, a single regulator presents economies of scale and efficiency advantages, can quickly resolve conflicts that arise between regulatory objectives, and the regulatory model increases accountability. For example, the United Kingdom moved to a more integrated model of financial services regulation because it recognized that major financial firms had developed into more integrated full services businesses. As a result, this country created one agency (Financial Services Authority) to deal with banking, insurance, asset management and market supervision and regulation. Similarly, Canada and Sweden integrated their regulatory systems prior to the current global financial crisis. In contrast, other countries chose to follow a twin peaks model. The twin peaks model is viewed by some as advantageous because they view the two principal objectives of financial regulation--systemic protection and consumer protection--as being in conflict. Putting these objectives in different agencies institutionalizes the distinction and ensures that each agency focuses on one objective. For example, in order to better regulate financial conglomerates and minimize regulatory arbitrage, Australia created one agency responsible for prudential soundness of all deposit taking, general and life insurance, and retirement pension funds (Australian Prudential Regulatory Authority) and another for business conduct regulation across the financial system including all financial institutions, markets, and market participants (Australian Securities and Investment Commission). In the Netherlands, regulators were divided along the lines of banking, insurance, and securities until the twin peaks approach was adopted. 
Under the revised structure, the prudential and systemic risk supervisor of all financial services including banking, insurance, pension funds, and securities is the central bank (DNB). Another agency (Netherlands Authority for Financial Markets) is responsible for conduct of business supervision and promoting transparent markets and processes to protect consumers. However, regardless of the regulatory system structure, these and many other countries were affected to some extent by the recent financial crisis. For example, the United Kingdom experienced bank failures, and the government provided financial support to financial institutions. Further, in the Netherlands, where the twin peaks approach is used, the government took over the operations of one bank, provided assistance to financial institutions to reinforce their solvency positions, and took on the risk of a high-risk mortgage portfolio held by another bank, among other actions. However, regulators or financial institutions in some of these countries took steps that may have reduced the impact of the crisis on their institutions. For example, according to a testimony that we reviewed, the impact on Australian institutions was mitigated by the country's relatively stricter prudential standards compared to other countries. The Australian prudential regulator had also conducted a series of stress tests on its five largest banks that assessed the potential impact of asset price changes on institutions. According to Canadian authorities, the positive performance of Canadian banks relative to banks in other countries in the recent crisis was the result of a more conservative risk appetite that limited their activities in subprime mortgages, and exotic financial instruments. 
However, both countries still experienced some turbulence, requiring among other actions, some government purchases of mortgage-backed securities by the Australian government and some Canadian banks taking advantage of liquidity facilities provided by the Bank of Canada. Authorities in these five countries have taken actions or are contemplating additional changes to their financial regulatory systems based on weaknesses identified during the current financial crisis. These changes included strengthening bank capitalization requirements, enhancing corporate governance standards, and providing better mechanisms for resolving failed financial institutions. For example, in the United Kingdom, in response to its experience dealing with one large bank failure (Northern Rock) the government has called for strengthening the role of the central bank. The Banking Act of 2009 formalized a leading role for the Bank of England in resolving financial institution and provided it statutory authority in the oversight of systemically important payment and settlement systems. With a clear need to improve regulatory oversight, our January 2009 report offered a framework for crafting and evaluating regulatory reform proposals. This framework includes nine characteristics that should be reflected in any new regulatory system, including: goals that are clearly articulated and relevant, so that regulators can effectively conduct activities to implement their missions. 
appropriately comprehensive coverage to ensure that financial institutions and activities are regulated in a way that ensures regulatory goals are fully met; a mechanism for identifying, monitoring, and managing risks on a systemwide basis, regardless of the source of the risk or the institution in which it is created; an adaptable and forward-looking approach allows regulators to readily adapt to market innovations and changes and evaluate potential new risks; efficient oversight of financial services by, for example, eliminating overlapping federal regulatory missions, while effectively achieving the goals of regulation; consumer and investor protection as part of the regulatory mission to ensure that market participants receive consistent, useful information, as well as legal protections for similar financial products and services, including disclosures, sales practices standards, and suitability requirements; assurance that regulators have independence from inappropriate influence; have sufficient resources and authority, and are clearly accountable for meeting regulatory goals; assurance that similar institutions, products, risks, and services are subject to consistent regulation, oversight, and transparency; and adequate safeguards that allow financial institution failures to occur while limiting taxpayers' exposure to financial risk. Various organizations have made proposals to reform the U.S. regulatory system, and several proposals have been introduced to the Congress. Among these proposals are the administration's proposal, which is specified in its white paper and draft legislation, and another proposal that has been introduced as legislation in the House of Representatives (H.R. 3310). The administration's proposal includes various elements that could potentially improve federal oversight of the financial markets and better protect consumers and investors. 
For example, it establishes a council consisting of federal financial regulators that would, among other things, advise Congress on financial regulation and monitor the financial services market to identify the potential risks systemwide. Under H.R. 3310, a board consisting of federal financial regulators and private members, would also monitor the financial system for exposure to systemic risk and advise Congress. The creation of such a body under either proposal would fill an important need in the current U.S. regulatory system by establishing an entity responsible for helping Congress and regulators identify potential systemic problems and making recommendations in response to existing and emerging risks. However, such an entity would also need adequate authority to ensure that actions were taken in response to its recommendations. As discussed, the inability of regulators to take appropriate action to mitigate problems that posed systemic risk contributed to the current crisis. The administration's proposal also contains measures to improve the consistency of consumer and investor protection. First, the administration proposes to create a new agency, the Consumer Financial Protection Agency (CFPA). Among other things, this agency would assume the consumer protection authorities of the current banking regulators and would have broad jurisdiction and responsibility for protecting consumers of credit, savings, payment and other consumer financial products and services. Its supervisory and enforcement authority generally would cover all persons subject to the financial consumer protection statutes it would be charged with administering. However, the SEC and CFTC would retain their consumer protection role in securities and derivatives markets. As our January report described, consumers have struggled with understanding complex products and the multiple regulators responsible for overseeing such issues have not always performed effectively. 
We urged that a new regulatory system be designed to provide high-quality, effective, and consistent protection for consumers and investors in similar situations. The administration's proposal addresses this need by charging a single financial regulatory agency with broad consumer protection responsibilities. This approach could improve the oversight of this important issue and better protect U.S. consumers. However, separating the conduct of consumer protection and prudential regulation can also create challenges. Therefore, having clear requirements to coordinate efforts across regulators responsible for these different missions would be needed. Although the Administration's proposal would make various improvements in the U.S. regulatory system, our analysis indicated that additional opportunities exist to further improve the system exist. Unlike H.R. 3310, which would combine all five federal depository institution regulators, the Administration's proposal would only combine the current regulators for national banks and thrifts into one agency, leaving the three other depository institution regulators--the Federal Reserve, the FDIC, and NCUA--to remain separate. As we reported in our January 2009 report, having multiple regulators performing similar functions presents challenges. For example, we found that some regulators lacked sufficient resources and expertise, that the need to coordinate among multiple regulators slowed responses to market events, and that institutions could take advantage of regulatory arbitrage by seeking regulation from an agency more likely to offer less scrutiny. Regulators that are funded by assessments on their regulated entities can also become overly dependent on individual institutions for funding, which could potentially compromise their independence because such firms have the ability to choose to be overseen by another regulator. 
Finally, regardless of any regulatory reforms that are adopted, we urge Congress to continue to actively monitor the progress of such implementation and to be prepared to make legislative adjustments to ensure that any changes to the U.S. financial regulatory system are as effective as possible. In addition, we believe that it is important that Congress provides for appropriate GAO oversight of any regulatory reforms to ensure accountability and transparency in any new regulatory system. GAO stands ready to assist the Congress in its oversight capacity and evaluate the progress agencies are making in implementing any changes. Mr. Chairman and Members of the Committee, I appreciate the opportunity to discuss these critically important issues and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact Orice Williams Brown at (202) 512-8678 or [email protected], or Richard J. Hillman at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Cody J. Goebel, Assistant Director; Sonja J. Bensen; Emily R. Chalmers, Patrick S. Dynes; Marc W. Molino; Jill M. Naamane; and Paul Thompson. As a result of significant market developments in recent decades that have outpaced a fragmented and outdated regulatory structure, significant reforms to the U.S. regulatory system are critically and urgently needed. The following framework consists of nine elements that should be reflected in any new regulatory system. This framework could be used to craft proposals, or to identify aspects to be added to existing proposals to make them more effective and appropriate for addressing the limitations of the current system. Goals should be clearly articulated and relevant, so that regulators can effectively carry out their missions and be held accountable. 
Key issues include considering the benefits of re- examining the goals of financial regulation to gain needed consensus and making explicit a set of updated comprehensive and cohesive goals that reflect today's environment. Financial regulations should cover all activities that pose risks or are otherwise important to meeting regulatory goals and should ensure that appropriate determinations are made about how extensive such regulations should be, considering that some activities may require less regulation than others. Key issues include identifying risk-based criteria, such as a product's or institution's potential to create systemic problems, for determining the appropriate level of oversight for financial activities and institutions, including closing gaps that contributed to the current crisis. Mechanisms should be included for identifying, monitoring, and managing risks to the financial system regardless of the source of the risk. Given that no regulator is currently tasked with this, key issues include determining how to effectively monitor market developments to identify potential risks; the degree, if any, to which regulatory intervention might be required; and who should hold such responsibilities. A regulatory system that is flexible and forward looking allows regulators to readily adapt to market innovations and changes. Key issues include identifying and acting on emerging risks in a timely way without hindering innovation. Effective and efficient oversight should be developed, including eliminating overlapping federal regulatory missions where appropriate, and minimizing regulatory burden without sacrificing effective oversight. Any changes to the system should be continually focused on improving the effectiveness of the financial regulatory system. 
Key issues include determining opportunities for consolidation given the large number of overlapping participants now, identifying the appropriate role of states and self-regulation, and ensuring a smooth transition to any new system. Consumer and investor protection should be included as part of the regulatory mission to ensure that market participants receive consistent, useful information, as well as legal protections for similar financial products and services, including disclosures, sales practice standards, and suitability requirements. Key issues include determining what amount, if any, of consolidation of responsibility may be necessary to streamline consumer protection activities across the financial services industry. Regulators should have independence from inappropriate influence, as well as prominence and authority to carry out and enforce statutory missions, and be clearly accountable for meeting regulatory goals. With regulators with varying levels of prominence and funding schemes now, key issues include how to appropriately structure and fund agencies to ensure that each one's structure sufficiently achieves these characteristics. Similar institutions, products, risks, and services should be subject to consistent regulation, oversight, and transparency, which should help minimize negative competitive outcomes while harmonizing oversight, both within the United States and internationally. Key issues include identifying activities that pose similar risks, and streamlining regulatory activities to achieve consistency. A regulatory system should foster financial markets that are resilient enough to absorb failures and thereby limit the need for federal intervention and limit taxpayers' exposure to financial risk. Key issues include identifying safeguards to prevent systemic crises and minimizing moral hazard. Financial Markets Regulation: Financial Crisis Highlights Need to Improve Oversight of Leverage at Financial Institutions and across System. 
GAO-09-739. Washington, D.C.: Jul. 22, 2009. Fair Lending: Data Limitations and the Fragmented U.S. Financial Regulatory Structure Challenge Federal Oversight and Enforcement Efforts. GAO-09-704. Washington, D.C.: Jul. 15, 2009. Hedge Funds: Overview of Regulatory Oversight, Counterparty Risks, and Investment Challenges. GAO-09-677T. Washington, D.C.: May 7, 2009. Financial Regulation: Review of Regulators' Oversight of Risk Management Systems at a Limited Number of Large, Complex Financial Institutions. GAO-09-499T. Washington, D.C.: Mar. 18, 2009. Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: Mar. 18, 2009. Systemic Risk: Regulatory Oversight and Recent Initiatives to Address Risk Posed by Credit Default Swaps. GAO-09-397T. Washington, D.C.: Mar. 5, 2009. Bank Secrecy Act: Federal Agencies Should Take Action to Further Improve Coordination and Information-Sharing Efforts. GAO-09-227. Washington, D.C.: Feb. 12, 2009. Financial Regulation: A Framework for Crafting and Assessing Proposals to Modernize the Outdated U.S. Financial Regulatory System. GAO-09-216. Washington, D.C.: Jan. 8, 2009. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008. Hedge Funds: Regulators and Market Participants Are Taking Steps to Strengthen Market Discipline, but Continued Attention Is Needed. GAO-08-200. Washington, D.C.: January 24, 2008. Information on Recent Default and Foreclosure Trends for Home Mortgages and Associated Economic and Market Developments. GAO-08-78R. Washington, D.C.: October 16, 2007. Financial Regulation: Industry Trends Continue to Challenge the Federal Regulatory Structure. GAO-08-32. Washington, D.C.: October 12, 2007. Financial Market Regulation: Agencies Engaged in Consolidated Supervision Can Strengthen Performance Measurement and Collaboration. GAO-07-154. 
Washington, D.C.: March 15, 2007. Alternative Mortgage Products: Impact on Defaults Remains Unclear, but Disclosure of Risks to Borrowers Could Be Improved. GAO-06-1021. Washington, D.C.: September 19, 2006. Credit Cards: Increased Complexity in Rates and Fees Heightens Need for More Effective Disclosures to Consumers. GAO-06-929. Washington, D.C.: September 12, 2006. Financial Regulation: Industry Changes Prompt Need to Reconsider U.S. Regulatory Structure. GAO-05-61. Washington, D.C.: October 6, 2004. Consumer Protection: Federal and State Agencies Face Challenges in Combating Predatory Lending. GAO-04-280. Washington, D.C.: January 30, 2004. Long-Term Capital Management: Regulators Need to Focus Greater Attention on Systemic Risk. GAO/GGD-00-3. Washington, D.C.: October 29, 1999. Financial Derivatives: Actions Needed to Protect the Financial System. GAO/GGD-94-133. Washington, D.C.: May 18, 1994. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
|
This testimony discusses issues relating to efforts to reform the regulatory structure of the financial system. In the midst of the worst economic crisis affecting financial markets globally in more than 75 years, federal officials have taken unprecedented steps to stem the unraveling of the financial services sector. While these actions aimed to provide relief in the short term, the severity of the crisis has shown clearly that in the long term, the current U.S. financial regulatory system was in need of significant reform. Our January 2009 report presented a framework for evaluating proposals to modernize the U.S. financial regulatory system, and work we have conducted since that report further underscores the urgent need for changes in the system. Given the importance of the U.S. financial sector to the domestic and international economies, in January 2009, we also added modernization of its outdated regulatory system as a new area to our list of high-risk areas of government operations because of the fragmented and outdated regulatory structure. We noted that modernizing the U.S. financial regulatory system will be a critical step to ensuring that the challenges of the 21st century can be met. This testimony discusses (1) how regulation has evolved and recent work that further illustrates the significant limitations and gaps in the existing regulatory system, (2) the experiences of countries with other types of varying regulatory structures during the financial crisis, and (3) how certain aspects of proposals would reform the U.S. regulatory system. The current U.S. financial regulatory system is fragmented due to complex arrangements of federal and state regulation put into place over the past 150 years. It has not kept pace with major developments in financial markets and products in recent decades. 
Today, almost a dozen federal regulatory agencies, numerous self-regulatory organizations, and hundreds of state financial regulatory agencies share responsibility for overseeing the financial services industry. Several key changes in financial markets and products in recent decades have highlighted significant limitations and gaps in the existing U.S. regulatory system. For example, regulators have struggled, and often failed, both to identify the systemic risks posed by large and interconnected financial conglomerates and to ensure these entities adequately manage their risks. In addition, regulators have had to address problems in financial markets resulting from the activities of sometimes less-regulated and large market participants--such as nonbank mortgage lenders, hedge funds, and credit rating agencies--some of which play significant roles in today's financial markets. Further, the increasing prevalence of new and more complex financial products has challenged regulators and investors, and consumers have faced difficulty understanding new and increasingly complex retail mortgage and credit products. Our recent work has also highlighted significant gaps in the regulatory system and the need for an entity responsible for identifying existing and emerging systemic risks. Various countries have implemented changes in their regulatory systems in recent years, but the current crisis affected most countries regardless of their structure. All of the countries we reviewed have more concentrated regulatory structures than that of the United States. Some countries, such as the United Kingdom, have chosen an integrated approach to regulation that unites safety and soundness and business conduct issues under a single regulator. Others, such as Australia, have chosen a "twin peaks" approach, in which separate agencies are responsible for safety and soundness and business conduct regulation. 
However, regardless of regulatory structure, each country we reviewed was affected to some extent by the recent financial crisis. One regulatory approach was not necessarily more effective than another in preventing or mitigating a financial crisis. However, regulators in some countries had already taken some actions that may have reduced the impact on their institutions. These and other countries also have taken or are currently contemplating additional changes to their regulatory systems to address weaknesses identified during this crisis. The Department of the Treasury's recent proposal to reform the U.S. financial regulatory system includes some elements that would likely improve oversight of the financial markets and make the financial system more sound, stable, and safer for consumers and investors. For example, under this proposal a new governmental body would have responsibility for assessing threats that could pose systemic risk. This proposal would also create an entity responsible for business conduct, that is, ensuring that consumers of financial services were adequately protected. However, our analysis indicated that additional opportunities exist beyond the Treasury's proposal for additional regulatory consolidation that could further decrease fragmentation in the regulatory system, reduce the potential for differing regulatory treatment, and improve regulatory independence.
| 6,185 | 921 |
Since the 1960s, the United States has operated polar-orbiting satellite systems that obtain environmental data to support weather observations and forecasts. These data are processed to provide graphical weather images and specialized weather products. Data from polar satellites are also the predominant input to numerical weather prediction models, which are a primary tool for forecasting weather days in advance--including forecasting the path and intensity of hurricanes. These weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects. Polar-orbiting satellites circle the earth in a nearly north-south orbit, providing global observation of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day. As the earth rotates beneath it, each polar-orbiting satellite views the entire earth's surface twice a day. Currently, the primary polar-orbiting satellites providing input to weather forecasting models are a NOAA/NASA satellite (called Suomi National Polar-orbiting Partnership, or S-NPP), two Department of Defense (DOD) satellites, and a series of European satellites. These satellites cross the equator in early morning, mid-morning, and early afternoon orbits, with S-NPP in the early afternoon orbit. NOAA, the Air Force, and a European weather satellite organization also maintain older satellites that provide limited backup to these operational satellites. Figure 1 illustrates the current operational polar satellite constellation. According to NOAA, 80 percent of the data assimilated into its National Weather Service numerical weather prediction models that are used to produce weather forecasts 3 days and beyond are provided by polar-orbiting satellites. 
Specifically, a single afternoon polar satellite provides NOAA 45 percent of the global coverage it needs for its numerical weather models. NOAA obtains the rest of the polar satellite data it needs from other satellite programs, including DOD's early morning satellites and the European mid-morning satellite. NOAA is currently executing a major satellite acquisition program to replace existing polar satellite systems that are nearing the end of their expected life spans. NOAA established the JPSS program in 2010 after a prior tri-agency program was disbanded due to technical and management challenges, cost growth, and schedule delays. The JPSS program guided the development and launch of the S-NPP satellite in 2011 and is responsible for two other planned JPSS satellites, known as JPSS-1 and JPSS-2. The current anticipated launch dates for these two satellites are March 2017 and December 2021, respectively. More recently, NOAA has also begun planning the Polar Follow-On (PFO) program, which is to include the development and launch of a third and fourth satellite in the series in July 2026 and July 2031, respectively. These are planned to be nearly identical to the JPSS-2 satellite. NOAA has organized the JPSS program into flight and ground projects that have separate areas of responsibility. The flight project includes a set of five instruments, the spacecraft, and launch services. The ground project consists of ground-based systems that handle satellite communications and data processing. The ground system's versions are numbered; the version that is currently in use is called Block 1.2, and the new version that is under development is called Block 2.0. Among other things, Block 2.0 is to enable the JPSS ground system to support both the S-NPP and all planned JPSS satellites. Since 2012, we have issued reports on the JPSS program that highlighted technical issues, component cost growth, management challenges, and key risks. 
In these reports, we made 15 recommendations to NOAA to improve the management of the JPSS program. These recommendations included addressing key risks, establishing a comprehensive contingency plan consistent with best practices, and addressing weaknesses in information security practices. As we reported in May 2016, the agency had implemented 2 recommendations and was working to address the remainder. In particular, NOAA established contingency plans to mitigate the possibility of a polar satellite data gap and began tracking completion dates for its gap mitigation activities. NOAA has also taken steps such as performing a new schedule risk analysis and adding information on the impact of space debris to its annual assessment of satellite availability. We have ongoing work reviewing the agency's progress in implementing these open recommendations. Over the past year, the JPSS program has made progress in developing the JPSS-1 satellite, but continues to face challenges as it approaches the early 2017 launch date. The program completed all instruments on the JPSS-1 satellite and integrated them on the spacecraft by early 2016. As of December 2015, the JPSS program reported that it remained on track to meet its committed launch date of March 2017. However, as highlighted in our May 2016 report, challenges remain as the program approaches the early 2017 launch date. Specifically, the JPSS program had experienced delays ranging from 3 to 10 months on key components since mid-2014, as well as technical challenges on both the flight and ground systems. For example, the program recently experienced multiple issues in completing a component on the spacecraft, called a gimbal, which pushed the component's planned completion date back by almost a year before the gimbal was finally completed in March 2016. These issues in turn delayed the beginning of the JPSS-1 satellite's environmental testing. 
The gimbal issue was also a factor in the program choosing to move back its launch readiness date--the date that the JPSS-1 satellite is planned to be ready for launch--from December 2016 to January 2017. Regarding the JPSS ground system, the program experienced an unexpectedly high number of program trouble reports in completing the upgrade to Block 2.0, which is needed to deliver security and requirements improvements in tandem with the JPSS-1 satellite's launch. A key milestone related to this upgrade was recently delayed from January to August 2016. While NOAA satellite timelines show continuous coverage in the afternoon orbit, the JPSS program still faces the potential for a near-term gap in satellite coverage. As we reported in May 2016, NOAA had increased the estimated useful life for S-NPP by up to 4 years. Under this new scenario, a near-term gap in satellite data would not be expected because S-NPP would last beyond the expected start of operations for JPSS-1. However, subsequent NOAA documentation showed this 4-year period as "fuel limited life." NOAA officials explained that this extended period is based on expected fuel availability and does not take into account the likelihood that the instruments and spacecraft will fail before the satellite runs out of fuel. In other words, the extended useful life depicts the satellite's maximum possible life, not its expected life. As a result, the JPSS program continues to face a potential gap of 8 months between the end of S-NPP's expected life in October 2016 and when the JPSS-1 satellite is launched and completes post-launch testing in June 2017. Figure 2 shows the potential gap period. The June 2017 completion date also assumes a 3-month period for the JPSS-1 satellite's on-orbit checkout. However, based on on-orbit checkout periods from past polar satellites, checkout could well take longer than this, potentially lengthening the gap. 
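As a rough illustration (not part of GAO's analysis), the month arithmetic behind the potential gap can be sketched at month granularity. The dates come from the report; the helper functions and the 3-month checkout assumption are illustrative:

```python
# Illustrative sketch of the coverage-gap arithmetic (month granularity).
# Dates come from the report; the helper functions are hypothetical.

def months_between(start, end):
    """Whole months from a (year, month) start to a (year, month) end."""
    return (end[0] - start[0]) * 12 + (end[1] - start[1])

def add_months(date, months):
    """Advance a (year, month) pair by a number of months."""
    total = date[1] + months - 1
    return (date[0] + total // 12, total % 12 + 1)

snpp_end_of_life = (2016, 10)  # end of S-NPP's expected life
jpss1_launch = (2017, 3)       # committed JPSS-1 launch date
checkout_months = 3            # assumed on-orbit checkout period

jpss1_operational = add_months(jpss1_launch, checkout_months)
gap = months_between(snpp_end_of_life, jpss1_operational)
print(jpss1_operational, gap)  # (2017, 6) and an 8-month potential gap
```

Lengthening `checkout_months` to 6, for example, would extend the potential gap to 11 months, which is why longer past checkout periods matter.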
As a precedent, it took the JPSS program about 2 years to fully validate the highest-priority data products from the S-NPP satellite. If S-NPP unexpectedly fails sooner, or the JPSS-1 launch date is delayed, a longer gap could result. In addition to its work in completing the JPSS-1 satellite, NOAA has begun planning for new satellites to ensure the future continuity of polar satellite data. In a new program, called the Polar Follow-On (PFO), NOAA plans to build two new satellites, JPSS-3 and JPSS-4, that are copies of the JPSS-2 satellite. Like JPSS-2, these satellites are to include all three key performance parameter instruments, as well as a fourth environmental sensor. NOAA plans to complete development of JPSS-3 and JPSS-4 several years ahead of their planned launch dates. In the nearer term, NOAA plans to build a smaller satellite that can provide a replacement for some data produced by one of the most essential JPSS instruments. NOAA's decisions on what PFO will include are based on what the agency calls a robust constellation: one in which it would take two failures to create a gap in data from key instruments, and in which the agency would be able to restore full coverage within a year in the event of a failure. We reported in May 2016 that NOAA has taken several steps in planning the PFO program, including establishing goal launch dates and high-level budget estimates. However, it had not completed formulation documents such as high-level requirements, a project plan, or budget information for key components. In addition, uncertainties remain about whether early development of JPSS-3 and JPSS-4 is necessary to achieve robustness. For instance, in its initial calendar for PFO, NOAA considered lifetimes of 10 years or more for the JPSS-1 and JPSS-2 satellites, while NOAA charts used for budget justification continue to show only 7-year lifetimes. If satellites are likely to last longer than expected, there could be unnecessary redundancy in coverage. 
Until NOAA ensures that its plans for future polar satellite development are based on the full range of estimated lives of potential satellites, the agency may not be making the most efficient use of the nation's sizable investment in the polar satellite program. As a result of this uncertainty, we recommended that NOAA evaluate the costs and benefits of different launch scenarios for the JPSS PFO program, based on updated satellite life expectancies, to ensure satellite continuity while minimizing program costs. NOAA concurred and noted that it had evaluated the costs and benefits of different launch scenarios using the latest estimates of satellite lives as part of its budget submission. However, the agency did not provide sufficient supporting evidence or artifacts showing that it had evaluated costs and benefits of launch scenarios in this way. NOAA's National Environmental Satellite Data and Information Service (NESDIS) regularly publishes "flyout charts" for its satellites which depict timelines for the launch, on-orbit storage, and operational life of its satellites. Among other things, NOAA uses these charts to support budget requests, alert users when new satellites will be operational, and keep the public informed on plans to maintain satellite continuity. In a draft report currently at the Department of Commerce for comment, we reported that NOAA has updated its polar flyout charts three times in the last 2-and-a-half years. Key changes that can result in an update include adding newly planned satellites; removing a satellite that has reached the end of its life; and adjusting planned dates for when satellites are to launch, begin operations, or reach the end of their useful lives. Among the data NOAA uses in updating its charts are health status information of operational satellites, planned schedules for new satellites, and analysis from operational satellite experts. 
However, while NOAA regularly updates its charts and most of the data on them were aligned with other program documentation, the agency has not consistently ensured that its charts were accurate, supported by stringent analysis, and fully documented. Specifically: 
* The charts were at times inconsistent with other program data. For example, in one out of 10 available instances for comparison, flyout chart data did not match underlying program data. JPSS program data as of April 2015 listed the JPSS-2 satellite launch as November 2021, but the flyout chart from that month showed it 4 months earlier, in July 2021. 
* The flyout charts also inconsistently reflected data from annual satellite availability assessments performed by the JPSS program. In addition, weaknesses remained in the latest annual availability assessment from 2015. For example, NOAA assumed that JPSS-1 data from key instruments will be available to users 3 months after launch. However, based on on-orbit checkout periods from past polar satellites, it is likely that checkout could take much longer than this, potentially lengthening the gap. 
* NOAA did not consistently document the justification for updates to its polar satellite flyout charts. For example, the NOAA department responsible for providing summary packages for each flyout chart update provided justification for the key changes in only one of three documentation packages. Furthermore, standard summary documents, such as a routing list and information on the disposition of comments, were included for only one of the three documentation packages for polar flyout charts. 
* NOAA also does not consistently depict how long a satellite might last once it is beyond its design life. For instance, NESDIS, the NOAA entity responsible for satellite operations, recently added a 4-year extension to the useful life of the S-NPP satellite. This extension was meant to depict maximum potential life, assuming all instruments and the spacecraft continue functioning. 
However, the agency did not clearly define this term on its charts, leaving readers to assume that the agency expects the satellites to last through the end of the fuel-limited life period. Also, as stated above, in its justification for funding for the PFO program, NOAA considered lifetimes for JPSS-1 and JPSS-2 that were several years longer than the lifetimes listed on its flyout charts. Program officials indicated that the estimates they develop prior to a satellite's launch are more conservative due to greater uncertainty at that stage. However, inconsistencies such as these imply that some satellites will reach their end of life sooner or later than the agency anticipates. Part of the reason for these process shortfalls is that NOAA has not finalized a policy with standard steps to follow when making chart updates. Consequently, the information that NOAA provides Congress on the flyout charts is not as accurate as it needs to be, which could result in less-than-optimal decisions. Furthermore, failing to communicate the ambiguities inherent in changes to satellite lifetimes could have major effects on future decision-making. To address these weaknesses, our draft report includes a series of recommendations to NOAA, including requiring satellite programs to perform regular assessments of satellite availability, implementing a consistent approach to depicting satellites beyond their design lives, and revising and finalizing the policy for updating flyout charts. Safeguarding federal computer systems and systems supporting national infrastructure is essential to protecting public health and safety. Federal law and guidance specify requirements for protecting federal information and information systems. In particular, the Federal Information Security Modernization Act of 2014 (FISMA) requires executive branch agencies to develop, document, and implement an agency-wide information security program. 
FISMA also requires the National Institute of Standards and Technology (NIST) to develop standards and guidelines for agencies to use in categorizing their information systems and minimum requirements for each category. Accordingly, NIST developed a risk management framework of standards and guidelines to follow in developing information security programs. Figure 3 shows an overview of the steps in this framework, including components of the risk management lifecycle as well as key activities and artifacts. As we reported in May 2016, NOAA had established information security policies in key areas detailed by FISMA and recommended by NIST guidance and the JPSS program had made progress in implementing these policies. However, we found that the program had weaknesses in several areas related to its ground system which, if not addressed, could put the JPSS ground system at high risk of compromise. Key controls not fully implemented. The JPSS program, using NIST guidance on system categorization, identified its ground system as a high-impact system, meaning that a loss of confidentiality, integrity, or availability could be expected to have a catastrophic effect on operations, and identified needed security controls based on this classification. However, the program had fully implemented only 53 percent of required security controls, and had fully implemented controls in only one area. Limitations in controls assessment. The program developed an assessment plan to identify weaknesses in the controls established by the program, and implemented the assessment. However, the assessment had significant limitations, including inconsistencies in maintaining a valid inventory, uncertainty about the physical locations for program components, and a discrepancy between the inventory used for testing and the actual live inventory of the program's systems. Delay in fixing critical weaknesses. 
In accordance with NOAA policy, the program established plans of action and milestones to address control weaknesses in both the current and future versions of its ground system, and had made progress in addressing many of its security weaknesses through this process. However, many vulnerabilities remain unaddressed because the program did not comply with Department of Commerce policy to remediate critical and high-risk vulnerabilities within 30 days. As of its 2015 assessment of program controls, the JPSS program had 146 critical and 951 high-risk vulnerabilities on the current iteration of the ground system, and 102 critical and 295 high-risk vulnerabilities on the next iteration of the ground system. Vulnerabilities remaining open include instances of outdated software, an obsolete web server, and more than 200 instances of outdated virus definitions used for scanning. Figure 4 shows the number of open vulnerabilities on the current JPSS ground system over time. Without addressing these vulnerabilities in a timely manner, the program remains at increased risk of potential exploits. Security incidents reported but not consistently tracked. In accordance with NOAA policy, the JPSS program established a continuous monitoring plan to track security incidents and intrusions and to ensure that information security controls are working. Specifically, NOAA officials reported 10 medium- and high-severity incidents related to the JPSS ground system, including incidents involving unauthorized access to web servers and computers, between August 2014 and August 2015. Of these, NOAA closed 6 incidents involving hostile probes, improper usage, unauthorized access, password sharing, and other IT-related security concerns. However, the agency did not consistently track all incidents. Specifically, there were differences between what was being tracked by the JPSS program and what was closed by NOAA's incident response team. 
For example, 2 of the 4 incidents that were recommended for closure by the JPSS program office are currently still open according to the incident report. Until NOAA and the JPSS program have a consistent understanding of the status of incidents, there is an increased risk that key vulnerabilities will not be identified or properly addressed. To address these deficiencies, we recommended in our May 2016 report that the Secretary of Commerce direct the Administrator of NOAA to establish a plan to address the limitations in the program's efforts to test security controls, including ensuring that (1) any changes in the system's inventory do not materially affect test results; (2) critical and high-risk vulnerabilities are addressed within 30 days, as required by agency policy; and (3) the agency and program are tracking and closing a consistent set of incident response activities. NOAA concurred with our recommendations. Regarding critical and high- risk vulnerabilities, NOAA noted that the JPSS program would continue to follow agency policy allowing its authorizing official to accept risks when remediation cannot be performed as anticipated. However, the program did not have documentation from the authorizing official accepting the risk of a delayed remediation schedule for critical and high-risk vulnerabilities. In summary, NOAA is making progress in developing and testing the JPSS-1 satellite as it moves toward a March 2017 launch date, but continues to experience issues in remaining ground system development, and faces a potential near-term data gap in the period before this satellite becomes operational. In addition, NOAA is planning to launch a future set of satellites to ensure continuity of future satellite data, but it is uncertain which launch timing will best meet the agency's criteria for a robust constellation. 
Without ensuring that its plans for future satellite development are based on the full range of estimated lives of potential satellites, the agency may not be making the most efficient use of the nation's sizable investment in the polar satellite program. Further, findings from a draft report show that NOAA's efforts to depict and update key polar satellite information, such as timelines and operational life, need to be improved. Its flyout charts, used to inform users of potential gaps and support budget requests, did not always accurately reflect current program data or consistently present key information, such as a satellite's lifetime once beyond its original design life. This is in part because NOAA has not finalized a policy that includes standard steps for updating its charts. Until NOAA addresses these shortfalls, it runs an increased risk that its flyout charts will mislead Congress and may lead to less-than-optimal decisions. As a part of JPSS ground system development, NOAA has established policies in key information security areas called for by guidance. However, the program has not fully implemented the policy in several areas. For example, the program fully implemented just over half of its required security controls, a recent security assessment itself had significant limitations, and the program has not remediated critical and high-risk vulnerabilities in a timely manner. Until NOAA addresses these weaknesses, the JPSS ground system remains at high risk of compromise. Chairman Bridenstine, Ranking Member Bonamici, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at [email protected]. 
Other contributors include Colleen Phillips (Assistant Director), Shaun Byrnes (Analyst-in-Charge), Christopher Businsky, Torrey Hardee, Lee McCracken, and Umesh Thakkar. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Polar-orbiting satellites provide data that are essential to support weather observations and forecasts. NOAA is preparing to launch the second satellite in the JPSS program in March 2017, but a near-term gap in polar satellite coverage remains likely. Given the criticality of satellite data to weather forecasts and the potential impact of a satellite data gap, GAO added this area to its High-Risk List in 2013. This statement addresses the status of the JPSS program and plans for future satellites, NOAA's efforts to depict and update satellite timelines, and the JPSS program's implementation of key information security protections. This statement is based on a May 2016 report on JPSS and a draft report on satellite timelines. To develop the draft report, GAO reviewed agency procedures for updating satellite timelines, compared timelines to best practices and agency documentation, and interviewed officials. As highlighted in a May 2016 report, the National Oceanic and Atmospheric Administration's (NOAA) Joint Polar Satellite System (JPSS) program has continued to make progress in developing the JPSS-1 satellite for a March 2017 launch. However, the program has experienced technical challenges which have resulted in delays in interim milestones. In addition, NOAA faces the potential for a near-term gap in satellite coverage of 8 months before the JPSS-1 satellite is launched and completes post-launch testing (see figure). NOAA has also begun planning for future polar satellites. However, uncertainties remained on the best timing for launching these satellites, in part because of the potential for some satellites already in orbit to last longer. NOAA did not provide sufficient evidence that it had evaluated the costs and benefits of launch scenarios for these new satellites based on updated life expectancies. Until this occurs, NOAA may not make the most efficient use of investments in the polar satellite program. 
Note: The afternoon orbit is one of three primary polar orbits providing needed coverage for numerical weather models. As noted in a draft GAO report, NOAA publishes "flyout charts" depicting satellite timelines to support budget requests and appropriations discussions. The agency regularly updates its charts when key changes occur. However, the charts do not always accurately reflect data from other program documentation such as the latest satellite schedules or assessments of satellite availability. NOAA also has not consistently documented its justification for chart updates or depicted lifetimes for satellites beyond their design life, and has not finalized a policy for updating its charts. As a result, the information NOAA provides Congress on the flyout charts is not as accurate as it needs to be, which could result in less-than-optimal decisions. GAO reported in May 2016 that, although NOAA has established information security policies in key areas recommended by guidance, the JPSS program has not yet fully implemented them. Specifically, while the program has implemented multiple relevant security controls, it has not yet fully implemented almost half of the recommended security controls, did not have all of the information it needed when assessing security controls, and has not addressed key vulnerabilities in a timely manner. Furthermore, NOAA has experienced 10 key information security incidents related to the JPSS ground system, including incidents regarding unauthorized access to web servers and computers. Until NOAA addresses these weaknesses, the JPSS ground system remains at high risk of compromise. In its May 2016 report, GAO recommended that NOAA assess the costs and benefits of different launch decisions based on updated satellite life expectancies, and address deficiencies in its information security program. NOAA concurred with these recommendations. 
GAO's draft report includes recommendations to NOAA to improve the accuracy, consistency, and documentation supporting updates to satellite timelines, and to revise and finalize its draft policy governing timeline updates. This report is currently at the Department of Commerce for comment.
The decennial census is the nation's largest, most complex survey. To conduct its decennial activities, the Bureau recruits, hires, and trains over half a million field staff based out of local census offices nationwide, temporarily making it one of the nation's largest employers. The first operation for the 2010 Census has already begun. Starting in January 2007, the Bureau notified state and local governments that it would seek their help in developing a complete address file through the Bureau's LUCA program. Address canvassing--a field operation to build a complete and accurate address list in which census field workers go door to door verifying and correcting addresses for all households and street features contained on decennial maps--will begin in April 2009. One year later, the Bureau will mail census questionnaires to the majority of the population in anticipation of Census Day, April 1, 2010. Those households that do not return their questionnaire will be contacted by census field workers during the nonresponse follow-up operation to determine the number of people living in the housing unit on Census Day, among other information. In addition to these operations, the Bureau conducts other operations, including gathering data from residents in group quarters such as prisons or military bases. The Bureau also employs different enumeration methods in certain settings, such as remote Alaska enumeration, in which people living in inaccessible communities must be contacted in January 2010 in anticipation of the spring thaw, which makes travel difficult, or update/enumerate, a data collection method involving personal interviews that is used in communities where many housing units may not have typical house number-street name mailing addresses. The decennial census is conducted against a backdrop of immutable deadlines. The census's elaborate chain of interrelated pre- and post- Census Day activities is predicated upon those dates. 
To meet these mandated reporting requirements, census activities must occur at specific times and in the proper sequence. The Secretary of Commerce is legally required to (1) conduct the census on April 1 of the decennial year, (2) report the state population counts to the President for purposes of congressional apportionment by December 31 of the decennial year, and (3) send population tabulations to the states for purposes of redistricting no later than 1 year after the April 1 census date. (See table 1 for dates of selected key decennial activities.) The Bureau estimates that it will spend about $3 billion in information technology investments to support collections, processing and dissemination of census data and will be undertaking four major systems acquisitions--totaling about $2 billion. The major acquisitions include the Decennial Response Integration System (DRIS); Field Data Collection Automation (FDCA) program, which includes the handheld mobile computing devices to be used by the Bureau's temporary field staff; Data Access and Dissemination System (DADS II); and Master Address File/Topologically Integrated Geographic Encoding and Referencing Accuracy Improvement Project (MTAIP) system. The four systems were planned to be available for the Dress Rehearsal so that their functionality could be tested in an operational environment. (See table 2.) In June 2005, we reported on the Bureau's progress in five information technology (IT) areas--investment management, systems development/management, enterprise architecture management, information security, and human capital. These areas are important because they have substantial influence on the effectiveness of organizational operations and, if applied effectively, can reduce the risk of cost and schedule overruns, and performance shortfalls. We reported that, while the Bureau had many practices in place, much remained to be done to fully implement effective IT management capabilities. 
We made several recommendations to improve the Bureau's management. Subsequently, in March 2006, we testified on the Bureau's acquisition and management of two key information technology system acquisitions for the 2010 Census--FDCA and DRIS. We reported on the Bureau's progress in implementing acquisition and management capabilities for these initiatives. To effectively manage major IT programs, organizations should use sound acquisition and management processes, minimize risk, and thereby maximize chances for success. Such processes include project and acquisition planning, solicitation, requirements development and management, and risk management. We reported that, while the project offices responsible for these two contracts had carried out initial acquisition management activities, neither office had the full set of capabilities it needed to effectively manage the acquisitions, including a full risk management process. We also made recommendations for the Bureau to implement key activities needed to effectively manage acquisitions. The Bureau agreed with the recommendations but is still in the process of implementing them. Careful planning and monitoring are key to successfully managing a complex undertaking such as the decennial census. In January 2004, we recommended that the Bureau develop a comprehensive integrated project plan. Specifically, we recommended that such a project plan be updated as needed and include: (1) detailed milestones that identify all significant interrelationships; (2) itemized estimated costs of each component, including a sensitivity analysis, and an explanation of significant changes in the assumptions on which these costs are based; (3) key goals translated into measurable, operational terms to provide meaningful guidance for planning and measuring progress; and (4) risk and mitigation plans that fully address all significant potential risks. 
We reported that although some of this information is available piecemeal, to facilitate a thorough, independent review of the Bureau's plans and hold the agency accountable for results, having a single, comprehensive document would be important. In May 2007, we met with Bureau officials to discuss the status of the 2010 project plan. At that time officials indicated that they planned to finalize the project plan over the next several months. We look forward to reviewing the 2010 Census project plan once it becomes available, and we will continue to monitor the Bureau's planning efforts. Among the elements of that plan, we specifically recommended that the Bureau itemize the then-estimated $11.3 billion in costs for completing key activities for the upcoming decennial census. However, in June 2006 before this subcommittee, we testified that the Bureau's $11.3 billion life-cycle cost estimate for the 2010 Census lacked timely and complete supporting data. Specifically, the supporting data of the estimate were not timely because the data did not contain the most current information from testing and evaluation, and were not complete because sufficient information on how changing assumptions could affect cost was not provided. In its Fiscal Year 2008 Budget Estimates, the Bureau updated its estimate to about $11.5 billion. According to Bureau documents, the estimated life-cycle cost for the entire 2010 Census remained relatively unchanged between 2001, when the $11.3 billion estimate first was released, and 2006. In our testimony last year, we noted that the September 2005 estimate was based on assumptions made in 2001 that had not been borne out by testing. One such assumption pertained to the testing of a new handheld mobile computing device that is intended to automate and streamline address canvassing, nonresponse follow-up, coverage measurement, and payroll operations. 
After its 2004 Census Test, the Bureau found that the anticipated 50 percent savings in local office space and staff from using the handheld computers were not realized. Nonetheless, the 2005 estimate continued to assume the 50 percent savings. In our view, revising cost estimates with the most current information allows the Bureau to better manage the cost of the census and make necessary resource trade-offs. Most recently, the Bureau tested a new prototype of the handheld mobile computing devices during the address canvassing operation of the 2008 Dress Rehearsal. This experience should provide the Bureau additional data on productivity and space needs when using the new devices. Table 3 shows the Bureau's cost estimate released in June 2006. Based on the table, most spending will occur between fiscal years 2008 and 2013. Mr. Chairman, as you can see, given the projected increase in spending, it will be imperative that the Bureau effectively manage the 2010 Census, as the risk exists that the actual, final cost of the census could be considerably higher than anticipated. Indeed, this was the case for the 2000 Census, when the Bureau's initial cost projections proved to be too low because of such factors as unforeseen operational problems or changes to the fundamental design. For example, the Bureau estimated that the 2000 Census would cost around $4 billion if sampling was used, and a traditional census without sampling would cost around $5 billion. However, the final price tag for the 2000 Census (without sampling) was over $6.5 billion, a 30 percent increase in cost. Large federal deficits and other fiscal challenges underscore the importance of managing the cost of the census, while promoting an accurate, timely census. 
At the request of the House Committee on Appropriations, Subcommittee on Commerce, Justice, Science and Related Agencies, we are reviewing the life-cycle cost estimate of the 2010 Census to determine whether it is comprehensive, credible, accurate, and adequately supported. During the address canvassing phase of the 2008 Dress Rehearsal, the Bureau tested a prototype of the handheld computers that it intends to use for 2010. The devices are a keystone to the reengineered census because they allow the Bureau to automate operations, and eliminate the need to print millions of paper questionnaires and maps used by temporary field staff to conduct address canvassing and nonresponse follow-up as well as to manage the payroll for field staff. Automating operations allows the Bureau to reduce the cost of operations; thus, it is critical that the risks surrounding the use of the handheld devices be closely monitored and effectively managed to ensure their success. However, during the address canvassing phase of the 2008 Dress Rehearsal, we observed some technical difficulties with the handheld mobile computing device. We observed that it took an inordinate amount of time for field staff using the handheld devices to link multiple units to one mapspot, which occurs when listing units within apartment buildings. In North Carolina, for example, we observed a field staffer take 2 hours to verify 16 addresses in one apartment building. The device was also slow to process addresses that were a part of a large assignment area. These inefficiencies affect productivity and ultimately the cost of the census. Over the next several weeks, we will be working with the Bureau to understand the root cause of the problems we observed. Given the lateness in the testing cycle, the Bureau now runs the risk that if problems do emerge, little time will be left to develop, test, and incorporate refinements to the handheld devices before 2010. 
To date, the Bureau has completed nearly all LUCA activities in its 2008 Dress Rehearsal, and while it has taken many steps to improve LUCA since 2000, additional steps could be taken to address possible new challenges. To reduce participant workload and burden, the Bureau provided a longer period for reviewing and updating LUCA materials; provided options for submitting materials for the LUCA program; and created MAF/TIGER Partnership Software (MTPS), which is designed to assist LUCA program participants in reviewing and updating address and map data. This software will enable users to import address lists and maps for comparison to the Bureau's data and to participate at the same time in both LUCA and another geographic program, the Boundary and Annexation Survey. However, during the Dress Rehearsal, the Bureau tested MTPS with only one local government. The Bureau also planned improvements to LUCA by offering specialized workshops for informational and technical training and supplementing the workshops with new computer-based training. However, the Bureau did not test its computer-based training software in the Dress Rehearsal. Properly executed user-based methods for software testing can give the truest estimate of the extent to which real users can employ a software application effectively, efficiently, and satisfactorily. In June 2007, we recommended that the Bureau better assess the usability of the MTPS and test the computer-based training software with local governments. The Bureau has agreed to do so, and in August 2007 is expected to provide an action plan for how it will implement this recommendation. Additionally, not all participants will rely on the MTPS. For these participants, the Bureau could do more to help them use their own software. 
We found that participants in the LUCA Dress Rehearsal experienced problems converting files from the Bureau's format to their respective applications; in our survey, the majority of respondents reported problems, to some extent, with converting files to appropriate formats. For example, one local official noted that it took him 2 days to determine how to convert the Bureau's files. At present, the Bureau does not know how many localities participating in LUCA will opt not to use MTPS, but those localities may face the same challenges as the Dress Rehearsal participants. In response to our recommendations, the Bureau agreed to disseminate instructions on file conversion on its Web site and provide instructions to help-desk callers. The Bureau's reengineered approach for the 2010 Census involves greater use of automation, which offers the prospect of greater efficiency and effectiveness; however, these actions also introduce new risks. The automation of key census processes involves an extensive reliance on contractors. Consequently, contract oversight and management become a key challenge to a successful census. We are (1) determining the status and plans for DRIS, FDCA, MTAIP, and DADS II (including cost, schedule, and performance); and (2) assessing whether the Bureau is adequately managing risks associated with these key contracts, including efforts to integrate systems. We are scheduled to report the results of our work by September 2007. Effective risk management includes identifying and analyzing risks; assigning resources; developing risk mitigation plans with milestones for key mitigation deliverables; briefing senior-level managers on high-priority risks; and tracking risks to closure. Risk management is an important project management discipline for ensuring that, among other things, key technologies are delivered on time, within budget, and with the promised functionality.
The Bureau has awarded three of four 2010 decennial census contracts: MTAIP (June 2002), DRIS (October 2005), and FDCA (March 2006). For DADS II, the Bureau delayed the contract award by 1 year (the contract is now scheduled to be awarded in September 2007). In March 2006, Bureau officials said that the 1-year delay was intended to give the Bureau a clearer sense of budget priorities before initiating the request-for-proposal process. Our preliminary results on the status and plans for the three awarded 2010 decennial census system contracts show that the contractors are making mixed progress in meeting cost, schedule, and functional performance targets. Specifically, the DRIS, FDCA, and MTAIP contractors are delivering products on schedule. For example, as of March 2007, the MTAIP contractor had delivered 2,513 of the 3,232 improved county map files to the Bureau's repository of the location of every street, boundary, and other map feature (known as the TIGER database). In addition, the DRIS contractor has delivered certain program management documents on schedule, including the External Interface Control document, which documents the interfaces between DRIS and the other 2010 Census systems, such as FDCA. Also, the FDCA contractor provided the 1,400 handheld mobile computing devices on schedule for conducting the May 2007 address canvassing for the Dress Rehearsal sites in North Carolina and California. Concerning costs, two projects--DRIS and MTAIP--are in line with their projected budgets. For example, as of March 2007, of the $66 million planned for DRIS during this period, the Bureau had obligated $37 million and disbursed $19 million, with the project 36 percent complete. Further, our analyses of cost performance reports show no projected cost overrun for DRIS by the 2008 Dress Rehearsal. However, the FDCA project is projected to experience cost overruns by the 2008 Dress Rehearsal.
Our analyses of earned value management (EVM) data show a projected FDCA cost overrun of between $17 million and $22 million, with the most likely overrun being about $18 million. According to the contractor, the overrun is occurring primarily because of growth in system requirements. We are concerned that this is an indication of additional cost increases to come, given the requirements growth associated with FDCA. The Bureau has delayed delivering some key functionality that was expected to be in place for the Dress Rehearsal. For example, some key functionality expected to be delivered under the DRIS contract, including the 2010 Census telephone assistance system, has been delayed until fiscal year 2009. The Bureau has stated that it will not have a robust telephone assistance system in place for the Dress Rehearsal. The Bureau has also delayed selecting data capture center sites for the 2010 Census, building out data capture facilities (including physical security, hardware, furniture, and telecommunications), and recruiting and hiring data capture center staff. According to the Bureau, this delay will affect areas such as hardware installation and staff training. Further, the Dress Rehearsal will not include all collection forms for the 2010 Census. According to project team officials, changes to the original DRIS functionality resulted from the Bureau's fiscal year 2006 budget constraints, which changed its priorities for the 2008 Dress Rehearsal. Testing is particularly important because systems and functionality planned for the 2010 Census will not be available for the 2008 Dress Rehearsal. The Bureau has plans to conduct system tests, such as tests of the interfaces between FDCA and DRIS. However, the Bureau has not finalized plans for other tests to be performed for the 2010 Census, such as end-to-end testing.
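The EVM-based overrun projection described above rests on standard earned value formulas, which derive a range of estimates at completion (EAC) from the budget, the value of work performed, and actual costs. The sketch below is purely illustrative: the function name and all dollar figures are hypothetical, not the Bureau's actual FDCA data.

```python
# Standard earned value management (EVM) projections (illustrative only).
# bac = budget at completion; ev = earned value (budgeted cost of work
# performed); ac = actual cost of work performed.
def evm_projection(bac, ev, ac):
    cpi = ev / ac                  # cost performance index (<1 signals overrun)
    eac_persistent = bac / cpi     # EAC if current cost inefficiency persists
    eac_one_off = ac + (bac - ev)  # EAC if the variance to date is a one-time event
    low, high = sorted((eac_persistent, eac_one_off))
    return {
        "cpi": cpi,
        "eac_range": (low, high),
        "overrun_range": (low - bac, high - bac),
    }

# Hypothetical project, in millions of dollars: a $200M budget, with work
# worth $80M completed at an actual cost of $90M.
result = evm_projection(bac=200.0, ev=80.0, ac=90.0)
# The projected cost at completion falls between $210M and $225M, i.e., a
# $10M-$25M overrun range, analogous in form to the $17M-$22M FDCA range.
```

The range arises because different EAC formulas embed different assumptions about whether past cost performance will continue; analysts typically report the resulting spread along with a most likely point estimate, as is done for FDCA above.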
End-to-end testing is performed to verify that a defined set of interrelated systems that collectively support an organizational core business function interoperate as intended in an operational environment. The failure to conduct end-to-end testing increases the risks of systems performance failure occurring during the 2010 Census operations. Our preliminary results also show that the Bureau's project teams have made progress in risk management activities, but weaknesses remain. According to the Software Engineering Institute's (SEI) Capability Maturity Model Integration (CMMI), the purpose of risk management is to identify potential problems before they occur so that risk-handling activities can be executed as needed to mitigate adverse impacts. Risk management activities can be divided into key areas, including identifying and analyzing risks, mitigating risks, and executive oversight. The discipline of risk management is important to help ensure that projects are delivered on time, within budget, and with the promised functionality. It is especially important for the 2010 Census, given the immovable deadline. Our preliminary results on the Bureau's risk management processes show that the project teams have performed many practices associated with establishing sound and capable risk management processes. Specifically, most of the projects (DRIS, FDCA, and DADS II) had developed a risk management strategy to identify the methods or tools to be used for risk identification, risk analysis and prioritization, and risk mitigation. However, some projects did not fully identify risks, establish mitigation plans that identified planned actions and milestones, and report risk status to higher-level officials. All four projects were identifying and analyzing risks, but one project team was not adequately performing this activity.
As of May 2007, the most significant risks for DRIS included the possibility of a continuing budget resolution for fiscal year 2008, new system security regulations, and disagreement between the Bureau and contractor on functionality implementation. For FDCA, as of May 2007, the most significant risks included insufficient funding, late development of training materials, and untimely completion of IT Security Certification and Accreditation. However, as part of our ongoing work, we question the completeness of the reported risks. For example, although the FDCA project had experienced a major increase in the number of requirements, the project team did not identify this as a significant risk. In addition, the project office did not identify any risks associated with using the handheld mobile computing devices. All four projects are developing risk mitigation plans as a response strategy for handling risks, but three project teams (DADS II, FDCA, and MTAIP) developed mitigation plans that were often untimely or had incomplete activities and milestones. For example, although mitigation plans were developed for all high-level risks, they did not always identify milestones for implementing mitigating activities. In addition, the FDCA project has yet to provide any evidence of mitigation plans to handle its medium-level risks as described in its risk management strategy. Two projects (MTAIP and FDCA) have yet to provide evidence that risks were reported regularly to higher-level Department of Commerce and Bureau officials. For example, although both project teams had met with Commerce and Bureau officials to discuss the status of the projects, the meetings did not include discussions about the status of risks. The failure to develop timely and complete mitigation plans increases the project's exposure to risks and reduces the project team's ability to effectively control and manage risks during the work effort.
Further, failure to report a project's risks to higher-level officials reduces the visibility of risks to executives who should be playing a role in mitigating them. Until the project teams implement effective and consistent risk management processes, the Bureau faces increased risks that system acquisition projects will incur cost overruns, schedule delays, and performance shortfalls. As part of our evaluation of the Bureau's LUCA Dress Rehearsal, we visited localities along the Gulf Coast to assess the effect that Hurricanes Katrina and Rita might have on decennial activities in these geographic areas, and we found that the damage and devastation of these hurricanes will likely affect the Bureau's LUCA program and possibly other operations. The Bureau has begun to take steps toward addressing these issues by developing proposed actions. However, the Bureau has not yet finalized plans and milestones for modifying address canvassing or subsequent operations in hurricane-affected areas. In visiting localities along the Gulf Coast earlier this year, we observed that the effects of the hurricanes are still visible throughout the region. Hurricane Katrina alone destroyed or made uninhabitable an estimated 300,000 homes; in New Orleans, local officials reported that Hurricane Katrina damaged an estimated 123,000 housing units. Such changes in housing unit stock continue to present challenges to the implementation of the 2010 LUCA Program and address canvassing operations in the Gulf Coast region. Many officials of local governments we visited in hurricane-affected areas said they have identified numerous housing units that have been or will be demolished as a result of Hurricanes Katrina and Rita and subsequent deterioration. Conversely, many local governments estimate that there is new development of housing units in their respective jurisdictions.
The localities we interviewed in the Gulf Coast region indicated that such changes in the housing stock of their jurisdictions are unlikely to subside before local governments begin reviewing and updating materials for the Bureau's 2010 LUCA Program--in August 2007. As a result, local governments in hurricane-affected areas may be unable to fully capture reliable information about their address lists before the beginning of LUCA. The mixed condition of the housing stock in the Gulf Coast could decrease productivity rates during address canvassing. We observed that hurricane-affected areas have many neighborhoods with abandoned and vacant properties mixed in with occupied housing units. Bureau field staff conducting address canvassing in these areas may have decreased productivity due to the additional time necessary to distinguish between abandoned, vacant, and occupied housing units. We also observed many areas where lots included a permanent structure with undetermined occupancy as well as a trailer. Bureau field staff may be presented with the challenge of determining whether a residence, a trailer (see fig. 1), or both are occupied. Another potential issue is that, due to continuing changes in the condition of the housing stock, housing units that are deemed uninhabitable during address canvassing may be occupied on Census Day, April 1, 2010. Bureau officials said that they recognize there are issues with identifying uninhabitable structures in hurricane-affected zones. Further, workforce shortages may also pose significant problems for the Bureau's hiring efforts for address canvassing. The effects of Hurricanes Katrina and Rita caused a major shift in population away from the hurricane-affected areas, especially in Louisiana. This migration displaced many low-wage workers. Should this continue, it could affect the availability of such workers for address canvassing and other decennial census operations.
In June 2006, we recommended that the Bureau develop plans (prior to the start of the 2010 LUCA Program in August 2007) to assess whether new procedures, additional resources, or local partnerships may be required to update the MAF/TIGER database along the Gulf Coast in the areas affected by Hurricanes Katrina and Rita. The Bureau consulted with state and regional officials from the Gulf Coast on how to make LUCA as successful as possible and held additional promotional workshops for geographic areas identified by the Bureau as needing additional assistance. The Bureau has also considered changes to address canvassing and subsequent operations in the Gulf Coast region. For example, Bureau officials stated that they recognize issues with identifying uninhabitable structures in hurricane-affected zones and, as a result, that they may need to change procedures for address canvassing. The Bureau is still brainstorming ideas, including the possibility of using its "Update/Enumerate" operation in areas along the Gulf Coast. Bureau officials also said that they may adjust training for field staff conducting address canvassing in hurricane-affected areas to help them distinguish between abandoned, vacant, and occupied housing units. Without proper training, field staff can make errors and will not operate as efficiently. The Bureau's plans for how it may adjust address canvassing operations in the Gulf Coast region can also have implications for subsequent operations. For example, instructing its field staff to be as inclusive as possible in completing address canvassing could cause increased efforts to contact nonrespondents because the Bureau could send questionnaires to housing units that could be vacant on Census Day.
In terms of the Bureau's workforce in the Gulf Coast region, Bureau officials also recognize the potential difficulty of attracting field staff and have recommended that the Bureau be prepared to pay hourly wage rates for future decennial field staff that are considerably higher than usual. However, Bureau officials stated that there are "no concrete plans" to implement changes to address canvassing or subsequent decennial operations in the Gulf Coast region. Mr. Chairman, the Bureau faces formidable challenges in successfully implementing a redesigned decennial census. It must also overcome significant challenges of a demographic and socioeconomic nature due to the nation's increasing diversity in language, ethnicity, households, and housing type, as well as a reluctance of the population to participate in the census. The need to enumerate in the areas devastated by Hurricanes Katrina and Rita is one more significant difficulty the Bureau faces. We have stated in the past, and still believe, that the Bureau's reengineering effort, if effectively implemented, can help control costs and improve cost effectiveness and efficiency. Yet there is more that the Bureau can do in managing risks for the 2010 Census. The Dress Rehearsal represents a critical stage in preparing for the 2010 Census--a time when the Bureau's plans will be tested under conditions as close to census-like as possible. This is a time when the Congress, the Department of Commerce, and others should have the information needed to know how well the design is working. This is a time for making transparent the risks that the Bureau must manage to ensure a successful census. We have highlighted some of these risks today. First, the Bureau's planning and reporting of milestones and estimated costs could be made more useful. Second, the performance of key contractors needs more oversight.
Third, the Bureau can build on lessons learned early in the Dress Rehearsal by further testing new software that will help localities participating in the LUCA program. The functionality and usability of the handheld computing device--a key piece of hardware in the reengineered census--also bears watching. If, after the 2008 Dress Rehearsal, the handheld computers are found not to be reliable, the Bureau could be faced with the remote but daunting possibility of having to revert, in whole or in part, to the costly, paper-based census used in 2000. Finally, the Bureau must complete plans for ensuring an accurate population count in areas affected by Hurricanes Katrina and Rita. All told, these areas continue to call for risk mitigation plans by the Bureau and careful monitoring and oversight by the Commerce Department, the Office of Management and Budget, the Congress, GAO, and other key stakeholders. As in the past, we look forward to supporting this subcommittee's oversight efforts to promote a timely, complete, accurate, and cost-effective census. Mr. Chairman, that concludes our statement. We would be glad to answer any questions you and the committee members may have.

2010 Census: Census Bureau Is Making Progress on the Local Update of Census Addresses Program, but Improvements Are Needed. GAO-07-1063T. Washington, D.C.: June 26, 2007.
2010 Census: Census Bureau Has Improved the Local Update of Census Addresses Program, but Challenges Remain. GAO-07-736. Washington, D.C.: June 14, 2007.
2010 Census: Census Bureau Should Refine Recruiting and Hiring Efforts and Enhance Training of Temporary Field Staff. GAO-07-361. Washington, D.C.: April 27, 2007.
2010 Census: Design Shows Progress, but Managing Technology Acquisitions, Temporary Field Staff, and Gulf Region Enumeration Require Attention. GAO-07-779T. Washington, D.C.: April 24, 2007.
2010 Census: Redesigned Approach Holds Promise, but Census Bureau Needs to Annually Develop and Provide a Comprehensive Project Plan to Monitor Costs. GAO-06-1009T. Washington, D.C.: July 27, 2006.
2010 Census: Census Bureau Needs to Take Prompt Actions to Resolve Long-standing and Emerging Address and Mapping Challenges. GAO-06-272. Washington, D.C.: June 15, 2006.
2010 Census: Costs and Risks Must Be Closely Monitored and Evaluated with Mitigation Plans in Place. GAO-06-822T. Washington, D.C.: June 6, 2006.
2010 Census: Census Bureau Generally Follows Selected Leading Acquisition Planning Practices, but Continued Management Attention Is Needed to Help Ensure Success. GAO-06-277. Washington, D.C.: May 18, 2006.
Census Bureau: Important Activities for Improving Management of Key 2010 Decennial Acquisitions Remain to Be Done. GAO-06-444T. Washington, D.C.: March 1, 2006.
2010 Census: Planning and Testing Activities Are Making Progress. GAO-06-465T. Washington, D.C.: March 1, 2006.
Information Technology Management: Census Bureau Has Implemented Many Key Practices, but Additional Actions Are Needed. GAO-05-661. Washington, D.C.: June 16, 2005.
2010 Census: Basic Design Has Potential, but Remaining Challenges Need Prompt Resolution. GAO-05-09. Washington, D.C.: January 12, 2005.
Data Quality: Census Bureau Needs to Accelerate Efforts to Develop and Implement Data Quality Review Standards. GAO-05-86. Washington, D.C.: November 17, 2004.
Census 2000: Design Choices Contributed to Inaccuracies in Coverage Evaluation Estimates. GAO-05-71. Washington, D.C.: November 12, 2004.
American Community Survey: Key Unresolved Issues. GAO-05-82. Washington, D.C.: October 8, 2004.
2010 Census: Counting Americans Overseas as Part of the Decennial Census Would Not Be Cost-Effective. GAO-04-898. Washington, D.C.: August 19, 2004.
2010 Census: Overseas Enumeration Test Raises Need for Clear Policy Direction. GAO-04-470. Washington, D.C.: May 21, 2004.
2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. Washington, D.C.: January 15, 2004.
Decennial Census: Lessons Learned for Locating and Counting Migrant and Seasonal Farm Workers. GAO-03-605. Washington, D.C.: July 3, 2003.
Decennial Census: Methods for Collecting and Reporting Hispanic Subgroup Data Need Refinement. GAO-03-228. Washington, D.C.: January 17, 2003.
Decennial Census: Methods for Collecting and Reporting Data on the Homeless and Others Without Conventional Housing Need Refinement. GAO-03-227. Washington, D.C.: January 17, 2003.
2000 Census: Lessons Learned for Planning a More Cost-Effective 2010 Census. GAO-03-40. Washington, D.C.: October 31, 2002.
The American Community Survey: Accuracy and Timeliness Issues. GAO-02-956R. Washington, D.C.: September 30, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The decennial census is a constitutionally mandated activity that produces critical data used to apportion congressional seats, redraw congressional districts, and allocate billions of dollars in federal assistance. The Census Bureau (Bureau) estimates the 2010 Census will cost $11.3 billion, making it the most expensive in the nation's history after adjusting for inflation. This testimony, based primarily on GAO's issued reports and preliminary observations from our ongoing work, discusses the extent to which the Bureau has (1) developed a comprehensive project plan with the most current cost data; (2) incorporated lessons learned from Dress Rehearsal activities; (3) managed automation and technology for the reengineered census; and (4) planned for an accurate census in areas affected by Hurricanes Katrina and Rita. The Bureau is conducting its Dress Rehearsal of the 2010 Census, the last opportunity it will have to test its design under census-like conditions. Given the importance of a successful enumeration and the complexities of enumerating a hard-to-count population in a more technology-dependent census, our message remains that the risks associated with the decennial must be closely monitored, evaluated, and managed. GAO found that the Bureau is developing but has not yet completed a comprehensive project plan that includes milestones, itemized costs, and measurable goals, nor has it updated the 2010 life-cycle cost estimate to reflect current information from testing. Having a comprehensive project plan and updated cost information will allow the Bureau to manage the operations and cost of the decennial census. Moreover, GAO observed technical problems with the handheld computing devices used in the Dress Rehearsal by field staff for address canvassing (in which the Bureau verifies addresses). If the device does not function as expected or needed, little time will be left for the Bureau to take corrective action.
In addition, during the LUCA Dress Rehearsal, the Bureau did not fully test software tools intended to reduce burden on participants. Also, the Bureau's level of reliance on automation and technology for the 2010 Census, at an estimated cost of $3 billion, makes effective contractor oversight (of cost, schedule, and technical performance) and risk management activities imperative. Finally, in the Gulf Coast Region, the condition of the changing housing stock is likely to present additional challenges for the address canvassing operation and subsequent operations. However, the Bureau has not finalized plans for modifying the address canvassing operation or subsequent operations in the Gulf Coast region.
In 1993, we reported that USAID had not adequately managed changes in its overseas workforce and recommended that USAID develop a comprehensive workforce planning system to better identify staffing needs and requirements. In the mid-1990s, USAID reorganized its activities around strategic objectives and began reporting in a results-oriented format but had made little progress in personnel reforms. In July 2002, we reported that USAID could not quickly relocate or hire the staff needed to implement a large-scale reconstruction and recovery program in Latin America, and we recommended actions to help improve USAID's staffing flexibility for future disaster recovery requirements. Studies by several organizations, including GAO, have shown that highly successful service organizations use strategic management approaches to prepare their workforces to meet present and future mission requirements. We define strategic workforce planning as focusing on long-term strategies for acquiring, developing, and retaining an organization's workforce and aligning human capital approaches that are clearly linked to achieving programmatic goals. Based on work with the Office of Personnel Management and other entities, we identified strategic workforce planning principles used by leading organizations. According to these principles, a strategic workforce planning and management system should (1) involve senior management, employees, and stakeholders in developing, communicating, and implementing the workforce plan; (2) determine the agency's current critical skills and competencies and those needed to achieve program results; (3) develop strategies to address gaps in critical skills and competencies; and (4) monitor and evaluate progress and the contribution of strategic workforce planning efforts in achieving program goals. 
Until the mid-1970s, about two-thirds of USAID's operating expenses were funded from appropriations to program accounts, and the rest were funded from a separate administrative expenses account. In 1976, Congress began providing a line-item appropriation for operating expenses separate from USAID's humanitarian and economic development assistance programs. The accompanying Senate report noted that USAID's "cost of doing business" would be better managed if these funds were separately appropriated. Congress authorized USAID's separate operating expense account the following year. USAID's criteria for determining the expenses to be paid from operating expense funds are based on guidance it has received from Congress as well as its assessment of who benefits from a particular activity--the agency or the intended program recipient. For example, congressional reports in the late 1970s directed USAID to fund the costs of all full-time staff in permanent positions from the operating expense account. USAID faces a number of challenges in developing and implementing a strategic workforce plan. Its overseas missions operate in a changing foreign policy environment, often under very difficult conditions. USAID's workforce, particularly its U.S. direct-hire foreign service officers, has decreased over the years, but in recent years program dollars and the number of countries with USAID activities have increased. These factors have combined to produce certain human capital vulnerabilities that have implications for the agency's ability to effectively carry out and oversee foreign assistance. A strategic approach to workforce planning and management can help USAID identify the workforce it needs and develop strategies for attaining this workforce that will last throughout successive administrations. Since 1990, USAID has continued to evolve from an agency in which U.S.
direct-hire foreign service employees directly implemented development projects to one with a declining number of direct-hire staff who oversee the contractors and grantees carrying out most of its day-to-day activities. As numbers of U.S. direct-hire staff declined, mission directors began relying on other types of employees, primarily foreign national personal services contractors, to manage mission operations and oversee development activities implemented by third parties. In December 2002, according to USAID's staffing report, the agency's workforce totaled 7,741, including 1,985 U.S. direct-hires. Personal services contractors made up more than two-thirds of USAID's total workforce, including 4,653 foreign national contractors. Of the 1,985 U.S. direct-hires, 974 were foreign service officers, about 65 percent of whom were posted overseas. Other individuals not directly employed by USAID also perform a wide range of services in support of the agency's programs. These individuals include employees of institutional or services contractors, private voluntary organizations, and grantees. In addition to having reduced the number of U.S. direct hires, USAID now manages programs in more countries with no USAID direct-hire presence, and its overseas structure has become more regional. Table 1 illustrates the changes in USAID's U.S. direct-hire overseas presence between fiscal years 1992 and 2002. In fiscal year 2002, USAID managed activities in 88 countries with no U.S. direct-hire presence. According to USAID, in some cases, activities in these countries are very small and require little management by USAID staff. However, in 45 of these countries USAID manages programs of $1 million or more, representing a more significant burden on the agency. USAID also increasingly provides administrative and program support to countries from regional service platforms, which have increased from 2 to 26 between fiscal years 1992 and 2002. 
Program funding also recently increased about 78 percent--from $7.3 billion in fiscal year 2001 to about $13 billion in fiscal year 2003. As a result of the decreases in U.S. direct-hire foreign service staff levels, increasing program demands, and a mostly ad-hoc approach to workforce planning, USAID now faces several human capital vulnerabilities. For example, the attrition of its more experienced foreign service officers, its difficulties in filling overseas positions, and limited opportunities for training and mentoring have sometimes led to the deployment of direct- hire staff who do not have essential skills and experience and the reliance on contractors to perform many functions. In addition, USAID lacks a "surge capacity" to enable it to respond quickly to emerging crises and changing strategic priorities. As a result, according to USAID officials and a recent overseas staffing assessment, the agency is finding it increasingly difficult to manage the delivery of foreign assistance. In addition, USAID works in an overseas environment that presents unique challenges to workforce planning. Mission officials noted the difficulties in adhering to a formal workforce plan linked to country strategies in an uncertain foreign policy environment. For example, following the events of September 11, 2001, the Middle East and sub-Saharan African missions we visited--Egypt, Mali, and Senegal--received additional work that was not anticipated when they developed their country development strategies and work plans. Also, the mission in Ecuador had been scheduled to close in fiscal year 2003. However, this decision was reversed due to political and economic events in Ecuador, including a coup in 2000, the collapse of the financial system, and rampant inflation. 
Program funding for Ecuador tripled from fiscal year 1999 to fiscal year 2000, while staffing was reduced from 110 to 30 personnel; and the budget for the mission's operating expenses was reduced from $2.7 million to $1.37 million. During our field work, we found that other factors unique to USAID's overseas work environment can affect its ability to conduct workforce planning and attract and retain top staff. These factors vary from country to country and among regions and include difficulties in attracting staff to hardship posts, inadequate salaries and benefits for attracting the top host country professionals, and lengthy clearance processes for locally contracted staff. USAID's workforce challenges are illustrated by its difficulties in staffing hardship posts like Afghanistan and Iraq. As of September 4, 2003, according to USAID's new personnel data system, the mission in Kabul had 42 full-time staff--7 foreign service officers and 35 personal service contractors, mostly local hires. However, the mission had 61 vacancies, including 5 vacancies for foreign service officers. In Iraq, as of September 15, 2003, the mission had 13 USAID direct-hire staff; 3 additional U.S. government employees; and about 60 personal services and institutional contractors. The mission had 13 vacancies that will most likely be filled by contract staff. USAID's human resource office is in its annual bidding process for foreign service positions. When that process is complete, the office expects to have a better picture of replacements for current staff in Afghanistan and Iraq as well as additional placements. According to USAID staff, the agency is having trouble attracting foreign service officers to these posts because in-country conditions are difficult and tours are unaccompanied. USAID's average staff age is in the late forties, and this age group is generally attracted to posts that can accommodate families. 
Both posts are responsible for huge amounts of foreign aid--in fiscal year 2003 alone, USAID's assistance for Afghanistan and Iraq is expected to total $817 million and $1.6 billion, respectively. USAID faces serious accountability and quality of life issues as it attempts to manage and oversee large-scale, expensive reconstruction programs in countries with difficult conditions and inadequate numbers of both foreign service and local hire staff.

In response to the President's Management Agenda, USAID has taken steps toward developing a comprehensive workforce planning and human capital management system that should enable the agency to meet its challenges and achieve its mission, but progress so far is limited. When evaluated against proven strategic workforce planning principles, USAID's efforts show that the agency has more to do. For example: The involvement of USAID leadership, employees, and stakeholders in developing and communicating a strategic workforce plan has been mixed. USAID's human resource office is drafting a human capital strategy, but at the time of our review it had not yet been finalized or approved by such stakeholders as OMB and the Office of Personnel Management. As a result, we cannot comment on whether USAID employees and other stakeholders will have an active role in developing and communicating the agency's workforce strategies. USAID has begun identifying the core competencies its future workforce will need, and a working group is conducting a comprehensive workforce analysis and planning pilot at three headquarters units that will include an analysis of current skills. However, it has not yet conducted a comprehensive assessment of the critical skills and competencies of its current workforce. USAID hopes to have a contractor in place by the end of September 2003 to assist the working group in identifying critical competencies and devising strategies to close skill gaps. 
USAID is also in the process of determining the appropriate information technology instrument and methodology that will permit the assessment of its current workforce skills and competencies. USAID's strategies to address critical skill gaps are not comprehensive and have not been based on a critical analysis of current capabilities matched with future requirements. USAID has begun hiring foreign service officers and Presidential Management Interns to replace staff lost through attrition. However, the agency has not completed its civil service recruitment plan and has not yet included personal services contractors--the largest segment of its workforce--in its agencywide workforce analysis and planning efforts. According to USAID human resource staff, the civil service recruitment plan will be completed after conducting the competency analysis for civil service staff. USAID has not created a system to monitor and evaluate its progress toward reaching its human capital goals and ensuring that its efforts continue under the leadership of successive administrators. Because it does not have a comprehensive workforce planning and management system, USAID cannot ensure that it has the essential skills needed to carry out its ongoing and future programs. To help USAID plan for changes in its workforce and continue operations in an uncertain environment, our report recommends that the USAID Administrator develop and institutionalize a strategic workforce planning and management system that takes advantage of strategic workforce planning principles. USAID's operating expense account does not fully reflect the agency's cost of delivering foreign assistance, primarily because the agency pays for some administrative activities done by contractors with program funds. As we noted in our recent report, USAID's overseas missions have increasingly hired personal services contractors to manage USAID's development activities due to declining numbers of U.S. direct-hire staff. 
According to USAID guidance, contractor salaries and related support can be paid from program funds when the expenses are benefiting a particular program or project. In some cases, however, the duties performed by contractors, especially personal services contractors, are indistinguishable from those done by U.S. direct-hire staff. One senior-level USAID program planning officer told us that 10 to 15 percent of program funds may be a more realistic estimate of USAID's cost of doing business, as opposed to the 8.5 percent average since fiscal year 1995 that we calculated based on our analysis of USAID-reported data. A recent USAID internal study identified about 160 personal services contractors who were performing inherently governmental duties, but these costs are not always reported as operating expenses. Recent data collection efforts by USAID indicate that the agency will likely obligate approximately $350 million in program funds for operating expenses incurred during fiscal year 2003. Because USAID's cost of doing business is not always separated from its humanitarian and development programs--the original intent behind establishing the separate operating expense account--the amount of program funds that directly benefits a foreign recipient is likely overstated.

Overall, to accomplish our objectives, we analyzed personnel data, workforce planning documents, and obligations data reported by USAID in its annual budget justification documents. We did not verify the accuracy of USAID's reported data. We also interviewed cognizant USAID officials representing the agency's regional, technical, and management bureaus in Washington, D.C., and conducted fieldwork at seven overseas missions--the Dominican Republic, Ecuador, Egypt, Mali, Peru, Senegal, and the West Africa Regional Program in Mali. 
To examine USAID's progress in developing and implementing a strategic workforce planning system, we evaluated the agency's efforts in terms of workforce planning principles used by leading organizations: ensuring the involvement of agency leadership, employees, and stakeholders; determining current skills and competencies and those needed; implementing strategies to address critical staffing needs; and evaluating progress in achieving human capital goals. To determine whether USAID's operating expenses reflect its cost of doing business, we reviewed USAID reports and obligations data and discussed the matter with cognizant officials at USAID, the Department of State, and the Office of Management and Budget. We also reviewed mission staffing reports to determine whether staff were funded from the operating expense account or program funds and discussed staff duties with cognizant mission officials. We obtained written comments on a draft of our report on USAID's workforce planning and discussed our preliminary findings from our review of USAID's operating expense account with cognizant USAID officials. Overall, USAID agreed with our findings and concurred with our recommendation to implement a strategic workforce planning system. Our review was conducted between July 2002 and September 2003 in accordance with generally accepted government auditing standards. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. For future contacts regarding this testimony, please call Jess Ford at (202) 512-4268 or Al Huntington at (202) 512-4140. Individuals making key contributions to this testimony included Kimberly Ebner, Jeanette Espinola, Emily Gupta, Rhonda Horried, and Audrey Solis. Mark Dowling, Reid Lowe, and Jose Pena provided technical assistance. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
USAID oversees humanitarian and economic assistance--an integral part of the U.S. global security strategy--to more than 160 countries. GAO recommended in 1993 that USAID develop a comprehensive workforce plan; however, human capital management continues to be a high-risk area for the agency. GAO was asked to testify on how changes in USAID's workforce over the past 10 years have affected its ability to deliver foreign aid, the agency's progress in implementing a strategic workforce planning system, and whether its reported operating expenses reflect the full costs of delivering foreign aid. USAID has evolved from an agency in which U.S. direct-hire staff directly implemented development projects to one in which U.S. direct-hire staff oversee the activities of contractors and grantees. Since 1992, the number of USAID U.S. direct-hire staff declined by 37 percent, but the number of countries with USAID programs doubled and, over the last 2 years, program funding increased more than 78 percent. As a result of these and other changes in its workforce and its mostly ad-hoc approach to workforce planning, USAID faces several human capital vulnerabilities. For example, attrition of experienced foreign service officers and inadequate training and mentoring have sometimes led to the deployment of staff who lack essential skills and experience. The agency also lacks a "surge capacity" to respond to evolving foreign policy priorities and emerging crises. With fewer and less experienced staff managing more programs in more countries, USAID's ability to oversee the delivery of foreign assistance is becoming increasingly difficult. USAID has taken steps toward developing a workforce planning and human capital management system that should enable the agency to meet its challenges and achieve its mission, but it needs to do more, such as conducting a comprehensive skills assessment and including its civil service and contracted employees in its workforce planning efforts. 
USAID's reported operating expenses do not always reflect the full costs of administering foreign assistance because the agency pays for some support and oversight activities done by contractors with program funds. As a result, the amount of program funds directly benefiting foreign recipients is likely overstated.
The nation's special operations forces provide the National Command Authorities a highly trained, rapidly deployable joint force capable of conducting special operations anywhere in the world. In November 1986, Congress enacted section 1311 of Public Law 99-661, which directed the President to establish USSOCOM, a unified combatant command to ensure that special operations forces were combat ready and prepared to conduct specified missions. USSOCOM's component commands include AFSOC, the Army Special Operations Command, the Naval Special Warfare Command, and the Joint Special Operations Command. AFSOC, located at Hurlburt Field, Florida, deploys and supports special operations forces worldwide. To ensure that special operations were adequately funded, Congress further provided in section 1311 of Public Law 99-661 that the Department of Defense create for the special operations forces a major force program (MFP) category for the Future Years Defense Plan of the Department of Defense. Known as MFP-11, this is the vehicle to request funding for the development and acquisition of special operations-peculiar equipment, materials, supplies, and services. The services remain responsible under 10 U.S.C. section 165 for providing those items that are not special operations-peculiar. Since Operation Desert Storm, AFSOC's threat environment has become more complex and potentially more lethal. More sophisticated threat systems, both naval and land-based, have been fielded, and the systems are proliferating to more and more countries. Even nations without complex integrated air defense systems have demonstrated the capability to inflict casualties on technologically superior opponents. According to threat documents, worldwide proliferation of relatively inexpensive, heat-seeking missiles is dramatically increasing the risk associated with providing airlift support in remote, poorly developed countries. 
Increased passive detection system use is also expected throughout lesser developed countries. Passive detection allows the enemy to detect incoming aircraft without alerting the crew that they have been detected, thereby jeopardizing operations. Finally, commercially available, second-generation night vision devices, when linked with man-portable air defense systems (e.g., shoulder-fired missiles), provide these countries with a night air defense capability. This night air defense capability is significant because AFSOC aircrews have historically relied on darkness to avoid detection.

AFSOC aircraft carry a wide variety of electronic warfare systems to deal with enemy threat systems. Some of AFSOC's systems are common with systems used by the regular Air Force, while others are unique to special operations. Memoranda of Agreement (MOA) between USSOCOM and the military services lay out specifically the areas of support the services agree to undertake in support of the special forces. An MOA, dated September 16, 1989, and one of its accompanying annexes, dated February 22, 1990, entered into between the Air Force and USSOCOM list those items and services the Air Force agrees to fund in support of AFSOC's special operations mission. This list includes modifications common to both AFSOC and regular Air Force aircraft, and electronics and telecommunications that are in common usage. Part of AFSOC's electronic warfare equipment for fixed-wing aircraft is acquired with USSOCOM MFP-11 funds as special operations-peculiar items because the Air Force historically has employed very little electronic warfare equipment on its C-130s. AFSOC's acquisition strategy for electronic warfare equipment is contained within AFSOC's Technology Roadmap. The Technology Roadmap identifies and ranks operational deficiencies and links the deficiencies to material solutions. 
The Roadmap flows out of AFSOC's mission area plans for mobility, precision engagement/strike, forward presence and engagement, and information operations. For C-130s, the Roadmap indicates that AFSOC has serious electronic warfare operational deficiencies in several areas and identifies solutions for each of these operational deficiencies. These solutions include introducing a mix of new systems and making upgrades to older systems. (See app. II for descriptions of AFSOC's C-130 aircraft.) AFSOC's acquisition strategy is sound because it is based on eliminating operational and supportability deficiencies confirmed by an Air Force study, test reports, and maintenance records. According to AFSOC officials responsible for electronic warfare acquisition, AFSOC's C-130s are most vulnerable to three types of threat systems: (1) infrared missiles, (2) passive detectors, and (3) radar-guided missiles. These deficiencies have become more critical since Operation Desert Storm in 1991 as more sophisticated threats have been developed and spread to more areas of the world. An ongoing study directed by the Air Force Chief of Staff, the Electronic Warfare Operational Shortfalls Study, confirms what AFSOC officials maintain. This study found that there are many electronic warfare-related operational deficiencies within the overall Air Force, including the C-130 community. The study identified deficiencies with missile warning system missile launch indications and warning times, infrared expendables and jamming effectiveness, signature reduction, passive detection, situational awareness, and electronic warfare support equipment. Classified test reports and threat documentation corroborate the study's findings. According to Air Force officials, electronic warfare deficiencies within Air Force components, including AFSOC, are so extensive that the solutions necessary to correct all of them are not affordable within the framework of Air Force fiscal year 2000-2005 projected budgets. 
AFSOC's aging electronic warfare systems are also failing more often and requiring more staff hours to support. According to AFSOC's Technology Roadmap and maintenance records, all AFSOC electronic warfare systems have some supportability problems. AFSOC maintenance personnel told us that they are working more hours to repair the systems, and maintenance records show that system failures are becoming more frequent. The ALQ-172(v)1 high band radar jammer in particular is problematic, requiring more staff hours for maintenance than any other AFSOC electronic warfare system. The staff hours charged for maintaining the ALQ-172(v)1 represent 34 percent of the total time charged to maintaining all electronic warfare systems from 1995 through 1997. AFSOC has made several efforts to correct deficiencies and maximize commonality in electronic warfare systems. USSOCOM is funding the Common Avionics Architecture for Penetration (CAAP) program, which is designed to make AFSOC's C-130 aircraft less susceptible to passive detection, enhance the aircrews' situational awareness, lower support costs, and improve commonality. AFSOC has sought to begin several other efforts in the past several years, as well, but USSOCOM has rejected these requests. In addition to addressing deficiencies identified in the Technology Roadmap, AFSOC is trying to improve commonality among its electronic warfare systems by eliminating some of those systems from its inventory. For example, it is replacing the ALR-56M radar warning receiver on its AC-130U Gunships with the ALR-69 radar warning receiver already on the rest of its C-130s. AFSOC also planned to replace ALQ-131 radar jamming pods on its AC-130H Gunships with a future upgraded ALQ-172(v)3 radar jammer for its AC-130s and MC-130Hs. Achieving commonality avoids duplicating costs for system development, lowers unit production costs through larger quantity buys, and simplifies logistical support. 
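The 34 percent figure above is a share-of-total calculation over the 1995-1997 maintenance records. A minimal sketch of that calculation follows; the absolute hour totals below are hypothetical, invented only to illustrate the arithmetic (only the resulting 34 percent share comes from the records cited above):

```python
# Hypothetical maintenance staff-hour totals, 1995-1997; only the resulting
# 34 percent share for the ALQ-172(v)1 reflects the report's finding.
hours_by_system = {
    "ALQ-172(v)1 high band radar jammer": 3_400,   # hypothetical
    "all other electronic warfare systems": 6_600,  # hypothetical
}
total_hours = sum(hours_by_system.values())
share = hours_by_system["ALQ-172(v)1 high band radar jammer"] / total_hours * 100
print(f"ALQ-172(v)1 share of EW maintenance hours: {share:.0f} percent")
```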
According to USSOCOM officials, in selecting what to fund they had to determine which programs would maximize capability, including sustainability, while conserving resources. The USSOCOM officials said that these decisions were difficult because although some systems offer tremendous improvements in capabilities, they require significant commitment of resources. For instance, USSOCOM did not have sufficient resources to fund both the CAAP program and the ALQ-172(v)3 upgrade program to improve commonality and capability against radar-guided missiles. Additionally, AFSOC had planned to replace its ALE-40 flare and chaff dispensers with the newer programmable ALE-47 to improve protection against infrared-guided missiles. But because of budget constraints, AFSOC will have to keep the ALE-40 on two of its C-130 model aircraft while the other models are upgraded to the ALE-47 configuration. Furthermore, in prioritizing resources for fiscal year 2000-2005, USSOCOM is accepting increased operational and sustainment risks for systems it does not anticipate being key in 2010 or beyond. Under this approach, USSOCOM is dividing AFSOC's C-130s into so-called legacy and bridge aircraft. The older legacy aircraft will receive flight safety modifications but not all electronic warfare upgrades; newer bridge aircraft will receive both. As a result, the legacy aircraft will become less common over time with the newer bridge aircraft, even as they become more vulnerable to threats and more difficult to maintain. Because the legacy aircraft are planned to remain in service for 12 more years, according to AFSOC officials, AFSOC will have to operate and maintain more types of electronic warfare systems for the foreseeable future. Since AFSOC's electronic warfare acquisition strategy was adopted, the Air Force has decided to fund a $4.3-billion Air Force-wide modernization program covering all C-130s, including the special operations fleet. 
This avionics modernization program shares many common elements with the USSOCOM CAAP program. CAAP includes $247 million of MFP-11 funds for upgrades/systems to address AFSOC's C-130 aircraft situational awareness and passive detection problems. Consistent with the provisions of title 10, the MOA requires that the Air Force, rather than USSOCOM, fund common items. Therefore, the overlap between the two programs creates an opportunity for USSOCOM to direct its MFP-11 funding from CAAP to other solutions identified in AFSOC's Technology Roadmap instead of paying for items that will be common to all Air Force C-130s. The Air Force is funding its avionics modernization program to lower C-130 ownership costs by increasing the commonality and survivability of the C-130 fleet. Because USSOCOM designed CAAP independently of and earlier than the Air Force modernization program, CAAP provides funding for a number of items that are now planned to be included in the Air Force program. These include (1) an open systems architecture, (2) upgraded displays and display generators, (3) a computer processor to integrate electronic warfare systems, (4) a digital map system, and (5) a replacement radar. USSOCOM and AFSOC officials note that these C-130 modernization program items have the potential to satisfy CAAP requirements with only minor modifications. For example, AFSOC's estimates indicate that the cost to develop and procure a new low-power navigation radar with a terrain following/terrain avoidance feature as part of CAAP would be approximately $133 million. However, if the navigation radar selected for the avionics modernization program incorporates or has a growth path that will allow for the addition of a low-power terrain following/terrain avoidance feature to satisfy CAAP requirements, USSOCOM could avoid the significant development and procurement costs of the common items. 
According to Air Force, USSOCOM and AFSOC officials, coordinating these two programs would maximize C-130 commonality and could result in additional MFP-11 funding being available to meet other AFSOC electronic warfare deficiencies. Consistent with the provisions of title 10, and as provided for in the MOA between the Air Force and USSOCOM, the Air Force has included the AFSOC C-130 fleet in its draft planning documents to upgrade the C-130 avionics. However, while the MOA requires the Air Force to pay for common improvements incorporated into AFSOC's C-130, the Air Force may not pay for special operations-peculiar requirements as part of the common upgrade. Nevertheless, the Air Force is not otherwise precluded from selecting systems that can satisfy both the Air Force's and AFSOC's requirements or which could be easily and/or inexpensively upgraded by AFSOC to meet special operations-peculiar requirements. AFSOC has a sound electronic warfare acquisition strategy based on a need to eliminate operational and supportability deficiencies while maximizing commonality within its C-130 fleet. Because of budget constraints, however, USSOCOM funding decisions are undercutting AFSOC's efforts to implement its Technology Roadmap. An opportunity now exists, however, to help free up some MFP-11 funds to permit AFSOC to continue implementing its electronic warfare strategy as outlined in the Technology Roadmap. We recommend that the Secretary of Defense direct the Secretary of the Air Force in procuring common items for its C-130 avionics modernization, to select items that, where feasible, address USSOCOM's CAAP requirements or could be modified by USSOCOM to meet those requirements. We further recommend that the Secretary of Defense direct USSOCOM to use any resulting MFP-11 funds budgeted for but not spent on CAAP to address other electronic warfare deficiencies or to expand the CAAP program to other special operations forces aircraft. 
In comments on a draft of this report, the Department of Defense (DOD) partially concurred with both recommendations. With regard to our first recommendation, DOD stated that Air Force and USSOCOM requirements require harmonization in order to take advantage of commonality and economies of scale. DOD agreed to require the Air Force and USSOCOM to document their common requirements. While this action is a step in the right direction, Office of the Secretary of Defense-level direction may be necessary to ensure that appropriate common items for USSOCOM are procured by the Air Force. As for our second recommendation, DOD officials stated that any MFP-11 funds originally budgeted for CAAP but saved through commonality should be used to address documented electronic warfare deficiencies or to deploy CAAP on other special operations forces aircraft. We agree with DOD that savings to the CAAP program by using common items should be used to address electronic warfare deficiencies or for expansion of the CAAP program to other special operations forces aircraft. We have reworded our recommendation to reflect that agreement. DOD's comments are reprinted in appendix I. To assess the basis for AFSOC's strategy for acquiring and upgrading electronic warfare equipment and determine the extent to which it would address deficiencies and maximize commonality, we analyzed AFSOC acquisition plans and studies and reviewed classified test reports and threat documentation. We also discussed AFSOC's current electronic warfare systems and aircraft and AFSOC's planned electronic warfare upgrades and system acquisition with officials at USSOCOM, MacDill Air Force Base, Florida; AFSOC, Hurlburt Field, Florida; and Air Force Headquarters, Washington, D.C. Additionally, we discussed AFSOC electronic warfare system supportability with officials responsible for the systems at USSOCOM; AFSOC; and Warner Robins Air Logistics Center, Georgia, and reviewed logistics records for pertinent systems. 
We accepted logistics records provided by AFSOC as accurate without further validation. To identify alternative sources of funding to implement AFSOC's strategy, we examined legislation establishing and affecting USSOCOM and memoranda of agreement between USSOCOM and the Air Force regarding research, development, acquisition, and sustainment programs. We discussed relevant memoranda of agreement with USSOCOM, AFSOC, and Air Force officials. Furthermore, we reviewed planning documents and discussed the planned Air Force C-130 avionics modernization program with Air Force officials at Air Force Headquarters and the Air Mobility Command, Scott Air Force Base, Illinois. We conducted our work from October 1997 through July 1998 in accordance with generally accepted government auditing standards. We will send copies of this report to interested congressional committees; the Secretaries of Defense and the Air Force; the Assistant Secretary of Defense, Office of Special Operations and Low-Intensity Conflict; the Commander, U.S. Special Operations Command; the Director, Office of Management and Budget; and other interested parties. Please contact me at (202) 512-4841 if you or your staff have any questions. Major contributors to this assignment were Tana Davis, Charles Ward, and John Warren.

The Air Force Special Operations Command (AFSOC) uses specially modified and equipped variants of the C-130 Hercules aircraft to conduct and support special operations missions worldwide. Following are descriptions of the C-130 models.

Mission: The AC-130H is a gunship with primary missions of close-air support, air interdiction, and armed reconnaissance. Additional missions include perimeter and point defense, escort, landing, drop and extraction zone support, forward air control, limited command and control, and combat search and rescue. 
Special equipment/features: These heavily armed aircraft incorporate side-firing weapons integrated with sophisticated sensor, navigation, and fire control systems to provide precision firepower or area saturation during extended periods, at night, and in adverse weather. The sensor suite consists of a low-light level television sensor and an infrared sensor. Radar and electronic sensors also give the gunship a method of positively identifying friendly ground forces and delivering ordnance effectively during adverse weather conditions. Navigational devices include an inertial navigation system and global positioning system.

Mission: The AC-130U's primary missions are nighttime, close-air support for special operations and conventional ground forces; air interdiction; armed reconnaissance; air base, perimeter, and point defense; land, water, and heliborne troop escort; drop, landing, and extraction zone support; forward air control; limited airborne command and control; and combat search and rescue support.

Special equipment/features: The AC-130U has one 25-millimeter Gatling gun, one 40-millimeter cannon, and one 105-millimeter cannon for armament and is the newest addition to AFSOC's fleet. This heavily armed aircraft incorporates side-firing weapons integrated with sophisticated sensor, navigation, and fire control systems to provide firepower or area saturation at night and in adverse weather. The sensor suite consists of an all light level television system and an infrared detection set. A multi-mode strike radar provides extreme long-range target detection and identification. The fire control system offers a dual target attack capability, whereby two targets up to 1 kilometer apart can be simultaneously engaged by two different sensors, using two different guns. Navigational devices include the inertial navigation system and global positioning system. 
The aircraft is pressurized, enabling it to fly at higher altitudes and allowing for greater range than the AC-130H. The AC-130U is also refuelable. Defensive systems include a countermeasures dispensing system that releases chaff and flares to counter radar-guided and infrared-guided anti-aircraft missiles. Also, infrared heat shields mounted underneath the engines disperse and hide engine heat sources from infrared-guided anti-aircraft missiles.

Command: Air National Guard

Mission: EC-130E Commando Solo, the Air Force's only airborne radio and television broadcast mission, is assigned to the 193rd Special Operations Wing, the only Air National Guard unit assigned to AFSOC. Commando Solo conducts psychological operations and civil affairs broadcasts. The EC-130E flies during either day or night scenarios and is air refuelable. Commando Solo provides an airborne broadcast platform for virtually any contingency, including state or national disasters or other emergencies. Secondary missions include command and control communications countermeasures and limited intelligence gathering.

Special equipment/features: Highly specialized modifications include enhanced navigation systems, self-protection equipment, and the capability to broadcast color television on a multitude of worldwide standards.

Commands: AFSOC, Air Force Reserve, and Air Education and Training Command

Quantity: 14 Combat Talon Is, 24 Combat Talon IIs

Mission: The mission of the Combat Talon I/II is to provide global, day, night, and adverse weather capability to airdrop and airland personnel and equipment in support of U.S. and allied special operations forces. The MC-130E also has a deep penetrating helicopter refueling role during special operations missions.

Special equipment/features: These aircraft are equipped with in-flight refueling equipment, terrain-following/terrain-avoidance radar, an inertial and global positioning satellite navigation system, and a high-speed aerial delivery system. 
The special navigation and aerial delivery systems are used to locate small drop zones and deliver people or equipment with greater accuracy and at higher speeds than possible with a standard C-130. The aircraft is able to penetrate hostile airspace at low altitudes and crews are specially trained in night and adverse weather operations. Commands: Air Force Special Operations Command, Air Education and Training Command, and Air Force Reserve Mission: The MC-130P Combat Shadow flies clandestine or low visibility, low-level missions into politically sensitive or hostile territory to provide air refueling for special operations helicopters. The MC-130P primarily flies its single- or multi-ship missions at night to reduce detection and intercept by airborne threats. Secondary mission capabilities include airdrop of small special operations teams, small bundles, and rubber raiding craft; night-vision goggle takeoffs and landings; and tactical airborne radar approaches. Special equipment/features: When modifications are complete in fiscal year 1999, all MC-130P aircraft will feature improved navigation, communications, threat detection, and countermeasures systems. When fully modified, the Combat Shadow will have a fully integrated inertial navigation and global positioning system, and night-vision goggle-compatible interior and exterior lighting. It will also have a forward-looking infrared radar, missile and radar warning receivers, chaff and flare dispensers, and night-vision goggle-compatible heads-up display. In addition, it will have satellite and data burst communications, as well as in-flight refueling capability. The Combat Shadow can fly in the day against a reduced threat; however, crews normally fly night, low-level, air refueling and formation operations using night-vision goggles.
Pursuant to a congressional request, GAO reviewed the U.S. Special Operations Command's (USSOCOM) acquisition strategy for aircraft electronic warfare systems, focusing on the: (1) fixed-wing C-130 aircraft operated by USSOCOM's Air Force Special Operations Command (AFSOC); (2) soundness of AFSOC's electronic warfare acquisition strategy; and (3) extent to which AFSOC is correcting deficiencies and maximizing commonality in its electronic warfare systems. GAO noted that: (1) AFSOC's electronic warfare acquisition strategy is sound because it is based on eliminating operational and supportability deficiencies confirmed by an Air Force study, test reports, and maintenance records; (2) this evidence indicates that AFSOC's current electronic warfare systems are unable to defeat many current threat systems and have supportability problems; (3) AFSOC's acquisition strategy is to procure a mix of new systems and upgrades for older ones while maximizing commonality within its fleet of C-130s; (4) amidst budget constraints, USSOCOM is funding only portions of AFSOC's acquisition strategy due to other higher budget priorities, thereby hampering AFSOC's efforts to correct deficiencies and maximize commonality in electronic warfare systems; (5) for example, although USSOCOM is funding an AFSOC effort to make C-130 aircraft less susceptible to passive detection, enhance aircrews' situational awareness, and increase commonality, it has rejected other requests to fund effectiveness and commonality improvements to systems dealing with radar- and infrared-guided missiles; (6) as a result, in the foreseeable future, deficiencies will continue, and AFSOC will have to operate and maintain older and upgraded electronic warfare systems concurrently; (7) an opportunity exists, however, to help AFSOC implement its electronic warfare acquisition strategy; (8) since AFSOC's acquisition strategy was adopted, the Air Force has decided to begin a $4.3 billion C-130 modernization program (C-130X program) for all C-130s; (9) some of the planned elements of this modernization are common with some of the elements of AFSOC's acquisition strategy that was to be funded by USSOCOM's Major Force Program-11 (MFP-11) funds; and (10) if, as required by the memoranda of agreement, the Air Force C-130 avionics modernization program funds these common elements, USSOCOM could redirect significant portions of its MFP-11 funding currently budgeted for AFSOC C-130 passive detection and situational awareness deficiencies to other unfunded portions of AFSOC's electronic warfare acquisition strategy.
VBA's disability compensation and pension claims processing is done in its 57 regional offices. Each state, except Wyoming, has at least 1 regional office; California has 3, and New York, Pennsylvania, and Texas have 2 each. VBA also has regional offices in Washington, D.C.; San Juan, Puerto Rico; and Manila, the Philippines. Also, VBA has 142 Benefits Delivery at Discharge sites, where VBA staff process claims from newly separated service members. In fiscal year 2004, VBA spent about $926 million to administer its disability compensation and pension programs. This included support for about 9,100 full-time equivalent (FTE) employees. In fiscal year 2004, VBA received about 771,000 rating-related claims from veterans and their families for disability benefits. This included about 195,000 original claims for compensation of service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty), and about 438,000 reopened compensation claims. In addition, about 87,000 original and reopened claims were filed for pensions for wartime veterans who have low incomes and are permanently and totally disabled for reasons not service-connected, and their survivors. In addition, VBA received about 29,000 original claims for dependency and indemnity compensation from deceased veterans' spouses, children, and parents and from survivors of service members who died on active duty. When a veteran or other claimant submits a claim for disability compensation, pension, or dependency and indemnity compensation to a VBA regional office, veterans service center staff process the claim in accordance with VBA regulations, policies, procedures, and guidance. A veterans service representative (VSR) in a predetermination team develops the claim, that is, assists the claimant in obtaining sufficient evidence to decide the claim.
For rating-related claims, a decision is made in a rating team by rating veterans service representatives (also known as rating specialists). VSRs also perform a number of other duties, including establishing claims files, authorizing payments to beneficiaries and generating notification letters to claimants, conducting in-person and telephone contacts with veterans and other claimants, and assisting in the processing of appeals of claims decisions. For a number of years, VBA's regional offices have experienced problems processing veterans' disability compensation and pension claims. As we reported in May 2000, VBA's regional offices still experience problems such as large backlogs of pending claims, lengthy processing times, and questions about the consistency of its regional office decisions. VBA has acknowledged the need to improve the timeliness and accuracy of claims processing. Since 2001, VBA has made a number of changes to its field structure and staff deployment in an effort to provide veterans with faster decisions and reduce its rating-related claims inventory. In October 2001, VBA established a Tiger Team, including experienced rating specialists, to complete very old claims and claims from elderly veterans. Also, to supplement regional offices' claims processing capacity, VBA established nine resource centers, where teams of rating specialists decided claims developed at the regional offices of jurisdiction. Further, VBA has consolidated specific types of work, including pension maintenance work (such as annual means testing for VA pension beneficiaries) at three regional offices, in an effort to free up staff at other offices to concentrate on rating-related claims. 
VBA also consolidated in-service dependency and indemnity compensation claims at its Philadelphia regional office; created an Appeals Management Center in Washington, D.C., to process appeals remanded from VA's Board of Veterans Appeals; and is consolidating the rating of Benefits Delivery at Discharge claims at the Salt Lake City and Winston-Salem, North Carolina, regional offices. Further, VBA reduced the jurisdictions of two regional offices with inadequate performance--Washington, D.C., and Newark--to reduce their claims workloads. In fiscal year 2002, VBA established special units to supplement regional offices' claims processing capacity, as part of its effort to achieve rating-related decision timeliness improvement and reduce its pending claims inventory. The Tiger Team at the Cleveland, Ohio, regional office was tasked to process very old claims (pending 1 year or more), and claims by elderly veterans (aged 70 and older). The Tiger Team was staffed with experienced rating specialists and with veterans service representatives, primarily from the Cleveland office's staff, to perform whatever additional development work was needed on the claims they received and to make rating decisions on these claims. To help expedite development work, VBA obtained priority access for the Tiger Team to obtain evidence from VA and other federal agencies. For example, VA and the National Archives and Records Administration completed a memorandum of understanding in October 2001 to expedite Tiger Team requests for service records at the National Personnel Records Center (NPRC) in St. Louis, Missouri. Also, VBA established procedures and time frames for expediting Tiger Team requests for medical evidence and examinations from the Veterans Health Administration. In fiscal year 2004, the Tiger Team completed over 14,000 decisions.
Since its creation in fiscal year 2002, the average age of VBA's inventory of rating-related claims has declined from 182 days at the end of September 2001 to about 118 days at the end of September 2004. In addition, VBA supplemented regional offices' capacity to make claims decisions by establishing resource centers at nine regional offices. The resource centers, staffed with rating specialists who were less experienced than the Tiger Team's, were to decide "ready to rate" claims. These are claims where veterans service representatives at the regional offices of jurisdiction had developed the evidence needed to support decisions on the claims. In fiscal year 2004, the nine resource centers completed about 69,000 decisions. Since their creation, the inventory of rating-related claims has declined from about 421,000 to about 321,000 claims at the end of fiscal year 2004. VBA has also consolidated some specific types of compensation and pension work into specialized units. In January 2002, VBA consolidated pension maintenance work at three regional offices--St. Paul, Minnesota; Philadelphia; and Milwaukee, Wisconsin. This work involves, for VBA's means-tested pension programs, conducting periodic income and eligibility verifications for beneficiaries. In fiscal year 2004, the Pension Maintenance Centers completed over 200,000 pension maintenance actions. In addition to consolidating pension maintenance, VBA plans to consolidate all pension claims processing at the three Pension Maintenance Centers. VBA also consolidated in-service dependency and indemnity compensation claims at the Philadelphia regional office. These claims are filed by survivors of service members who die while in military service. VBA consolidated these claims as part of its efforts to provide expedited service to these survivors, including service members who died in Operations Enduring Freedom and Iraqi Freedom. 
VBA has also consolidated the processing of decisions remanded on appeal by VA's Board of Veterans Appeals. Effective February 2002, VA issued a new regulation to streamline and expedite the appeals process. The new regulation allowed the board to process remanded decisions without having to send them back to VBA regional offices. To implement this regulation, the board established a unit to process remanded appeals. However, in May 2003, the U.S. Court of Appeals for the Federal Circuit held that the board could not, except in certain statutorily authorized circumstances, decide appeals in cases in which the board had developed evidence. As a result, VBA regained responsibility for evidence development and adjudication work on remands, and chose to establish a centralized Appeals Management Center at its Washington regional office. According to VBA officials, remand processing was consolidated because a consolidated unit, focusing only on remands, could process them faster and more consistently, and with better accountability, than the individual regional offices. VBA's Washington regional office was chosen because of its proximity to the board's headquarters. The Appeals Management Center was established in July 2003, and was, according to VBA officials, fully operational by February 2004. According to a VBA official, it was staffed largely through transfers from regional offices and with staff from the board's former remand processing unit. VBA continues to consolidate specific types of claims processing work. VBA is in the process of consolidating decision making on Benefits Delivery at Discharge claims, which are generally original claims for disability compensation, at the Salt Lake City and Winston-Salem regional offices. VBA established this program to expedite decisions on disability compensation claims from newly separated service members.
A service member can file a BDD claim up to 180 days before separation; VBA staff performs some development work on the claim before separation. VBA actually decides the claim after the service member is separated, and the official discharge form (DD Form 214) is received. Under the consolidation, regional offices and BDD sites will accept and develop claims, but will send the developed claims to Salt Lake City or Winston-Salem for decision. VBA expects this consolidation to help improve decision efficiency and consistency. Consolidation began in December 2004 and is expected to be completed by March 2006. According to VBA officials, claims processing performance was one reason for selecting these two regional offices. In the case of Salt Lake City, the availability of space and the ability to recruit new claims processing staff were also factors. The Salt Lake City office is in a relatively new building on the campus of the Salt Lake City VA Medical Center. VBA has also made changes in the jurisdictions of some regional offices. The Washington regional office has lost most of its jurisdiction. Claims from veterans residing in Washington's Maryland and Virginia suburbs were transferred to the Baltimore, Maryland, and Roanoke, Virginia, regional offices, respectively. The Washington regional office's staff declined by about 37 percent between fiscal year 2001 and 2004. Also, jurisdiction over claims from veterans residing outside the United States was transferred from Washington to the Pittsburgh, Pennsylvania, regional office. Meanwhile, the Newark regional office lost jurisdiction over claims from veterans in seven southern New Jersey counties to the Philadelphia regional office. The Newark regional office lost about 16 percent of its staff between fiscal year 2001 and 2004. These shifts in jurisdiction were, according to VBA officials, in response to poor performance by the Washington and Newark regional offices, such as inadequate timeliness and accuracy.
While VBA has done limited field restructuring and claims processing staff reallocation, it has not changed the basic field structure for processing claims for disability compensation and pension benefits and still faces challenges in improving performance. VBA continues to process claims at 57 regional offices, which experience large performance variations and questions about the consistency of their decisions. In addition, we have reported that in order to improve long-term performance in the face of increased workloads and without significant staffing increases, VBA needs to improve its productivity. Several studies by VA and outside groups have suggested that VBA could improve claims processing efficiency and consistency by consolidating claims processing into fewer offices as well as other strategic changes. In taking on these broader changes, however, VBA would need to consider an array of human capital and real property challenges, such as optimizing its ability to recruit and retain staff and minimizing the cost of office space. VBA continues to struggle to improve nationwide performance, and significant performance differences exist among its regional offices. For example, in fiscal year 2004 the average time to complete rating-related claims VBA-wide was 166 days, far from VBA's strategic goal of 125 days. Average completion times ranged from 99 days at the Salt Lake City regional office to 237 days at the Honolulu, Hawaii, regional office. To help struggling offices reduce their inventories of pending claims, VBA has been brokering (that is, having a regional office send a claim to another office to be decided) tens of thousands of rating-related claims. In fiscal year 2004, regional offices brokered out about 92,000 claims--about 90 percent to the Tiger Team and resource centers. This action enabled some individual offices to reduce the size and age of their pending inventories.
For example, the Providence, Rhode Island, regional office brokered out about two-thirds of its rating-related decisions in fiscal year 2004. This helped Providence to reduce its rating-related inventory by almost 30 percent, while the nationwide inventory of pending claims grew by more than 25 percent. Also, Providence was able to reduce its inventory's average age by about 7 weeks, while the nationwide inventory's average age increased by about 1 week. VBA also experiences problems ensuring the accuracy and consistency of its rating decisions. As measured by VBA's Systematic Technical Accuracy Review (STAR) data for fiscal year 2004, the accuracy of regional office decisions varied from a low of 76 percent at its Boston regional office to a high of 96 percent at its Fort Harrison regional office. Moreover, as we recently testified and reported, VA still needs to develop a plan for assessing variations in disability claims decisions and whether they are within the bounds of reasonableness. While some variation is inherent in the claims decision-making process, we have reported in the past on wide variations in the state-to-state average compensation payments per disabled veteran, and more recently, VA's inspector general has found that inconsistency remains a problem. In addition to the challenges VBA faces in improving claims processing timeliness and consistency, VBA also faces productivity challenges. In November 2004, we reported that to achieve its claims processing performance goals in the face of increasing workloads and decreased staffing levels, VBA would have to rely on productivity improvements. VBA's fiscal year 2006 budget justification provided information on actual and planned productivity, in terms of rating-related claims decided per direct full-time equivalent employee, and identified a number of initiatives that could improve claims processing performance.
These initiatives included technology initiatives such as Virtual VA, involving the creation of electronic claims folders; consolidation of the processing of Benefits Delivery at Discharge claims at 2 regional offices; and collaboration with the Department of Defense to improve VBA's ability to obtain evidence, such as evidence of in-service stressors for veterans claiming service-connected post-traumatic stress disorder. VBA's fiscal year 2006 budget justification assumed that it would increase the number of rating-related claims completed per FTE from 94 in fiscal year 2004 to 109 in fiscal years 2005 and 2006, a 16 percent increase. For fiscal year 2005, this level of productivity translates into VBA completing almost 826,000 rating-related decisions. VBA completed about 763,000 decisions in fiscal year 2005. It is not clear whether these measures will enable VBA to achieve its planned improvements in productivity. Organizations studying these challenges have suggested that they could be addressed by more strategic, comprehensive restructuring than has been done to date. For example, in a 1997 report, the National Academy of Public Administration found that VA could achieve significant savings in administrative overhead costs by closing a large number of regional offices. Similarly, in its January 1999 report, the Congressional Commission on Servicemembers and Veterans Transition Assistance found that some regional offices are so small that their disproportionately large supervisory overhead may unnecessarily consume personnel resources. The commission highlighted a need to consolidate disability claims processing into fewer locations. VBA has consolidated its education assistance and housing loan guaranty programs into fewer than 10 locations, and the commission encouraged VBA to take similar action in the disability programs.
In its own 1995 study of field restructuring, VBA enumerated several potential benefits of consolidating processing into fewer than 57 regional offices. These included allowing VBA to assign the most experienced and productive adjudication officers and directors to the consolidated offices; facilitating increased specialization and as-needed expert consultation in deciding complex cases; improving the completeness of claims development, the accuracy and consistency of rating decisions, and the clarity of decision explanations; improving overall adjudicative quality by increasing the pool of experience and expertise in critical technical areas; and facilitating consistency in decision making through fewer consolidated claims processing centers. Consolidating compensation and pension claims processing into fewer offices would not necessarily mean that regional offices would be closed. As the VA Claims Processing Task Force suggested, regional offices that lose claims processing functions could still provide public contact and outreach services. Also, VBA officials suggested that these offices could continue to provide vocational rehabilitation and employment services. No matter which alternative VBA chooses to pursue in making further changes to its field office structure, it will need to address an array of human capital and real property issues. These include, for example, (1) assessing what mix of incentives--such as buyouts, early retirements, or retention bonuses--would be needed to accommodate downsizing at some offices and workload increases at others, (2) what additional training would be needed to ensure staff could take on new responsibilities, and (3) how office space could be disposed of or acquired as needed to accommodate workload shifts. At the same time, given potential resistance to changes in field structure, VA would need to find effective ways of communicating its plans while enhancing staff morale and productivity. 
VBA has taken limited actions to realign its field structure and redeploy staff resources as part of its effort to improve overall claims processing performance. While targeted at specific types of work and specific regional offices, these actions have not been in the context of a comprehensive restructuring strategy. Rather, VBA has made piecemeal changes, many in the context of short-term performance improvements, particularly in claims processing efficiency. Unless more comprehensive and strategic changes are made to its field structure, VBA is likely to continue to miss opportunities to substantially improve productivity, accuracy, and consistency in its disability claims processing, especially in the face of future workload increases. To help ensure more timely, accurate, and consistent decisions in a cost- effective manner, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to undertake a comprehensive review of VBA's field structure for processing disability compensation and pension claims. This review would address staff deployment, opportunities for consolidating disability compensation and pension claims processing, and human capital and real property issues. In its written comments on a draft of this report (see app. II), VA agreed with our conclusions and concurred fundamentally with our recommendation that it undertake a comprehensive review of VBA's field structure for processing disability compensation and pension claims. VA stated that it will establish a task force to thoroughly explore potential areas for further consolidation. VA also noted that field restructuring is a complex process that involves, among other things, obtaining input and support from service organizations, members of Congress, and labor partners. 
We agree that field restructuring is a complex process but urge VA to establish its task force expeditiously to ensure that VA can achieve the potential benefits of field restructuring as soon as possible. As VA noted in its comments, these could include improved proficiency, greater accuracy, and consistency in operations. We will send copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. The report will also be available at GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-7215. Carl Barden, Irene Chu, Martin Scire, Greg Whitney, Vanessa Taylor, and Walter Vance also made key contributions to this report. To develop the information for this report, we reviewed prior studies on Veterans Benefits Administration (VBA) claims processing, including the 1995 report of VBA's Field Restructuring Task Force, the National Academy of Public Administration's 1997 report on management of compensation and pension benefits claim processes for veterans, the 1999 report of the Congressional Commission on Servicemembers and Veterans Transition Assistance, and the 2001 Department of Veterans Affairs (VA) Claims Processing Task Force report. We reviewed VBA's model for allocating staff to its regional offices and discussed the allocation model with VBA officials. We also analyzed VBA staffing data from fiscal years 2001 through 2004 for VBA's regional offices. To determine the range in workload and performance of VBA's regional offices, we reviewed VBA workload, timeliness, and accuracy data. To discuss VBA initiatives and the impacts of changes in staffing levels, we visited the VBA regional offices in Washington, D.C.; Boston, Massachusetts; Newark, New Jersey; Philadelphia, Pennsylvania; and Salt Lake City, Utah. 
Also, while visiting the Salt Lake City regional office, we interviewed by videoconference officials of the Anchorage, Alaska, and Fort Harrison, Montana, regional offices--which are operated by the Salt Lake City regional office. We selected the Philadelphia and Salt Lake City offices because they have added, or are in the process of adding, workload through consolidations. The Philadelphia regional office hosts one of the three Pension Maintenance Centers; processes in-service dependency and indemnity claims; and has taken jurisdiction for southern New Jersey from the Newark regional office. The Salt Lake City regional office was in the process of expanding its staffing as part of VBA's plan to consolidate Benefits Delivery at Discharge (BDD) claims decision making, and in fiscal year 2004, it made almost 90 percent of the Anchorage regional office's rating-related decisions. The Boston, Newark, and Washington regional offices were chosen because they had lost a large percentage of their staff since fiscal year 2001. Also, the Newark and Washington offices had lost jurisdiction to other regional offices in recent years. Finally, we visited the Washington office because it is the site of VBA's Appeals Management Center. We assessed the reliability of VBA's timeliness and workload data and found that the data were sufficiently reliable for the purposes of this report. For data at the VBA-wide level we relied on the assessment we performed for our November 2004 report on VBA's fiscal year 2005 budget request. For data on workload and timeliness at the regional office level, we used data from VBA's Distribution of Operational Resources (DOOR) reports. We were unable to directly assess the reliability of the data contained in these reports because VBA officials responsible for putting together the DOOR reports do not receive claims-level data. 
For this reason, to corroborate the data in the DOOR reports, we obtained claims-level data that had been archived by VBA's Office of Performance Analysis and Integrity (PA&I). We utilized PA&I's methodology and calculated workload and timeliness numbers for September 2004 with minimal differences from those contained in the DOOR reports. This gave us reasonable assurance that the DOOR numbers accurately reflect VBA's workload and timeliness. We assessed the reliability of VBA's claims brokering data and found the data sufficiently reliable for the purposes of this report. We discussed VBA's brokering data with VBA officials and reviewed guidance on reporting brokering data. According to VBA, regional offices work with VBA's area offices to ensure that brokered cases are properly counted. The area offices, in turn, provide the data to VBA headquarters. These data are updated monthly. According to VBA, the Office of Performance Analysis and Integrity reviews and validates brokering data. We also assessed the reliability of VBA's fiscal year 2004 benefit entitlement accuracy data and found that the data were sufficiently reliable to show the range in accuracy between VBA's most and least accurate offices, but not to make further distinctions in accuracy among regional offices. We interviewed officials responsible for VBA's Systematic Technical Accuracy Review (STAR) program and discussed their procedures for requesting cases for review. We obtained data by regional office on the number of cases requested and reviewed. We found that VBA's STAR unit had requested, but never received or reviewed, hundreds of sampled cases from its regional offices. This could have affected regional office accuracy scores for fiscal year 2004. For example, the Washington regional office's score was reported as 77 percent.
However, because a large number of cases were never received by the STAR unit, Washington's accuracy score could have been as high as 87 percent or as low as 42 percent. According to VBA officials, VBA is now tracking cases that it requests as part of its STAR accuracy review sample and charges offices with errors if cases are not sent in for review.
The Chairman, former Chairman, and Ranking Minority Member, Senate Committee on Veterans' Affairs asked GAO to review the Veterans Benefits Administration's (VBA) efforts to realign its compensation and pension claims processing field structure to improve performance. This report (1) identifies the actions VBA has taken to realign its compensation and pension claims processing field structure to improve performance, and (2) examines whether further changes to its field structure could improve performance. Since 2001, VBA has made a number of changes to its field structure and staff deployment in an effort to improve compensation and pension claims processing performance, in particular, to improve the timeliness of claims decisions and reduce inventories. VBA created a Tiger Team to complete very old claims, and claims from elderly veterans; created nine resource centers to decide claims developed at the regional offices of jurisdiction; consolidated pension maintenance work at three regional offices to free up staff at other offices to concentrate on other work; consolidated in-service dependency and indemnity compensation claims at one office; consolidated processing of appeals remanded from VA's Board of Veterans Appeals at one office; and is consolidating decision making on Benefits Delivery at Discharge (BDD) claims at two regional offices. While VBA has taken these steps to improve its claims processing performance through targeted realignments of its field structure and workload, VBA has not changed the basic field structure for processing claims for disability compensation and pension benefits, and it still faces performance challenges. VBA continues to process these claims at 57 regional offices, where large performance variations and questions about decision consistency persist. 
For example, in fiscal year 2004 the average time to decide a rating-related claim ranged from 99 days at one office to 237 days at another, and accuracy varied across regional offices. Furthermore, productivity improvements are necessary to maintain performance in the face of greater workloads and relatively constant staffing resources. VBA and others who have studied claims processing have suggested that consolidating claims processing into fewer regional offices could help improve claims processing efficiency, save overhead costs, and improve decision accuracy and consistency.
DHS's federal leadership role and responsibilities for emergency preparedness as defined in law and executive order are broad and challenging. To increase homeland security following the September 11, 2001, terrorist attacks on the United States, President Bush issued the National Strategy for Homeland Security in July 2002, and signed the Homeland Security Act in November 2002 creating DHS. The act centralized the leadership of many homeland security activities under a single federal department and, accordingly, DHS has the dominant role in implementing the strategy. As we noted in our review of DHS's mission and management functions, the National Strategy for Homeland Security underscores the importance for DHS of partnering and coordination. For example, 33 of the strategy's 43 initiatives are required to be implemented by 3 or more federal agencies. If these entities do not effectively coordinate their implementation activities, they may waste resources by creating ineffective and incompatible pieces of a larger security program. In addition, more than 20 Homeland Security Presidential Directives (HSPDs) define DHS's and other federal agencies' roles in leading efforts to prepare for and respond to disasters, emergencies, and potential terrorist threats. Directives that focus on DHS's leadership role and responsibilities for homeland security include HSPD-5 and HSPD-8, which are summarized below.
Homeland Security Presidential Directive-5 (HSPD-5), issued on February 28, 2003, identifies the Secretary of Homeland Security as the principal federal official for domestic incident management and directs him to coordinate the federal government's resources utilized in response to or recovery from terrorist attacks, major disasters, or other emergencies.
The Secretary of DHS, as the principal federal official, is to provide standardized, quantitative reports to the Assistant to the President for Homeland Security on the readiness and preparedness of the nation--at all levels of government--to prevent, prepare for, respond to, and recover from domestic incidents and develop and administer a National Response Plan (NRP). To facilitate this role, HSPD-5 directs the heads of all federal departments and agencies to assist and support the Secretary in the development and maintenance of the NRP. (The plan was recently revised and is now called the National Response Framework.)
Homeland Security Presidential Directive-8 (HSPD-8), issued in December 2003, called for a new national preparedness goal and performance measures, standards for preparedness assessments and strategies, as well as a system for assessing the nation's overall preparedness. According to the HSPD, the Secretary is the principal federal official for coordinating the implementation of all-hazards preparedness in the United States. In cooperation with other federal departments and agencies, the Secretary coordinates the preparedness of federal response assets. In addition, the Secretary, in coordination with other appropriate federal civilian departments and agencies, is to develop and maintain a federal response capability inventory that includes the performance parameters of the capability, the time (days or hours) within which the capability can be brought to bear on an incident, and the readiness of such capability to respond to domestic incidents. Last year, the President issued an annex to HSPD-8 intended to establish a standard and comprehensive approach to national planning and ensure consistent planning across the federal government.
After the hurricane season of 2005, Congress passed the Post-Katrina Emergency Management Reform Act of 2006, which, among other things, made organizational changes within DHS to consolidate emergency preparedness and emergency response functions within FEMA. Most of the organizational changes, such as the transfer of various functions from DHS's Directorate of Preparedness to FEMA, became effective as of March 31, 2007. According to the act, the primary mission of FEMA is to: "reduce the loss of life and property and protect the Nation from all hazards, including natural disasters, acts of terrorism, and other man-made disasters, by leading and supporting the Nation in a risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation." The act kept FEMA within DHS and enhanced FEMA's responsibilities and its autonomy within DHS. As a result of the Post-Katrina Act, FEMA is the DHS component now charged with leading and supporting the nation in a risk-based, comprehensive emergency management system of preparedness, protection, response, recovery, and mitigation. DHS has taken action to define national roles and responsibilities and capabilities for preparedness and response which are reflected in several key policy documents: the National Response Framework (what should be done and by whom); the National Incident Management System (NIMS) (how it should be done); and the National Preparedness Guidelines (how well it should be done). To implement requirements of the Homeland Security Act of 2002 and HSPDs 5 and 8, DHS issued initial versions of these documents in 2004 (NIMS and the National Response Plan) and 2005 (National Preparedness Goal) and has developed and issued revisions intended to improve and enhance these national-level policies.
Most recently, the National Response Framework (NRF), the successor to the National Response Plan, became effective in March 2008; it describes the doctrine that guides national response actions and the roles and responsibilities of officials and entities involved in response efforts. The NRF also includes a Catastrophic Incident Annex, which describes an accelerated, proactive national response to catastrophic incidents, as well as a Supplement to the Catastrophic Incident Annex--both designed to further clarify federal roles and responsibilities and relationships among federal, state and local governments and responders. Together, these documents are intended to provide a comprehensive structure, guidance, and performance goals for developing and maintaining an effective national preparedness and response system. Because a range of federal and nonfederal stakeholders have important responsibilities for emergency preparedness and response, it is important that FEMA and DHS include these stakeholders in the development and revision of national policies and guidelines. Today we are issuing a report on the process DHS used to revise the NRF, including how DHS integrated key stakeholders. DHS included non-federal stakeholders in the revision process during the initial months when issues were identified and draft segments written, and during the final months when there was broad opportunity to comment on the draft that DHS had produced. However, DHS deviated from the work plan it established for the revision process that envisioned the incorporation of stakeholder views throughout the process and did not provide the first full revision draft to non-federal stakeholders for their comments and suggestions before conducting a closed, internal federal review of the draft.
DHS's approach was also not in accordance with the Post-Katrina Act's requirement that DHS establish a National Advisory Council (NAC) to incorporate non-federal input into the revision process. Although the NAC was to be established within 60 days of the act's enactment (i.e., by December 4, 2006), FEMA, which assumed responsibility for selecting members, did not name NAC members until June 2007 because of the additional time needed to review hundreds of applications and select a high quality body of advisors, according to the FEMA Administrator. The NAC's first meeting took place in October 2007, after DHS issued the revised plan for public comment. We are recommending that, as FEMA begins to implement and eventually review the 2008 National Response Framework, the Administrator develop and disseminate policies and procedures describing the conditions and time frames under which the next NRF revision will occur and how FEMA will conduct the next NRF revision. These policies and procedures should clearly describe how FEMA will integrate all stakeholders, including the NAC and other non-federal stakeholders, into the revision process and the methods for communicating with these stakeholders. FEMA agreed with our recommendation. The importance of involving stakeholders, both federal and non-federal, was underscored in our review of The National Strategy for Pandemic Influenza (National Pandemic Strategy) and The Implementation Plan for the National Strategy for Pandemic Influenza (National Pandemic Implementation Plan), which were issued in November 2005 and May 2006, respectively, by the President and his Homeland Security Council. Key non-federal stakeholders, such as state and local governments, were not directly involved in developing the National Pandemic Strategy and Implementation Plan, even though these stakeholders are expected to be the primary responders to an influenza pandemic.
While DHS collaborated with the Department of Health and Human Services (HHS) and other federal agencies in developing the National Pandemic Strategy and Implementation Plan, we found that there are numerous shared leadership roles and responsibilities, leaving uncertainty about how the federal government would lead preparations for and response to a pandemic. Although the DHS Secretary is to lead overall non-medical support and response actions and the HHS Secretary is to lead the public health and medical response, the plan does not clearly address these simultaneous responsibilities or how these roles are to work together, particularly over an extended period and at multiple locations across the country. In addition to the two Secretaries, we observed that the FEMA Administrator is now the principal domestic emergency management advisor to the President, the Homeland Security Council, and the DHS Secretary, pursuant to the Post-Katrina Act, adding further complexity to the leadership structure in the case of an influenza pandemic. Most of these leadership roles and responsibilities have not been tested under pandemic scenarios, leaving it unclear how they will work. We therefore recommended that DHS and HHS work together to develop and conduct rigorous testing, training, and exercises for pandemic influenza to ensure that federal leadership roles are clearly defined and understood and that leaders are able to effectively execute shared responsibilities to address emerging challenges, and ensure these roles are clearly understood by all key stakeholders. We also recommended that, in updating the National Pandemic Implementation Plan, the process should involve key non- federal stakeholders. DHS and HHS agreed with our recommendations, and said that they were taking or planned to take actions to implement our recommendations. 
As we noted in our report on the preparation for and response to Hurricane Katrina issued in September 2006, clearly defined and understood roles and responsibilities are essential for an effective, coordinated response to a catastrophic disaster. In any administration, the number of political appointees who depart rises as the President's term nears an end. Many cabinet secretaries and agency heads--in addition to the DHS Secretary and the FEMA Administrator--have response responsibilities in a major or catastrophic disaster, which could occur at any time. As political appointees depart, it is therefore essential that there be career senior executives who are clearly designated to lead their respective department and agency responsibilities for emergency response and continuity of operations. It is also important that they clearly understand their roles and responsibilities and have training to exercise them effectively. DHS has designated career executives to carry out specific responsibilities in the transition between presidential administrations and recently provided information to this Committee on its transition plans. DHS has also contracted with the Council for Excellence in Government to map key roles and responsibilities for responding to disasters during the transition between administrations. The Council is to produce a visual mapping of these roles, plus supplementary documentation to support and explain the mapping. Once those materials have been developed, the Council plans to hold a series of training sessions and workshops for career civil servants in acting leadership positions and nominated political appointees based on the roles mapped out by the Council. In addition, the project includes training and workshops for those in acting leadership positions outside DHS. DHS is responsible for leading the operational planning needed for an effective national response, but it has not yet completed this planning.
Two essential supplements to the new National Response Framework--Federal Partner Response Guides and DHS's Integrated Planning System--are still under development. The partner guides are designed to provide a ready reference of key roles and actions for federal, state, local, tribal, and private-sector response partners. According to DHS, the guides are to provide more specific "how to" handbooks tailored specifically to the federal government and the other non-federal stakeholders: state, local and tribal governments, the private sector, and nongovernmental organizations. DHS has not established a schedule for completing these guides. On December 3, 2007, President Bush issued Annex I to HSPD-8, entitled National Planning. The Annex describes the development of a national planning system in which all levels of government work together in a collaborative fashion to create plans for various scenarios and requires that DHS develop a standardized, integrated national planning process. This Integrated Planning System (IPS) is intended to be the national planning system used to develop interagency and intergovernmental plans based upon the National Planning Scenarios. The National Response Framework states that local, tribal, state, regional, and federal plans are to be mutually supportive. Although the Annex calls for the new system to be developed in coordination with relevant federal agencies and issued by February 3, 2008, DHS has not yet completed the IPS, and the White House, which issued Annex I to HSPD-8, has not laid out a time frame for its release. According to FEMA's Administrator, the agency's National Preparedness Directorate, in coordination with its Disaster Operations Directorate and the DHS's Office of Operations Coordination, has begun to develop a common federal planning process that will support a family of related planning documents.
These related planning documents will include strategic guidance statements, strategic plans, concept plans, operations plans, and tactical plans. The Annex to HSPD-8 is designed to "enhance the preparedness of the United States by formally establishing a standard and comprehensive approach to national planning" in order to "integrate and effect policy and operational objectives to prevent, protect against, respond to, and recover from all hazards." According to the Administrator, FEMA continues to be a significant contributor to the draft IPS, and will also be involved in developing the family of plans for each of the national planning scenarios as required by the Annex. In following up on the status of recommendations we made after Hurricane Katrina related to planning for the evacuation of transportation-disadvantaged populations, we found that DHS's leadership in this area had led to the implementation of some, but not all, of our recommendations. For example, we recommended that DHS clarify within the National Response Plan that FEMA is the lead and coordinating agency to provide evacuation assistance when state and local governments are overwhelmed, and clarify the supporting federal agencies' responsibilities. In April 2008, we noted that DHS's draft Mass Evacuation Incident Annex to the National Response Framework appears to clarify the role of FEMA and supporting federal agencies, although the annex is still not finalized. Similarly, we recommended that DHS improve its technical assistance by, among other things, providing more detailed guidance on how to plan, train, and conduct exercises for the evacuation of transportation-disadvantaged populations. DHS had developed basic guidance on the evacuation of transportation-disadvantaged populations and was currently working on targeted guidance for states and localities.
However, we had also recommended that DHS require, as part of its grant programs, that all state and local governments plan, train, and conduct exercises for the evacuation of transportation-disadvantaged populations, but DHS had not done so. DHS agreed to consider our recommendation. We also recommended that DHS clearly delineate how the federal government will assist state and local governments with the movement of patients and residents out of hospitals and nursing homes to a mobilization center where National Disaster Medical System (NDMS) transportation begins. DHS and HHS have collaborated with state and local health departments in hurricane-prone regions to determine gaps between needs and available resources for hospital and nursing home evacuations and to secure local, state, or federal resources to fill the gaps. Based on this analysis, HHS and DHS contracted for ground and air ambulances and para-transit services for Gulf and East Coast states. At a more tactical level of planning, FEMA uses mission assignments to coordinate the urgent, short-term emergency deployment of federal resources to address disaster needs. Mission assignments may be issued for a variety of tasks, such as search and rescue missions or debris removal, depending on the performing agencies' areas of expertise. According to DHS, the Department has agreements and pre-scripted mission assignments with 31 federal agencies for a total of 223 assignments that essentially pre-arrange for the deployment of health equipment, a national disaster medical system, military equipment, and a whole host of other services in the event that they are necessary to support a state or a locality. FEMA officials said these assignments are listed in the operational working draft of the "Pre-Scripted Mission Assignment Catalogue," which FEMA intends to publish this month.
We have previously made recommendations aimed at improving FEMA's mission assignment process, and FEMA officials concurred with our recommendations and told us that they are reviewing the management of mission assignments. In addition, reviews by the DHS OIG regarding mission assignments concluded that FEMA's management controls were generally not adequate to ensure that deliverables (missions tasked) met requirements; costs were reasonable; invoices were accurate; federal property and equipment were adequately accounted for or managed; and FEMA's interests were protected. According to the DHS OIG, mission assignment policies, procedures, training, staffing, and funding have never been fully addressed by FEMA, creating misunderstandings among federal agencies concerning operational and fiduciary responsibilities, and FEMA's guidelines regarding the mission assignment process--from issuance of an assignment through execution and close-out--are vague. Reflecting upon lessons learned from Hurricane Dean, the California wildfires, and the national-level preparedness exercise for top officials in October 2007, FEMA's Disaster Operations Directorate formed an intra/interagency Mission Assignment Working Group to review mission assignment processes and procedures and develop recommendations for the management of mission assignments, according to the OIG. Most recently, we reported on mission assignments for emergency transit assistance and recommended that DHS draft pre-scripted mission assignments for public transportation services to provide a frame of reference for FEMA, FTA, and state transportation departments in developing mission assignments after future disasters. DHS agreed to take our recommendation under consideration.
DHS issued an update to the national goal for preparedness in the National Preparedness Guidelines in September 2007 to establish both readiness metrics to measure progress and a system for assessing the nation's overall preparedness and response capabilities. However, DHS has not yet completed efforts to implement the system and has not yet developed a complete inventory of all federal response capabilities. According to the September 2007 Guidelines, DHS was still establishing a process to measure the nation's overall preparedness based on the Target Capabilities List (TCL), which accompanies the Guidelines. Our ongoing work on national preparedness and the national exercise program is reviewing DHS's plans and schedules for completing this process. In the Guidelines, the description for each capability includes a definition, outcome, preparedness and performance activities, tasks, and measures and metrics--quantitative or qualitative levels against which achievement of a task or capability outcome can be assessed. According to the Guidelines, the measures and metrics describe how much, how well, and/or how quickly an action should be performed and are typically expressed in a way that can be observed during an exercise or real event. The measures and metrics are not standards, but serve as guides for planning, training, and exercise activities. However, the Guidelines do not direct federal agencies to develop capabilities that address national priorities. For example, for the national priority to "Strengthen Interoperable and Operable Communications Capabilities," the Guidelines state that interoperable and operable communications capabilities are developed to target levels in the states, tribal areas, territories, and designated urban areas that are consistent with measures and metrics established in the TCL; federal agencies' interoperability is not addressed.
Prior disasters and emergencies, as well as State and Urban Area Homeland Security Strategies and status reports on interoperable communications, have shown persistent shortfalls in achieving communications interoperability. These shortfalls demonstrate a need for a national framework fostering the identification of communications requirements and definition of technical standards. State and local authorities, working in partnership with DHS, need to establish statewide interoperable communications plans and a national interoperability baseline to assess the current state of communications interoperability. Achieving interoperable communications and creating effective mechanisms for sharing information are long-term projects that require federal leadership and a collaborative approach to planning that involves all levels of government as well as the private sector. In April 2007, we reported that DHS's SAFECOM program, intended to strengthen interoperable public safety communications at all levels of government, had made limited progress and had not addressed interoperability with federal agencies, a critical element of interoperable communications required by the Intelligence Reform and Terrorism Prevention Act of 2004. We concluded that the SAFECOM program has had a limited impact on improving communications interoperability among federal, state, and local agencies. The program's limited effectiveness can be linked to poor program management practices, such as the lack of a plan for improving interoperability across all levels of government, and inadequate performance measures to fully gauge the effectiveness of its tools and assistance. We recommended, among other things, that DHS develop and implement a program plan for SAFECOM that includes goals focused on improving interoperability among all levels of government. DHS agreed with the intent of the recommendation and stated that the Department was working to develop a program plan.
DHS had also not yet developed a complete inventory of federal capabilities, as we reported in August 2007, in assessing the extent to which DHS has met a variety of mission and management expectations. As a result, earlier this year the Senate Homeland Security and Governmental Affairs Committee sent letters requesting information from 15 agencies with responsibilities under the National Response Framework to respond in the event of a nuclear or radiological incident. The committee asked for information on a variety of issues--for example, about evacuation, medical care, intelligence, forensics, and tracking fallout--to assess agencies' current capabilities and responsibilities in the event of a nuclear attack. Other federal agencies also need this information from DHS; in reviewing the Department of Defense's (DOD) coordination with DHS, we reported in April 2008 that DOD's Northern Command (NORTHCOM) has difficulty identifying requirements for capabilities it may need in part because NORTHCOM does not have more detailed information from DHS on the specific requirements or capabilities needed from the military in the event of a disaster. This concludes my statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. For further information about this statement, please contact William O. Jenkins Jr., Director, Homeland Security and Justice Issues, at (202) 512-8777 or [email protected]. In addition to the contact named above, the following individuals from GAO's Homeland Security and Justice Team also made major contributions to this testimony: Chris Keisling, Assistant Director; John Vocino, Analyst-in-Charge; and Adam Vogt, Communications Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Homeland Security Act was enacted in November 2002, creating the Department of Homeland Security (DHS) to improve homeland security following the September 11, 2001, terrorist attacks on the United States. The act centralized the leadership of many homeland security activities under a single federal department and, accordingly, DHS has the dominant role in implementing the National Strategy for Homeland Security. This testimony discusses the status of DHS's actions in fulfilling its responsibilities to (1) establish policies to define roles and responsibilities for national emergency preparedness efforts and prepare for the transition between presidential administrations, and (2) develop operational plans and performance metrics to implement these roles and responsibilities and coordinate federal resources for disaster planning and response. This testimony is based on prior GAO work performed from September 2006 to June 2008 focusing on DHS's efforts to address problems identified in the many post-Katrina reviews. DHS has taken several actions to define national roles and responsibilities and capabilities for emergency preparedness efforts in key policy documents and has begun preparing for the upcoming transition between presidential administrations. DHS prepared initial versions of key policy documents that describe what should be done and by whom (National Response Plan in 2004), how it should be done (the National Incident Management System in 2004), and how well it should be done (the interim National Preparedness Goal in 2005). DHS subsequently developed and issued revisions to these documents to improve and enhance its national-level policies, such as the National Preparedness Guidelines in 2007, the successor to the interim National Preparedness Goal. Most recently, DHS developed the National Response Framework (NRF), the successor to the National Response Plan, which became effective in March 2008.
This framework describes the doctrine that guides national response actions and the roles and responsibilities of officials and entities involved in response efforts. Clarifying roles and responsibilities will be especially critical as a result of the coming change in administrations and the associated transition of key federal officials with homeland security preparedness and response roles. To cope with the absence of many politically appointed executives from senior roles, DHS has designated career executives to carry out specific responsibilities in the transition between presidential administrations and recently provided information to this Committee on its transition plans. To assist in planning to execute an efficient and effective administration transition, DHS has also contracted with the Council for Excellence in Government to identify key roles and responsibilities for the Department and its homeland security partners for responding to disasters during the transition between administrations. DHS is still developing operational plans to guide other federal agencies' response efforts and metrics for assessing federal capabilities. Two essential supplements to the new National Response Framework--response guides for federal partners and an integrated planning system--are still under development. Also, DHS is still establishing a process to measure the nation's overall preparedness based on a list of targeted capabilities and has not yet completed an inventory of all federal response capabilities. The measures and metrics associated with these targeted capabilities are not standards, but serve as guides for planning, training, and exercise activities. However, DHS policy does not direct federal agencies to develop these capabilities to address national priorities.
For example, for the national priority to "Strengthen Interoperable and Operable Communications Capabilities" the National Preparedness Guidelines state that communications capabilities are developed to target levels in the states, tribal areas, territories, and designated urban areas that are consistent with measures and metrics established for targeted capabilities; federal agencies' interoperability is not addressed.
At the request of Congress, we have previously studied a number of leading public sector organizations that were successful in pursuing management reform initiatives and becoming more results-oriented. These included selected state governments as well as foreign governments, such as Australia and the United Kingdom. We found that despite obvious and important differences in histories, culture, and political systems, each of the organizations commonly took three key steps as they sought to become more results-oriented and make fundamental improvements in performance. These were to (1) define clear missions and desired outcomes, (2) measure performance to gauge progress, and (3) use performance information to manage programs and support policy decisionmaking. Figure 1 below illustrates the various planning documents that the District has for managing the city, including an annual plan and report to Congress, various scorecards on selected goals that are on the District's Internet site, and proposed neighborhood action plans.
1. The Mayor held a citywide meeting with citizens in November 1999 and a Neighborhood Action Forum in January 2000. The Mayor plans to hold additional Neighborhood Action Forums and use the results to develop Neighborhood Action Plans.
2. Agency strategic plans have been established for 15 of the 45 District agencies under the Mayor's jurisdiction. Although these agency strategic plans are presented in different formats, common elements include mission statements and key agency goals and measures.
3. The Mayor has signed performance contracts with the Directors of 21 city agencies. Under these contracts, the Directors are to be held accountable for achieving selected performance goals and are required to report their progress in meeting these goals on a monthly basis.
The first step used by leading organizations--defining clear missions and desired outcomes--corresponds to the requirement in GPRA for federal agencies to develop strategic plans containing mission statements and outcome-related strategic goals. The District has clearly made progress in this regard. The citywide strategic plan contains largely outcome-related goals and measures that relate to the District's five strategic priorities. For example, under the building and sustaining healthy neighborhoods priority, the strategic plan contains nine performance goals, including the goal to enhance the appearance and security of neighborhoods citywide. This goal contains 10 action items with intended results identified, including an initiative to abate 1,500 nuisance properties. In addition, responsibility for each goal is assigned to a lead agency or agencies. Also, the District has taken some steps to align its activities, core processes, and resources. For example, the Mayor has placed a clear emphasis on performance management in his administration. As I noted, one example is the signing of performance contracts with the Directors of 21 city agencies. The performance contracts are important for underscoring the personal accountability the District Government's top leadership has for sound management and contributing to results. The Mayor also created four Deputy Mayor positions to assign responsibility for managing four critical functional areas within the government: Government Operations; Public Safety and Justice; Children, Youth and Families; and Economic Development. Nevertheless, the District could take additional steps to ensure that the strategic plan is as useful and informative as it could be. In developing its citywide strategic plan, the District held two meetings with citizens, which gave District residents the opportunity to propose priorities and to articulate a vision for the city. 
However, it was not clear from reading the strategic plan that the District involved other key stakeholders, specifically Congress, in the development of the plan. As you know, Mr. Chairman, GPRA requires federal executive branch agencies to consult with Congress when preparing their strategic plans. Consulting with Congress on its strategic plan could also benefit the District because of the appropriations and oversight role Congress plays and would be consistent with one of the District's action items to maintain communications with Congress. In addition, the District's strategic plan contains a vision statement and five strategic priorities. However, linking the vision statement to the strategic priorities with a comprehensive mission statement could help further clarify the direction the District wants to take. In our examination of high-performing organizations here in the United States and around the world, we have found that a clearly defined mission statement is one of the key elements of an effective performance management system. A mission statement is important because it brings an organization into focus and concisely tells why it exists, what it does, and how it does it. Finally, as the District continues its efforts to establish a clearly defined strategic direction for the city, it can enhance the usefulness of the plan by more fully articulating the strategies the city plans to use to achieve results. In some cases, it was not clear what strategies the Mayor's office was going to use to achieve action items relating to the strategic plan's performance goals. For example, the goal to enhance the appearance and security of neighborhoods citywide contained an action item of ensuring that 75 percent of youth attend school on a regular basis. However, the strategic plan did not give any indication how this measure would be achieved. 
Similarly, the goal that all residents have opportunities for lifelong learning contained an action item of increasing access to the Internet, but there was no discussion of how this would be achieved. The second key step that we found leading organizations commonly took--measuring performance to gauge progress toward goals--corresponds to the GPRA requirement for federal agencies to develop annual performance plans and goals and performance measures to gauge progress. The District has made substantial progress in establishing performance measures for most of its goals. As it develops measures for the remaining goals and gains experience in using the data from the measures it has established, the experiences of high-performing organizations suggest that the District will identify ample opportunities to improve and refine its goals and measures. Specifically, we found that the fiscal year 2000 performance plan contained 447 measures, of which 36 (or 8 percent) had no indicators or performance targets that could be used to determine if the goals were achieved. When the Mayor updated this original plan several months later, there were 30 (or 7 percent) out of 417 measures without indicators to measure performance. You asked us to examine 31 goals drawn from the 417 in the Mayor's updated performance plan for fiscal year 2000. These goals were not meant to be a representative sample of all the District's goals. Of these 31, 29 were to be completed not later than September 30, 2000. As shown in the attachment to my statement, the District reported that as of August 31, 2000--1 month before scheduled completion--it had met 12 of these 29 goals, and it had not met 12 goals. An example of a goal that was met was from the Commission on the Arts and Humanities, which reported that it exceeded its goal of serving 35 percent of D.C. 
Public School students through the Arts in Education program, stating that 55 percent of students have been served by this program through August 2000. An example of a goal that was not met was from the Office of Banking and Financial Institutions (OBFI), which reported that it did not meet its goal of obtaining baseline data by June 2000 on capital and credit available by Ward. OBFI stated that it was not able to obtain this data from banks in the District due to proprietary issues these banks would face, and it was considering redefining the goal for future years. The District did not provide performance information for one goal, and for four goals it was unclear from the information provided whether the goal had been met. For example, the Department of Employment Services (DOES) had a goal of contacting 600 employers and entering them into the DOES database. However, the data provided by DOES to report progress on this goal showed information on the number of job orders and job openings in the system and the number of individuals placed. It was not clear from the information provided whether DOES accomplished its goal. GPRA also requires federal agencies to prepare annual performance reports with information on the extent to which the agency has met its annual performance goals. If policymakers in the District and in Congress are to use the information in the District's annual performance report to make decisions, then that information must be credible. Credible performance information is essential for accurately assessing agencies' progress towards the achievement of their goals and pinpointing specific solutions to performance shortfalls. Agencies also need reliable information during their planning efforts to set realistic goals. In some cases, producing credible performance data is relatively straightforward. For example, a District goal to open three new health centers would not normally need a systematic process to gather data that shows if the goal was met. 
Far more common, however, are goals and performance measures that would seem to depend upon the existence of a systematic process to efficiently and routinely gather the requisite performance data. In that regard, we found that the District has not yet implemented a system to provide assurance that the performance information it generates is sufficiently credible for decisionmaking. The District's performance report for fiscal year 1999 stated that the performance data was "unaudited." An official in the Mayor's office said that this meant the performance data had not been independently verified. He also said that the Mayor's office has asked the Inspector General to begin audits of the data. The 31 goals selected for our detailed review underscore the challenges confronting the District. In response to our request for evidence that a system existed to ensure that the performance data were sufficiently reliable for measuring progress toward goals, the District did not provide such evidence for 7 of the 12 goals that the District reported had been met and for 11 of the 14 goals that the District reported had not been met. As a result, key decisionmakers cannot be certain that the seven goals reported to have been met were in fact met. For example, the Department of Public Works (DPW) did not provide a description of any system or procedures in place for ensuring the credibility of performance data for measuring progress on its goal of permanently repairing 90 percent of utility cuts within 45 days of utility work completion. As part of becoming more results-oriented, leading organizations work to ensure that their annual performance goals and measures "link up" to the organization's mission and long-term strategic goals as well as "link down" to organizational components with specific duties and responsibilities. This "up and down" linkage reinforces the connections between the long-term strategic goals and the day-to-day activities of program managers and staff. 
These linkages are important to ensuring that the services government provides contribute to results that citizens need and care about. The linkages also are important to underscore to front-line employees the vital role they play in meeting organizational goals. However, we found that additional efforts are needed to ensure that the critical linkages are in place. Specifically, the citywide strategic plan may not yet fully serve as the single unified plan to guide the District that the Mayor intends it to be. The strategic plan contains literally hundreds of action items that serve in essence as detailed performance commitments, often with specified completion dates. However, we found that these detailed action items were not always reflected in the Mayor's scorecard or performance contracts. Likewise, the commitments in the scorecard and the performance contracts were not always captured in the strategic plan. As a result, it can be unclear to city employees and managers as well as other decisionmakers what set of initiatives represents the District's highest priorities. In addition, at the Subcommittee's request, we determined the extent to which the performance contracts that the Mayor signed with the directors of three agencies are aligned with both the Mayor's performance plan and the Mayor's scorecard. The three agencies we looked at were the Metropolitan Police Department (MPD), the Department of Parks and Recreation (DPR), and the Department of Motor Vehicles (DMV). The three directors' contracts that we examined had a common format, which included a discussion of the Mayor's rating system, the agency's mission statement, and a series of performance requirements upon which the agency director was to be assessed and rated. The performance requirements included five common requirements (e.g., alignment of agency mission with the Mayor's strategic plan) that each director is responsible for meeting, as well as additional agency-specific requirements. 
performance plan. In addition, none of the four goals in the DPR scorecard were included in the DPR contract, and three of the four goals were not in the FY 2000 performance plan. For MPD, 10 of the 23 performance goals that were attached to the contract were not included in the FY 2000 plan. Although two of the four goals in the MPD scorecard were included in the MPD contract, these two goals have different deadlines in the scorecard and contract. The scorecard has a December 2000 deadline for the two goals, but the contract has the end of fiscal year 2000 as the goals' completion date. DMV's performance contract contains nine FY 2000 goals, eight of which are in the FY 2000 plan. However, for seven of these contract goals, the targets have been revised and therefore differ from those in the FY 2000 plan. Three of DMV's four scorecard goals are in the contract and the FY 2000 plan. According to an official in the Mayor's office, the Mayor appointed new directors to DMV and DPR in the summer of 1999 and they established new goals. The challenge confronting the District is by no means unique. As I noted, the histories of high-performing organizations show that their transformations do not come quickly or easily. However, we found that high-performing organizations know how the services they produce contribute to achieving results. In fact, this explicit alignment of daily activities with broader results is one of the defining features of high-performing organizations. At the federal level, we have found that such alignment is very much a work in progress. Many agencies continue to struggle with clearly understanding how what they do on a day-to-day basis contributes to results outside their organizations. The District is beginning to make some progress in this regard. 
In a comparison of the three District agency head contracts to the FY 2001 performance plan, there is a much more direct alignment, as the performance measures from each agency's section of the FY 2001 plans have been attached to that agency head's contract. As you know, Congress passed legislation in 1994 that is similar to the performance reporting requirement in GPRA in that it requires the District to prepare an annual performance report on each goal in the City's annual performance plan. This law was intended to provide a disciplined approach to improving the District government's performance by providing for public reporting on the District's progress in meeting its goals. On April 14 of this year, we reported to Congress that the District did not comply with this law for fiscal year 1999. Among our findings were that the District did not report actual performance for 460 of the 542 goals in the plan and did not provide the titles of the managers most responsible for achieving each goal as required by law. The fiscal year 1999 report was the first the District prepared under the legislation that was based on a performance plan, so we can expect that subsequent reports will show marked improvement. Moreover, the circumstances that led to this noncompliance were unusual and are not likely to be repeated. The Mayor's performance report was required to be based on goals that the Financial Responsibility and Management Assistance Authority--not the Mayor--had established. In November 1999, Congress returned this reporting responsibility to the Mayor. In addition, the Mayor has asked Congress for legislation that will facilitate the District's ability to comply with this law in the future. 
Specifically, the Mayor has requested that the date when the performance plan is due to Congress be changed to correspond more directly with the District's budget schedule and that the requirement for reporting on two levels of performance--acceptable and superior--for each goal be eliminated. According to the District, its performance report for fiscal year 2000 will include a discussion of several of the District's management reform projects. In June of this year, we testified on these projects before the House Appropriations Subcommittee on the District. The District budgeted over $300 million to fund these projects from fiscal year 1998 through 2000. Included in the District's budgets for this 3-year period were projected savings of about $200 million. However, we found that after 2-1/2 years, the District had reported savings of only about $1.5 million. The District subsequently integrated a number of these projects into its performance plan for fiscal year 2000 and added 7 new management reform initiatives. For example, the Department of Public Works' initiative to improve its correspondence and telephone service was integrated into the Mayor's new goal of developing a Citywide Call Center. Under the federal law, the Mayor is required to report on only the goals that were in his original performance plan sent to Congress. However, the Mayor has updated his fiscal year 2000 plan with many new or modified goals after the plan was sent to Congress to address problems that were not found during the original planning process. As a result, the next performance report is not required to contain performance data on those new or updated goals. As expected, during the early years of a major performance measurement initiative, some of the changes and additions the District made to its performance goals and measures have been significant. Specifically, as of September 27, 2000, the Mayor's scorecard contained a total of 119 goals assigned to agency directors and other managers, including the Mayor. 
Of these 119 scorecard goals, 82 of them were not included as fiscal year 2000 performance measures in those agencies' corresponding sections of the FY 2000 performance plan. For example, the Department of Public Works' (DPW) scorecard goal to resurface 150 blocks of streets and alleys was not included among the DPW's performance measures in the FY 2000 plan. In addition, for the remaining 37 goals that were also present in the plan, the measures or targets for 28 of them had been revised. For the 119 goals that were in the scorecard, the District has reported, as of September 27, 2000, that 25 have been achieved thus far. Many of the remaining 94 goals have a completion date of December 2000. Many of the goals appearing only in the scorecard arose during the Mayor's meetings with District residents, which occurred after the Mayor completed his original performance plan. As a result, the District's next performance report to Congress to be issued early next year may not contain performance data on certain scorecard goals that represent important initiatives for the District. Although not required to do so, by reporting information on its significant goals--whenever they were established--the District could help Congress achieve a central aim of the 1994 legislation--having the District report on progress in meeting its goals for all significant activities. Similarly, federal agencies found that they needed to change their performance goals--in some cases substantially--as they learned and gained experience during the early years of their performance measurement efforts. As you know, Mr. Chairman, this last March executive agencies issued their fiscal year 1999 performance reports. However, much has been learned about goal-setting and performance measurement since agencies developed their fiscal year 1999 goals back in the fall of 1997. 
In reviewing those performance reports issued last March, we saw examples where agencies noted that a goal or performance measure had changed from what had been in the original plan and reported progress in meeting the new goal. The advantage of this approach is that it helped to ensure that performance reports, by reporting on the agencies' actual, as opposed to discarded, goals, provided useful and relevant information for congressional and other decisionmakers. The following table provides information on 31 FY 2000 performance goals selected from the District of Columbia's FY 2001 Proposed Budget. The first column lists the performance goals and the District agency responsible for each goal. The 2nd and 3rd columns provide information on the agencies' reported progress in meeting these goals. The 4th and 5th columns provide information on whether or not the agencies described any system or procedures they have in place for ensuring the credibility of their performance data for these goals. For the 29 selected goals that were to be completed by the end of FY 2000, the District reported that--as of August 31, 2000 for most goals--it had met 12 goals, and that it had not yet met 12 goals. The District did not provide information for one goal, and for four goals it was unclear from the information provided whether the goal had been met. The District described a system that it had in place for ensuring the credibility of its performance data for 8 of the 31 goals. For 21 of these goals the District did not describe such a system that it had in place. In addition, for one goal, it was unclear from the District's response whether it had such a system, and we received no information on the District's progress or its system for assessing data for one goal. 
This testimony focuses on the District of Columbia's progress and challenges in performance management. GAO discusses whether the District: (1) met the 29 performance goals that it scheduled for completion by the end of fiscal year 2000 that Congress chose from the more than 400 performance measures contained in the Mayor's fiscal year 2001 budget request, and (2) provided evidence that the performance data are sufficiently reliable for measuring progress toward goals. Mayor Williams' performance management system contains many--but not all--of the elements used successfully by leading organizations. The District could improve the usefulness of its mandated annual performance plans and reports by ensuring that the District government's most significant performance goals are included in both the annual performance plan and the annual performance report that federal law requires the Mayor to send to Congress every year.
Illegal drug use, particularly of cocaine and heroin, continues to be a serious health problem in the United States. Under the National Drug Control Strategy, the United States has established domestic and international efforts to reduce the supply and demand for illegal drugs. Over the past 10 years, the United States has spent over $19 billion on international drug control and interdiction efforts to reduce the supply of illegal drugs. The United States has developed a multifaceted drug control strategy intended to reduce the supply and demand for illegal drugs. The 1998 National Drug Control Strategy includes five goals: (1) educate and enable America's youth to reject illegal drugs as well as alcohol and tobacco; (2) increase the safety of U.S. citizens by substantially lowering drug-related crime and violence; (3) reduce health and social costs to the public of illegal drug use; (4) shield America's air, land, and sea frontiers from the drug threat; and (5) break foreign and domestic drug supply sources. The last two goals are the primary emphasis of U.S. international drug control and interdiction efforts. These are aimed at assisting the source and transiting nations in their efforts to reduce drug cultivation and trafficking, improve their capabilities and coordination, promote the development of policies and laws, support research and technology, and conduct other related initiatives. Section 490 of the Foreign Assistance Act of 1961 requires the President to annually certify which drug-producing and -transiting countries are cooperating fully with the United States or taking adequate steps on their own to achieve full compliance with the goals and objectives established by the 1988 United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances during the previous year. 
On February 26, 1998, the President issued his certification in which 22 countries were certified; 4 were certified with a national interest waiver (Cambodia, Colombia, Pakistan, and Paraguay); and 4 were denied certification or "decertified" (Afghanistan, Burma, Iran, and Nigeria). ONDCP is responsible for producing the National Drug Control Strategy and coordinating its implementation with other federal agencies. ONDCP has authority to review various agencies' funding levels to ensure they are sufficient to meet the goals of the national strategy, but it has no direct control over how these resources are used. The Departments of State and Defense and the Drug Enforcement Administration (DEA) are the principal agencies involved in implementing the international portion of the drug control strategy. Other U.S. agencies involved in counternarcotics activities overseas include the U.S. Agency for International Development, the U.S. Coast Guard, the U.S. Customs Service, various U.S. intelligence organizations, and other U.S. agencies. Over the past 10 years, the U.S. agencies involved in counternarcotics efforts have attempted to reduce the supply and availability of illegal drugs in the United States through the implementation of the National Drug Control Strategy. Although they have achieved some successes, the cultivation of drug crops has not been reduced significantly, and cocaine, heroin, and other illegal drugs remain readily available in the United States. According to a July 1997 report of the National Narcotics Intelligence Consumers Committee, cocaine and heroin were readily available in all major U.S. metropolitan areas during 1996. The report also states that methamphetamine trafficking and abuse in the United States have been increasing during the past few years. 
Despite long-term efforts by the United States and many drug-producing countries to reduce drug cultivation and eradicate illegal crops, the total net cultivation of coca leaf and opium poppy has actually increased. While the areas under cultivation have changed from year to year, farmers have planted new coca faster than existing crops have been eradicated. For example, while the amount of coca under cultivation in the primary growing area of Colombia was reduced by 9,600 hectares between 1996 and 1997, cultivation in two other Colombian growing areas increased by 21,900 hectares during this period. Overall, there has been very little change in the net area under coca cultivation since 1988. At the same time, the amount of opium poppy under cultivation increased by over 59,000 hectares, or by more than 30 percent between 1988 and 1997. The amount of cocaine and heroin seized between 1990 and 1996 made little impact on the availability of illegal drugs in the United States and on the amount needed to satisfy the estimated U.S. demand. The July 1997 report by the National Narcotics Intelligence Consumers Committee estimates potential cocaine production at about 760 metric tons for 1996, of which about 200 metric tons were seized worldwide. The remaining amount was more than enough to meet U.S. demand, which is estimated at about 300 metric tons per year. A primary reason that U.S. and foreign governments' counternarcotics efforts are constrained is the growing power, influence, adaptability, and capabilities of drug-trafficking organizations. Because of their enormous financial resources, power to corrupt counternarcotics personnel, and operational flexibility, drug-trafficking organizations are a formidable threat. Despite some short-term achievements by U.S. and foreign government law enforcement agencies in disrupting the flow of illegal drugs into the United States, drug-trafficking organizations have found ways to continue to meet and exceed the demand of U.S. 
drug consumers. According to U.S. agencies, drug-traffickers' organizations use their vast wealth to acquire and make use of expensive modern technology such as global positioning systems and cellular communications equipment. They use this technology to communicate and to coordinate transportation as well as to monitor and report on the activities of government organizations involved in counterdrug activities. In some countries, the complexity and sophistication of their equipment exceed the capabilities of the foreign governments trying to stop them. For example, we reported in October 1997 that many Caribbean countries continue to be hampered by inadequate counternarcotics capabilities and have insufficient resources for conducting law enforcement activities in their coastal waters. When confronted with threats to their activities, drug-trafficking organizations use a variety of techniques to quickly change their modes of operation, thus avoiding capture of their personnel and seizure of their illegal drugs. For example, when air interdiction efforts have proven successful, traffickers have increased their use of maritime and overland transportation routes. According to recent U.S. government reports, even after the capturing or killing of several drug cartel leaders in Colombia and Mexico, other leaders or organizations soon filled the void and adjusted their areas of operations. For example, we reported in February 1998 that, although the Colombian government had disrupted the activities of two major drug-trafficking organizations, the disruption had not reduced drug-trafficking activities and a new generation of relatively young traffickers was emerging. 
The United States is largely dependent on the countries that are the source of drug production and are transiting points for trafficking-related activities to reduce the amount of coca and opium poppy being cultivated and to make the drug seizures, arrests, and prosecutions necessary to stop the production and movement of illegal drugs. While the United States can provide assistance and support for drug control efforts in these countries, the success of those efforts depends on the countries' willingness and ability to combat the drug trade within their borders. Like the United States, source and transiting countries face long-standing obstacles that limit the effectiveness of their drug control efforts. These obstacles, many of which are interrelated, include competing economic, political, and cultural problems, such as terrorism and internal unrest; corruption; and inadequate law enforcement resources and institutional capabilities. The extent to which the United States can affect many of these obstacles is minimal. The governments involved in drug eradication and control have other problems that compete for limited resources. As we reported over the years, drug-producing countries' efforts to curtail drug cultivation were constrained by political, economic, and/or cultural problems that far exceeded counternarcotics program managers' abilities to resolve. For example, these countries often had ineffective central government control over drug cultivation areas, competing demands for scarce host nation resources, weak economies that enhanced financial incentives for drug cultivation, corrupt or intimidated law enforcement and judicial officials, and legal cultivation of drug crops and traditional use of drugs. Internal strife in the source countries is another problem that competes for resources. 
Two primary source countries--Peru and Colombia--have had to allocate scarce funds to support military and other internal defense operations to combat guerrilla groups, which negatively affects counternarcotics operations. We reported that in Peru, for example, terrorist activities had hampered antidrug efforts. The December 1996 hostage situation at the Japanese Ambassador's residence in Lima is an example of the Peruvian government's having to divert antidrug resources to confront a terrorist threat. Although some key guerrilla leaders in Peru and Colombia have been captured, terrorist groups will continue to hinder Peru's efforts to reduce coca cultivation and its dependence on coca as a contributor to the economy. In 1991, 1993, and 1998, we reported similar problems in Colombia, where several guerrilla groups made it difficult to conduct effective antidrug operations in many areas of the country. Colombia has also encountered resistance from farmers when it has tried to eradicate their coca crops. Narcotics-related corruption is a long-standing problem affecting U.S. and foreign governments' efforts to reduce drug-trafficking activities. Our work has identified widespread corruption in Burma, Pakistan, Thailand, Mexico, Colombia, Bolivia, Peru, and the countries of Central America and the Caribbean--among the countries most significantly involved in the cultivation, production, and transit of illicit narcotics. Corruption remains a serious, widespread problem in Colombia and Mexico, the two countries most significantly involved in producing and shipping cocaine. According to the U.S. Ambassador to Colombia, corruption in Colombia is the most significant impediment to a successful counternarcotics effort. The State Department also reported that persistent corruption within Mexico continued to undermine both police and law enforcement operations. Many law enforcement officers have been arrested and dismissed due to corruption. 
The most noteworthy was the February 1997 arrest of General Jose Gutierrez Rebollo--former head of the Mexican equivalent of DEA. He was charged with drug trafficking, organized crime and bribery, illicit enrichment, and association with one of the leading drug-trafficking organizations in Mexico. In February 1998, the U.S. embassy reported that three Mexican law enforcement officials who had successfully passed screening procedures were arrested for stealing seized cocaine--illustrating that corruption continues despite measures designed to root it out. The government of Mexico acknowledges that narcotics-related corruption is pervasive and entrenched within the criminal justice system and has placed drug-related corruption in the forefront of its national priorities. Effective law enforcement operations and adequate judicial and legislative tools are key to the success of efforts to stop the flow of drugs from the source and transiting countries. Although the United States can provide assistance, these countries must seize the illegal drugs and arrest, prosecute, and extradite the traffickers, when possible, in order to stop the production and movement of drugs internationally. However, as we have reported on several occasions, these countries lack the resources and capabilities necessary to stop drug-trafficking activities within their borders. In 1991, we reported that the lack of resources and adequately trained police personnel hindered Panama's ability to address drug-trafficking and money-laundering activities. Also, in 1994, we reported that Central American countries did not have the resources or institutional capability to combat drug trafficking and depended heavily on U.S. counternarcotics assistance. In June 1996, we reported that equipment shortcomings and inadequately trained personnel limited the government of Mexico's ability to detect and interdict drugs and drug traffickers, as well as to aerially eradicate drug crops. 
Our more recent work in Mexico indicates that these problems persist. For example, in 1997 the U.S. embassy reported that the 73 UH-1H helicopters provided by the United States to the Mexican military for eradication and reconnaissance purposes were of little utility above 5,000 feet, where most of the opium poppy is cultivated. Furthermore, the Bilateral Border Task Forces, which were established to investigate and dismantle the most significant drug-trafficking organizations along the U.S.-Mexico border, face operational and support problems, including inadequate Mexican government funding for equipment, fuel, and salary supplements for personnel assigned to the units. Our work over the past 10 years has identified other obstacles to implementing the U.S. international drug control strategy: (1) competing U.S. foreign policy objectives, (2) organizational and operational limitations among and within the U.S. agencies involved, and (3) inconsistent U.S. funding levels. In carrying out its foreign policy, the United States seeks to promote U.S. business and trade, improve human rights, and support democracy, as well as to reduce the flow of illegal drugs into the United States. These objectives compete for attention and resources, and U.S. officials must make tough choices about which to pursue more vigorously. As a result of U.S. policy decisions, counternarcotics issues have often received less attention than other objectives. According to an August 1996 Congressional Research Service report, inherent contradictions regularly appear between U.S. counternarcotics policy and other policy goals and concerns. Our work has shown the difficulties in balancing counternarcotics and other U.S. foreign policy objectives. For example, in 1990 we reported that the U.S. Department of Agriculture and the U.S. Agency for International Development disagreed over providing assistance to Bolivia for the growth of soybeans as an alternative to coca plants. 
The Agriculture Department feared that such assistance would interfere with U.S. trade objectives by developing a potential competitor for U.S. exports of soybeans. In 1995, we reported that countering the drug trade was the fourth highest priority of the U.S. embassy in Mexico. During our May 1995 visit to Mexico, the U.S. Ambassador told us that he had focused his attention during the prior 18 months on higher priority issues of trade and commerce such as the North American Free Trade Agreement and the U.S. financial support program for the Mexican peso. In 1996, the embassy elevated counternarcotics to an equal priority with the promotion of U.S. business and trade as the top priorities of the embassy, and it still remains that way. In addition, resources allocated for counternarcotics efforts are sometimes shifted to satisfy other policy objectives. For example, as we reported in 1995, $45 million originally intended for counternarcotics assistance for cocaine source countries was reprogrammed by the Department of State to assist Haiti's democratic transition. The funds were used to pay for such items as the cost of non-U.S. personnel assigned to the multinational force, training of a police force, and development of a job creation and feeding program. A similar diversion occurred in the early 1990s when U.S. Coast Guard assets in the Caribbean were reallocated from counternarcotics missions to the humanitarian mission of aiding emigrants in their mass exodus from Cuba and Haiti. The United States terminated most of its efforts to address opium cultivation in Burma, the world's largest opium producer, because of the Burmese government's human rights record and its failure to recognize the democratically elected government. The United States faces several organizational and operational challenges that limit its ability to implement effective antidrug efforts. Many of these challenges are long-standing problems.
Several of our reports have identified problems involving competing priorities, interagency rivalries, lack of operational coordination, inadequate staffing of joint interagency task forces, and lack of oversight. For example, our 1995 work in Colombia indicated that there was confusion among U.S. embassy officials about the role of the offices involved in intelligence analysis and related operational plans for interdiction. In 1996, we reported that several agencies, including the U.S. Customs Service, DEA, and the Federal Bureau of Investigation, had not provided personnel, as they had agreed, to the Joint Interagency Task Force in Key West because of budgetary constraints. With the exception of a few positions that have been filled at the Task Force since then, staffing shortfalls continued to exist when we reported in October 1997. Furthermore, we have reported that in some cases, the United States did not adequately control the use of U.S. counternarcotics assistance and was unable to ensure that it was used as intended. Despite legislative requirements mandating controls over U.S.-provided assistance, we found instances of inadequate oversight of counternarcotics funds. For example, between 1991 and 1994, we issued four reports in which we concluded that U.S. officials lacked sufficient oversight of aid to ensure that it was being used effectively and as intended in Peru and Colombia. We also reported that the government of Mexico had misused U.S.-provided counternarcotics helicopters to transport Mexican military personnel during the 1994 uprising in the Mexican state of Chiapas. Our recent work in Mexico indicates that oversight and accountability of counternarcotics assistance continues to be a problem. We found that embassy records on UH-1H helicopter usage for the civilian law enforcement agencies were incomplete. Additionally, we found that the U.S. 
military's ability to provide adequate oversight is limited by the end-use monitoring agreement signed by the governments of the United States and Mexico. We also found instances where lessons learned from past counternarcotics efforts were not known to current planners and operators, both internally in an agency and within the U.S. antidrug community. For example, the United States initiated an operation to support Colombia and Peru in their efforts to curtail the air movement of coca products between the two countries. However, U.S. Southern Command personnel stated that while they were generally aware of the previous operation, they were neither aware of the problems that had been encountered, nor of the solutions developed in the early 1990s when planning the current operation. U.S. Southern Command officials attributed this problem to the continual turnover of personnel and the requirement to destroy most classified documents and reports after 5 years. These officials stated that an after-action reporting system for counternarcotics activities is now in place at the U.S. Southern Command. From 1988 to 1997, the United States spent about $110 billion on domestic and international efforts to reduce the use and availability of illegal drugs in the United States. Of this amount, over $19 billion was expended on international counternarcotics efforts supporting (1) the eradication of drug crops, the development of alternative forms of income for drug crop farmers, and increased foreign law enforcement capabilities ($4.2 billion) and (2) interdiction activities ($15.3 billion). However, from year to year, funding for international counternarcotics efforts has fluctuated and until recently had declined. In some instances, because of budgetary constraints, Congress did not appropriate the level of funding agencies requested; in others, the agencies applied funding erratically, depending on other priorities. 
The reduction in funding has sometimes made it difficult to carry out U.S. operations and has also hampered source and transiting countries' operations. For fiscal year 1998, the funding levels for counternarcotics activities were increased. For example, the State Department's international narcotics control and law enforcement programs were fully funded for fiscal year 1998 at $210 million. However, without longer-term budget stability, it may be difficult for agencies to plan and implement programs that they believe will reduce drug production and drug trafficking. There is no easy remedy for overcoming all of the obstacles posed by drug-trafficking activities. International drug control efforts aimed at stopping the production of illegal drugs and drug-related activities in the source and transiting countries are only one element of an overall national drug control strategy. Alone, these efforts will not likely solve the U.S. drug problem. Overcoming many of the long-standing obstacles to reducing the supply and smuggling of illegal drugs requires a long-term commitment. In our February 1997 report, we pointed out that the United States can improve the effectiveness of planning and implementing its current international drug control efforts by developing a multiyear plan with measurable goals and objectives and a multiyear funding plan. We have been reporting since 1988 that U.S. counternarcotics efforts have been hampered by the absence of a long-term plan outlining each agency's commitment to achieving the goals and objectives of the international drug control strategy. We pointed out that judging U.S. agencies' performance in reducing the supply of and interdicting illegal drugs is difficult because the agencies have not established meaningful measures to evaluate their contribution to achieving these goals. 
Also, agencies have not devised multiyear funding plans that could serve as a more consistent basis for policymakers and program managers to determine requirements for effectively implementing a plan and determining the best use of resources. We have issued numerous reports citing the need for an overall implementation plan with specific goals and objectives and performance measures linked to them. In 1988, we reported that goals and objectives had not been established in the drug-producing countries examined and, in 1993, we recommended that ONDCP develop performance measures to evaluate agencies' drug control efforts and incorporate the measures in the national drug control strategy. Under the Government Performance and Results Act of 1993 (P.L. 103-62), federal agencies are required to develop strategic plans covering at least 5 years, with results-oriented performance measures. In February 1998, ONDCP issued its annual National Drug Control Strategy. The strategy contains various performance measures to assess the strategy's effectiveness. In March 1998, ONDCP issued more specific and comprehensive performance measures for this strategy. In the near future, ONDCP plans to publish a classified annex to the strategy which, according to ONDCP officials, will be regional and, in some instances, country specific, and will be results oriented. While we have not reviewed the 1998 Strategy and its related performance measures in detail, we believe this parallels the recommendations we have made over the years to develop a long-term plan with meaningful performance measures. Additionally, the United States and Mexico issued a bi-national drug strategy in February 1998, but it did not contain critical performance measures and milestones for assessing performance. ONDCP officials told us that they plan to issue comprehensive performance measures for the bi-national strategy by the end of the year. Mr. 
Chairman and members of the Subcommittee, this concludes our statement for the record. Thank you for permitting us to provide you with this information.

Drug Control: Counternarcotics Efforts in Colombia Face Continuing Challenges (GAO/T-NSIAD-98-103, Feb. 26, 1998).
Drug Control: U.S. Counternarcotics Efforts in Colombia Face Continuing Challenges (GAO/NSIAD-98-60, Feb. 12, 1998).
Drug Control: Planned Actions Should Clarify Counterdrug Technology Assessment Center's Impact (GAO/GGD-98-28, Feb. 3, 1998).
Drug Control: Update on U.S. Interdiction Efforts in the Caribbean and Eastern Pacific (GAO/NSIAD-98-30, Oct. 15, 1997).
Drug Control: Delays in Obtaining State Department Records Relating to Colombia (GAO/T-NSIAD-97-202, July 9, 1997).
Drug Control: Reauthorization of the Office of National Drug Control Policy (GAO/T-GGD-97-97, May 1, 1997).
Drug Control: Observations on Elements of the Federal Drug Control Strategy (GAO/GGD-97-42, Mar. 14, 1997).
Drug Control: Long-standing Problems Hinder U.S. International Efforts (GAO/NSIAD-97-75, Feb. 27, 1997).
Customs Service: Drug Interdiction Efforts (GAO/GGD-96-189BR, Sept. 26, 1996).
Drug Control: U.S. Heroin Control Efforts in Southeast Asia (GAO/T-NSIAD-96-240, Sept. 19, 1996).
Drug Control: Observations on Counternarcotics Activities in Mexico (GAO/T-NSIAD-96-239, Sept. 12, 1996).
Terrorism and Drug Trafficking: Technologies for Detecting Explosives and Narcotics (GAO/NSIAD/RCED-96-252, Sept. 4, 1996).
Drug Control: Counternarcotics Efforts in Mexico (GAO/NSIAD-96-163, June 12, 1996).
Drug Control: Observations on Counternarcotics Efforts in Mexico (GAO/T-NSIAD-96-182, June 12, 1996).
Drug Control: Observations on U.S. Interdiction in the Caribbean (GAO/T-NSIAD-96-171, May 23, 1996).
Drug Control: U.S. Interdiction Efforts in the Caribbean Decline (GAO/NSIAD-96-119, Apr. 17, 1996).
Terrorism and Drug Trafficking: Threats and Roles of Explosives and Narcotics Detection Technology (GAO/NSIAD/RCED-96-76BR, Mar. 27, 1996).
Drug Control: U.S. Heroin Program Encounters Many Obstacles in Southeast Asia (GAO/NSIAD-96-83, Mar. 1, 1996).
Review of Assistance to Colombia (GAO/NSIAD-96-62R, Dec. 12, 1995).
Drug War: Observations on U.S. International Drug Control Efforts (GAO/T-NSIAD-95-194, Aug. 1, 1995).
Drug War: Observations on the U.S. International Drug Control Strategy (GAO/T-NSIAD-95-182, June 27, 1995).
Honduras: Continuing U.S. Military Presence at Soto Cano Base Is Not Critical (GAO/NSIAD-95-39, Feb. 8, 1995).
Drug Activity in Haiti (GAO/OSI-95-6R, Dec. 28, 1994).
Drug Control: U.S. Antidrug Efforts in Peru's Upper Huallaga Valley (GAO/NSIAD-95-11, Dec. 7, 1994).
Drug Control: U.S. Drug Interdiction Issues in Latin America (GAO/T-NSIAD-95-32, Oct. 7, 1994).
Drug Control in Peru (GAO/NSIAD-94-186BR, Aug. 16, 1994).
Drug Control: Interdiction Efforts in Central America Have Had Little Impact on the Flow of Drugs (GAO/NSIAD-94-233, Aug. 2, 1994).
Drug Control: U.S. Counterdrug Activities in Central America (GAO/T-NSIAD-94-251, Aug. 2, 1994).
Illicit Drugs: Recent Efforts to Control Chemical Diversion and Money Laundering (GAO/NSIAD-94-34, Dec. 8, 1993).
Drug Control: The Office of National Drug Control Policy-Strategies Need Performance Measures (GAO/T-GGD-94-49, Nov. 15, 1993).
Drug Control: Expanded Military Surveillance Not Justified by Measurable Goals (GAO/T-NSIAD-94-14, Oct. 5, 1993).
The Drug War: Colombia Is Implementing Antidrug Efforts, but Impact Is Uncertain (GAO/T-NSIAD-94-53, Oct. 5, 1993).
Drug Control: Reauthorization of the Office of National Drug Control Policy (GAO/GGD-93-144, Sept. 29, 1993).
Drug Control: Heavy Investment in Military Surveillance Is Not Paying Off (GAO/NSIAD-93-220, Sept. 1, 1993).
The Drug War: Colombia Is Undertaking Antidrug Programs, but Impact Is Uncertain (GAO/NSIAD-93-158, Aug. 10, 1993).
Drugs: International Efforts to Attack a Global Problem (GAO/NSIAD-93-165, June 23, 1993).
Drug Control: Revised Drug Interdiction Approach Is Needed in Mexico (GAO/NSIAD-93-152, May 10, 1993).
Drug Control: Coordination of Intelligence Activities (GAO/GGD-93-83BR, Apr. 2, 1993).
Drug Control: Increased Interdiction and its Contribution to the War on Drugs (GAO/T-NSIAD-93-04, Feb. 25, 1993).
Drug War: Drug Enforcement Administration's Staffing and Reporting in Southeast Asia (GAO/NSIAD-93-82, Dec. 4, 1992).
The Drug War: Extent of Problems in Brazil, Ecuador, and Venezuela (GAO/NSIAD-92-226, June 5, 1992).
Drug Control: Inadequate Guidance Results in Duplicate Intelligence Production Efforts (GAO/NSIAD-92-153, Apr. 14, 1992).
The Drug War: Counternarcotics Programs in Colombia and Peru (GAO/T-NSIAD-92-9, Feb. 20, 1992).
Drug Policy and Agriculture: U.S. Trade Impacts of Alternative Crops to Andean Coca (GAO/NSIAD-92-12, Oct. 28, 1991).
The Drug War: Observations on Counternarcotics Programs in Colombia and Peru (GAO/T-NSIAD-92-2, Oct. 23, 1991).
The Drug War: U.S. Programs in Peru Face Serious Obstacles (GAO/NSIAD-92-36, Oct. 21, 1991).
Drug War: Observations on Counternarcotics Aid to Colombia (GAO/NSIAD-91-296, Sept. 30, 1991).
Drug Control: Impact of DOD's Detection and Monitoring on Cocaine Flow (GAO/NSIAD-91-297, Sept. 19, 1991).
The War On Drugs: Narcotics Control Efforts in Panama (GAO/NSIAD-91-233, July 16, 1991).
Drug Interdiction: Funding Continues to Increase but Program Effectiveness Is Unknown (GAO/GGD-91-10, Dec. 11, 1990).
Restrictions on U.S. Aid to Bolivia for Crop Development Competing With U.S. Agricultural Exports and Their Relationship to U.S. Anti-Drug Efforts (GAO/T-NSIAD-90-52, June 27, 1990).
Drug Control: How Drug Consuming Nations Are Organized for the War on Drugs (GAO/NSIAD-90-133, June 4, 1990).
Drug Control: Anti-Drug Efforts in the Bahamas (GAO/GGD-90-42, Mar. 8, 1990).
Drug Control: Enforcement Efforts in Burma Are Not Effective (GAO/NSIAD-89-197, Sept. 11, 1989).
Drug Smuggling: Capabilities for Interdicting Airborne Private Aircraft Are Limited and Costly (GAO/GGD-89-93, June 9, 1989).
Drug Smuggling: Capabilities for Interdicting Airborne Drug Smugglers Are Limited and Costly (GAO/T-GGD-89-28, June 9, 1989).
Drug Control: U.S.-Supported Efforts in Colombia and Bolivia (GAO/NSIAD-89-24, Nov. 1, 1988).
Drug Control: U.S. International Narcotics Control Activities (GAO/NSIAD-88-114, Mar. 1, 1988).
Controlling Drug Abuse: A Status Report (GAO/GGD-88-39, Mar. 1, 1988).
Drug Control: U.S.-Supported Efforts in Burma, Pakistan, and Thailand (GAO/NSIAD-88-94, Feb. 26, 1988).
Drug Control: River Patrol Craft for the Government of Bolivia (GAO/NSIAD-88-101FS, Feb. 2, 1988).
Drug Control: U.S.-Mexico Opium Poppy and Marijuana Aerial Eradication Program (GAO/NSIAD-88-73, Jan. 11, 1988).
U.S.-Mexico Opium Poppy and Marijuana Aerial Eradication Program (GAO/T-NSIAD-87-42, Aug. 5, 1987).
Status Report on GAO Review of the U.S. International Narcotics Control Program (GAO/T-NSIAD-87-40, July 29, 1987).
Drug Control: International Narcotics Control Activities of the United States (GAO/NSIAD-87-72BR, Jan. 30, 1987).

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony.
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed its observations on the effectiveness of U.S. efforts to combat drug production and the movement of drugs into the United States, focusing on: (1) the challenges of addressing international counternarcotics issues; (2) obstacles to implementation of U.S. drug control efforts; and (3) areas requiring attention to improve the operational effectiveness of U.S. drug control efforts. GAO noted that: (1) despite long-standing efforts and expenditures of billions of dollars, illegal drugs still flood the United States; (2) although U.S. counternarcotics efforts have resulted in the arrest of major drug traffickers, the seizure of large amounts of drugs, and the eradication of illicit drug crops, they have not materially reduced the availability of drugs in the United States; (3) the United States and drug-producing and -transiting nations face a number of obstacles in attempting to reduce the production of and trafficking in illegal drugs; (4) international drug-trafficking organizations are sophisticated, multibillion-dollar industries that quickly adapt to new U.S. drug control efforts; (5) as success is achieved in one area, the drug-trafficking organizations change tactics, thwarting U.S. efforts; (6) there are also other obstacles that impede U.S. and drug-producing and -transiting countries' drug control efforts; (7) in the drug-producing and -transiting countries, counternarcotics efforts are constrained by corruption, competing economic and political policies, inadequate laws, limited resources and institutional capabilities, and internal problems such as terrorism and civil unrest; (8) moreover, drug traffickers are increasingly resourceful in corrupting the countries' institutions; (9) for its part, the United States has not been able to maintain a well-organized and consistently funded international counternarcotics program; (10) U.S. efforts have also been hampered by competing U.S. foreign policy objectives, organizational and operational limitations, and the lack of clear goals and objectives; (11) since GAO's February 1997 report, some countries, with U.S. assistance, have taken steps to improve their capacity to reduce the flow of illegal drugs into the United States; (12) these countries have taken action to extradite drug criminals; enacted legislation to control organized crime, money laundering, and chemicals used in the production of illicit drugs; and instituted reforms to reduce corruption; and (13) while these actions represent positive steps, it is too early to determine their impact, and challenges remain.
The GS classification system is a mechanism for organizing federal white- collar work, notably for the purpose of determining pay, based on a position's duties, responsibilities, and difficulty, among other things. The GS system--which is administered by OPM--influences other human capital practices such as training, since training opportunities link position competencies with the employee's performance. In 2013, the GS system covered about 80 percent of the civilian white-collar workforce, or about 1.6 million employees. Several public policy groups and some OPM reports have questioned the ability of the GS system to meet agencies' needs for flexible talent management tools that enable them to align employees with mission requirements. In our ongoing work, among other things, we are assessing (1) the attributes of a modern, effective classification system and the extent to which the current GS system balances those attributes, and (2) OPM's administration and oversight of the GS system. Our preliminary findings from this work are as follows: While there is no one right way to design a classification system, based on our analysis of subject matter specialists' comments, related literature, and interviews with OPM officials, there are eight key attributes that are important for a modern, effective classification system. Collectively these attributes provide a useful framework for considering refinements or reforms to the current system. These key attributes are described in table 1. While each attribute is individually important, there are inherent tensions between some attributes, and the challenge is finding the optimal balance among them. The weight that policymakers and stakeholders assign to each attribute could have large implications for pay, the ability to recruit and retain mission critical employees, and other aspects of personnel management. 
This is one reason why--despite past proposals--changes to the current system have been few, as it is difficult to find the optimal mix of attributes that is acceptable to all stakeholders. In comparing the GS system to these key attributes during our ongoing work, we found a number of examples of how the current system's design reflects some of these key attributes but falls short of achieving them in implementation. As one example, the GS system includes 15 statutorily defined grade levels intended to distinguish the degrees of difficulty within an occupation. Standard grade levels can simplify the system and provide internal equity. Agency officials assign a grade level to a position after analyzing the duties and responsibilities according to the factor evaluation system. This allows for easy comparisons of employees in the same occupation and grade level across different agencies, providing simplicity and internal equity to the system, and may help employees move across agencies. However, having 15 grades requires officials to make meaningful distinctions between things like the extent of the skills necessary for the work at each level, which may be more difficult to determine in some occupations than others. For example, officials must be able to determine how the work of a GS-12 accountant is different from that of a GS-13 accountant. Making clear distinctions between these grade levels may be nuanced, as the basis for them hinges on, for example, how agency officials determine the degree of complexity of the work. As a result, having 15 grade levels may make the system seem less transparent, as distinguishing between the levels may not be precisely measured by the elements of the factor evaluation criteria. Otherwise agencies risk having two employees performing substantially equal work but receiving unequal pay, which decreases the degree to which the system can ensure internal equity.
We believe that, going forward, these eight attributes of a more modern, effective classification system can help provide criteria for policymakers and other stakeholders to use in determining whether refinements to the current GS system or wholesale reforms are needed. (Not all occupations comprise all 15 grade levels; the occupational category determines the grade levels covered by an occupational series.) OPM discontinued its classification oversight reviews because it determined that the reviews were ineffective at overseeing agency compliance with the occupational standards. Specifically, officials said the reviews were time consuming and agencies did not agree with how OPM selected the position descriptions to review. OPM officials said agencies frequently contested the results of the reviews, leading to another time- and resource-intensive review process for both OPM and the agencies. OPM officials said they rely on agencies' internal oversight programs to ensure proper application of the classification policies. However, OPM officials told us they do not review agency oversight efforts, nor do they know which agencies, if any, have robust internal oversight mechanisms. OPM officials told us that in 2014 they had 6 full-time classification policy specialists tasked with maintaining the classification standards, compared to 16 in 2001, and many more in the 1980s. OPM officials said that lower staffing levels limit the agency's ability to perform oversight. Based on our ongoing work, we believe that OPM, like all agencies, will have to make difficult tradeoffs between competing demands in this era of limited resources. A key federal human capital management challenge is how best to balance the size and composition of the federal workforce so that it is able to deliver the high quality services that taxpayers demand, within the budgetary realities of what the nation can afford.
Recognizing that the federal government's pay system does not align well with modern compensation principles (where pay decisions are based on the skills, knowledge, and performance of employees as well as the local labor market), Congress has provided various agencies with exemptions from the current system to give them more flexibility in setting pay. Thus, a long-standing federal human capital management question is how to update the entire federal compensation system to be more market based and performance oriented. This type of system is a critical component of a larger effort to improve organizational performance. As we reported in January 2014, between 2004 and 2012 spending on total government-wide compensation for each full-time equivalent (FTE) position grew by an average of 1.2 percent per year, from $106,097 in 2004 to $116,828 in 2012 (see figure 1). Much of this growth was driven by increased personnel benefits costs, which rose at a rate of 1.9 percent per year. Other factors included locality pay adjustments, as well as a change in the composition of the federal workforce (with a larger share of employees working in professional or administrative positions, requiring advanced skills and degrees). In terms of employee pay per FTE, spending rose at an average annual rate of 1 percent per year (a 7.9 percent increase overall). As we reported earlier this year, while spending on compensation increased from 2004 to 2012, it remained relatively constant as a proportion of the federal discretionary budget at about 14 percent from 2004 to 2010, with slight increases in 2011 and 2012. The composition of the federal workforce has changed over the past 30 years, with the need for clerical and blue collar roles diminishing and professional, administrative, and technical roles increasing. As a result, today's federal jobs require more advanced skills at higher grade levels than in years past.
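The growth rates cited above are compound annual averages. As a quick arithmetic check (a sketch using only the per-FTE dollar figures reported above; the helper name is ours, not from the report), the reported 1.2 percent per year follows from the 2004 and 2012 endpoints:

```python
def avg_annual_growth(start: float, end: float, years: int) -> float:
    """Compound average annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Total compensation per FTE, 2004 -> 2012 (8 years), figures from the report
total_comp = avg_annual_growth(106_097, 116_828, 8)
print(f"{total_comp:.1%}")  # about 1.2% per year, matching the reported rate

# Pay per FTE rose 7.9 percent overall over the same 8 years,
# which works out to roughly 1 percent per year
pay_per_fte = avg_annual_growth(1.0, 1.079, 8)
print(f"{pay_per_fte:.1%}")
```

The same endpoint figures reproduce both reported rates, so the averages are compound (geometric) rather than simple annual changes.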
Additionally, federal jobs, on average, require more advanced skills and degrees than private sector jobs. This is because a higher proportion of federal jobs than nonfederal are in skilled occupations such as science, engineering, and program management, while a lower proportion of federal jobs than nonfederal are in occupations such as manufacturing, construction, and service work. The result is that the federal workforce is on average more highly educated than the private sector workforce. As we reported in 2012, the policy of Congress is for federal workers' pay under the GS system to be in line with comparable nonfederal workers' pay. Annual pay adjustments for GS employees are either determined through the process specified in the Federal Employees Pay Comparability Act of 1990 (FEPCA) or set based on percent increases authorized directly by Congress. GS employees receive an across-the- board increase (ranging from 0 to 3.8 percent since FEPCA was implemented) that has usually been made in accordance with a FEPCA formula linking increases to national private sector salary growth. This increase is the same for each covered employee. Most GS employees also receive a locality payment that varies based on their location. While FEPCA specifies a process designed to reduce federal-nonfederal pay gaps in each locality, in practice locality increases have usually been far less than the recommended amount, which has been between 15 and 20 percent in recent years. The President's Pay Agent, the entity responsible for recommending federal locality pay adjustments to the President, has recommended that the underlying model and methodology for estimating pay gaps be reexamined to ensure that private sector and federal sector pay comparisons are as accurate as possible. To date, no such reexamination has taken place. However, other organizations have compared federal and non-federal pay. 
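The adjustment mechanics described above (an across-the-board increase applied to the base rate, plus a location-dependent locality percentage) can be sketched as a two-step calculation. This is a simplified illustration with hypothetical percentages, not actual GS pay tables, and it assumes the locality payment is applied as a percentage of the adjusted base rate:

```python
def gs_annual_adjustment(base_salary: float,
                         across_the_board: float,
                         locality: float) -> float:
    """Simplified model of a GS annual pay adjustment.

    across_the_board: the same percentage for every covered employee
        (0 to 3.8 percent since FEPCA was implemented).
    locality: a location-dependent percentage applied on top of the
        adjusted base rate.
    """
    new_base = base_salary * (1 + across_the_board)
    return new_base * (1 + locality)

# Hypothetical example: $60,000 base, 1% across-the-board, 15% locality
print(round(gs_annual_adjustment(60_000, 0.01, 0.15), 2))  # 69690.0
```

Because neither percentage depends on an individual's performance rating, every covered employee in a locality receives the same proportional adjustment, which is the point the statement makes about these increases not being linked to performance.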
The findings of the six studies published between 2009 and 2012 that we reviewed came to different conclusions on which sector had the higher pay and the size of the pay disparities because they used different approaches, methods, and data. With that in mind, when looking within and across these or other studies, it is important to understand the studies' methodologies because they affect how the studies can be interpreted. The across-the-board and locality pay increases discussed above are given to all covered employees nearly every year and are not linked to performance. (In fiscal years 2011, 2012, and 2013, there was neither an across-the-board nor a locality pay increase due to a government-wide pay freeze.) Pay increases and monetary awards available to GS employees that are linked to performance ratings as determined by agencies' performance appraisal systems include within-grade increases, ratings-based cash awards, and quality step increases. Within-grade increases are the least strongly linked to performance, ratings-based cash awards are more strongly linked to performance depending on the rating system the agency uses, and quality step increases are also more strongly linked to performance. Based on our past work, we believe that implementing a more market-based and more performance-oriented pay system is both doable and desirable. However, experience has shown it certainly is not easy. For one thing, agencies must have effective performance management systems that link individual expectations to organizational results. Moreover, representatives of public, private, and nonprofit organizations, in discussing the successes and challenges they have experienced in designing and implementing their own results-oriented pay systems, told us they had to shift from a culture where compensation is based on position and longevity to one that is performance oriented, affordable, and sustainable.
These organizations' experiences also provide useful lessons learned that will be important to consider to the extent the federal government moves toward a more results-oriented pay system. Lessons learned include the following: 1. Focus on a set of values and objectives to guide the pay system. 2. Examine the value of employees' total compensation to remain competitive in the labor market. 3. Build in safeguards to enhance the transparency and ensure the fairness of pay decisions. 4. Devolve decision making on pay to appropriate levels. 5. Provide clear and consistent communication so that employees at all levels can understand how compensation reforms are implemented. 6. Provide training on leadership, management, and interpersonal skills to facilitate effective communications. 7. Build consensus to gain ownership and acceptance of pay reforms. 8. Monitor and refine the implementation of the pay system. Our past work has shown that a long-standing challenge for federal agencies has been developing credible and effective performance management systems that can serve as a strategic tool to drive internal change and achieve results. In 2011, various federal agencies, labor unions, and other organizations developed the Goals-Engagement-Accountability-Results (GEAR) framework to help improve performance management by articulating a high-performance culture; aligning employee and organizational performance management; implementing accountability at all levels; creating a culture of engagement; and improving supervisor assessment, selection, development, and training. Five federal agencies volunteered to pilot GEAR, either agency-wide or in specific components, including the Departments of Energy, Homeland Security/Coast Guard, Housing and Urban Development, Veterans Affairs/National Cemetery Administration, and OPM--with the intention to expand GEAR government-wide.
In our September 2013 report we found that the GEAR framework generally addressed key practices for effective performance management that we had previously identified, such as aligning individual performance expectations with organizational goals. Additionally, we concluded that the GEAR framework presented an opportunity for federal agencies to increase employee engagement and improve performance management. Even though the GEAR pilot had only been in place for a short time, agency officials described benefits such as improved engagement and communication between employees and supervisors. To improve the dissemination of the GEAR framework, our 2013 report included recommendations for OPM to, among other actions, better define the roles and responsibilities of OPM, the CHCO Council, and participating federal agencies, including how to identify future promising practices and how to update and disseminate information on the government-wide implementation of GEAR. We concluded that clearly defined roles and responsibilities are important for capitalizing on the improvements made at the five pilot agencies, as well as for sustaining and achieving the current administration's goal of implementing GEAR more broadly. OPM agreed with our recommendations and, working with the CHCO Council, has taken some initial steps to implement them. For example, the CHCO Council recently released a "toolkit" describing the next steps for implementing the GEAR principles, and the Executive Director of the CHCO Council described efforts in 2014 to improve employee engagement. Moreover, in June 2014, OPM officials said that OPM will be responsible for facilitating the collaboration and information-sharing between agencies on their approaches to implement GEAR, and OPM will continue to provide technical support and expertise on GEAR and successful practices for performance management. 
However, OPM officials said that while they will continue working with the CHCO Council to promote GEAR and to encourage other agencies to adopt the framework, it is unclear if any additional agencies have formally adopted GEAR to date. OPM said it will also work with the CHCO Council and implementing agencies to determine effective and appropriate evaluation tools and metrics to assess the progress of the implementation of GEAR. As our past work has demonstrated, effective performance management systems can help create results-oriented organizational cultures by providing objective information to allow managers to make meaningful distinctions in performance in order to reward top performers and deal with poor performers. Although poor performance is not defined by statute, title 5 of the United States Code defines "unacceptable performance" as "performance of an employee which fails to meet established performance standards in one or more critical elements of such employee's position." Even a small number of poor performers can have negative effects on employee morale and agencies' capacity to meet their missions. The 2013 Federal Employee Viewpoint Survey found that only 28 percent of federal employees agreed that their work unit takes steps to deal with a poor performer who cannot or will not improve. Although the exact number of poor performers in the federal government is unknown, it is generally agreed that poor performance should be addressed earlier rather than later, with the objective of improving performance. Various studies, reports, and surveys of federal supervisors and employees we reviewed have identified impediments to dealing with poor performance, including issues related to (1) time and complexity of the processes; (2) lack of training in performance management; and (3) communication, including the dislike of confrontation. 
It will be important for agencies to hold managers accountable for using probationary periods and other tools for addressing poor performers as well as to ensure that supervisors have adequate support from upper-level management and human capital staff in dealing with poor performance subject to applicable safeguards. Our prior work on this topic identified various tools and approaches for addressing performance "upstream" in the process within a merit-based system that contains appropriate safeguards. These include the following: an effective performance management system that (1) creates a clear "line of sight" between individual performance and organizational success; (2) provides adequate training on the performance management system; (3) uses core competencies to reinforce organizational objectives; (4) addresses performance on an ongoing basis; and (5) contains transparent processes; and a probationary period that provides managers with a provisional period to rigorously review employee performance. Since we first narrowed the strategic human capital high-risk area to focus on identifying and addressing government-wide mission critical skills gaps in February 2011, executive agencies and Congress have continued their efforts to ensure the government takes a more strategic and efficient approach to recruiting, hiring, developing, and retaining individuals with the skills needed to cost-effectively carry out the nation's business. At the same time, we have recommended numerous actions individual agencies should take to address their specific human capital challenges, and we have also made recommendations to OPM to address government-wide human capital issues.
For example, in September 2011 OPM and the CHCOs--as part of ongoing discussions between OPM, the Office of Management and Budget (OMB), and GAO on the steps needed to address the federal government's human capital challenges--established the Chief Human Capital Officers Council Working Group (Working Group) to identify and mitigate critical skills gaps. Further, the Working Group's efforts were designated an interim Cross-Agency Priority (CAP) goal within the administration's fiscal year 2013 federal budget. Using a multi-faceted approach including a literature review and an analysis of various staffing gap indicators, the Working Group identified six government-wide mission critical occupations: cybersecurity; acquisition; economist; human resources specialist; auditor; and the science, technology, engineering, and mathematics (STEM) occupational group. The Director of OPM--as leader of the cross-agency priority goal to close critical skills gaps--identified key federal officials from each of the six government-wide mission critical occupations to serve as "sub-goal leaders." For example, the sub-goal co-leaders for the cybersecurity workforce are from the White House Office of Science and Technology Policy and the Department of Commerce's National Institute of Standards and Technology. OPM noted that in working with their occupational communities, the sub-goal leaders have selected specific strategies to decrease skills gaps in the occupations they represent. OPM also noted that the OPM Director is to meet quarterly with these officials to monitor their progress, address their challenges, and identify support needed from OPM. The Working Group also identified seven mission critical competencies, including data analysis, strategic thinking, influencing and negotiating, and problem solving, as well as agency-specific mission critical occupations such as nurses at the Department of Veterans Affairs.
For both the occupations and competencies, high-risk skills gaps were defined as those where staffing shortfalls could jeopardize the ability of government or specific agencies to accomplish their mission. Under the skills gap CAP goal, OPM reported that "by September 30, 2013, skills gaps will be reduced by 50 percent for three to five critical federal government occupations or competencies, and additional agency-specific high-risk occupations and competency gaps will be closed." However, OPM's progress against this metric cannot be determined because as of June 2014 OPM has not provided any data on it. Nonetheless, in November 2013, sub-goal leaders reported to OPM on the activity and progress made toward targets for each of the mission-critical occupations in fiscal year 2013. Specifically, leaders for three of the six sub-goals (cybersecurity, acquisition, and economist) reported that they had met their planned level of performance for fiscal year 2013, while the other three sub-goal leaders (human resources, auditor, and STEM) reported that they did not make their target or were developing action plans for fiscal year 2014. For example, we found in June 2014 that the acquisitions sub-goal group established a target for increasing the certification rate of GS-1102 contract specialists to 80 percent. The final quarterly status update to the closing skill gaps CAP reported that the target was met and the certification rate increased to 81 percent. Conversely, the auditor sub-goal group reported that it was still gathering information on the extent of the skill gaps in the government-wide auditor workforce. According to OPM, closing skills gaps will remain a priority, and efforts related to it will continue to be implemented using various approaches led by the Director of OPM and the team of sub-goal leaders (Government Efficiency and Effectiveness: Views on the Progress and Plans for Addressing Government-wide Management Challenges, GAO-14-436T (Washington, D.C.: Mar. 12, 2014)).
Metrics that will be tracked include increasing the certification rates for contract specialists to 84 percent in the acquisition area, and for the cybersecurity area, increasing manager satisfaction with the quality of applicants from 65 percent to 67 percent. As we reported in February 2013, further progress will depend on the extent to which OPM and agencies develop the infrastructure to sustain their planning, implementation, and monitoring efforts. It will be important for OPM and agencies to implement refinements to the approaches the Working Group used to identify and address critical skills gaps in order to enhance their effectiveness. These refinements can include identifying ways to document and assemble lessons learned, leading practices, and other useful information for addressing skill gaps into a clearinghouse or database so agencies can draw on one another's experiences and avoid duplicating efforts; examining the cost-effectiveness of delivering tools and shared services such as online training for workforce planning to address issues affecting multiple agencies; reviewing the extent to which new capabilities are needed to give OPM and other agencies greater visibility over skills gaps government-wide to better identify which agencies may have surpluses of personnel in those positions and which agencies have gaps, as well as the adequacy of current mechanisms for facilitating the transfer of personnel from one agency to another to address those gaps as appropriate; and determining whether existing workforce planning and other tools can be used to help streamline the processes developed by the Working Group (GAO-13-283). OPM agreed that these were important areas for consideration and is taking steps to implement them. We will continue to monitor OPM and agencies' efforts in closing mission-critical skills gaps.
In addition to mission critical skills gaps, our recent work has identified other management challenges that, collectively, are creating fundamental capacity problems that could undermine the ability of agencies to effectively carry out their missions. Although the way forward will not be easy, agency officials said these same challenges have created the impetus to act and a willingness to consider creative and nontraditional strategies for addressing issues in ways that previously may not have been organizationally or culturally feasible. The Balanced Budget and Emergency Deficit Control Act of 1985, as amended, establishes discretionary spending limits for fiscal years 2012 through 2021 (2 U.S.C. § 901(c)), but many agencies had experienced flat or declining budgets for several years prior. In response, agencies are reducing hiring, limiting training, offering employee buyouts, and providing early retirement packages. As we concluded in our report, without careful attention to strategic and workforce planning and other approaches to managing and engaging personnel, the reduced investments in human capital can have lasting, detrimental effects on the capacity of an agency's workforce to meet its mission. Of the 24 Chief Financial Officer (CFO) Act agencies we reviewed, 10 had a lower number of career permanent employees in 2012 than they did in 2004, 13 had a greater number, and 1, the Department of Transportation, was unchanged. Earlier this year we reported that 31 percent of all career permanent employees who were on board in September 2012 will become eligible to retire in September 2017. As shown in figure 3 below, not all agencies will be equally affected by this trend. By 2017, 20 of the 24 CFO Act agencies will have a higher percentage of staff eligible to retire than the current 31 percent government-wide.
For example, about 21 percent of DHS staff on board as of September 2012 will be eligible to retire in 2017, while over 42 percent will be eligible to retire at both the Department of Housing and Urban Development and the Small Business Administration. Certain occupations--such as air traffic controllers and those involved in program management--will also have particularly high retirement eligibility rates by 2017. Various factors affect when individuals actually retire, and some amount of retirement and other forms of attrition can be beneficial because it creates opportunities to bring fresh skills on board and allows organizations to restructure themselves in order to better meet program goals and fiscal realities. But if turnover is not strategically monitored and managed, gaps can develop in an organization's institutional knowledge and leadership. The high projected retirement eligibility rates across government underscore the importance of effective succession planning. According to OPM, factors such as a 3-year pay freeze, automatic reductions from sequestration that included furloughs for hundreds of thousands of employees, and reductions in training and other areas have taken their toll on the federal workforce. In the 2013 Employee Viewpoint Survey results, the "global satisfaction index" showed an 8-percentage-point decline since 2010. Each of the four factors that make up the global satisfaction index showed downward trends from last year's results: job satisfaction dropped 3 points to 65 percent, pay satisfaction was down 5 points to 54 percent, organization satisfaction fell 3 points to 65 percent, and respondents that said they would recommend their organization declined by 4 points to 63 percent.
As we reported earlier this year, to identify strategies for managing the federal workforce and plan for future needs in an era of constrained resources, we used several approaches including convening a full-day forum that included 25 of the 27 members of the CHCO Council. Our analysis and recommendations based on this effort provide an important framework for prioritizing and modernizing current human capital management practices to meet agencies' current and future missions. The strategies included the following: Strengthening collaboration to address a fragmented human capital community. Our analysis found that the federal human capital community is highly fragmented with multiple actors inside government informing and executing personnel policies and initiatives in ways that are not always aligned with broader, government-wide human capital efforts. The CHCO Council was established to improve coordination across federal agencies on personnel issues, but according to CHCOs, the council is not carrying out this responsibility as well as it could. This challenge manifests itself in two ways: across organizations, with many actors making human capital decisions in an uncoordinated manner, and within agencies, excluding CHCOs and the human capital staff from key agency decisions. Using enterprise solutions to address shared challenges. Our analysis found that agencies have many common human capital challenges, but they tend to address these issues independently without looking to enterprise solutions that could resolve them more effectively. Across government, there are examples of agencies and OPM initiating enterprise solutions to address crosscutting issues, including the consolidation of federal payroll systems into shared-services centers. CHCOs highlighted human resource information technology and strategic workforce planning as two areas that are ripe for government-wide collaboration. 
Creating more agile talent management to address inflexibilities in the current system. Our analysis found talent management tools lack two key ingredients for developing an agile workforce, namely the ability to (1) identify the skills available in existing workforces, and (2) move people with specific skills to address emerging, temporary, or permanent needs within and across agencies. As we reported earlier this year, the CHCOs said OPM needs to do more to raise awareness and assess the utility of the tools and guidance it provides to agencies to address key human capital challenges. CHCOs said they were either unfamiliar with OPM's tools and guidance or they fell short of their agency's needs. OPM officials said they had not evaluated the tools and guidance they provide to the agencies. As a result, a key resource for helping agencies improve the capacity of their personnel offices is likely being underutilized. Among other things, in our May 2014 report we recommended that the Director of OPM, in conjunction with the CHCO Council, strengthen OPM's coordination and leadership of government-wide human capital issues to ensure government-wide initiatives are coordinated, decision makers have all relevant information, and there is greater continuity in the human capital community for key reforms. Specific steps could include, for example, developing a government-wide human capital strategic plan that, among other things, would establish strategic priorities, time frames, responsibilities, and metrics to better align the efforts of members of the federal human capital community with government-wide human capital goals and issues. OPM and the CHCO Council concurred with these recommendations and identified actions they plan to take to address them. In conclusion, strategic human capital management must be the centerpiece of any serious effort to ensure federal agencies operate as high-performing organizations. 
A high-quality federal workforce is especially critical now given the complex and cross-cutting issues facing the nation. Through a variety of initiatives, Congress, OPM, and individual agencies have strengthened the government's human capital efforts since we first identified strategic human capital management as a high-risk area in 2001. Still, while much progress has been made over the last 13 years in modernizing federal human capital management, the job is far from over. Indeed, the focus areas discussed today are not an exhaustive list of challenges facing federal agencies and are long-standing in nature. Greater progress will require continued collaborative efforts between OPM, the CHCO Council, and individual agencies, as well as the continued attention of top-level leadership. Progress will also require effective planning, responsive implementation, robust measurement and evaluation, and continued congressional oversight to hold agencies accountable for results. In short, while the core human capital processes and functions--such as workforce planning and talent management--may sound somewhat bureaucratic and transactional, our prior work has consistently shown the direct link between effective strategic human capital management and successful organizational performance. At the end of the day, strategic human capital management is about mission accomplishment, accountability, and responsive, cost-effective government. Chairman Farenthold, Ranking Member Lynch, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. For further information regarding this statement, please contact Robert Goldenkoff, Director, Strategic Issues, at (202) 512-6806, or [email protected].
Individuals making key contributions to this statement include Chelsa Gurkin, Assistant Director; Robyn Trotter, Analyst-in-Charge; Jeffrey Schmerling; Devin Braun; Tom Gilbert; Donald Kiggins; and Steven Lozano. Key contributors for the earlier work that supports this testimony are listed in each product. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Strategic human capital management plays a critical role in maximizing the government's performance and assuring its accountability to Congress and to the nation as a whole. GAO designated strategic human capital management as a government-wide high-risk area in 2001 because of a long-standing lack of leadership. Since then, important progress has been made. However, retirements and the potential loss of leadership and institutional knowledge, coupled with fiscal pressures, underscore the importance of a strategic and efficient approach to acquiring and retaining individuals with needed critical skills. As a result, strategic human capital management remains a high-risk area. This testimony is based on preliminary findings of GAO's ongoing work on the classification system and a body of GAO work primarily from 2012 to 2014 and focuses on the progress made by OPM and executive branch agencies in key areas of human capital management, including: (1) how the GS classification system compares to the attributes of a modern, effective classification system, (2) the status of performance management and efforts to address poor performance, (3) progress addressing critical skills gaps, and (4) strategies to address human capital challenges in an era of highly constrained resources. Serious human capital shortfalls can erode the capacity of federal agencies and threaten their ability to cost-effectively carry out their missions. While progress has been made, continued attention is needed to ensure agencies have the human resources to drive performance and achieve the results the nation demands. Specifically, additional areas needing to be addressed include the following.

Classification

GAO's preliminary work has found eight key attributes of a modern, effective classification system, such as: internal and external equity, transparency, and simplicity. The attributes require trade-offs and policy choices to implement.
In concept, the General Schedule's (GS) design reflects some of the eight attributes, but falls short of achieving them in implementation. For example, the GS system's grade levels provide internal equity by making it easy to compare employees in the same occupation and grade level across different agencies. However, the number of grade levels can reduce transparency because making clear distinctions between the levels may be nuanced, as the basis for them hinges on, for example, how officials determine the complexity of the work.

Performance management

Effective performance management systems enable managers to make meaningful distinctions in performance in order to reward top performers and deal with poor performers. In 2011, five agencies piloted the Goals-Engagement-Accountability-Results (GEAR) framework to help improve performance management. GEAR addressed important performance management practices, such as aligning individual performance with organizational goals. However, while Office of Personnel Management (OPM) officials said they are working with the Chief Human Capital Officers (CHCO) Council to promote GEAR, it is unclear if any additional agencies have adopted the GEAR framework.

Critical skills gaps

Since GAO included identifying and addressing government-wide critical skills gaps as a high-risk area in 2011, a working group led by OPM identified skills gaps in six government-wide mission critical occupations including cybersecurity and acquisition, and is taking steps to address each one. To date, officials reported meeting their planned level of progress for three of the six occupations. Additional progress will depend on the extent to which OPM and agencies develop the infrastructure needed to sustain their planning, implementation, and monitoring efforts for skills gaps, and develop a predictive capacity to identify newly emerging skills gaps.
Strategies for an era of highly constrained resources

Agency officials have said that declining budgets have created the impetus to act on management challenges and a willingness to consider creative and nontraditional strategies for addressing human capital issues. GAO identified strategies related to (1) strengthening coordination of the federal human capital community, (2) using enterprise solutions to address shared challenges, and (3) creating more agile talent management that can address these challenges. Over the years, GAO has made numerous recommendations to agencies and OPM to improve their strategic human capital management efforts. This testimony discusses some actions taken to implement key recommendations.
In our June 2010 report on DHS's foreign language capabilities, we identified challenges related to the Department's efforts to assess its needs and capabilities and identify potential shortfalls. Our key findings include: DHS has no systematic method for assessing its foreign language needs and does not address foreign language needs in its Human Capital Strategic Plan. DHS components' efforts to assess foreign language needs vary. For example, the Coast Guard has conducted multiple assessments, CBP's assessments have primarily focused on Spanish-language needs, and ICE has not conducted any assessments. By conducting a comprehensive assessment DHS would be better positioned to capture information on all of its needs and could use this to inform future strategic planning. DHS has no systematic method for assessing its existing foreign language capabilities and has not conducted a comprehensive capabilities assessment. DHS components have various lists of foreign language capabilities that are available in some offices, primarily those that include a foreign language award program for qualified employees. Conducting an assessment of all of its foreign language capabilities would better position DHS to effectively manage its resources. DHS and its components have not taken actions to identify potential foreign language shortfalls. DHS officials stated that shortfalls can impact mission goals and officer safety. By using the results of needs and capabilities assessments to identify shortfalls, DHS would be better positioned to develop actions to mitigate shortfalls, execute its various missions that involve foreign language speakers, and enhance the safety of its officers and agents.
We and the Office of Personnel Management have developed strategic workforce guidance that recommends, among other things, that agencies (1) assess workforce needs, such as foreign language needs; (2) assess current competency skills; and (3) compare workforce needs against available skills. DHS efforts could be strengthened by conducting a comprehensive assessment of its foreign language needs and capabilities, and using the results of this assessment to identify any potential shortfalls. By doing so, DHS could better position itself to manage its foreign language workforce needs to help fulfill its organizational missions. We recommended that DHS comprehensively assess its foreign language needs and capabilities, and any resulting shortfalls and ensure these assessments are incorporated into future strategic planning. DHS agreed with our recommendation and officials stated that the Department is planning to take action to address it. In June 2010, we also reported that DHS and its components had established a variety of foreign language programs and activities, but had not assessed the extent to which they address potential shortfalls. Coast Guard, CBP, and ICE established foreign language programs and activities, which include foreign language training and monetary awards. Although foreign language programs and activities at these components contributed to the development of DHS's foreign language capabilities, the Department's ability to use them to address potential foreign language shortfalls varies. For example, foreign language training programs generally do not include languages other than Spanish. Furthermore, these programs and activities are managed by individual components or offices within components. According to several Coast Guard, CBP, and ICE officials, they manage their foreign language programs and activities as they did prior to the formation of DHS. 
At the Department level and within the components, many of the officials we spoke with were generally unaware of the foreign language programs or activities maintained by other DHS components. Given this variation and decentralization, conducting a comprehensive assessment of the extent to which its programs and activities address shortfalls could strengthen DHS's ability to manage its foreign language programs and activities and to adjust them, if necessary. DHS agreed with our recommendation, and officials stated that the Department is planning to take action to address it. In April 2010, we reported that FEMA had developed a national needs assessment to identify its LEP customer base and how frequently it interacts with LEP persons. We reported that in developing this needs assessment, FEMA combines census data, data from FEMA's National Processing Service Center on the most commonly encountered languages used by individuals applying for disaster assistance, literacy and poverty rates, and FEMA's historical data on the geographic areas most prone to disasters. Furthermore, practices identified by other federal and state agencies as well as practitioners in the translation industry are reviewed and used in preparing this assessment. Through its needs assessment, FEMA officials reported that FEMA has identified 13 of the most frequently encountered languages spoken by LEP communities. Locally, in response to a disaster, FEMA conducts a needs assessment by collecting information from the U.S. Census Bureau, data from local school districts, and information from foreign language media outlets in the area to help determine the amount of funding required to ensure proper communication with affected LEP communities. In the spring of 2009, FEMA established new procedures to identify LEP communities at the local level.
While the agency's national needs assessment provides a starting point to identify LEP communities across the country, the assessment does not fully ensure that FEMA identifies the existence and location of LEP populations in small communities within states and counties. To that end, officials from FEMA's Multilingual Function developed a common set of procedures for identifying the location and size of LEP populations at the local level. The new procedures, which were initiated as a pilot program, include collecting data from national, state, and local sources, and creating a profile of community language needs, local support organizations, and local media outlets. FEMA initiated this pilot program while responding to a flood affecting North Dakota and Minnesota in the spring of 2009; the program enabled FEMA officials to develop communication strategies targeted to 12 different LEP communities including Bosnian, Farsi, Kirundi, and Somali. FEMA officials stated that they plan to use these procedures in responding to future presidentially declared disasters. According to FEMA officials, it has incorporated the pilot program procedures for identifying local LEP populations into its Standard Operating Procedures (SOP). According to FEMA, it has distributed the revised SOP to FEMA Disaster Assistance and Disaster Operations staff in headquarters, FEMA's 10 regions, and joint field offices. During its recovery operations, FEMA has several staffing options to augment its permanent staff. FEMA officials explained that staff from FEMA's reserve corps, whose language capabilities are recorded in an automated deployment database, can be temporarily assigned to recovery operations. When FEMA lacks enough permanent and temporary staff with the appropriate foreign language skills, it hires individuals from within the affected area to fill unmet multilingual needs. 
For example, in 2008, FEMA used local hires who spoke Vietnamese in the recovery operations for Hurricanes Gustav and Ike in Galveston and Austin, Texas. FEMA officials stated that these local hires are especially useful during recovery efforts because they have relevant language capabilities as well as knowledge of the disaster area and established relationships with the affected communities. Additionally, when disaster assistance employees and local hires are unavailable, FEMA can use contractors to provide translation and interpretation services. To ensure that the agency has the capacity to handle different levels of disasters, an official stated that FEMA is awarding a 4-year contract of up to $9.9 million to support language access and related activities. DOD has taken some steps to transform its language and regional proficiency capabilities, but additional actions are needed to guide its efforts and provide the information it needs to assess gaps in capabilities and assess related risks. In June 2009, we reported that DOD had designated senior language authorities at the Department-wide level, and in the military services as well as other components. It had also established a governance structure and a Defense Language Transformation Roadmap. At that time, the military services either had developed or were in the process of developing strategies and programs to improve language and regional proficiency. While these steps moved the Department in a positive direction, we concluded that some key elements were still missing. For example, while the Roadmap contained goals and objectives, not all objectives were measurable and linkages between these goals and DOD's funding priorities were unclear. Furthermore, DOD had not identified the total cost of its transformation efforts. Additionally, we reported that DOD had developed an inventory of its language capabilities. 
In contrast, it did not have an inventory of its regional proficiency capabilities due to the lack of an agreed-upon way to assess and validate these skills. DOD also lacked a standard, transparent, and validated methodology to aid its components in identifying language and regional proficiency requirements. In the absence of such a methodology, components used different approaches to develop requirements and their estimates varied widely. Therefore, we recommended that DOD (1) develop a comprehensive strategic plan for its language and regional proficiency transformation, (2) establish a mechanism to assess the regional proficiency skills of its military and civilian personnel, and (3) develop a methodology to identify its language and regional proficiency requirements. At the time, DOD generally agreed with our recommendations and responded that it had related actions underway. Based on recent discussions with DOD officials, we understand that these actions are still in various stages. Specifically, DOD officials stated that the Department has a draft strategic plan currently undergoing final review and approval. We understand from officials that this plan includes goals, objectives, and a linkage between goals and DOD's funding priorities, and that an implementation plan with metrics for measuring progress will be published at a later date. DOD officials also stated that they are working to determine a suitable approach to measuring regional proficiency because it is more difficult than originally expected. Lastly, DOD officials stated that, while DOD has completed the assessments intended to produce a standardized methodology to help geographic commanders identify language and regional proficiency requirements, the standardized methodology has not yet been approved. In recent congressional testimony, DOD officials stated the standardized methodology would be implemented later this year.
Without a comprehensive strategic plan and until a validated methodology to identify gaps in capabilities is implemented, it will be difficult for DOD to assess risk, guide the military services as they develop their approaches to language and regional proficiency transformation, and make informed investment decisions. Furthermore, it will be difficult for DOD and Congress to assess progress toward a successful transformation. In September 2009, we reported that State continued to face persistent, notable gaps in its foreign language capabilities, which could hinder U.S. overseas operations. We reported that State had undertaken a number of initiatives to meet its foreign language requirements, including an annual review process to determine the number of positions requiring a foreign language, providing language training, recruiting staff with skills in certain languages, and offering incentive pay to officers to continue learning and maintaining language skills. However, we noted that these efforts had not closed the persistent gaps and reflected, in part, a lack of a comprehensive, strategic approach. Although State officials said that the Department's plan for meeting its foreign language requirements is spread throughout a number of documents that address these needs, these documents were not linked to each other and did not contain measurable goals, objectives, or milestones for reducing the foreign language gaps. Because these gaps have persisted over several years despite staffing increases, a more comprehensive, strategic approach would help State to more effectively guide its efforts and assess its progress in meeting its foreign language requirements. We therefore recommended that the Secretary of State develop a comprehensive strategic plan with measurable goals, objectives, milestones, and feedback mechanisms that links all of State's efforts to meet its foreign language requirements. 
We also recommended that the Secretary of State revise the Department's methodology for measuring and reporting on the extent that positions are filled with officers who meet the language requirements of the position. State generally agreed with our findings, conclusions, and recommendations and described several initiatives to address these recommendations. For example, State convened an inter-bureau language working group to focus on and develop an action plan to address our recommendations. Since our report, State has revised its methodology for measuring and reporting on the extent that positions are filled with officers who meet the language requirements of the position. State officials also told us that they have begun developing a more strategic approach for addressing foreign language shortfalls, but have not developed a strategic plan with measurable goals, objectives, milestones, and feedback mechanisms. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the committee may have. For questions about this statement, please contact David C. Maurer at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are William W. Crocker III; Yvette Gutierrez-Thomas; Wendy Dye; Lara Miklozek; Linda Miller; Geoffrey Hamilton; Jess Ford; Godwin Agbara; Laverne Tharpes; Robert Ball; Robert Goldenkoff; Steven Lozano; Kisha Clark; Sharon Pickup; Matthew Ullengren; Gabrielle Carrington; and Patty Lentini.

Military Training: Continued Actions Needed to Guide DOD's Efforts to Improve Language Skills and Regional Proficiency. GAO-10-879T. Washington, D.C.: June 29, 2010.

Department of Homeland Security: DHS Needs to Comprehensively Assess Its Foreign Language Needs and Capabilities, and Identify Shortfalls. GAO-10-714. Washington, D.C.: June 22, 2010.
Language Access: Selected Agencies Can Improve Services to Limited English Proficient Persons. GAO-10-91. Washington, D.C.: April 26, 2010.

Iraq: Iraqi Refugees and Special Immigrant Visa Holders Face Challenges Resettling in the United States and Obtaining U.S. Government Employment. GAO-10-274. Washington, D.C.: March 9, 2010.

State Department: Challenges Facing the Bureau of Diplomatic Security. GAO-10-290T. Washington, D.C.: December 9, 2009.

State Department: Challenges Facing the Bureau of Diplomatic Security. GAO-10-156. Washington, D.C.: November 12, 2009.

Department of State: Persistent Staffing and Foreign Language Gaps Compromise Diplomatic Readiness. GAO-09-1046T. Washington, D.C.: September 24, 2009.

Department of State: Comprehensive Plan Needed to Address Persistent Foreign Language Shortfalls. GAO-09-955. Washington, D.C.: September 17, 2009.

Department of State: Additional Steps Needed to Address Continuing Staffing and Experience Gaps at Hardship Posts. GAO-09-874. Washington, D.C.: September 17, 2009.

Military Training: DOD Needs a Strategic Plan and Better Inventory and Requirements Data to Guide Development of Language Skills and Regional Proficiency. GAO-09-568. Washington, D.C.: June 19, 2009.

Defense Management: Preliminary Observations on DOD's Plans for Developing Language and Cultural Awareness Capabilities. GAO-09-176R. Washington, D.C.: November 25, 2008.

State Department: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-07-1154T. Washington, D.C.: August 1, 2007.

U.S. Public Diplomacy: Strategic Planning Efforts Have Improved, but Agencies Face Significant Implementation Challenges. GAO-07-795T. Washington, D.C.: April 26, 2007.

Department of State: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-06-894. Washington, D.C.: August 4, 2006.
Overseas Staffing: Rightsizing Approaches Slowly Taking Hold but More Action Needed to Coordinate and Carry Out Efforts. GAO-06-737. Washington, D.C.: June 30, 2006.

U.S. Public Diplomacy: State Department Efforts to Engage Muslim Audiences Lack Certain Communication Elements and Face Significant Challenges. GAO-06-535. Washington, D.C.: May 3, 2006.

Border Security: Strengthened Visa Process Would Benefit from Improvements in Staffing and Information Sharing. GAO-05-859. Washington, D.C.: September 13, 2005.

State Department: Targets for Hiring, Filling Vacancies Overseas Being Met, but Gaps Remain in Hard-to-Learn Languages. GAO-04-139. Washington, D.C.: November 19, 2003.

Foreign Affairs: Effective Stewardship of Resources Essential to Efficient Operations at State Department, USAID. GAO-03-1009T. Washington, D.C.: September 4, 2003.

State Department: Staffing Shortfalls and Ineffective Assignment System Compromise Diplomatic Readiness at Hardship Posts. GAO-02-626. Washington, D.C.: June 18, 2002.

Foreign Languages: Workforce Planning Could Help Address Staffing and Proficiency Shortfalls. GAO-02-514T. Washington, D.C.: March 12, 2002.

Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Foreign language skills are increasingly important to the success of diplomatic efforts; military, counterterrorism, law enforcement, and intelligence missions; and efforts to ensure access to federal programs and services for Limited English Proficient (LEP) populations within the United States. GAO has issued reports evaluating foreign language capabilities at the Department of Homeland Security (DHS), the Department of Defense (DOD), and the State Department (State). This testimony is based on these reports, issued from June 2009 through June 2010, and addresses the extent to which (1) DHS has assessed its foreign language needs and existing capabilities, identified any potential shortfalls, and developed programs and activities to address potential shortfalls; (2) the Federal Emergency Management Agency (FEMA) has conducted a needs assessment to help ensure access to its services for LEP persons; and (3) DOD and State have developed comprehensive approaches to address their foreign language capability challenges. In June 2010, we reported that DHS had taken limited actions to assess its foreign language needs and existing capabilities, and to identify potential shortfalls. For example, while two of three DHS components included in GAO's review had conducted foreign language assessments, these assessments were not comprehensive, as GAO's prior work on strategic workforce planning recommends. In addition, while all three DHS components GAO reviewed had various lists of employees with foreign language capabilities, DHS had no systematic method for assessing its existing capabilities. In addition, DHS and its components had not taken actions to identify potential foreign language shortfalls. Further, DHS and its components established a variety of foreign language programs and activities, but had not assessed the extent to which these programs and activities address potential shortfalls.
The Department's ability to use them to address potential shortfalls varied, and GAO recommended that DHS comprehensively assess its foreign language needs and capabilities, and any resulting shortfalls; and ensure these assessments are incorporated into future strategic planning. DHS generally concurred with these recommendations, and officials stated that the Department has actions planned to address them. In April 2010, we reported that FEMA had developed a national needs assessment to identify its LEP customer base and how frequently it interacted with LEP persons. Using this assessment, FEMA officials reported that the agency had identified 13 of the most frequently encountered languages spoken by LEP communities. Locally, in response to a disaster, FEMA conducts a needs assessment by collecting information from the U.S. Census Bureau and data from local sources to help determine the amount of funding required to ensure proper communication with affected LEP communities. In June 2009, GAO reported that DOD had taken steps to transform its language and regional proficiency capabilities, but it had not developed a comprehensive strategic plan to guide its efforts and lacked a complete inventory and validated requirements to identify gaps and assess related risks. GAO recommended that DOD develop a comprehensive strategic plan for its language and regional proficiency efforts, establish a mechanism to assess the regional proficiency skills of its personnel, and develop a methodology to identify its language and regional proficiency requirements. DOD concurred with these recommendations; however, as of June 2010, officials stated that related actions are underway, but have not been completed. Furthermore, GAO reported in September 2009 that State's efforts to meet its foreign language requirements had yielded some results but had not closed persistent gaps in foreign-language proficient staff and reflected, in part, a lack of a comprehensive, strategic approach.
GAO recommended that State develop a comprehensive strategic plan with measurable goals, objectives, milestones, and feedback mechanisms that links all of State's efforts to meet its foreign language requirements. State generally agreed with GAO's recommendations and is working to address them. GAO is not making any new recommendations; however, GAO made recommendations in prior reports to help DHS, DOD, and State better assess their foreign language capabilities and address potential shortfalls. All three agencies generally concurred with GAO's recommendations and have taken some actions.
The 50 states and the District of Columbia spent an estimated $50 billion on special education for children from birth through age 21 in school year 1999-00. About 12 percent of this amount came from federal funds, specifically IDEA grants (10 percent) and Medicaid funds (2 percent). (See fig. 1.) In addition to federal funds, other sources are used to support the provision of special education and related services for children with disabilities, such as state general and special education funds, local funds, and private insurance. Under IDEA, three grants can fund services to children under age 6.
* School-age Grants provide money to states to help them serve all eligible children, ranging in age from 3 through 21.
* Preschool Grants provide money to states to help serve 3 through 5-year-olds with disabilities and require states to have policies and procedures that assure a free appropriate public education for all 3 through 5-year-olds with disabilities as a condition for receiving other IDEA funds for this age range.
* Infant Grants provide money to states to serve children under age 3 who have developmental delays or a condition that will probably result in a developmental delay, or at a state's discretion, who are otherwise at risk of developmental delays. Unlike the other two grants, Infant Grants provide services to both children and their families, primarily in settings that are not school-based.
(See table 1.) Program overlap occurs when programs have the same goals, the same activities or strategies to achieve them, or the same targeted recipients. As noted in a House Government Reform and Oversight Committee report, "A certain amount of redundancy is understandable and can be beneficial if it occurs by design as part of a management strategy to foster competition, provide better service delivery to customer groups, or provide emergency backup."
Because both School-age and Preschool grants can be used to serve the same target recipients, children with disabilities ages 3 through 5, they can be characterized as overlapping. However, this overlap does not necessarily lead to duplication of services, which involves providing identical services to identical target groups. Education allocates funds from School-age, Preschool, and Infant Grants to all states, the District of Columbia, and Puerto Rico, based on federal formulas. At the state level, School-age and Preschool Grants are administered by the state educational agency (SEA), and Infant Grants are administered by a designated lead agency, most frequently the state's department of health or human services. A fixed portion of School-age and Preschool grants may be retained for state-level activities and program administration, although the majority of funds from these grants are passed through SEAs to local educational agencies (LEAs), generally school districts, according to a federally mandated formula. Infant Grants may be distributed to local public and private agencies by designated state lead agencies using state-developed criteria. Education's OSEP monitors activities funded by these grants and the extent to which states comply with IDEA in a process known as continuous improvement monitoring. Several states are selected each year for in-depth monitoring, including on-site data collection, based on various factors, such as when they were last monitored, information from grant applications, and information on each state's status in achieving improved results and compliance. School-age and Preschool Grants share the same goal, performance objectives, and performance measures; fund the same range of services; and have similar eligibility requirements except for the age-range served, while Infant Grants differ from these grants in almost all respects. (See table 2.) 
School-age and Preschool Grants share the common goal of improving results for children with disabilities by assisting state and local educational agencies to provide children with disabilities access to high-quality education that will help them meet challenging standards and prepare them for employment and independent living. The two programs use the same set of performance objectives and performance measures. For example, one objective of both is that preschool children with disabilities receive services that prepare them to enter school ready to learn. As a performance measure for this objective, both programs use an increase in the percentage of preschool children receiving special education services who have readiness skills when they reach kindergarten. These programs also pay for the same range of special education and related services, such as physical and occupational therapy and technology to assist the disabled, such as voice-activated software. Special education and related services are generally provided at school. To be eligible for these services, children must be classified by the state as having a disability and as a result of the disability need special education and related services. The key distinction between the two grants is that School-age Grants serve children ages 3 through 21, whereas Preschool Grants serve only children ages 3 through 5. The goal, performance objectives, performance measures, eligibility criteria, and types of services allowed for Infant Grants differ from those for School-age and Preschool Grants. This grant is designed to assist states in developing and implementing a statewide, comprehensive system to provide early intervention services for infants and toddlers with disabilities (and at a state's discretion those who are at risk of experiencing developmental delays) and their families.
The goal of Infant Grants is to enhance children's functional development and increase families' capacity to support their children's development using a comprehensive system of early intervention services, including health services, such as tube feeding or intermittent catheterization, and family training. Objectives are broad and each has performance measures. For example, an increase in the percentage of all children under age 3 receiving age-appropriate services in nonschool settings is a performance measure for the objective of providing services at home or in daycare, when appropriate. To be eligible for services under Infant Grants, children must be under age 3 and have a developmental delay or the potential to develop one. Because of the age-specific developmental needs of infants and toddlers, health and family services provided under Infant Grants are more comprehensive than under the other two grants. These services are provided primarily in nonschool settings, generally in the home or at a day-care site. All 50 states, the District of Columbia, and Puerto Rico receive grants from each of the three programs, which they distribute to various local public and private agencies. Whether a local agency receives funds from any one grant depends on whether it is serving the relevant age group. For example, the Roanoke Interagency Coordinating Council in Virginia, which serves children from birth through age 2, receives Infant Grants but not School-age and Preschool Grants. However, many LEAs receive more than one grant. For example, the Mapleton Local School District in Ohio received School-age and Preschool Grants in school year 2001-2002, while the South Washington County School District in Minnesota received funds from all three programs. Many states use more than one grant to fund the same range of services for 3 through 5 year-olds.
Officials in 18 of the states we contacted told us they may use School-age Grants to serve 3 through 5 year-olds--the same group of children served by Preschool Grants. Only one of the states we contacted, Alaska, does not permit School-age Grants to be used to pay for services for preschoolers. Also, in a survey of SEAs conducted by the National Early Childhood Technical Assistance System, 37 SEAs reported that they use funds from School-age Grants to support the provision of special education and related services for preschool children with disabilities. Since they are not required to track such information, none of the 19 states we contacted were able to tell us the percentage of School-age Grant funds they used to provide services for children aged 3 through 5, although officials in several states said that the amount was small. Similarly, 18 of the 19 states could not provide us with the percentage of children aged 3 through 5 who received services provided with School-age Grants. Many states could not report the extent to which School-age Grants fund services for 3 through 5 year-olds because of how expenditures are tracked. The states we contacted reported they track expenditures by budget functions, such as salaries or transportation, and not by individual services provided or ages of children receiving services. These states do not require LEAs to report expenditure data in a way that would allow them to determine the extent to which School-age Grants fund services for 3 through 5 year-olds, nor does IDEA require it. Education requires only that states report the number of children ages 3 through 5 collectively receiving special education and related services under School-age and Preschool Grants. IDEA does not require specific information about how many children are served under each.
The effectiveness of these grant programs has not yet been evaluated, in part, because federal special education funds are only one source used to pay for services for this age group. Rather than functioning as operating programs, these grants add to the stream of funds supporting on-going state and local programs. Therefore, it is difficult to isolate the impact of federal funding for special education from the impact of other funding sources. Instead, studies have tended to focus on how IDEA is being implemented and on the overall progress of children who receive special education services, without directly attributing these outcomes to the receipt of particular services. In the 1997 amendments to IDEA, Congress mandated a full, national assessment to determine the progress in the implementation of IDEA, including the effectiveness of state and local efforts to provide a free public education appropriate for the needs of students with disabilities and to provide early intervention services to infants and toddlers. In response to this mandate, OSEP has contracted with several research organizations to complete a number of studies. None of these studies will attempt to isolate the contribution of IDEA grants from the effects of state and local efforts to improve outcomes for young children; Congress did not prescribe such a stringent assessment of program effectiveness. Instead, three of these studies contracted by OSEP are outcome evaluations, focused on describing the short-term and long-term outcomes for young children enrolled in programs supported, in part, by these grants. The National Early Intervention Longitudinal Study will follow children entering early intervention services supported by Infant Grants. The Pre-Elementary Education Longitudinal Study will follow children who received preschool special education services through their experience in preschool and early elementary school. 
The Special Education Elementary Longitudinal Study is documenting the experience of children enrolled in special education as they move from elementary school to middle and high school. The results from these studies will not be available for several years, although OSEP has issued initial reports describing the demographic characteristics of some study participants, their families, and schools. In addition to these evaluations describing children's outcomes, OSEP has contracted for a study examining how these programs are implemented. The Special Education Expenditure Project is intended to answer a variety of questions on the characteristics of expenditures on programs and services for preschool special education students. The first in an anticipated series of reports derived from this project was issued in March 2002 and provides an overview of special education spending in school year 1999-2000 in the 50 states and the District of Columbia. This report presents aggregate data on how much is spent nationally and how these funds are allocated among broad program areas. In particular, the report notes that preschool programs account for about 9 percent of special education expenditures overall and that 8 percent ($4.1 billion) was spent on preschool programs operated within public schools and 1 percent ($263 million) was spent on preschool programs operated outside public schools. In addition to the efforts now underway to evaluate outcomes and expenditures on services for young children enrolled in programs supported by IDEA grants, we found studies from two states--Delaware and Pennsylvania--on the outcomes of children in special education preschool programs. The Delaware study found that nearly half of the children who participated in Preschool Grants-supported programs in Delaware between 1997 and 1999 were able to transition into the regular education program by the time they were 6 and 7 years-old.
The Delaware study also found that children who participated in preschool special education had significantly higher grades in kindergarten and first grade than children with disabilities who did not and that the gap in grades grew between kindergarten and first grade. The researchers responsible for the Delaware study attributed the higher grades to the children's participation in programs supported by Preschool Grants. The Pennsylvania study found that fewer than half of the preschoolers who participated in early intervention services in Pennsylvania between 1991 and 1995 were participating in school-age special education programs between 1996 and 1997, leading researchers to suggest that preschool early intervention services may have helped reduce the severity of the developmental delay for some participating children. We found some opportunities for better local coordination between programs funded with Infant Grants and preschool programs, but we were unable to determine the extent to which overlap between School-age and Preschool Grants may result in service duplication. We found evidence of problems with the transition of 3 year-olds between local programs funded with Infant Grants and preschool programs in 8 of 13 states for which OSEP issued monitoring reports in the last 2 years. While we could not ascertain whether overlap between School-age and Preschool Grants results in service duplication, program officials indicated that the overlap between School-age and Preschool Grants does not result in administrative inefficiencies. We found some lack of coordination between local programs funded by Infant Grants and preschool programs, which can be funded by Preschool or School-age Grants, in several states. Service gaps between programs funded by Infant Grants and preschool programs can occur for children who turn 3 before the beginning of the school year in which they can start attending preschool. 
IDEA has addressed this problem by allowing Infant Grants to be used after the third birthday to pay for a free appropriate public education until the beginning of the next school year. Also, federal rules require that states develop written transition plans for each child to show how the transition between the two programs will be managed, and Education monitors and enforces these rules. Specifically, the designated lead agency for the Infant Grants must discuss future placements with parents, provide them with related training, and have procedures in place to help the child adjust to a new setting. To further coordinate this transition, the Infant Grants statute requires that the lead agency, with the approval of the family, convene a conference among the Infant Grants lead agency, the family, and the LEA at least 90 days before the child is eligible for preschool services. In addition to the Infant Grants transition requirements, the School-age Grants statute requires LEAs to participate in transition planning conferences arranged by the lead agency. Children should begin receiving services no later than their third birthday. In 2000 and 2001, OSEP identified problems in the transition from programs funded by Infant Grants into preschool programs in 8 of the 13 states that it monitored for compliance with IDEA. OSEP monitoring reports cited a range of problems related to transitions from programs funded with Infant Grants into preschool that resulted in gaps in services for preschoolers. Some of the problems cited include: not holding transition planning conferences within 90 days of a child's third birthday (6 states); the failure of local educational agency representatives to attend these conferences, despite being invited to do so (3 states); and providing inadequate information about the transition process to parents (2 states).
The lack of adequate information left some parents confused and unwilling to cooperate with service coordinators' requests and forced other parents to seek out preschool services on their own. In response to these problems, OSEP has required corrective action plans for each of the 8 states to address areas of noncompliance with IDEA related to the transition from programs funded by Infant Grants into preschool programs. In addition to the problems cited in OSEP's monitoring reports, we found some further evidence of problems when children leave Infant Grants programs in our site visit to Ohio. One state official told us that children and families may not receive needed services when they transition from the state's Infant Grants programs into preschool because preschool programs have more stringent eligibility criteria and lack the family focus of early intervention programs funded by Infant Grants. Also, a representative of the Ohio Coalition for the Education of Children with Disabilities told us that some school districts did not understand that they are legally required to provide services for preschool children with disabilities. These problems were not evident in the other states we visited. Officials in Maine, Minnesota, and Virginia did not report a lack of coordination between programs funded by Infant Grants and preschool programs in those states. Transition issues for these programs appear to have been eliminated in Maine and Minnesota because each state operates a single program to provide early intervention and special education services for children from birth through age 5. In those states, programs funded with Infant Grants and preschool programs are both operated through a single agency, rather than through two agencies, as is the case in most states.
In Virginia, transition issues have been minimized because the state requires public education for children with disabilities beginning at age 2--a year earlier than is required under federal law. More than two-thirds of children receiving early intervention services in Virginia transition into public preschool programs as soon as they become eligible. Because of overlap between School-age and Preschool Grants, which can both serve children ages 3 through 5, there is some potential for service duplication if the two programs do not coordinate at the local level. We were unable to further evaluate the extent to which coordination problems may exist because no data were available that would allow us to do so: states are not required to distinguish between funding sources when reporting the ages of children served by School-age and Preschool Grants. Officials from the 4 states where we conducted comprehensive interviews did not report any coordination problems for these programs and were not able to provide evidence about whether or not service duplication occurs. At the federal level, there is no administrative overlap between School-age and Preschool Grants because Education already administers these grants as if they were one program. For example, Education requires a single application for both programs and applies a single set of goals, performance objectives, performance measures, and reporting requirements to both programs. Overlap between School-age and Preschool Grants could be addressed in a number of ways, according to our analysis. Narrowing the age range served by School-age Grants to ages 6 through 21 or consolidating the two grants into a single grant, either with or without a reserved amount of funds for preschool services, could eliminate overlap between these programs. However, advantages and disadvantages exist for each option. (See table 3.)
The first three options would ensure that children ages 3 through 5 are eligible for services under only one program. The first option, narrowing the age range for School-age Grants, has the advantage of preserving targeted funds to serve preschoolers by continuing Preschool Grants. However, under this option, states would lose the flexibility that they now have to devote a greater share of federal special education funds to serving preschoolers, if that is their priority. The second option, consolidating School-age and Preschool Grants into a single grant, has several advantages. It would eliminate potential overlap for 3 through 5 year-olds, make it easier to track funds spent on preschoolers, and may increase the ability of local school districts to target federal special education funds to children of any age, depending on local needs. However, the last advantage could also be seen as a disadvantage, potentially reducing services for preschoolers in those local school districts where they are not considered to be as high a priority as services for older children. State administrators and parents told us they are concerned this option would eliminate safeguards for targeted preschool funding and may lead to a reduction in services for children ages 3 through 5, although states have been required to provide special education services to children ages 3 through 5 since 1992 in order to be eligible for Preschool Grants and other IDEA funds targeted to children ages 3 through 5 with disabilities. State officials told us that serving preschoolers, whether in special education or otherwise, is not a priority for all local school districts, and expenditures of funds for 3 through 5 year-olds would have to compete with the needs of older special education students. School districts generally do not provide regular education services to children ages 3 through 4.
Moreover, as of November 2001, 39 states require that kindergarten be offered to 5 year-olds, but only 12 require pupil attendance. The third option, consolidating School-age and Preschool Grants into a single grant but reserving some funds for preschool services, also would eliminate potential overlap of IDEA grants for 3 through 5 year-olds. In addition, this option has the advantage of preserving minimum spending levels for preschoolers by including a set-aside provision in the grant legislation. Depending on how the legislation is written, a potential disadvantage to this option would be the loss of some flexibility in how states may allocate funds for preschoolers and other ages of children. This would occur if the legislation prescribed fixed levels of spending for both age ranges. Nevertheless, some of the officials and other interested parties that we contacted indicated that, if the programs were to be consolidated, they would prefer including a set-aside provision. Although there are potential advantages to eliminating program overlap, overall, changes to the current structure of federal grants for special education would probably not result in a significant reduction in administrative burden at the state and local levels. Retaining the current structure would preserve targeted funds for preschoolers and not introduce different administrative requirements, but the possibility of service duplication would continue. Many of the state and local administrators with whom we spoke indicated they do not see the need for any changes in the current structure of these grants. In addition, Education officials have noted a growing level of support for early intervention programs among lawmakers, program administrators, and child advocates, which, in their opinion, justifies maintaining separate grants to support these programs.
The Department of Education has recognized that there is some potential for a gap in services for 3 year-olds as they move from programs funded by Infant Grants into preschool programs and requires that states and local agencies minimize these service gaps by following federal requirements for program coordination and transition planning. However, we have seen evidence that in at least a few localities, states have failed to ensure that federal regulations are being followed. Education has addressed known transition problems by requiring these states to develop and implement corrective action plans that will bring them into compliance with IDEA. Although there is overlap between School-age and Preschool Grants because both allow for services to 3 through 5 year-olds, it is not clear whether this overlap presents problems of service duplication or unnecessary administrative burden that would indicate the need to change how the grants are structured. Program experts and federal, state, and local administrators that we interviewed did not report any problems, and there were no data available that would allow us to determine the extent to which program overlap resulted in coordination or administrative problems. We provided the Department of Education an opportunity to comment on a draft of this report and received technical comments, which we have incorporated into this report as appropriate. We are sending copies of this report to the Chairman of your subcommittee, the Secretary of Education, and appropriate congressional committees. Copies will also be made available to other interested parties upon request. If you have questions regarding this report, please call me at (202) 512-7215 or Eleanor Johnson, assistant director, at (202) 512-7209. Other contributors are listed in the appendix. In addition to those above, Elspeth Grindstaff, Patrick DiBattista, and Jon Barker made major contributions to this report.
In fiscal year 2001, the federal government spent $7 billion on the following three special education grant programs: Special Education Grants to States (School-age Grants), Special Education Preschool Grants (Preschool Grants), and Special Education Grants for Infants and Families with Disabilities (Infant Grants). School-age and Preschool Grants are similar, except for the age ranges served, while Infant Grants differ in goals, performance objectives, performance measures, eligibility, and services. The key distinction between School-age and Preschool Grants is that School-age Grants serve children ages three through 21, whereas Preschool Grants serve only children ages three through five. States receive funds from all three grants, and some states report they use both School-age and Preschool funds to provide the same range of services to children ages three through five. Although states receive funds from all three grants, local agencies may receive funds from only one grant, or from all three. Eighteen of the 19 states GAO reviewed reported that the range of services they provide to children ages three through five with School-age Grants is the same as the range they provide with Preschool Grants. Evaluations show that half the children who received preschool services (mainly speech and language therapy) no longer needed them on reaching school age. Consolidating the two grants would eliminate coordination problems, but it is unclear whether program efficiency would increase. At the federal level, Education is already administering School-age and Preschool Grants as one program. State and local officials said that consolidation would not significantly reduce administrative burden.
The financing of terrorism is the financial support, in any form, of terrorism or of those who encourage, plan, or engage in it. Terrorist financing may derive from licit activities, such as fundraising by charities, or from illicit activities, such as selling counterfeit goods, contraband cigarettes, and illegal drugs. Disguising the source of terrorist financing, whether licit or illicit, is important to terrorist financiers: if the source can be concealed, it remains available for future terrorist financing activities. Some international experts on money laundering find that there is little difference in the methods used by criminal organizations or terrorist groups to conceal their proceeds by moving them through national and international financial systems. FATF, an intergovernmental body, sets internationally recognized standards for developing anti-money-laundering and counter-terrorism-financing regimes and assesses countries' abilities to meet these standards. To strengthen anti-money-laundering and counter-terrorism-financing regimes worldwide, international entities such as the UN, FATF, World Bank, and IMF, as well as the U.S. government, agree that each country should implement practices and adopt laws that are consistent with international standards. The U.S. government has worked with international donors and organizations--for example, the United Kingdom, Australia, Japan, the European Union, FATF, UN, the Organization of American States, the Asian Development Bank, IMF, and the World Bank--to build counter-terrorism-financing regimes in vulnerable countries. U.S. offices and bureaus--primarily within the Departments of State, the Treasury, Justice, and Homeland Security--and the federal financial regulators provide training and technical assistance, chiefly funded by State and Treasury, to countries deemed vulnerable to terrorist financing.
One of TFWG's functions is to prioritize the delivery of such assistance to countries that it deems most vulnerable. To identify priority countries, TFWG considers intelligence community analysis of countries' vulnerabilities to terrorist financing, importance to U.S. security, and capacity to absorb U.S. assistance. NSC guidance for TFWG states that delivery of assistance to other vulnerable countries--that is, those that have not been designated as priority--may proceed so long as it is possible without adversely affecting the delivery of assistance to priority countries. Other vulnerable countries receive counter-terrorism-financing training and technical assistance through other U.S. government programs as well as through TFWG. (See app. 1 for TFWG membership and process.) Although the U.S. government provides a range of training and technical assistance to countries it deems vulnerable to terrorist financing, it lacks an integrated strategy to coordinate the delivery of this assistance. Specifically, the effort lacks key stakeholder acceptance of roles and practices, a strategic alignment of resources with needs, and a process to measure results--three elements that previous GAO work has identified as critical to effective strategic planning within and across agencies. GAO recommended that the Secretaries of State and the Treasury implement an integrated strategic plan and a Memorandum of Agreement for the delivery of training and technical assistance. According to March 2006 correspondence from State and Treasury, the departments have taken several steps to enhance interagency coordination. The training and technical assistance that U.S. 
agencies provide to vulnerable countries are intended to help the countries develop the five elements that, according to State, are needed for an effective anti-money-laundering and counter-terrorism-financing regime: a legal framework, a financial regulatory system, a financial intelligence unit (FIU), law enforcement capabilities, and judicial and prosecutorial processes. The training and assistance are offered through courses, presentations at international conferences, the use of overseas regional U.S. law enforcement academies or U.S.-based schools, and the placement of intermittent or long-term resident advisors. According to State officials, at the time of our review, TFWG had coordinated the delivery of training and technical assistance in at least one of these five elements to more than 20 priority countries. U.S. agencies involved in providing counter-terrorism-financing training and technical assistance disagree both about agencies' roles relating to the coordination of the training and assistance efforts and about training and assistance procedures and practices. Consequently, the overall effort lacks effective leadership, resulting in less than optimal delivery of training and technical assistance to vulnerable countries. State and Treasury disagree regarding State's role in coordinating the training and technical assistance. According to State, its Office of the Coordinator for Counterterrorism is charged with directing, managing, and coordinating all U.S. agencies' efforts to develop and provide counter-terrorism-financing programs, including, but not limited to, those in priority countries. Treasury, a key stakeholder, asserts that there are numerous other efforts outside State's purview and that State's role is limited to coordinating, as chair of TFWG, the provision of such assistance in priority countries.
In addition, senior Treasury officials told us that they strongly disagree with the degree of control State asserts over TFWG decisions and said that State creates obstacles rather than coordinating efforts. Officials from Justice, which provides training and technical assistance and receives funding from State, told us that they respect State's role as the TFWG chair and coordinator and said that all counter-terrorism-financing training and technical assistance efforts should be brought under the TFWG decision-making process. While supportive of State's position, Justice's statement demonstrates that State's role lacks clear definition and recognition in practice. In addition, State and Treasury officials disagree about procedures and practices for delivering the training and technical assistance. State cited NSC guidance and an unclassified State document focusing on TFWG as providing procedures and practices for delivering training and technical assistance to all countries. Treasury officials told us that the procedures and practices defined by NSC were pertinent only to the TFWG priority countries and that TFWG has no formal mandate or process to provide technical assistance to non-priority countries. Moreover, Justice officials indicated that differences in the procedures and practices for delivering training and technical assistance to priority countries versus those for other vulnerable countries had created problems. State and Treasury officials cited numerous examples of their disagreements on procedures and practices. For example: According to Treasury officials, funding provided by Treasury's Office of Technical Assistance (OTA) should primarily support intermittent and long-term resident advisors, who are U.S. contractors. According to State officials, OTA should instead supplement State's funding for counter-terrorism-financing training and technical assistance, which primarily funds current employees of other U.S. agencies.
According to OTA officials, their contractors provide assistance in drafting counter-terrorism-financing and anti-money-laundering laws in non-priority countries, and OTA provides the drafts to Justice and other U.S. agencies for review and comment. State officials cited NSC guidance that current Justice employees should be primarily responsible for working with foreign countries to assist in drafting counter-terrorism-financing and anti-money-laundering laws and voiced strong resistance to the use of contractors. Justice cited two examples in which contractors' work resulted in laws that did not meet FATF standards. According to OTA officials, the host country itself is ultimately responsible for final passage of a law that meets international standards. State officials said that OTA's use of confidentiality agreements between contractors and the foreign officials they advise had impeded U.S. interagency coordination in one country and that the continued practice could present future challenges. However, Treasury officials said that the incident was an isolated case involving a contract problem and that procedural steps have been taken to ensure the problem is not repeated. According to TFWG procedures for priority countries, if an assessment trip is determined to be necessary, State is to lead and determine the composition of the teams and set the travel dates. However, this procedure becomes complicated when a vulnerable country is designated a priority country. For example, in November 2004, Treasury conducted an OTA financial assessment in a vulnerable country and subsequently reached agreement with the country's central bank minister to install a resident advisor to set up an FIU.
However, after TFWG had changed the country's status to priority, State officials, in May 2005, denied clearance for Treasury officials to visit the country to arrange for the placement of a resident advisor; according to State TFWG officials, State delayed the officials' visit until a TFWG assessment could be completed. At our review's conclusion in July 2005, Treasury's work had been delayed by 2.5 months. However, the U.S. embassy requested that Treasury proceed with its visit and TFWG delay its assessment. The U.S. government, including TFWG, has not strategically aligned its resources with its mission to deliver counter-terrorism-financing training and technical assistance. The U.S. government has no clear record of the budgetary resources available for counter-terrorism-financing assistance. Further, the government has not systematically assessed the suitability and availability of U.S. human capital resources or the potential availability of international resources. As a result, decision makers do not know the full range of resources available to meet the needs and address the related risks they have identified in priority countries and to determine the best match of remaining resources to other vulnerable countries' needs. State and Treasury do not have clear records of the funds that they allocate for counter-terrorism-financing training and technical assistance. Each agency receives separate appropriations that it can use to fund training and technical assistance provided by themselves, other agencies, or contractors. State primarily transmits its training and technical assistance funds to other agencies, while Treasury primarily employs short- and long-term advisors through contracts. However, because funding for counter-terrorism-financing training and assistance is mingled with funding given to the agencies for anti-money-laundering training and assistance and other programs, it is difficult for U.S. 
government decision-makers to determine the actual amount allocated to these efforts. State officials told us that funding for State counter-terrorism-financing training and technical assistance programs derives from two primary sources: Non-Proliferation, Anti-Terrorism, Demining, and Related Programs. State's Office of the Coordinator for Counterterrorism uses funding from this account to provide counter-terrorism-financing training and technical assistance to TFWG countries. Our analysis of State records showed that budget authority for the account included $17.5 million for counter-terrorism-financing training and technical assistance for fiscal years 2002-2005. International Narcotics Control and Law Enforcement. State's Bureau of International Narcotics Control and Law Enforcement uses funding from this account to provide counter-terrorism-financing and anti-money-laundering training and technical assistance to a wide range of countries, including seven priority countries, during fiscal years 2002-2005, as well as to provide general support to multilateral and regional programs. Our analysis of State records shows that budget authority for this account included about $9.3 million for anti-money-laundering assistance, counter-terrorism-financing training and assistance, and related multilateral and regional activities for fiscal years 2002-2005. State officials also told us that other State bureaus and offices provide counter-terrorism-financing and anti-money-laundering training and technical assistance (e.g., single-course offerings or "small-dollar" programs) as part of regional, country-specific, or broad-based programs. Treasury officials told us that OTA's counter-terrorism-financing technical assistance is funded through its Financial Enforcement program. Our analysis of Treasury records showed that OTA received budget authority totaling about $30.3 million for all financial enforcement programs for fiscal years 2002-2005.
However, because OTA funding for counter-terrorism-financing training and technical assistance is combined with funding for anti-money-laundering assistance, the exact amount allocated to countering terrorist financing cannot be determined. One OTA official told us that in any given year, as much as two-thirds of these program funds may be spent on counter-terrorism-financing or anti-money-laundering assistance. The U.S. government, including TFWG, has not systematically assessed the availability and suitability of the human capital resources used by the agencies for counter-terrorism-financing training and technical assistance. As a result, agency decision makers lack reliable information to use in determining the optimal balance of government employees and contractors to meet the needs and relative risks of vulnerable countries. According to State and Treasury officials, the effectiveness of contractors and current employees in delivering the various types of training and technical assistance has not been systematically evaluated. Decisions at TFWG appear to be based on anecdotal information rather than transparent and systematic assessments of resources. In addition, according to the State Performance and Accountability Report for fiscal year 2004, a shortage of anti-money-laundering experts continues to hamper efforts to meet the needs of nations that request assistance, including priority countries. According to State officials, U.S. technical experts are especially overextended because of their frequent need to divide their time between assessment, training, and investigative missions. Moreover, officials from State's Office of the Coordinator for Counterterrorism said that a lack of available staff had slowed the disbursement of funding at TFWG's inception. Although Treasury said that there may be a shortage of anti-money-laundering experts in the U.S.
government who are available to provide technical assistance in foreign countries, Treasury officials told us that many such experts, recently retired from the same U.S. government agencies, are available as contractors. A senior OTA official said that OTA has actively sought to provide programs in more priority countries but that State, as chair of TFWG, has not supported OTA's efforts. Specifically, our analysis showed that OTA obligated about $1.1 million of its financial enforcement program funding in priority countries, in part to place resident advisors, in fiscal years 2002-2005. State officials said that they welcomed more OTA participation in priority countries as a component of applicable resources; however, they questioned whether OTA consistently provides high-quality assistance. At the same time, State officials repeatedly stated that they needed OTA funding, not OTA-contracted staff, to meet current and future needs. The U.S. government, including TFWG, has not systematically consolidated and synthesized available information on other countries' and international entities' counter-terrorism-financing training and technical assistance activities or integrated this information into a decision-making process. Further, TFWG has not developed a strategy for encouraging allies and international entities to contribute resources to help vulnerable countries build counter-terrorism-financing capabilities and coordinate training and technical assistance activities--one of TFWG's stated goals. State and Treasury officials told us that, instead, they take an ad hoc approach to working with allies and international entities on coordinating resources for training and technical assistance. These officials also noted that at TFWG meetings, interagency issues are given higher priority than international resource sharing. 
Without a systematic way to assess information about international activities and to consolidate, synthesize, and integrate this information into the U.S. interagency decision-making process, the U.S. government cannot easily capitalize on opportunities for resource sharing with allies and international entities. The U.S. government, including TFWG, has not established a system to measure the results of its training and technical assistance efforts and to incorporate this information into its integrated planning efforts. According to an official from Justice's Office of Overseas Prosecutorial Development, Assistance and Training (OPDAT), OPDAT led an interagency effort to develop a system for measuring the results of training and technical assistance provided through TFWG and related assistance results for priority countries. In November 2004, OPDAT assigned an intern to set up a database to track such results. Because the database was not accessible to all TFWG members, OPDAT planned to serve as the focal point for entering the data collected by TFWG members. OPDAT asked agencies to provide statistics on programs, funding, and other information, including responding to questions concerning results that corresponded to the five elements of an effective counter-terrorism-financing regime. OPDAT also planned to track key recommendations for training and technical assistance and progress made in priority countries as provided in FATF and TFWG assessments. However, as of July 2005, OPDAT was still waiting to hire an intern to complete the project. OPDAT and State officials confirmed that the system had not yet been approved or implemented by TFWG. To ensure that U.S. 
government interagency efforts to provide counter-terrorism-financing training and technical assistance are integrated, efficient, and effective, particularly with respect to priority countries, we recommended in our report that the Secretary of State and the Secretary of the Treasury, in consultation with NSC and relevant government agencies, develop and implement an integrated strategic plan for the U.S. government that designates leadership and provides for key stakeholder involvement; includes a systematic and transparent assessment of the allocation of U.S. government resources; delineates a method for aligning the resources of relevant U.S. agencies to support the mission based on key needs and related risks; and provides processes and resources for measuring and monitoring results, identifying gaps, and revising strategies accordingly. We also recommended that the Secretaries of State and the Treasury enter into a Memorandum of Agreement concerning counter-terrorism-financing and anti-money-laundering training and technical assistance to ensure a seamless campaign in providing such assistance programs to vulnerable countries. The agreement should specify, with regard to U.S. counter-terrorism-financing training and technical assistance, the roles of each department, bureau, and office; methods to resolve disputes concerning OTA's use of confidentiality agreements in its contracts; and coordination of funding and other resources. In March 2006 letters to relevant congressional oversight and appropriation committees, State and Treasury describe general steps that they are taking to improve the interagency process in delivering counter-terrorism-financing training and technical assistance abroad. The agencies report engaging with each other at all levels to ensure increased coordination. 
In addition, they report that, in concert with the NSC and the Departments of Homeland Security and Justice, they are reviewing TFWG and its procedures with a view to enhancing its effectiveness. Also, State reports that it has begun chairing TFWG at the Deputy Assistant Secretary level to further enhance coordination. State also says that it is reconvening a senior-level interagency Training and Assistance Subgroup that is responsible for coordinating all U.S. government assistance on counterterrorism matters, including counter-terrorism-financing training and technical assistance. Although these steps could provide a basis for improved stakeholder acceptance of roles and procedures, State's and Treasury's letters lack sufficient detail to affirm that the preparation of an integrated and risk-based strategic plan is under way. The letters also do not address efforts to strategically align resources with needs or to measure performance. Moreover, the letters do not address our recommendation regarding the Memorandum of Agreement or offer alternative means of ensuring the duration of any improvements in coordination. Treasury's OFAC undertakes a number of activities as part of its efforts to block terrorist assets. However, although Treasury uses some limited performance measures related to OFAC's efforts, Treasury officials acknowledged that the measures do not assess results or show how OFAC's efforts contribute to Treasury's terrorist financing-related goals. In addition, OFAC officials acknowledged that Treasury's annual Terrorist Assets Report to Congress on the nature and extent of blocked terrorists' U.S. assets does not provide the information needed to assess progress achieved. In our report, we recommended that the Secretary of the Treasury finalize the development of the performance measures as well as an OFAC-specific strategic plan and provide more complete information in its annual reports to Congress on terrorist assets blocked. 
As of March 2006, OFAC had developed new performance measures and said it would work with Congress to provide the information needed regarding OFAC's terrorist asset blocking efforts. OFAC administers and enforces economic sanctions, based on U.S. foreign policy and national security goals, against designated individuals or groups that conduct or facilitate terrorist activity. Once individuals or groups are designated by Treasury or State, OFAC serves as the lead agency responsible for prohibiting transactions and blocking assets subject to U.S. jurisdiction. As part of its efforts, OFAC coordinates and works with other U.S. agencies to identify and investigate prospective terrorist designations; compiles the administrative record or evidentiary material that will serve as the factual basis underlying a decision by OFAC to designate individuals or groups; and engages foreign counterparts to gather information, apply pressure, or request or offer assistance in support of terrorist designation and asset blocking activities. OFAC may use the threat of designation to gain cooperation, forcing key sources of financial support to choose between public exposure of their support of terrorist activity and their good reputation. OFAC also works with the regulatory community and industry groups to assure that assets are expeditiously blocked and the ability to carry out transactions through U.S. parties is terminated. At the time of our October 2005 review, Treasury lacked effective performance measures to assess the results of OFAC's terrorist asset blocking efforts or show how these efforts contribute to the department's goals of disrupting and dismantling terrorist financial infrastructures and executing the nation's financial sanctions policies. 
Treasury's 2004 Performance and Accountability Report contained limited performance measures related to asset blocking and terrorist designations, including an increase in the number of terrorist finance designations in which other countries join the United States, an increase in the number of drug trafficking and terrorist-related financial sanctions targets identified and made public, and the estimated number of sanctioned entities no longer receiving funds from the United States. OFAC officials told us that they recognized the inadequacy of these measures to assess progress in blocking terrorist assets. According to the OFAC officials:

* The measures in the 2004 Performance and Accountability Report are not specific to terrorist financing. Two of the three measures do not separate data on terrorists from data on other entities such as drug traffickers, hostile foreign governments, corrupt regimes, and foreign drug cartels, although OFAC officials acknowledged that they could have reported the data separately.
* Progress on asset blocking cannot be measured simply by totaling an amount of blocked assets at the end of the year, because the amounts may vary over the year as assets are blocked and unblocked.

As of October 2005, Treasury had not developed measures to track activities and results related to asset blocking. For example, Treasury's underlying research to identify terrorist entities and their support systems is used by other U.S. agencies for activities such as law enforcement investigations. However, Treasury lacked measures to track other agencies' use of this research. Treasury officials also noted that measuring the effectiveness of these efforts in terms of their deterrent value is problematic, in part because the direct impact on unlawful activity is unknown and because precise metrics for illegal and clandestine activities are hard to develop. 
According to Treasury officials, measuring these efforts' effectiveness can also be difficult because many of them involve multiple U.S. agencies and foreign governments and are highly sensitive. However, contrary to a U.S. legislative directive to agencies to ascertain and explain the infeasibility or impracticability of a performance goal for a program activity, Treasury's annual report does not address the deterrent value of designations or the difficulties in measuring their effectiveness. In October 2005, in commenting on a draft of our report, Treasury officials told us that they were in the process of developing better quantitative and qualitative measures for assessing the results of OFAC's terrorist asset blocking efforts. In addition, Treasury officials said that they were developing a strategic plan to guide OFAC's efforts. The officials stated that they expected OFAC's new performance measures to be completed by December 1, 2005, and its new strategic plan to be completed by January 1, 2006. We recommended in our report that the Secretary of the Treasury complete the efforts to develop meaningful performance measures and an OFAC-specific strategic plan to ensure that policy makers and program managers are able to examine the results of U.S. efforts to block terrorists' assets. According to discussions with OFAC officials in March 2006, OFAC has developed new measures to assess its role in administering and enforcing economic sanctions against terrorists; however, we have not assessed the adequacy of these new measures. According to OFAC officials, as of March 30, 2006, the strategic plan had not yet been finalized. Treasury's annual Terrorist Assets Report, which offers a year-end snapshot of dollar amounts of terrorist assets held in U.S. jurisdiction, does not provide sufficient information to demonstrate OFAC's progress in its terrorist asset blocking efforts. 
In 2004, OFAC reported that the United States blocked almost $10 million in assets belonging to seven international terrorist organizations and related designees. The 2004 report also noted that the United States held more than $1.6 billion in assets belonging to six designated state sponsors of terrorism. However, the report does not document or quantify changes from amounts of assets blocked in previous years. For example, the 2004 report stated that the United States held $3.9 million in al Qaeda assets, but it did not show that this represented a 400 percent increase from the value of al Qaeda assets held by the United States in 2003 or offer an explanation for this increase. We noted in our October 2005 report that although the amounts of assets blocked are not in themselves a complete measure to assess progress over time, such information, along with other key performance metrics, could help policy makers and program managers examine the results of OFAC's asset blocking efforts. We recommended that the Secretary of the Treasury provide more complete information in the annual Terrorist Assets Report on the nature and extent of assets blocked, such as differences in amounts blocked each year, explanations for such differences, results of OFAC's terrorist asset blocking efforts, and obstacles faced by the U.S. government. In commenting on a draft of our report, Treasury observed that the Terrorist Assets Report "is not mandated or designed as an accountability measure." However, nothing in the statutory language or the congressional intent underlying the mandate precludes Treasury from compiling and reporting in this manner. Senior OFAC officials acknowledged that the Terrorist Assets Report is not useful for assessing results of asset blocking efforts. 
In its March 2006 letter to relevant congressional oversight and appropriation committees, Treasury responded that although it does not believe that the amount of assets blocked is a meaningful measure of its efforts' effectiveness, it would work with Congress to discuss recrafting the Terrorist Assets Report to address congressional interests. U.S. agencies have accomplished much in their efforts to combat terrorist financing abroad. Despite the difficulties of interagency coordination, TFWG has delivered counter-terrorism-financing training and technical assistance to numerous vulnerable countries, and OFAC has designated terrorists and blocked significant amounts of terrorist assets. However, as GAO's October 2005 report described, several challenges affect the effectiveness of U.S. agencies' efforts. Without a strategic and integrated plan for coordinating the funding and delivery of training and technical assistance by the agencies, the U.S. government cannot maximize the use of its resources in the fight against terrorist financing. Interagency disputes over State-led TFWG roles and procedures have hampered TFWG leadership and wasted staff energy and talent. In addition, decisions based on anecdotal and informal information, rather than transparent and systematic assessments, have hindered managers from effectively addressing problems before they grow and potentially become crises. Further, the U.S. government's, including TFWG's, failure to integrate all available U.S. and international resources may result in missed opportunities to leverage resources to meet related needs and risks, particularly given the scarce expertise available to address counter-terrorism financing. Finally, without a functional performance measurement system, TFWG lacks the information needed for optimal coordination and planning. 
Although OFAC undertakes a number of important efforts with regard to blocking terrorist assets, the lack of meaningful performance measures and sufficient information regarding these efforts has created uncertainty about their results and progress. The new performance measures that OFAC has recently developed may enable Congress and other officials with oversight responsibilities to ascertain the strengths and weaknesses of these efforts as well as hold OFAC managers accountable. OFAC's strategic plan, when completed, could further facilitate the development of meaningful performance measures by describing the relation of performance goals and measures to OFAC's mission, goals, and objectives. In addition, including information in Treasury's annual Terrorist Assets Reports that shows changes in the amounts of assets blocked from year to year may help Congress and other officials better understand the importance of these efforts in the overall U.S. effort to combat terrorist financing and may assist in the strategic allocation of resources. In view of congressional interest in U.S. government efforts to deliver training and technical assistance abroad to combat terrorist financing and the difficulty of obtaining a systematic assessment of U.S. resources dedicated to this endeavor, as stated in our report, Congress should consider requiring the Secretary of State and the Secretary of the Treasury to submit an annual report to Congress showing the status of interagency efforts to develop and implement an integrated strategic plan and Memorandum of Agreement to ensure TFWG's seamless functioning, particularly with respect to TFWG roles and procedures. Madame Chairwoman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. Should you have any questions about this testimony, please contact Loren Yager at (202) 512-4128 or [email protected]. 
Other major contributors to this testimony were Christine Broderick, Kathleen Monahan, Tracy Guerrero, Elizabeth Guran, and Reid Lowe. According to the Department of State (State), the Terrorist Finance Working Group (TFWG) was convened in October 2001 to develop and provide counter-terrorism-financing training to countries deemed most vulnerable to terrorist financing. Composed of various agencies throughout the U.S. government, TFWG is cochaired by State's Office of the Coordinator for Counterterrorism and Bureau for International Narcotics and Law Enforcement Affairs. It meets biweekly to receive intelligence briefings, schedule assessment trips, review assessment reports, and discuss the development and implementation of technical assistance and training programs. According to State, the TFWG process for developing counter-terrorism-financing training and assistance programs involves the following steps:

1. With input from the intelligence and law enforcement communities, identify and prioritize countries most vulnerable to terrorist financing and needing the most assistance in combating it.

2. Evaluate priority countries' counter-terrorism-financing and anti-money-laundering regimes with Financial Systems Assessment Team (FSAT) on-site visits or Washington tabletop exercises. State-led FSAT teams of 6 to 8 members include technical experts from State, Treasury, Justice, and other regulatory and law enforcement agencies. The FSAT on-site visits take about 1 week and include in-depth meetings with host government financial regulatory agencies, the judiciary, law enforcement agencies, the private financial services sector, and nongovernmental organizations.

3. Prepare a formal assessment report on each priority country's vulnerabilities to terrorist financing and make recommendations for training and technical assistance to address these weaknesses. The formal report is shared with the country's government to gauge its receptivity and to coordinate U.S. 
offers of assistance.

4. Develop a counter-terrorism-financing training implementation plan based on FSAT recommendations. Counter-terrorism-financing assistance programs include financial investigative training to "follow the money," financial regulatory training to detect and analyze suspicious transactions, judicial and prosecutorial training to build financial crime cases, financial intelligence unit development, and training in detecting over- and under-invoicing schemes for money laundering or terrorist financing.

5. Provide sequenced training and technical assistance to priority countries in the country, regionally, or in the United States.

6. Encourage burden sharing with our allies, with international financial institutions (e.g., IMF, World Bank, regional development banks), and through international organizations such as the United Nations (UN), the UN Counterterrorism Committee, Financial Action Task Force on Money Laundering, or the Group of Eight (G-8) to capitalize on and maximize international efforts to strengthen counter-terrorism-financing regimes around the world.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Disrupting terrorists' financing is necessary to impede their ability to organize, recruit, train, and equip adherents. U.S. efforts to strengthen domestic and global security include, among others, the provision of training and technical assistance in countering terrorist financing abroad. An interagency Terrorist Financing Working Group (TFWG), chaired by the U.S. Department of State (State), coordinates the delivery of this training and technical assistance to "priority" countries--those considered most vulnerable to terrorist financing schemes--as well as to other vulnerable countries. In addition, the Department of the Treasury (Treasury) Office of Foreign Assets Control (OFAC) leads U.S. efforts to block access to designated terrorists' assets that are subject to U.S. jurisdiction. In response to multiple congressional requesters, GAO examined U.S. efforts to combat terrorist financing abroad, publishing the report in October 2005. In this testimony, GAO discusses the report's findings about challenges related to (1) TFWG's coordination of the counter-terrorism-financing training and technical assistance abroad and (2) Treasury's measurement of results and provision of information needed to assess OFAC's efforts to block terrorist assets. Under State's leadership, TFWG has coordinated the interagency delivery of counter-terrorism-financing training and technical assistance--for example, providing training and placing resident advisors--in more than 20 priority countries as well as other vulnerable countries. However, TFWG's effort has been hampered by the absence of a strategic and integrated plan. 
GAO found that the effort lacks three elements that are critical to strategic planning for operations within and across agencies: (1) key stakeholder acceptance of roles and practices, (2) strategic alignment of resources with countries' needs and risks, and (3) a process to measure the effort's results. For example, two key TFWG stakeholders, State and Treasury, disagree about the extent of State's leadership as chair of TFWG. GAO recommended that State and Treasury, with other government agencies, implement an integrated strategic plan that addresses these challenges and sign a Memorandum of Agreement to improve coordination of counter-terrorism-financing training and technical assistance abroad. State and Treasury responded that they are taking several steps to improve the interagency process, but they did not address all of GAO's recommendations. OFAC undertakes a number of efforts related to the blocking of terrorists' assets. For example, OFAC compiles evidence as a basis for designating terrorist groups and individuals. However, GAO found limitations regarding Treasury's measurement of results and provision of information about OFAC's efforts. Inadequate measures. At the time of GAO's review, Treasury lacked adequate measures to assess the results of OFAC's efforts. OFAC was in the process of developing new measures, which it recently completed. Although GAO has not reviewed them, these measures may enable officials overseeing OFAC to ascertain the strengths and weaknesses of its efforts as well as hold OFAC managers accountable. GAO recommended that, in addition, Treasury develop an OFAC-specific strategic plan that describes, among other things, how its performance measures relate to general program goals and objectives. As of March 30, 2006, Treasury had not yet finalized the strategic plan. Insufficient information. Treasury's yearly report to Congress on terrorist assets blocked does not provide sufficient information for Congress to assess OFAC's progress. 
For instance, the report shows the total dollar value of blocked terrorist assets held under U.S. jurisdiction but does not show changes from amounts of assets blocked in previous years. GAO recommended that Treasury provide information on such changes, along with other key performance metrics, in its annual Terrorist Assets Report. Treasury responded that it would discuss with Congress recrafting the report to address congressional interests.
When one steps back and collectively evaluates how the government has traditionally managed and acquired information technology, some conclusions are painfully obvious. On the whole, the federal government's track record in delivering high value information technology solutions at acceptable cost is not a good one. Put simply, the government continues to expend money on systems projects that far exceed their expected costs and yield questionable mission benefits. Familiar examples, such as the Federal Aviation Administration's Air Traffic Control modernization and the Internal Revenue Service's Tax Systems Modernization projects, serve as stark reminders of situations where literally billions of dollars have been spent without clear results. Moreover, agencies have failed to take full advantage of IT by failing to first critically examine and then reengineer existing business and program delivery processes. Federal agencies lack adequate processes and reliable data to manage investments in information technology. Without these key components, agencies cannot adequately select and control their technology investments. As GAO's financial and information management audits have demonstrated over the last decade, it is sometimes impossible to track precisely what agency IT dollars have actually been spent for or even how much has been spent. Even more problematic, rarely do agencies collect information on actual benefits to the organization accruing from their investments. More often than not, results are presented as descriptions of outputs and activities rather than changes in performance or program outcomes. How should the Congress expect this scenario to change once agencies take steps to implement ITMRA? In 5 to 7 years, the Congress should have a much clearer, confident understanding of the benefits to agencies' performance that are attributable to IT expenditures. 
On a governmentwide basis, there should be higher overall success rates for IT projects completed within reasonable time frames, at acceptable costs, with positive net rates of return on investment. Modular, well-defined IT projects with short-term deliverables should be the rule rather than the exception. And institutionalized, up-to-date management processes should be producing consistent high-value investment decisions and results. Mr. Chairman, ITMRA also has to reinforce and be reinforced by other important management reform legislation. Just as technology is most effective when it supports defined business needs and objectives, ITMRA will be more powerful if it can be integrated with the objectives of broader governmentwide management reforms. For example, changes made by the Federal Acquisition Reform Act (FARA) and the Federal Acquisition Streamlining Act (FASA) are focused on removing barriers to agencies obtaining products and services from outside sources in a timely, efficient manner. This is crucial in the technology arena, where significant changes occur very rapidly. ITMRA builds in essential investment and performance ingredients that empower agencies to make wiser, not just faster, acquisitions of IT products and services. The Paperwork Reduction Act (PRA) emphasizes the need for an overall information resources management strategic planning framework, with IT decisions linked directly to mission needs and priorities. The act also focuses on reducing unnecessary information requirements on industry and citizens. ITMRA can work in concert with PRA by making sure that agencies understand what information is needed and the purpose it is being used for, and by ensuring that it is collected once and shared many times. The CFO Act requires that sound financial management practices and systems be in place, which are essential for tracking program costs and expenditures. 
ITMRA-based approaches to managing information systems should have a direct, positive impact on the creation of financial systems to support the higher levels of accountability envisioned by the act. GPRA focuses attention on defining mission goals and objectives, measuring and evaluating performance, and reporting on results. Budgets based on performance information provided under GPRA should include clear treatment of IT capital expenditures and their impact on agency operations. Similarly, ITMRA effectively supports GPRA by requiring that performance measures be used to indicate how technology effectively supports mission goals, objectives, and outcomes. Past experiences with other governmentwide reforms--such as the CFO Act, the National Performance Review (NPR), the Paperwork Reduction Act, and GPRA--indicate that implementation requires a significant investment of time at senior levels. Our own experiences in assisting agencies with self-assessments of their strategic information management practices have illustrated the many barriers that must be overcome. To date, our evaluation approach--which involves all levels and types of management--has been used in at least 10 agencies. In every case, it has taken considerable management time, talent, and resources to analyze organizational management strengths and weaknesses and then put corrective action plans in place. From the past, we know that the early days following the passage of reform legislation are telling. The level of governmentwide interest, discussion, and senior management involvement in planning for and directing change all indicate whether a "wait and see" approach versus a "get ready to meet the test" approach is being taken. During this period, consistent oversight leadership, coordination, and clear guidance from the Office of Management and Budget (OMB) are essential to getting agency implementation off to a constructive start. 
Without common direction and constancy of purpose from OMB, GAO, and the Inspectors General, agency executives are left reacting and responding to advice and directives that may be at cross purposes. It is also important that implementation actions focus on not only the means (i.e., policies, practices, and processes) but also the end results that are expected from the management reforms. For ITMRA to be successful, improved management processes and practices that focus on capital investment and planning, reengineering, and performance measurement are essential. But these are only the means to achieve the legislation's ultimate goal--implementing high-value technology projects at acceptable costs within reasonable time frames that are contributing to tangible, observable improvements in mission performance. Continuous oversight from the Congress that focuses on these issues and strong support from the Administration are essential incentives for keeping agency management accountable and focused on changes necessary to ensure more successful outcomes. The pilot efforts being conducted under GPRA also illustrate that outcome and performance-based decision-making will not be an easy, quick transition for federal agencies. Performance reports provided to the Congress under both GPRA and now ITMRA should become one of Congress's major mechanisms for evaluating and ensuring agency accountability. A flurry of activity is underway across the government to implement new management processes required by ITMRA. To its credit, OMB--under the direction of the Deputy Director for Management--has taken a leadership role in organizing and focusing interagency discussions on changes needed to existing policy and executive guidance. Let me briefly summarize some of the major activities now underway. Several policy directives and guidance documents are being created or revised by OMB to reflect changes required by ITMRA. 
These include a draft Executive Order on Federal Information Technology which is currently with the President for review and signature. This order will officially create

* a governmentwide Chief Information Officers Council, composed of agency CIOs and Deputy CIOs and chaired by OMB's Deputy Director for Management, to provide recommendations to OMB on governmentwide IT policies, procedures, and standards;

* the Government Information Technology Services Board, staffed by agency personnel, to oversee the continued implementation of the NPR IT recommendations and to identify and promote the development of innovative technologies, standards, and practices; and

* the Information Technology Resources Board, staffed by agency personnel and used to review, at OMB's or an agency's request, an information systems development or acquisition project and provide recommendations as appropriate.

In addition, revisions are being made to two important OMB management and budget circulars. Circular A-130, Management of Federal Information Resources, is being changed to include the capital planning and portfolio management requirements of ITMRA. Circular A-11, Preparation and Submission of Budget Estimates, is expected to provide additional information on capital planning, including a new supplement on planning, budgeting, and acquiring fixed assets. Further, an estimated 90 percent of GSA's Federal Information Resources Management Regulation is expected to be eliminated in response to ITMRA. The remaining segments are expected to be issued as parts of the Federal Acquisition Regulation, the Federal Property Management Regulation, or OMB guidance. The OMB IT Investment Guide, issued last November, establishes key elements of the investment process for agencies to follow in selecting, controlling, and evaluating their IT investments. This process will be used in the fiscal year 1998 budget submission cycle. 
The Investment Guide has been circulated among agency heads, CFOs, and senior IRM officials. In addition, OMB has made copies available to each of its five Resource Management Offices responsible for reviewing agency management, budget, and policy issues. OMB has also organized an interagency CIO Working Group--composed of the existing senior IRM officials from the major agencies and departments--to assist in developing the policies, guidance, and information needed to effectively implement ITMRA. This working group has been very active, meeting once a month since January. The working group has created several interagency subcommittees that have been working to provide suggestions to OMB on changes needed in governmentwide policies and executive guidance to effectively implement ITMRA. Among these subcommittees are the CIO Subcommittee, which developed a paper on the appointment, placement alternatives, and roles and responsibilities of an agency's CIO; the CIO Charter Workgroup, which developed the proposed charter for the CIO Council; and the Capital Planning and Investment Subcommittee, which has discussed potential approaches to IT capital planning processes and is working on a proposal for pilot testing new processes at several agencies. Because many of these activities are still underway, it is too early to draw conclusions about them. However, taken as a whole, they send several positive signals. In each, OMB has played a proactive leadership role while remaining flexible enough to adapt to individual agency situations and needs. In general, although the depth and impact are uncertain, the direction of the guidance is consistent with ITMRA. First, it is clear that the federal IRM/IT community is widely represented and involved in these efforts. Rather than being the recipients of policy changes, agency officials are actively engaged in helping formulate new guidance and standards. 
For example, the interagency working groups that have been assembled to provide input on the CIO position and capital planning and investment processes have representation from numerous departments and agencies. Second, initial steps are being taken to emphasize the importance of selecting qualified CIO candidates who are being strategically placed with defined roles and responsibilities within the agencies. OMB has asked that before the major departments and agencies establish and fill these positions they formally submit information on (1) the CIO's background and experience, (2) a description of the organizational placement of the CIO position, including reporting arrangements to the agency head and organizational resources expected to be under the control of the position, and (3) a description of the CIO's authority and responsibilities. OMB expects to conduct discussions with agencies should it have concerns that the intent of the legislation is not being fulfilled. OMB has also responded formally to selected agencies where objections were raised about the CIO position. Third, recognizing the governmentwide shortage of highly skilled managerial and technical talent, several mechanisms are being established to help leverage IT skills and resources across agencies. Establishing the Information Technology Resources Board, the CIO Council, and the Government Information Technology Services Board demonstrates a recognition of the need to channel experienced management and technical resources towards significant problem or opportunity areas, particularly large, complex systems development or modernization projects that show early warning signs related to cost, schedule, risk, or performance. Fourth, a governmentwide implementation focus is being maintained. Especially noteworthy is the broad-based level of support and interaction covering IT issues that transcend specific agency lines. 
The Government Information Technology Services Working Group (GITS) serves as an excellent example of what can be achieved through interagency cooperation. In implementing many of the IT-related recommendations of the National Performance Review, GITS has effectively promoted electronic sharing of information across agency lines and to citizens. Fifth, special attention is being paid to core requirements of the legislation--establishing CIOs and improving IT capital planning and investment. These two provisions are directly aimed at correcting pervasive management weaknesses we find in most federal agencies: (1) getting top executives to determine how major technology projects are intended to improve business goals and objectives, (2) getting program managers to take ownership of IT projects and holding them accountable for the project's success, and (3) institutionalizing repeatable processes aimed at scrutinizing project costs and risks against delivered benefits. OMB, in considering revisions to existing management and budget circulars, has recognized the need to better integrate and consolidate existing agency guidance in order to improve its own oversight and avoid imposing unnecessary reporting burdens on the agencies. The Deputy Director for Management convened a special working group to revise OMB's current management bulletin on agency budgeting and planning for fixed capital assets, which includes major information systems acquisitions. The revised guidance is being made a supplement to OMB's Circular A-11, the primary budget preparation guidance for federal agencies. In addition, OMB has drafted changes to Circular A-130 to be compatible with ITMRA, including the requirement that agencies develop consistent decision criteria that allow IT investments to be prioritized based on costs, benefits, and risks. 
ITMRA implementation activities, taken as a whole, indicate a willingness among agency officials and OMB to meet their responsibilities and expectations under the act. Nevertheless, we see critical challenges in five specific areas. Unless these challenges are addressed and additional effort is made to solidify current initiatives, implementation will be at risk early on. Let me briefly discuss each. Our observations of the implementation activities lead us to conclude that much of the involvement within federal agencies is heavily tilted towards IT and IRM officials and does not include top senior officials. Most of the interagency working group members come exclusively from IRM and strategic planning offices. Yet, our research of leading public and private organizations clearly demonstrates that strong leadership, commitment, and involvement in capital planning, investment control, and performance management must come from the executives who will actually use the information from these processes to make decisions. Although there has been communication with the President's Management Council about ITMRA requirements, it is unclear how seriously this reform is being taken by senior agency management. Many of the agencies' actions to date in contemplating CIO appointments do not reflect a full understanding of either the letter or the intent of the legislation. The CIO position under ITMRA seeks a strong, independent, experienced, executive-level individual who can focus senior management attention on critical information management issues and decisions. Yet, some individuals being considered lack clear track records and adequate business or technical experience. In other cases, the placement of the CIO is at a lower management level than what the legislation intended. According to information from OMB, 13 of the 27 largest departments and agencies have named CIOs. 
It is our understanding that three of these agencies have been advised by OMB that their CIO positions meet the requirements of the law. Our own review of the information being submitted to OMB by agencies indicates that the breadth of experience varies widely among the individuals being considered for the position. Signals coming from the Administration need to be strong, clear, and more consistent on the importance, placement, and skills associated with the position. In addition, four agencies have provided information to OMB indicating a desire to integrate the functions of the Chief Financial Officer with the Chief Information Officer. Mr. Chairman, as you have noted in your public statements, this was not the intention of the legislation. Moreover, a task force report recently submitted to OMB from the Industry Advisory Council argues strongly against combining the two positions. The CIO was created to give an executive-level focus and accountability for information technology issues and ensure greater accountability for delivering effective technology systems and services. In light of the existing problems in most agencies and the significant duties and responsibilities under each act, agencies would be best served by keeping the two positions separate. The problems associated with financial and information management in most federal agencies are very significant and require attention from separate individuals with the appropriate talent, skills, and experience in each area. Critical, sustained, high-level attention is needed on new skills in government that are essential to proposing, designing, building, and overseeing complex information systems. Recruitment efforts and strategies need to be established and retention of existing skilled staff reexamined. The magnitude of the challenges facing federal agencies in the IT area demand more talent than currently exists. 
Although one subcommittee of the interagency CIO Working Group has been created to examine this issue, it has not yet been given the attention it deserves. One area of particular concern is how the legislation is to be implemented at the department level versus at departments' major subcomponents, namely agencies and bureaus. To date, neither the federal agencies nor OMB has determined how newly required investment control processes, IT performance management, or IT strategic planning will be differentiated by organizational tiers within government entities. Additionally, it remains unclear how OMB's own internal ITMRA implementation responsibilities will be strengthened. Although we have not yet fully evaluated OMB's efforts, there appears to be insufficient attention to preparation for the oversight and evaluation of agencies' IT capital planning and investment processes. OMB has yet to explicitly define as part of its own ITMRA implementation strategy how it expects to fulfill its responsibilities for (1) evaluating agency IT results, (2) ensuring capital planning and investment control processes are in place, (3) using accurate and reliable cost, benefit, and risk data for IT investment decision-making, and (4) linking the quality and completeness of agency IT portfolio analyses to actual budget recommendations to the President. With the phasing out of GSA's Time Out program--designed to get problem-plagued systems acquisition projects back on track and force improvements in agency IT management processes--the weight placed on OMB's oversight responsibility has further increased. Under the OMB 2000 reorganization, program examiners in OMB's Resource Management Offices will have primary responsibility for evaluating agency IT budget proposals and evaluating the implementation of governmentwide IRM policies. It remains unclear how OMB expects to train these examiners to evaluate the IT portfolios of the agencies over which they have oversight responsibility. Mr. 
Chairman, we will soon be issuing a report to you and Chairman Clinger of the House Committee on Government Reform and Oversight that specifically focuses on IT investment decision-making in five case study agencies. Our findings highlight important shortcomings in these agencies' capabilities to meet the expectations of ITMRA's investment control provisions. Based on this work, we will outline specific recommendations to OMB for ways it can improve its oversight role in this area. ITMRA embraces an entire set of comprehensive management reforms to IT decision-making. These parallel the set of strategic information management best practices we recommended in our May 1994 report, Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology--Learning From Leading Organizations. As we have learned from our research of leading private and public organizations, long-term, repeatable improvements in managing IT are most successful when a complete set of information management practices are conducted in concert with each other. While much attention has been focused on the capital planning and the CIO provisions of ITMRA, equally intensive agency attention to other areas (e.g., strategic planning, business process reengineering, performance measurement, and knowledge and skills development) is also essential. Within a short period of time, efforts should begin to marshal agency attention to these key areas. Mr. Chairman, time is of the essence in order to meet congressional expectations that agencies begin acquiring and managing IT according to the approaches outlined in ITMRA. Agencies should be taking short- and long-term actions to change management processes to comply with the legislative requirements and intent. 
In overseeing the implementation of ITMRA, we suggest that congressional oversight in the short term focus on assessing critical agency actions in four areas that have direct bearing on the ultimate success of the law in producing real, positive change: Closely monitor the caliber and organizational placement of CIO candidates for departments and agencies. Past experience with the initial selection of CFOs in the federal government indicates that the rush to fill the position may take precedence over careful deliberation in choosing the right person. The caliber of the individuals placed in these slots can make a real difference in the likelihood of lasting management changes. This has been demonstrated through the success of the CFO Act. After some initial problems, well-qualified individuals were selected for these positions. Early signs of success will be the establishment of a pool of high-caliber CIOs who can effectively support agency heads on IT issues at appropriations and oversight hearings. However, an early warning sign of failure will be if individuals are elevated or reassigned within their organizations with little regard to qualifications, experience, or skill. Focus on the evaluation of results. The Congress must continually ask agency heads for hard numbers and facts on what was spent on information technology and what the agency got in return for the investment. These evaluations, wherever possible, must focus on information technology's contribution to measurable improvements in mission performance. Improvements in productivity, quality and speed of service delivery, customer satisfaction, and cost savings are common areas where technology's impact can be most immediate. Early signs of success will be examples of measurable impact or where high-risk IT projects with questionable results are stopped or delayed as a result of IT investment control processes. 
Early signs of failure will be examples where high-risk, low-return projects continue to be funded despite claims of management process changes. Monitor how well agencies are institutionalizing processes and regularly validating cost, return, and risk data used to support IT investment decisions. Informed management decisions can only occur if accurate, reliable, and up-to-date information is included in the decision-making process. Project cost data must be tracked and easily accessible. Benefits must be defined and measured in outcome-oriented terms. And risks must be quantified and mitigated to better ensure project success. Early signs of success will be agency examples where IT contributions to productivity gains, cost reductions, cycle-time reductions, and increases in service delivery quality and satisfaction are quantitatively documented and independently reported. Get the right people asking and answering the right questions. Throughout the budget, appropriations, and oversight processes, top agency executives, OMB program examiners, and members of the Congress must consistently ask what was spent, what was achieved, and whether it was worth it. Agency heads must be able to clearly answer these questions for their IT capital expenditures. Mr. Chairman, the success or failure of this critical legislative reform will have far-reaching impact. Rising public expectations for improved service and the need to improve the efficiency of federal operations to support needed budget reductions all depend on wise investments in modern information technology. We look forward to working with this committee to make ITMRA a success and appreciate your leadership in spearheading this effort. That concludes my statement, Mr. Chairman. I would be happy to answer any questions you or other members of the Subcommittee may have at this time. 
GAO discussed the implementation of the Information Technology (IT) Management Reform Act (ITMRA). GAO noted that: (1) federal agencies have failed to reengineer their business and program delivery processes before acquiring new information systems, which results in costly IT resources; (2) ITMRA should empower federal agencies to make wise acquisitions of IT products and services; (3) lessons learned from other governmentwide management reforms indicate the need for senior management involvement, consistent oversight and leadership from the Office of Management and Budget (OMB), and a focus on capital investment and planning, reengineering, and performance measurement; (4) OMB has organized interagency discussions on needed policy and guidance changes and issued guidance on selecting, controlling, and evaluating agencies' IT investments; (5) OMB has established an interagency Chief Information Officers (CIO) working group to assist in developing policies, guidance, and information needed for effective ITMRA implementation; (6) the Administration is ensuring that CIO candidates are qualified and given sufficient authority and responsibility and that core legislative requirements are met; (7) challenges to ITMRA implementation include involving top agency management, consistently directing CIO appointments and organizational placement, building new IT skills, focusing on internal implementation at the department-level, and continually emphasizing an integrated management approach; and (8) congressional support and oversight is essential to ensuring successful implementation of ITMRA.
TVA is a multipurpose, independent, wholly owned federal corporation established by the Tennessee Valley Authority Act of 1933 (TVA Act). The TVA Act established TVA to improve the quality of life in the Tennessee River Valley by improving navigation, promoting regional agricultural and economic development, and controlling the floodwaters of the Tennessee River. To those ends, TVA erected dams and hydroelectric power facilities on the Tennessee River and its tributaries. To meet the need for more electric power during World War II, TVA expanded beyond hydropower, building coal-fired power plants. In the 1960s, TVA decided to add nuclear generating units to its power system. Today, TVA operates one of the nation's largest power systems, having produced about 152 billion kilowatt-hours (kWh) of electricity in fiscal year 2000. The system consists primarily of 113 hydroelectric units, 59 coal-fired units, and 5 operating nuclear units. TVA sells power in seven states--Alabama, Georgia, Kentucky, Mississippi, North Carolina, Tennessee, and Virginia. TVA sells power at wholesale rates to 158 municipal and cooperative utilities that, in turn, distribute the power on a retail basis to nearly 8 million people in an 80,000 square mile region. TVA also sells power to a number of directly served large industrial customers and federal agencies. In 1959, the Congress amended the TVA Act to authorize TVA to use debt financing to pay for capital improvements for power programs. Under this legislation, the Congress required that TVA's power program be "self- financing" through revenues from electricity sales. For capital needs in excess of internally generated funds, TVA was authorized to borrow by issuing bonds. TVA's debt limit is set by the Congress and was initially established at $750 million in 1959. Since then, TVA's debt limit has been increased four times by the Congress: to $1.75 billion in 1966, $5 billion in 1970, $15 billion in 1975, and $30 billion in 1979. 
As of September 30, 2000, TVA's outstanding debt was $26.0 billion. TVA's bonds are considered "government securities" for purposes of the Securities Exchange Act of 1934 and are exempt from registration under the Securities Act of 1933. All of TVA's bonds are publicly held, and several are traded on the bond market of the New York Stock Exchange. Since TVA's first public issue in 1960, Moody's Investors Service and Standard & Poor's have assigned TVA's bonds their highest credit rating--Aaa/AAA. To determine whether TVA's bonds are explicitly or implicitly guaranteed by the federal government, we analyzed various documents, including Section 15d of the TVA Act, as amended, the Basic TVA Power Bond Resolution, TVA's Information Statement, and the language included in TVA's bond offering circulars. We also discussed this issue with bond analysts at two credit rating firms (Moody's Investors Service and Standard & Poor's) and TVA officials. To determine the opinion of bond analysts regarding the effect of an implicit or explicit guarantee on TVA's bonds, we interviewed officials at two credit rating firms that rate TVA's bonds to discuss their rating methodology for TVA and other electric utilities' bonds. In addition, we reviewed recent reports issued by the credit rating agencies for any language about an implicit federal guarantee of TVA's debt. As agreed with your offices, we did not attempt to determine what TVA's bond rating would be without its ties to the federal government as a wholly owned government corporation. To determine the impact of TVA's bond rating on its annual interest expense, we obtained information from TVA about its outstanding bonds as of September 30, 2000. We then obtained comparable information on the average bond ratings and bond yield rates applicable to public utilities for the various bond rating categories. 
Using the average bond yield rates for public utility debt in the various bond rating categories, we used two approaches to estimate the amount of TVA's annual interest expense if its bonds outstanding at September 30, 2000, carried the lower ratings. Additional information on our scope and methodology is contained in appendix I. We conducted our review from July 2000 through April 2001 in accordance with generally accepted government auditing standards. We requested written comments from TVA on a draft of this report. TVA's Chief Financial Officer provided us with oral comments, which we incorporated, as appropriate. The TVA Act states that the federal government does not guarantee the principal of, or interest on, TVA's bonds. However, the perception of the bond analysts at the two credit rating firms we contacted is that since TVA is a wholly owned government corporation, the federal government would support debt service and would not allow a default to occur. Both of the credit rating firms stated that this perception of an implicit federal guarantee is one of the primary reasons that TVA's bonds have received the highest credit rating. One of the firms cited two other factors--TVA's legislative protections from competition and its strong operational performance--as additional reasons for assigning TVA's bonds its highest rating. The TVA Act specifically states that the federal government does not guarantee TVA bonds. TVA includes similar "no federal guarantee" language in its Basic TVA Power Bond Resolution, Information Statement, and bond offering circulars. The relevant language is as follows: Section 15d of the TVA Act, as amended, 16 U.S.C. § 831n-4--"Bonds issued by the Corporation hereunder shall not be obligations of, nor shall payment of the principal thereof or interest thereon be guaranteed by, the United States." 
Basic TVA Power Bond Resolution, Section 2.2 Authorization and Issuance of Bonds--"They shall be payable as to both principal and interest solely from Net Power Proceeds and shall not be obligations of or guaranteed by the United States of America." Information Statement--"Evidences of Indebtedness are not obligations of the United States of America, and the United States of America does not guarantee the payment of the principal of or interest on any Evidences of Indebtedness." TVA bond offering circulars--"The interest and principal on the Bonds are payable solely from Net Power Proceeds and are not obligations of, or guaranteed by, the United States of America." Although TVA's bonds expressly disclaim a federal guarantee, the two bond rating firms we contacted perceive TVA's bonds to be implicitly backed by the federal government. This perception of an implied federal guarantee is one of the primary reasons that TVA's bonds have received the highest credit rating. For example, Standard & Poor's, in its January 2001 analysis of TVA's global power bonds, stated that "the rating reflects the US government's implicit support of TVA and Standard & Poor's view that, without a binding legal obligation, the federal government will support principal and interest payments on certain debt issued by entities created by Congress." Further, in its June 2000 opinion update on TVA, Moody's Investors Service (Moody's) reported that "the Aaa rating on Tennessee Valley Authority (TVA) power bonds derives from its strong operational performance and its status as a wholly owned corporate agency of the US Government." 
In addition, Moody's reported that although the federal government does not guarantee TVA's bonds, the government would not allow a default on TVA's debt because of the impact it would have on the cost of debt issued by government-sponsored enterprises, such as Fannie Mae and Freddie Mac. As in the case of TVA, the government does not guarantee the debt of these enterprises. Also as with TVA, there is a perception in the investment community that the federal government would not allow these enterprises to default on their obligations. In its January 2001 analysis of TVA's global power bonds, Standard & Poor's acknowledged that its rating of these bonds did not reflect TVA's underlying business or financial condition and that the rating of these bonds would have been lower without TVA's ties to the federal government. In addition, a Moody's official stated that financial statistics and ratios for other electric utilities are significantly stronger than those for TVA in each rating category and that government ownership was a fundamental underpinning of the Aaa rating it assigned to TVA's debt. Moody's and Standard & Poor's generally use a complex methodology involving both quantitative and qualitative analyses when determining ratings for electric utilities. For example, Moody's examines the volatility and reliability of cash flows, the contributions of the utility to the profits of its corporate parent (if any), and how the utility is positioning itself to operate in a competitive environment. Also included in Moody's analysis is the utility's ability to balance business and financial risk with performance. Similarly, Standard & Poor's measures financial strength by a utility's ability to generate consistent cash flow to service its debt, finance its operations, and fund its investments. In addition, Standard & Poor's analyzes business risk by examining the utility's operating characteristics such as regulatory environment, reliability, and management. 
(See Opinion Update: Tennessee Valley Authority, Moody's Investors Service, June 22, 2000.) Government-sponsored enterprises are federally established, privately owned corporations designed to increase the flow of credit to specific economic sectors. Moody's and Standard & Poor's rate debt in a series of categories, using A, B, and C characters, with Aaa/AAA being the highest rating. Triple, double, and single characters distinguish the gradations of credit/investment quality. For example, issuers rated Aaa/AAA indicate exceptional financial security, Baa/BBB indicate adequate financial security, and Ba/BB or below offer questionable to poor financial security. Debt issues rated in the four highest categories, Aaa/AAA, Aa/AA, A, and Baa/BBB, generally are recognized as investment-grade. Table 1 describes the investment-grade rating categories used by Moody's and Standard & Poor's. Debt rated Ba/BB or below generally is referred to as speculative grade. In addition, Moody's applies numerical modifiers, 1, 2, and 3, and Standard & Poor's uses "plus" and "minus" signs in each rating category from Aa/AA through Caa/CCC in their corporate bond rating system. The modifier 1 and "plus" indicate that the issuer/obligation ranks in the higher end of a rating category; 3 and "minus" indicate a ranking in the lower end. According to a Moody's official, the firm places less significance on financial factors in analyzing TVA debt than in analyzing the debt of other electric utilities. Because of TVA's ties to the federal government, Moody's considers other factors more important in its assessment of TVA. Specifically, Moody's looks at how TVA will react to its changing operating environment and places "considerable value" on the legislative framework in which TVA operates. For example, in its June 2000 analysis of TVA, Moody's reported that key provisions in the TVA Act and the Energy Policy Act of 1992 (EPAct) provide credit protection for bondholders. 
Under the TVA Act, TVA's Board of Directors is required to set rates at levels sufficient to generate revenues to cover operating and financing costs. EPAct provides TVA with certain protections from competition. Under EPAct, TVA is exempt from having to allow other utilities to use its transmission lines to transmit power to customers within TVA's service territory. Further, the Moody's official stated, as long as TVA is able to set its own rates and to benefit from legislative and other competitive advantages over other utilities, Moody's will continue to assign TVA's bonds a Aaa rating. As shown in figure 1, of the 119 electric utilities rated by Moody's as of October 2000, TVA was the only utility rated Aaa. The ratings of other electric utilities range from a high of Aa1 to a low of Ba2, with an average rating at A3. Figure 1 shows the number of utilities in each rating category compared to TVA. As noted previously, the TVA Act authorizes TVA to issue and sell bonds to assist in financing its power program. Investor-owned electric utilities also use debt financing, but unlike TVA, they can and do issue common and preferred stock to finance capital needs. Figure 2 shows the capital structure of electric utilities by rating category. It also shows that, in general, electric utilities that have obtained a greater portion of financing through debt have lower credit ratings. However, even though the capital structure of TVA consists entirely of debt, and, as illustrated in our February 2001 report, it has higher fixed financing costs and less financial flexibility than its likely competitors, TVA remains the only AAA-rated electric utility in the United States. As a result of TVA's high bond ratings, the private lending market has provided TVA with access to billions of dollars of financing at low interest rates, an advantage that in turn results in lower interest expense than if its rating had been lower. 
To determine the impact of TVA's bond rating on its interest expense, we estimated what TVA's annual interest expense on its bonds outstanding at September 30, 2000, would have been if the debt had been given lower investment-grade ratings. Using two different methodologies, we obtained similar results. In the first methodology, we compared the coupon rate of each of TVA's bonds outstanding at September 30, 2000, to the average bond yield rates applicable to public utility bonds with similar terms at the time of issuance for each investment-grade rating category. For example, TVA's Aaa-rated 2000 Series E Power Bonds that were outstanding at September 30, 2000, have a coupon rate of 7.75 percent. When these bonds were issued on February 16, 2000, the average bond yields for public utility debt averaged 8.16 percent. In total, using the first methodology, we found that the annual interest expense of TVA's bonds outstanding at September 30, 2000, would have been between $137 million and $235 million (about 2 to 3 percent of fiscal year 2000 total expenses) higher if the debt had been given lower investment-grade bond ratings. In the second methodology, we categorized TVA's bonds into long-term (at least 20 years to maturity at time of issuance) and intermediate-term (less than 20 years to maturity at time of issuance) debt issues. We then identified the difference between TVA's average coupon interest rates grouped as long-term and intermediate-term on its bonds outstanding at September 30, 2000, and the average bond yield rates grouped as long-term and intermediate-term for public utilities for the various investment-grade rating categories. Specifically, we compared the average coupon interest rate on TVA's long-term bonds to the 9-year (1992-2000) average bond yield rates for long-term public utility bonds. 
Similarly, we compared the average coupon interest rate on TVA's intermediate-term bonds to the 5-year (1996-2000) average bond yield rates for intermediate-term public utility bonds. The years used (maturities and time of issuance) for public utility long-term and intermediate-term debt are, in general, comparable to TVA's bonds outstanding at September 30, 2000. For example, the average coupon interest rate for TVA's bonds outstanding at September 30, 2000, with at least 20 years to maturity at time of issuance was 6.96 percent. In comparison, the average bond yield rates for the period 1992-2000 for public utility debt with at least 20 years to maturity averaged 7.82 percent. Using this methodology, we estimated that the annual interest expense on TVA's bonds outstanding at September 30, 2000, would have been about $141 million to $245 million (about 2 to 4 percent of fiscal year 2000 total expenses) higher if its bonds had been rated lower. Table 2 shows the impact of lower bond ratings on annual interest expense using both methodologies. It is important to note that our analyses assumed that TVA's coupon rates on its bonds corresponded to the bond yield rates of other lower-rated public utilities at the time TVA issued its bonds. Assuming that were the case, we estimated that TVA's interest expense would have been higher by the amounts shown in table 2. If TVA's debt were no longer perceived to be implicitly guaranteed by the federal government, the resulting impact on TVA's interest expense would relate to future bonds and refinancings rather than to its bonds outstanding at September 30, 2000. TVA's high bond rating results in lower interest expense, enhancing TVA's competitive prospects by providing it with more financial flexibility to respond to financial or competitive challenges. 
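The two estimation approaches can be sketched in a few lines of code. The rate figures below (7.75 percent versus 8.16 percent for Methodology 1; 6.96 percent versus 7.82 percent for Methodology 2) come from the examples above, but the principal amounts are hypothetical placeholders, since the report does not state per-bond or per-grouping principals.

```python
# Sketch of the two interest-expense estimation methodologies described
# above. Only the rates come from the report; the principal amounts are
# assumed for illustration.

def methodology_1(bonds):
    """Per-bond comparison: for each bond outstanding, apply the spread
    between the lower-rating average yield at time of issuance and the
    bond's actual coupon rate.
    bonds: iterable of (principal, coupon_rate, lower_rating_yield)."""
    return sum(p * (y - c) for p, c, y in bonds)

def methodology_2(principal, avg_coupon, avg_period_yield):
    """Grouped comparison: apply the spread between the period-average
    public-utility yield for a rating category and TVA's (unweighted)
    average coupon to the total principal in that term grouping."""
    return principal * (avg_period_yield - avg_coupon)

# Methodology 1, using the 2000 Series E example (7.75% coupon vs. an
# 8.16% average yield at issuance) with an assumed $1 billion principal:
extra_1 = methodology_1([(1_000_000_000, 0.0775, 0.0816)])

# Methodology 2, long-term grouping (6.96% average coupon vs. the 7.82%
# 1992-2000 average long-term yield) with an assumed $20 billion principal:
extra_2 = methodology_2(20_000_000_000, 0.0696, 0.0782)

print(f"Methodology 1: ${extra_1:,.0f} per year")
print(f"Methodology 2: ${extra_2:,.0f} per year")
```

With these assumed principals, the sketch gives roughly $4.1 million and $172 million per year, respectively; the report's portfolio-wide estimates ($137 million to $235 million and $141 million to $245 million) aggregate such spreads across all of TVA's bonds and the various rating categories.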
While the criteria used to rate the bonds of TVA and other electric utilities are the same, they are weighted differently and, as a result, the basis for TVA's bond rating is more nonfinancial in nature than that for other electric utilities. According to bond analysts, TVA's high bond rating is largely based on the perception that its debt is federally backed because of its ties to the federal government as a wholly owned government corporation and its legislative protections from competition. If these conditions were to change, TVA's bond rating would likely be lowered, which in turn would affect the cost of new debt. This would add to its already high interest expense and corresponding financial challenges in a competitive market. TVA's Chief Financial Officer generally agreed with the report and provided oral technical and clarifying comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from its date. At that time, we will send copies of this report to appropriate House and Senate Committees; interested Members of Congress; TVA's Board of Directors; The Honorable Spencer Abraham, Secretary of Energy; The Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. The report will also be on GAO's home page at http://www.gao.gov. We will make copies available to others upon request. Please call me at (202) 512-9508 if you or your staffs have any questions. Major contributors to this report are listed in appendix II. We were asked to answer specific questions regarding TVA's financial condition. 
This report addresses the questions pertaining to TVA's bond rating; specifically, (1) whether TVA's bonds are explicitly or implicitly guaranteed by the federal government, including the opinion of bond analysts regarding the effect of any such guarantee, and (2) the impact of TVA's bond rating on its annual interest expense. As agreed with your offices, we issued a separate report on February 28, 2001, on the three other issues regarding TVA's (1) debt and deferred assets, (2) financial condition compared to its likely competitors, and (3) potential stranded costs. To determine whether TVA's bonds are explicitly or implicitly guaranteed by the federal government, we reviewed prior GAO products discussing TVA's bonds; reviewed and analyzed the section of the TVA Act pertaining to TVA's bonds; reviewed and analyzed various TVA documents, including the Basic TVA Power Bond Resolution, TVA's Information Statement, and the language included in TVA's outstanding bond offerings at September 30, 2000; interviewed bond analysts at Moody's and Standard & Poor's; and interviewed TVA officials. To determine the opinion of bond analysts regarding the effect of any such guarantee, we interviewed officials at the credit rating firms that rate TVA's bonds-- Moody's and Standard & Poor's; and reviewed and analyzed documents issued by Moody's and Standard & Poor's on their methodology for rating TVA and other electric utilities. 
To determine the impact of TVA's bond rating on its annual interest expense, we obtained information from TVA about its outstanding bonds at September 30, 2000; reconciled information from TVA about its outstanding bonds at September 30, 2000, to its audited financial statements; reviewed information pertaining to TVA's outstanding debt contained in its annual reports; reviewed a report issued by the Department of Energy's Energy Information Administration which assessed the impact of TVA's bond rating on its interest expense; interviewed Moody's regarding the availability of historical bond yield data by rating category for electric utilities and public utilities; obtained Moody's information on the average bond yields applicable to public utilities in the various bond rating categories from Standard & Poor's DRI (long-term) and Moody's Investors Service Credit Perspectives (intermediate-term); and estimated the additional annual interest expense on TVA's bonds outstanding at September 30, 2000, using the average bond yield rates for public utilities in various investment-grade rating categories. Using Moody's public utility long-term and intermediate-term (unweighted) bond yield data in various investment-grade rating categories, we applied two methods for estimating what the additional annual interest expense on TVA's bonds outstanding at September 30, 2000, would have been if TVA's debt were rated lower. Our analysis considered the characteristics of TVA's bonds, such as date of issuance and term; however, we did not assess the effect of call provisions. 
Under Methodology 1, we analyzed TVA's annual interest expense on its bonds outstanding at September 30, 2000, to determine, for each issuance outstanding, the (1) coupon rate, (2) date of issuance, (3) term, and (4) maturity; identified the average bond yield rates applicable to public utility bonds with similar terms at the time of issuance of each of TVA's bonds outstanding at September 30, 2000, in the Aa/AA, A, and Baa/BBB rating categories; calculated the annual interest expense for each of TVA's debt issues in the various rating categories; and determined the estimated additional annual interest expense by taking the difference between TVA's annual interest expense and the interest expense in the various rating categories. Under Methodology 2, we categorized TVA's bonds into long-term (at least 20 years to maturity at time of issuance) and intermediate-term (less than 20 years to maturity at time of issuance); calculated TVA's (unweighted) average coupon interest rates for long-term and intermediate-term debt by taking the average of the coupon rates applicable for each category (long-term and intermediate-term) of TVA's bonds outstanding at September 30, 2000; calculated the annual interest expense for TVA's long-term and intermediate-term debt using the average coupon interest rates calculated for each category; determined the (unweighted) average public utility bond yield rates for calendar years 1992 to 2000 in each of the various rating categories for long-term debt and 1996 to 2000 for intermediate-term debt, which, in general, are comparable to the maturities and time of issuance of TVA's bonds outstanding at September 30, 2000; calculated the annual interest expense for TVA's long-term and intermediate-term debt using the average public utility bond yield rates applicable to the various rating categories; and determined the estimated additional annual interest expense (long-term and intermediate-term) by taking the difference between TVA's annual 
interest expense and the interest expense in the various rating categories. We conducted our review from July 2000 through April 2001 in accordance with generally accepted government auditing standards. We obtained our information on public utility bond yield rates from authoritative sources (e.g., Standard & Poor's DRI, Moody's Investors Service) that provide and/or regularly use that data; however, we did not verify the accuracy of the bond yield data they provided. During the course of our work, we contacted the following organizations. In addition to the individual named above, Richard Cambosos, Philip Farah, Jeff Jacobson, Joseph D. Kile, Mary B. Merrill, Donald R. Neff, Patricia B. Petersen, and Maria Zacharias made key contributions to this report.
GAO was established just over 90 years ago. After World War I, the U.S. Congress wanted better information on, and control over, government spending. In 1921, the Budget and Accounting Act required the President to issue an annual federal budget. This law also established GAO--then known as the General Accounting Office--as an independent agency to investigate how federal dollars are spent. In its early years, GAO did mainly voucher auditing. After World War II, however, GAO began to do more comprehensive financial audits that examined the economy and the efficiency of government operations. In the 1960s, the expectations of the types of information GAO could provide evolved further. As a result, GAO began performance auditing and program evaluations to determine whether government programs were meeting their objectives. In the Legislative Reorganization Act of 1970 and the Congressional Budget and Impoundment Control Act of 1974, Congress highlighted the role that GAO audits of government program results can play in support of its oversight and legislative functions. The 1974 language specifically requires the Comptroller General to review and evaluate the results of government programs and activities. As noted in a 1976 statement by then Assistant Comptroller General Ellsworth Morse, "managers and policy makers, particularly in government, want--and need--more from auditors than stereotyped opinions on financial statements. They want independently and objectively obtained and evaluated information on operations and performance and expert advice on such things as how improvements can be made, how money can be saved or used to better advantage and how goals or objectives can be achieved in better fashion and at less cost." In short, they want auditing focused on performance. The U.S. 
Government Auditing Standards, which govern our work, describe performance audits as providing objective analysis so that management and those charged with governance and oversight can use the information to improve program performance and operations, reduce costs, facilitate decision making by parties with responsibility to oversee or initiate corrective action, and contribute to public accountability. Performance audits evaluate evidence against stated criteria, such as specific requirements, measures, or defined business practices. This definition of performance auditing is consistent with international auditing standards. Our efforts today fall into three main areas: oversight, insight, and foresight. Our oversight activities determine whether government entities are carrying out their assigned tasks, spending funds for intended purposes, and complying with laws and regulations. Our insight activities determine which programs and policies work well and which ones do not. These efforts include sharing best practices and benchmarking information horizontally across government and vertically through different levels of government. Our foresight activities try to identify key trends and emerging challenges before they reach crisis proportions. Our work is based on a combination of legislative mandates, congressional requests, and work done under the Comptroller General's authority. We meet regularly with Members of Congress and their staff. These outreach efforts ensure that GAO stays at the forefront of high-priority issues facing Congress and the nation. We have protocols for how we respond to congressional requests for our studies. These protocols ensure that we deal consistently and transparently with all congressional committees and members. This is especially important since we do our work for all standing committees of the Congress and about 70 percent of its subcommittees. 
Our protocols help us to prioritize incoming requests and hold us accountable for the commitments we have made to Congress. In fiscal year 2012, we received 924 congressional requests and new mandates. Performance audits make up the vast majority of our audits. In the 2008 International Peer Review of the Performance Audit Practice of the United States Government Accountability Office, the International Peer Review Team pointed out several features that distinguish our working environment from that of many of our international peers. The Peer Review noted that we carry out a larger volume of performance audit engagements each year and that the majority of the engagements we carry out are requested by Congress and not self-initiated. The Peer Review also noted that in responding to congressional requests, we determine the scope and methodology for the work, the timing and staffing, product content, and the management structure. In addition, we have adopted a number of practices to balance our objective of being responsive to Congress while remaining nonpartisan and independent in serving the long-term interests of the American people. The Peer Review identified the following two practices as being particularly notable: Our strategic planning process involves Congress and other stakeholders in establishing key themes and high-risk areas that the government needs to manage well. Our high-risk series of reports, which I shall discuss shortly, focuses attention on government programs that pose significant risks of fraud, waste, abuse, and mismanagement. Our engagement acceptance process focuses management's attention on the risks associated with each request, including risks to independence, and how the risks will be managed. Our work leads to real results. Each year, we present our findings, conclusions, and recommendations in reports and testimony before Congress. 
For example, in fiscal year 2012, we issued more than 650 reports and testified 159 times before various congressional committees. In addition, nearly every one of our reports and testimonies is available on our website the day it is made public. We make it a point to regularly measure and report on our performance. Last fiscal year, financial benefits from our work totaled $55.8 billion U.S. dollars. That is a $105 return on every dollar invested in GAO. We also documented 1,440 other benefits that shaped legislation, improved services to the public, and strengthened government operations. A driving force behind these accomplishments is our focus on following up on the status of our recommendations. At the end of fiscal year 2012, we found that 80 percent of the recommendations we had made in fiscal year 2008 had been implemented. We measure over a 4-year period to allow time for proper implementation. This follow-up provides an additional opportunity for Congress to consider our work during oversight activities, for agencies to respond to our recommendations, and for the work needed to successfully address the issues to be completed. Clearly, national audit offices can play a key role in providing public officials with vital information and analyses needed to address country- specific challenges. But it is also true that many of the problems we face today are global in nature and will require international cooperation with international solutions. The International Organization of Supreme Audit Institutions (INTOSAI) has played an important role in this area. INTOSAI is an umbrella organization for the external government audit community that provides an institutionalized framework for the 191-member supreme audit institutions (SAI) to promote development and transfer of knowledge, improve government auditing worldwide, and enhance the professional capacities, standing, and influence of member SAIs in their respective countries. 
INTOSAI's work includes developing international auditing standards and helping SAIs around the world implement those standards. The INTOSAI Congress in 2010 adopted a comprehensive set of international standards for SAIs. Those standards cover the core audit disciplines of financial, compliance, and performance audits. They provide an institutionalized framework for transferring knowledge, improving government auditing worldwide, and enhancing the professional capabilities and influence of SAIs in their respective countries. Making the transition to include performance audits as well as traditional financial and compliance audits expands the range of tools that national audit offices have to help their respective governments identify and address challenging domestic and global problems. The issues, risks, and problems that governments around the world confront are growing increasingly complex and boundary-spanning. That is, those issues, risks, and problems cut across geographic boundaries, government programs, levels of government, and sectors. As government agencies increasingly rely on collaboration with private and nongovernmental entities and delegate responsibilities for implementing public policy initiatives to these entities, the line between the governmental and the nongovernmental sectors continues to blur. For the United States, policy makers must consider global and local risks, connections, and supply chains if national policy initiatives, such as protecting the security of citizens; reforming national tax laws; modernizing outdated financial regulatory systems; and protecting public safety in the areas of medical products, food and consumer goods, are to be effective. These issues are, of course, not unique to the United States. 
The January 2012 report The European Parliament in 2025: Preparing for Complexity, an initiative of the Secretary-General of the European Parliament, identified four concepts which give an idea of the increased complexity likely to be present in 2025: the political multi-polarity of the globalised world, the preponderance of multilevel governance, the increase in the number of factors contributing to the drafting and implementation of public policies, and technology as a factor in the speed of change. The report concluded that "the new multi-polarity of the globalised world, the multilevel nature of governance, the multiple players interacting in law-making, are likely to create a new context for the European Parliament directly or indirectly. This heightened complexity may entail risks of fragmentation of (economic) governance, regulation and law. Fragmentation may lead to a loss of coherence, systematic overlaps and lasting conflicts between jurisdictions, as well as to an institutional paralysis, and, then, to democratic frustration, as it becomes more and more difficult to understand who is producing change in regulation and should be made accountable for success and failures. In order to contribute actively to prevent the risk of political and regulatory fragmentation, the European Parliament has to prepare itself for this upcoming complexity." Performance auditing has a vital role in providing decision makers and citizens with the information, analysis, and recommendations they need to respond to this increasingly complex and interconnected environment. To most effectively contribute to fundamental improvements in the performance of 21st century government, we are finding that auditors need to be more and more focused on governance--that is, assessing and improving connections across organizations, levels of government, sectors, and policy tools. 
In practice, this has several important implications for the focus of our work overall, and for our performance audits in particular. The following are among those implications:
* Reviewing government's results-orientation, such as the extent to which agencies have an appropriate crosscutting (also often called whole of government or enterprise) perspective to their intended results as well as using innovative approaches to better achieve results.
* Evaluating collaborative mechanisms, such as efforts to ensure that agencies are effectively coordinating their efforts across levels of government and with other sectors.
* Examining the interplay of the range of public policy tools, such as grants, contracts, tax expenditures, and regulations, that are being used to achieve results to ensure that they are effective and mutually reinforcing.
* Exploring opportunities to use web and social media technologies to improve government transparency and public reporting to foster greater public participation and civic engagement.
* Assessing government's capacity to respond to governance challenges, such as agencies' risk management programs, to ensure that they systematically integrate the identification and management of risk into strategic and program planning.
While many of our individual engagements examine the challenges a specific agency or program faces, three GAO-wide initiatives--our High Risk program; our annual reports on overlap, duplication, and fragmentation; and our reviews of the implementation of the Government Performance and Results Act Modernization Act of 2010--offer government-wide perspectives on the progress needed to respond effectively to 21st century governance challenges. High Risk: Our work under our High Risk program documents the challenges of managing in a complex governance environment. 
We began the High Risk program in 1990 and initially focused on bringing attention to government operations that had greater vulnerabilities to waste, fraud, abuse, and mismanagement. Those issues remain a central focus of the High Risk program today. However, in recognition that many of the high-risk issues we were finding were the product of poor working relationships across organizational boundaries, especially with third parties such as contractors, we expanded our focus to include critical areas needing transformation to address economy, efficiency, and effectiveness challenges. By using the tools of performance auditing as well as financial and compliance audits, we are able to provide forward-looking recommendations to address the High Risk areas. This is especially relevant since more than two-thirds of the 30 areas on our 2013 High Risk List cut across agencies, levels of government, and sectors of the economy. For example, we designated limiting the federal government's fiscal exposure by better managing climate change risks as high risk in 2013 because the federal government is not well positioned to address the fiscal exposure presented by climate change, and needs a government-wide strategic approach with strong leadership to manage related risks. Such an approach includes the establishment of strategic priorities and the development of roles, responsibilities, and working relationships among federal, state, and local entities. Recognizing that each department and agency operates under its own authorities and responsibilities--and can therefore be expected to address climate change in different ways relevant to its own mission--existing federal efforts have encouraged a decentralized approach, with federal agencies incorporating climate-related information into their planning, operations, policies, and programs. The challenge is to develop a cohesive approach at the federal level that also informs action at the state and local levels. 
Overall, our High Risk program has served to identify and help resolve serious weaknesses in program areas that involve substantial resources and provide critical services to the public. Overlap, Duplication, and Fragmentation: Our work on overlap, duplication, and fragmentation provides additional illustrations of the governance challenges decision makers face. We have issued three annual reports on overlap, duplication, and fragmentation across the federal government. These reports provided a comprehensive look at 162 issue areas and identified more than 380 actions that the executive branch and Congress could take to reduce fragmentation, overlap, and duplication, as well as other cost savings and revenue enhancement opportunities. All told, the three reports have covered virtually every major federal agency and program including agriculture, defense, economic development, education, energy, general government, health, homeland security, international affairs, science and the environment, and social services. For example, we reported that a fundamental re-examination and reform of the United States' surface transportation policies is needed and identified a number of principles to help the Congress in re-examining and reforming the nation's surface transportation policies. These principles included ensuring the federal role is defined based on identified areas of national interest and goals, incorporating accountability for results by entities receiving federal funds, employing the best tools and approaches to emphasize return on targeted federal investment, and ensuring fiscal sustainability. Building on those principles, the Moving Ahead for Progress in the 21st Century Act was signed into law in July 2012, and reauthorized the nation's surface transportation programs through the end of fiscal year 2014. 
The law addressed fragmentation in those programs and made progress in addressing the issues we raised, including clarifying federal goals and roles and linking federal programs to performance to ensure accountability for results. Specifically, it incorporated accountability for results around clearly identified national goals, providing the framework for the Department of Transportation and the states to implement this approach in the coming years. However, as we reported in January 2013, Congress needs to develop a long-term plan for funding surface transportation and opportunities exist for a more targeted federal role focused around evident national interests. In addition to identifying new areas, and consistent with the commitment expressed in our prior overlap, duplication, and fragmentation reports, we monitor the progress executive branch agencies and Congress have made in addressing the areas previously identified. GAO's Action Tracker--available on our website--contains the status of the specific suggestions for improvement that we identified in our three annual reports. Overall, the executive branch and Congress have made some progress in addressing the areas that we identified in our 2011 and 2012 annual reports. Specifically, as of March 6, 2013, the date we completed our audit work for our most recent report, about 12 percent of the areas identified in our 2011 and 2012 reports were addressed, 66 percent were partially addressed, and 21 percent were not addressed. More recently, both the administration and Congress have taken additional steps, including proposals in the President's Fiscal Year 2014 Budget. 
The Government Performance and Results Act Modernization Act of 2010: The Government Performance and Results Act Modernization Act of 2010 (the act) seeks, among other things, to instill a more coordinated and crosscutting perspective to federal performance through actions such as requiring the administration to select a set of cross-agency priority goals. For example, goals concerning workforce development, export promotion, and sustainability are among the interim goals that the administration has established. It also requires federal agencies, in setting their own goals, to identify other entities that are involved in achieving those goals. The act requires GAO to assess implementation and results at several key points. Through the use of performance audits, we reported in June 2013 that the executive branch has taken a number of steps to implement key provisions of the act. Nevertheless, our work has shown that the executive branch needs to do more to fully implement and leverage the act's provisions to address governance challenges in several key areas, including:
* The Office of Management and Budget and agencies have made some progress addressing crosscutting issues but are missing additional opportunities.
* Ensuring performance information is useful and used by managers to improve results remains a weakness, but key performance management and program evaluation practices hold promise.
* Agencies have taken steps to align daily operations with agency results but continue to face difficulties measuring performance.
* Communication of performance information could better meet users' needs.
* Agency performance information is not always useful for congressional decision making.
Over the past four decades our experience in incorporating performance auditing into our overall audit approach has enabled us to help address the complexity facing government. 
Performance audits, in addition to financial and compliance audits, have allowed us to meet our mission to support the Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. Performance auditing provides GAO with the tools necessary to provide the oversight, insight, and foresight needed to address issues of today. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I look forward to responding to any questions that you may have. If you or your staffs have any questions about this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues, at +1-202-512-6806 or at [email protected]. Individuals who made key contributions to this testimony are Bill Reinsberg (Assistant Director) and Jon Stehle. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO's mission is to support the U.S. Congress in meeting its constitutional responsibilities and to help improve the performance and ensure the accountability of the federal government for the benefit of the American people. Each year, GAO presents its findings, conclusions, and recommendations in reports and testimony before Congress. In fiscal year 2012, GAO issued more than 650 reports and testified 159 times before various congressional committees. Last fiscal year, financial benefits from GAO's work totaled $55.8 billion, a return of $105 on every dollar invested in GAO. In addition to financial benefits, GAO also documented 1,440 other benefits that shaped legislation, improved services to the public, and strengthened government operations. GAO and its counterparts, such as the European Court of Auditors, have unprecedented opportunities to help our respective governments plan ahead and address increasingly complex issues in meeting the challenges posed by global interconnections and worldwide fiscal difficulties. The experiences of GAO with respect to performance auditing illustrate how audit organizations can help decision makers address these challenges. Performance audits as well as traditional financial and compliance audits are essential tools that national audit offices have to help their respective governments identify and address challenging national and global problems. Performance auditing provides objective analysis so that management and those charged with governance and oversight can use the information to improve program performance and operations, reduce costs, facilitate decision making by parties with responsibility to oversee or initiate corrective action, and contribute to public accountability. Increasingly complex issues, risks, and problems that governments around the world confront are expanding the perspective of performance auditing. 
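The return-on-investment figure above is simple arithmetic. As a hedged illustration, the sketch below derives the investment base implied by the two reported numbers; the implied budget figure is computed, not stated in the testimony:

```python
# Reported FY2012 figures: $55.8 billion in financial benefits,
# at a return of $105 per dollar invested in GAO.
financial_benefits = 55.8e9   # dollars
return_per_dollar = 105       # dollars of benefit per dollar invested

# The investment these two figures jointly imply (a derived number,
# not one stated in the testimony).
implied_budget = financial_benefits / return_per_dollar
print(f"Implied investment: ${implied_budget / 1e6:,.0f} million")
```

This kind of back-of-the-envelope check is how such ratio claims can be sanity-tested against an agency's published budget.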
These issues, risks, and problems, such as modernizing outdated financial regulatory systems and protecting public safety, cut across geographic boundaries, government programs, levels of government, and sectors. As government agencies increasingly rely on collaboration with private and nongovernmental entities and delegate responsibilities for implementing public policy initiatives to these entities, the line between the governmental and the nongovernmental sectors continues to blur. From GAO's experience, performance auditing has a vital role in providing decision makers and citizens with the information, analysis, and recommendations they need to respond to this increasingly complex and interconnected environment. To most effectively contribute to fundamental improvements in the performance of 21st century government, GAO has found that auditors need to be increasingly focused on governance, including assessing and recommending how to improve connections across organizations, levels of government, sectors, and policy tools. While many of GAO's individual engagements examine the challenges a specific agency or program faces, three GAO-wide initiatives--the High Risk program; the annual reports on overlap, duplication, and fragmentation; and the reviews of the implementation of the Government Performance and Results Act Modernization Act of 2010--offer government-wide perspectives on the progress needed to respond effectively to 21st century governance challenges.
On November 19, 2002, pursuant to ATSA, TSA began a 2-year pilot program at 5 airports using private screening companies to screen passengers and checked baggage. In 2004, at the completion of the pilot program, and in accordance with ATSA, TSA established the SPP, whereby any airport authority, whether involved in the pilot or not, could request a transition from federal screeners to private, contracted screeners. All of the 5 pilot airports that applied were approved to continue as part of the SPP, and since its establishment, 21 additional airport applications have been accepted by the SPP. In March 2012, TSA revised the SPP application to reflect requirements of the FAA Modernization Act, enacted in February 2012. Among other provisions, the act provides that, not later than 120 days after the date of receipt of an SPP application submitted by an airport operator, the TSA Administrator must approve or deny the application. The TSA Administrator shall approve an application if approval would not (1) compromise security, (2) detrimentally affect the cost-efficiency of the screening of passengers or property at the airport, or (3) detrimentally affect the effectiveness of the screening of passengers or property at the airport. Within 60 days of a denial, TSA must provide the airport operator, as well as the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Homeland Security of the House of Representatives, a written report that sets forth the findings that served as the basis of the denial, the results of any cost or security analysis conducted in considering the application, and recommendations on how the airport operator can address the reasons for denial. All commercial airports are eligible to apply to the SPP. To apply, an airport operator must complete the SPP application and submit it to the SPP Program Management Office (PMO), as well as to the FSD for its airport, by mail, fax, or e-mail. 
Figure 1 illustrates the SPP application process. Although TSA provides all airports with the opportunity to apply for participation in the SPP, authority to approve or deny the application rests with the TSA Administrator. According to TSA officials, in addition to the cost-efficiency and effectiveness considerations mandated by the FAA Modernization Act, many other factors are weighed in considering an airport's application for SPP participation. For example, the potential impacts of any upcoming projects at the airport are considered. Once an airport is approved for SPP participation and a private screening contractor has been selected by TSA, the contract screening workforce assumes responsibility for screening passengers and their property and is required to adhere to the same security regulations, standard operating procedures, and other TSA security requirements followed by federal screeners at non-SPP airports. Since our December 2012 report, TSA has developed guidance to assist airport operators in completing their SPP applications, as we recommended. In December 2012, we reported that TSA had developed some resources to assist SPP applicants, but it had not provided guidance on its application and approval process to assist airports. As the application process was originally implemented in 2004, the SPP application process required only that an interested airport operator submit an application stating its intention to opt out of federal screening as well as its reasons for wanting to do so. In 2011, TSA revised its SPP application to reflect the "clear and substantial advantage" standard announced by the Administrator in January 2011. Specifically, TSA requested that the applicant explain how private screening at the airport would provide a clear and substantial advantage to TSA's security operations. 
At that time, TSA did not provide written guidance to airports to assist them in understanding what would constitute a "clear and substantial advantage to TSA security operations" or TSA's basis for determining whether an airport had met that standard. As previously noted, in March 2012 TSA again revised the SPP application in accordance with provisions of the FAA Modernization Act, which became law in February 2012. Among other things, the revised application no longer included the "clear and substantial advantage" question, but instead included questions that requested applicants to discuss how participating in the SPP would not compromise security at the airport and to identify potential areas where cost savings or efficiencies may be realized. In December 2012, we reported that while TSA provided general instructions for filling out the SPP application as well as responses to frequently asked questions (FAQ), the agency had not issued guidance to assist airports with completing the revised application or explained to airports how it would evaluate applications given the changes brought about by the FAA Modernization Act. For example, neither the application instructions nor the FAQs addressed TSA's SPP application evaluation process or its basis for determining whether an airport's entry into the SPP would compromise security or affect cost-efficiency and effectiveness. Further, we found that airport operators who completed the applications generally stated that they faced difficulties in doing so and that additional guidance would have been helpful. For example, one operator stated that he needed cost information to help demonstrate that his airport's participation in the SPP would not detrimentally affect the cost-efficiency of the screening of passengers or property at the airport and that he believed not presenting this information would be detrimental to his airport's application. 
However, TSA officials at the time said that airports do not need to provide this information to TSA because, as part of the application evaluation process, TSA conducts a detailed cost analysis using historical cost data from SPP and non-SPP airports. The absence of cost and other information in an individual airport's application, TSA officials noted, would not materially affect the TSA Administrator's decision on an SPP application. Therefore, we reported in December 2012 that while TSA had approved all applications submitted since enactment of the FAA Modernization Act, it was hard to determine how many more airports, if any, would have applied to the program had TSA provided application guidance and information to improve transparency of the SPP application process. Specifically, we reported that in the absence of such application guidance and information, it may be difficult for airport officials to evaluate whether their airports are good candidates for the SPP or determine what criteria TSA uses to accept and approve airports' SPP applications. Further, we concluded that clear guidance for applying to the SPP could improve the transparency of the application process and help ensure that the existing application process is implemented in a consistent and uniform manner. Thus, we recommended that TSA develop guidance that clearly (1) states the criteria and process that TSA is using to assess whether participation in the SPP would compromise security or detrimentally affect the cost-efficiency or the effectiveness of the screening of passengers or property at the airport, (2) states how TSA will obtain and analyze cost information regarding screening cost-efficiency and effectiveness and the implications of not responding to the related application questions, and (3) provides specific examples of additional information airports should consider providing to TSA to help assess an airport's suitability for the SPP. 
TSA concurred with our recommendation and has taken actions to address it. Specifically, TSA updated its SPP website in December 2012 by providing (1) general guidance to assist airports with completing the SPP application and (2) a description of the criteria and process the agency will use to assess airports' applications to participate in the SPP. While the guidance states that TSA has no specific expectations of the information an airport could provide that may be pertinent to its application, it provides some examples of information TSA has found useful and that airports could consider providing to TSA to help assess their suitability for the program. Further, the guidance, in combination with the description of the SPP application evaluation process, outlines how TSA plans to analyze and use cost information regarding screening cost-efficiency and effectiveness. The guidance also states that providing cost information is optional and that not providing such information will not affect the application decision. We believe that these actions address the intent of our recommendation and should help improve transparency of the SPP application process as well as help airport officials determine whether their airports are good candidates for the SPP. In our December 2012 report, we analyzed screener performance data for four measures and found that there were differences in performance between SPP and non-SPP airports, and those differences could not be exclusively attributed to the use of either federal or private screeners. The four measures we selected to compare screener performance at SPP and non-SPP airports were Threat Image Projection (TIP) detection rates, recertification pass rates, Aviation Security Assessment Program (ASAP) test results, and Presence, Advisement, Communication, and Execution (PACE) evaluation results (see table 1). 
For each of these four measures, we compared the performance of each of the 16 airports then participating in the SPP with the average performance for each airport's category (X, I, II, III, or IV), as well as the national performance averages for all airports for fiscal years 2009 through 2011. As we reported in December 2012, on the basis of our analyses, we found that, generally, certain SPP airports performed slightly above the airport category and national averages for some measures, while others performed slightly below. For example, SPP airports performed above their respective airport category averages for recertification pass rates in the majority of instances, while the majority of SPP airports that took PACE evaluations in 2011 performed below their airport category averages. For TIP detection rates, SPP airports performed above their respective airport category averages in about half of the instances. However, we also reported in December 2012 that the differences we observed in private and federal screener performance cannot be entirely attributed to the type of screeners at an airport, because, according to TSA officials and other subject matter experts, many factors, some of which cannot be controlled for, affect screener performance. These factors include, but are not limited to, checkpoint layout, airline schedules, seasonal changes in travel volume, and type of traveler. We also reported in December 2012 that TSA collects data on several other performance measures but, for various reasons, the data cannot be used to compare private and federal screener performance for the purposes of our review. For example, passenger wait time data could not be used because we found that TSA's policy for collecting wait times changed during the time period of our analyses and that these data were not collected in a consistent manner across all airports. 
We also considered reviewing human capital measures such as attrition, absenteeism, and injury rates, but did not analyze these data because TSA's Office of Human Capital does not collect these data for SPP airports. We reported that while the contractors collect and report this information to the SPP PMO, TSA does not validate the accuracy of the self-reported data nor does it require contractors to use the same human capital measures as TSA, and accordingly, differences may exist in how the metrics are defined and how the data are collected. Therefore, we found that TSA could not guarantee that a comparison of SPP and non-SPP airports on these human capital metrics would be an equal comparison. Since our December 2012 report, TSA has developed a mechanism to regularly monitor private versus federal screener performance, as we recommended. In December 2012, we reported that while TSA monitored screener performance at all airports, the agency did not monitor private screener performance separately from federal screener performance or conduct regular reviews comparing the performance of SPP and non-SPP airports. Beginning in April 2012, TSA introduced a new set of performance measures to assess screener performance at all airports (both SPP and non-SPP) in its Office of Security Operations Executive Scorecard (the Scorecard). Officials told us at the time of our December 2012 review that they provided the Scorecard to FSDs every 2 weeks to assist the FSDs with tracking performance against stated goals and with determining how performance of the airports under their jurisdiction compared with national averages. According to TSA, the 10 measures used in the Scorecard were selected based on input from FSDs and regional directors on the performance measures that most adequately reflected screener and airport performance. 
Performance measures in the Scorecard included the TIP detection rate and the number of negative and positive customer contacts made to the TSA Contact Center through e-mails or phone calls per 100,000 passengers screened, among others. We also reported in December 2012 that TSA had conducted or commissioned prior reports comparing the cost and performance of SPP and non-SPP airports. For example, in 2004 and 2007, TSA commissioned reports prepared by private consultants, while in 2008 the agency issued its own report comparing the performance of SPP and non-SPP airports. Generally, these reports found that SPP airports performed at a level equal to or better than non-SPP airports. However, TSA officials stated at the time that they did not plan to conduct similar analyses in the future, and instead, they were using across-the-board mechanisms of both private and federal screeners, such as the Scorecard, to assess screener performance across all commercial airports. In addition to using the Scorecard, we found that TSA conducted monthly contractor performance management reviews (PMR) at each SPP airport to assess the contractor's performance against the standards set in each SPP contract. The PMRs included 10 performance measures, including some of the same measures included in the Scorecard, such as TIP detection rates and recertification pass rates, for which TSA establishes acceptable quality levels of performance. Failure to meet the acceptable quality levels of performance could result in corrective actions or termination of the contract. However, as we reported in December 2012, the Scorecard and PMR did not provide a complete picture of screener performance at SPP airports because, while both mechanisms provided a snapshot of private screener performance at each SPP airport, this information was not summarized for the SPP as a whole or across years, which made it difficult to identify changes in performance. 
Further, neither the Scorecard nor the PMR provided information on performance in prior years or controlled for variables that TSA officials explained to us were important when comparing private and federal screener performance, such as the type of X-ray machine used for TIP detection rates. We concluded that monitoring private screener performance in comparison with federal screener performance was consistent with the statutory requirement that TSA enter into a contract with a private screening company only if the Administrator determines and certifies to Congress that the level of screening services and protection provided at an airport under a contract will be equal to or greater than the level that would be provided at the airport by federal government personnel. Therefore, we recommended that TSA develop a mechanism to regularly monitor private versus federal screener performance, which would better position the agency to know whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. TSA concurred with our recommendation, and has taken actions to address it. Specifically, in January 2013, TSA issued its first SPP Annual Report. The report highlights the accomplishments of the SPP during fiscal year 2012 and provides an overview and discussion of private versus federal screener cost and performance. The report also describes the criteria TSA used to select certain performance measures and reasons why other measures were not selected for its comparison of private and federal screener performance. The report compares the performance of SPP airports with the average performance of airports in their respective category, as well as the average performance for all airports, for three performance measures: TIP detection rates, recertification pass rates, and PACE evaluation results. 
Further, in September 2013, the TSA Assistant Administrator for Security Operations signed an operations directive that provides internal guidance for preparing the SPP Annual Report, including the requirement that the SPP PMO must annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. We believe that these actions address the intent of our recommendation and should better position TSA to determine whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. Further, these actions could also assist TSA in identifying performance changes that could lead to improvements in the program and inform decision making regarding potential expansion of the SPP. Chairman Mica, Ranking Member Connolly, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Glenn Davis (Assistant Director), Stanley Kostyla, Brendan Kretzschmar, Thomas Lombardi, Erin O'Brien, and Jessica Orr. Key contributors for the previous work that this testimony is based on are listed in the product.
TSA maintains a federal workforce to screen passengers and baggage at the majority of the nation's commercial airports, but it also oversees a workforce of private screeners at airports that participate in the SPP. The SPP allows commercial airports to apply to have screening performed by private screeners, who are to provide a level of screening services and protection that equals or exceeds that of federal screeners. In recent years, TSA's SPP has evolved to incorporate changes in policy and federal law, prompting heightened interest in measuring screener performance. This testimony addresses the extent to which TSA (1) has provided guidance to airport operators for the SPP application process and (2) assesses and monitors the performance of private and federal screeners. This statement is based on a report GAO issued in December 2012 and selected updates conducted in January 2014. To conduct the selected updates, GAO reviewed documentation, such as the SPP Annual Report issued in January 2013, and interviewed agency officials on the status of implementing GAO's recommendations. Since GAO reported on this issue in December 2012, the Transportation Security Administration (TSA) has developed application guidance for airport operators applying to the Screening Partnership Program (SPP). In December 2012, GAO reported that TSA had not provided guidance to airport operators on its application and approval process, which had been revised to reflect requirements in the Federal Aviation Administration Modernization and Reform Act of 2012. Further, airport operators GAO interviewed at the time generally stated that they faced difficulties completing the revised application, such as how to obtain cost information. Therefore, GAO recommended that TSA develop application guidance, and TSA concurred. 
To address GAO's recommendation, TSA updated its SPP website in December 2012 by providing general application guidance and a description of the criteria and process the agency uses to assess airports' SPP applications. The guidance provides examples of information that airports could consider providing to TSA to help assess their suitability for the program and also outlines how the agency will analyze cost information. The new guidance addresses the intent of GAO's recommendation and should help improve transparency of the SPP application process as well as help airport operators determine whether their airports are good candidates for the SPP. TSA has also developed a mechanism to regularly monitor private versus federal screener performance. In December 2012, GAO found differences in performance between SPP and non-SPP airports based on its analysis of screener performance data. However, while TSA had conducted or commissioned prior reports comparing the performance of SPP and non-SPP airports, TSA officials stated at the time that they did not plan to conduct similar analyses in the future and were instead using across-the-board mechanisms to assess screener performance across all commercial airports. In December 2012, GAO found that these across-the-board mechanisms did not summarize information for the SPP as a whole or across years, which made it difficult to identify changes in private screener performance. GAO concluded that monitoring private screener performance in comparison with federal screener performance was consistent with the statutory provision authorizing TSA to enter into contracts with private screening companies and recommended that TSA develop a mechanism to regularly monitor private versus federal screener performance. TSA concurred with the recommendation. To address GAO's recommendation, in January 2013, TSA issued its first SPP Annual Report, which provides an analysis of private versus federal screener performance. 
Further, in September 2013, a TSA Assistant Administrator signed an operations directive that provides internal guidance for preparing the SPP Annual Report, including the requirement that the report annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. These actions address the intent of GAO's recommendation and could assist TSA in identifying performance changes that could lead to improvements in the program. GAO is making no new recommendations in this statement.
The value of DOD inventory requirements needed to support acquisition leadtime grew from about $8 billion in 1979 to about $21 billion in 1989. Recognizing that excessively long acquisition leadtime was a major contributor to the large growth in defense inventories in the 1980s, in May 1990 DOD directed the military services and DLA to take a number of initiatives to reduce acquisition leadtime as a part of a 10-point Inventory Reduction Plan. The recommended initiatives included (1) establishing procurement leadtime reduction goals, (2) shortening production leadtimes by gradually reducing the required delivery dates in contract solicitations, and (3) expanding multiyear contracting and indefinite quantity requirements contracts. Similar policy guidance for reducing acquisition leadtime, except for establishing reduction goals, was included in DOD Material Management Regulation 4140.1-R, dated January 1993. The leadtime reduction initiatives were based on a December 1986 DOD memorandum that included the recommendations of a study performed for DOD by the Logistics Management Institute. The DOD memorandum and the Institute study showed that a 25-percent reduction in leadtime was achievable by adopting methods proven successful in the private sector. In stressing the significance of the initiatives, DOD commented that for each day the DOD-wide average leadtime is reduced, future purchases can be reduced by $10 million. Since 1990, DOD has had only limited success in achieving the 25-percent reduction indicated by the study. As shown in table 1, DOD's average leadtime decreased by about 9 percent. On the basis of DOD's estimate that $10 million can be saved for each day the average leadtime is reduced, the 56-day leadtime reduction resulted in procurement savings of $560 million. A further leadtime reduction of 91 days will be needed to achieve the 25-percent reduction indicated by the study. Such a reduction would result in additional procurement savings of $910 million. 
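The savings figures above follow directly from DOD's per-day estimate. A minimal sketch of that arithmetic, using only the numbers reported in the text:

```python
# DOD's estimate: each day of average leadtime reduction avoids
# $10 million in future purchases.
SAVINGS_PER_DAY = 10_000_000  # dollars per day of leadtime reduction

days_reduced = 56    # reduction achieved from 1990 to 1994
days_remaining = 91  # further reduction needed to reach the 25-percent target

achieved = days_reduced * SAVINGS_PER_DAY
additional = days_remaining * SAVINGS_PER_DAY
print(f"Savings realized: ${achieved / 1e6:,.0f} million")        # $560 million
print(f"Additional potential: ${additional / 1e6:,.0f} million")  # $910 million
```

The same multiplication scales to any proposed reduction target, which is presumably why DOD framed the estimate on a per-day basis.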
None of the DOD components have fully implemented DOD's 1990 leadtime reduction initiatives or its 1993 policy guidance for reducing leadtime, but some have made greater efforts than others. As shown in table 1, the Navy had the greatest success and the Air Force had the least success in reducing acquisition leadtime. From 1990 to 1994, the Navy reduced the overall average acquisition leadtime by 193 days, or about 27 percent. This was accomplished by a number of actions. In accordance with DOD initiatives, the Navy first established a leadtime reduction goal of 25 percent. The Navy then had the inventory control points reduce the leadtimes shown in their databases by 25 percent for each item managed. Finally, the Navy took aggressive action over the next 4 years to shorten required delivery dates in contract solicitations and negotiations. From 1990 to 1994, the Army's average acquisition leadtime decreased by 21 days, or about 3 percent. Unlike the Navy, the Army did not establish a leadtime reduction goal, nor did it take action to obtain leadtime reductions through contract solicitations and negotiations. Instead, the Army emphasized another of DOD's initiatives to reduce leadtime by using more flexible procurement methods such as multiyear procurements and indefinite quantity type contracts. According to Army officials, quantities for follow-on years can be easily added to multiyear and indefinite quantity type contracts, which will reduce administrative leadtime to a matter of days instead of months. Also, delays in starting up production are minimized. As an example of the impact of these types of contracts, in 1993 the Army reported that a 3-year vehicle roadwheel purchase by the Tank-Automotive Command reduced acquisition leadtime by 13 months (7 months' administrative and 6 months' production) resulting in a savings of about $19 million. 
Similarly, by using an indefinite quantity type contract to purchase sprockets, this command reduced acquisition leadtime by 15 months and saved about $5 million. From 1990 to 1994, the Air Force's average acquisition leadtime increased by 6 days, or about 1 percent. The Air Force did not implement DOD's 1990 leadtime reduction initiatives because it felt that no action was needed to reduce leadtime based on a comparison with the leadtimes of the Navy. The Air Force delayed implementation of the initiatives pending an evaluation of the Navy's reported success in achieving a 25-percent decrease in production leadtime without degrading mission support. In its evaluation, the Air Force compared aviation data due to the similarity of parts. On the basis of this evaluation, which was completed in December 1993, the Air Force concluded that its production leadtimes for both repairable and consumable aviation parts were lower than the Navy's leadtimes, even after the 25-percent reduction. The Air Force, therefore, concluded that no action was needed to reduce production leadtime. We analyzed and compared leadtime data for the Air Force and the Navy as shown on their latest available inventory stratification reports of March 31, 1993, and September 30, 1993, respectively. We found that the Air Force's production leadtime was lower for consumable parts, but considerably higher for repairable parts. The Air Force's average production leadtime for repairable parts of 596 days was 176 days, or about 42 percent, higher than the Navy's leadtime of 420 days. Also, the Air Force's overall average acquisition leadtime of 818 days for repairable parts was 299 days, or 58 percent, higher than the Navy's acquisition leadtime of 519 days. From 1990 to 1994, DLA's average acquisition leadtime decreased by 16 days, or about 5 percent. 
DLA did not establish a leadtime reduction goal or attempt to reduce leadtime through contract solicitations and negotiations, as recommended by DOD's leadtime reduction initiatives. Instead, DLA concentrated on various initiatives to automate the procurement source selection process and on increased use of long-term contracting techniques, such as indefinite quantity type contracts. As the result of a study by its supply centers that identified the potential for shorter leadtimes for high dollar, high demand, long leadtime items, in February 1994 DLA drafted proposed policy guidance for implementing acquisition leadtime reduction initiatives. The proposed policy would require the supply centers to reduce leadtime by 30 percent over a 2-year period from a base of fiscal year 1992 (a reduction of 86 days). To accomplish this reduction, the supply centers would request shorter delivery times in contract solicitations, consider shorter production leadtimes as a factor in competitive bid evaluations, and periodically validate and update production leadtimes through market surveys. As of October 1994, DLA had not implemented the proposed policy, pending its decision to incorporate the policy as a part of a broader business plan it was developing. With the exception of the Navy, the military services and DLA placed no timely emphasis on the effective implementation of DOD's 1990 leadtime reduction initiatives or its 1993 leadtime reduction policy. Also, DOD was not aware of the general lack of progress made over the past 4 years in reducing leadtime because of an absence of adequate oversight information. The Navy's success in reducing leadtime by 27 percent in comparison to the limited progress made by the other DOD components shows that DOD can benefit by placing renewed emphasis on effective implementation of the leadtime reduction initiatives. 
One way would be to focus on the Navy's success in establishing a 25-percent reduction goal and achieving that goal by taking aggressive action to reduce production leadtime in contract solicitations and negotiations. DOD was not aware of the general lack of progress in implementing the initiatives because the annual progress reports required of the military services and DLA did not provide sufficient oversight information to make a meaningful assessment. The reports did not show historical trends in leadtime days before and after the 1990 initiatives. Also, the reports did not provide any meaningful statistics showing the extent of implementation. For example, Army and DLA reports stated that an expansion of multiyear procurements was a primary means of reducing leadtime, but the reports did not provide statistics showing the extent of the expansion. We identified additional opportunities for significant reductions in acquisition leadtime that were overlooked by the DOD initiatives. These opportunities are having inventory management activities (1) periodically validate recorded leadtime data, (2) work closely with major contractors to update old leadtime data for items with long production leadtimes (e.g., over 18 months), and (3) consider potential reductions in leadtime as a factor in deciding whether to purchase spare parts through the prime contractor or directly from the actual manufacturer. We reviewed the accuracy of acquisition leadtimes at the Air Force's Oklahoma City and San Antonio Air Logistics Centers and the Army's Aviation and Troop Command and found that the Army's leadtimes were more accurate. The Army command had a higher accuracy rate than the centers because it had recently worked closely with eight major contractors to update production leadtimes for all items with leadtimes of 18 months or longer. As a result, leadtime changes were made for 1,129 items, or 75 percent of the items reviewed. 
Leadtime decreases accounted for 1,061, or 94 percent of the changes. The command estimated net annual procurement savings of $88 million from using updated leadtimes to compute buy requirements. Although the Army command reduced leadtimes, our review still identified inaccuracies. We tested 26 items and found that the leadtimes for 5 items, or 19 percent, were inaccurate. For example, in July 1994 the Aviation and Troop Command used an administrative leadtime of 9 months in the requirement computation for a rotor blade tip used on the UH-60 Black Hawk helicopter (NSN 1560-01-331-3845). However, procurement history records showed that the administrative leadtime required to process the last two purchases was only 2 months. The item manager told us that the 9-month administrative leadtime was based on the time it took to award a multiyear contract and that the 2 months' administrative leadtime represented the time it took to place orders against the contract. The 2-month administrative leadtime should have been used in making purchasing decisions because it represents the actual ordering time to acquire additional parts once a multiyear contract is awarded. Command officials agreed that an adjustment should be made in the requirements system for the reduced leadtime. The two Air Force air logistics centers had a higher percentage of leadtime inaccuracies than the Army command. We reviewed the accuracy of acquisition leadtimes for 106 items and found that leadtimes for 53 items, or 50 percent, were inaccurate, resulting in overstated requirements of $7.3 million. These inaccuracies resulted from the failure to periodically validate and update leadtime data in the requirement computation database. The following examples illustrate the leadtime inaccuracies found. In November 1993, the Oklahoma City Air Logistics Center was using a production leadtime of 44 months in the requirement computation for a circuit card used on the B-2 bomber (NSN 5998-01-262-8124FW). 
Procurement history records showed that the 44 months was based on information provided by the contractor in July 1991. We asked center officials to contact the contractor to verify the accuracy of the leadtime. According to the officials, the contractor stated that the 44-month leadtime was outdated and quoted a current leadtime of 25 months. The 19-month reduction in production leadtime caused the value of requirements for this item to be reduced by $69,962. The circuit card is one of six B-2 bomber sample items with old and long leadtimes that the contractor updated. As a result, the Oklahoma City Air Logistics Center reduced leadtimes by an average of 14 months for five items, thus deferring future purchases. In another case, the San Antonio Air Logistics Center was using an acquisition leadtime of 100 months in the requirement computation for a signal generator used on the F-15 aircraft (NSN 6625-01-051-6832DQ). In response to our inquiries, the item manager said a keypunch error had occurred in March 1993 during file maintenance and corrected the acquisition leadtime to 38 months. Correcting the leadtime reduced the value of requirements and budget estimates for this item by $408,857. DOD promotes the purchase of spare parts from actual manufacturers rather than from prime contractors as a way to increase competition. This process is called spare parts breakout and is recognized as an effective means of achieving price reductions. Spare parts breakout has the added benefit of reducing acquisition leadtime by eliminating the processing time that a prime contractor adds for passing an order to the actual manufacturer. As part of the inventory reduction plan initiatives, the Army undertook a major program to break out spare parts from the prime contractor for direct purchase from the actual manufacturer. 
Although the intent of this program was to bring about procurement economies through elimination of middleman profits, the program also contributed to a reduction in procurement leadtime. In the 1993 progress report on inventory reductions, the Army reported that the inventory commands had screened about 12,000 items for breakout in fiscal year 1992 and identified approximately 6,000 items for breakout from the prime contractor. At the Aviation and Troop Command, for example, the purchase of spare parts for the Black Hawk helicopter had been almost completely broken out. The program manager told us that in his experience production leadtime always goes down, often by half, when a spare part is broken out for direct purchase from the actual manufacturer. Additional opportunities to buy directly from manufacturers continue to exist. For example, in response to our inquiries on six sample items managed by the Air Force's Oklahoma City Air Logistics Center, the prime contractor for the B-2 bomber advised the center that it was not the actual manufacturer for five of the six items. The contractor stated that it added 5 months' leadtime to process the Air Force's order to the actual manufacturer. Center officials agreed that the leadtime to acquire these items could be reduced simply by buying from the actual manufacturer instead of from the prime contractor and informed us that the next purchases would be made directly from the manufacturer. We recommend that the Secretary of Defense direct the Secretaries of the Army and the Air Force and the Director of DLA to place renewed emphasis on implementing the DOD leadtime reduction initiatives and to improve oversight information reported to DOD so that the progress being achieved can be measured. In doing so, we recommend that the other military services and DLA follow the Navy's lead in setting a leadtime reduction goal and achieving this goal through contract solicitations and negotiations. 
We also recommend that the Secretary of Defense direct the Secretaries of the Army, the Navy, and the Air Force and the Director of DLA to have their inventory management activities periodically validate recorded leadtime data to detect and correct errors, work closely with major contractors in updating old leadtime data for items with long production leadtimes (e.g., over 18 months), and consider potential leadtime reductions as a factor in evaluating the feasibility of buying directly from manufacturers instead of from prime contractors. DOD agreed that further action to reduce acquisition leadtimes is required (see app. I). However, DOD views full implementation of the policy guidance on methods of reducing leadtimes included in DOD Material Management Regulation 4140.1-R, dated January 1993, as the most effective means to accomplish this reduction. DOD stated that the military services and DLA would be reminded of the need to fully implement that guidance. In a November 23, 1994, memorandum to the military services and DLA, DOD stated that renewed emphasis on acquisition leadtime reduction was appropriate. The memorandum stated that while the greatest emphasis should be placed on full implementation of the guidance in the DOD regulation, such as gradually reducing required delivery dates in solicitations, consideration should be given to the usefulness of leadtime reduction goals and the importance of periodically validating recorded leadtime data. The memorandum also stated that full implementation of the spare parts breakout program could help reduce leadtime and that contractor furnished data could be a useful source of information in validating leadtime data. DOD asked to be advised of the actions taken to reduce leadtimes by February 15, 1995. 
With regard to our reference to additional savings of $910 million from further leadtime reductions leading to a DOD-wide average reduction of 25 percent, DOD commented that the Secretary of Defense issued a memorandum dated September 14, 1994, that challenges DOD components to reduce business-process cycle times by at least 50 percent by the year 2000. DOD stated further that application of this challenge to acquisition leadtime will include an estimate of possible savings. While DOD's actions are constructive, we do not believe that relying on the military services and DLA to fully implement the January 1993 policy guidance is the most effective means of achieving a 25-percent reduction in acquisition leadtime. The guidance already has been in effect for almost 2 years, and our report points out that only the Navy has been successful in reducing leadtime by 25 percent since 1990. At that time, DOD directed the military services and DLA to take a number of initiatives to reduce acquisition leadtime that are similar to those in the January 1993 guidance. Also, the guidance does not contain a leadtime reduction goal. Furthermore, we believe that improved oversight is needed if leadtime reductions are to be achieved. DOD's comments do not address this part of our recommendation and the January 1993 guidance does not require the military services and DLA to provide DOD with oversight information on their progress in reducing leadtimes. Also, DOD no longer requires annual reports from the military services and DLA showing their progress in implementing the 1990 inventory reduction plan. Alternative means are available for providing DOD with oversight information. One way would be to require that the military services and DLA include leadtime data in their annual Defense Business Operations Fund budget submissions to DOD. 
These submissions could show the progress being made in achieving a 25-percent reduction in acquisition leadtime, using fiscal year 1990 as the base year for measuring progress. To evaluate the effectiveness of DOD's leadtime reduction initiatives, we held discussions and collected information at headquarters of DOD, Army, Navy, Air Force, and DLA, Washington, D.C.; the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the San Antonio Air Logistics Center, Kelly Air Force Base, Texas; and the Army Aviation and Troop Command, St. Louis, Missouri. We reviewed DOD guidance and initiatives for managing acquisition leadtimes and the implementing policies, procedures, and practices of the military services and DLA. To determine if additional leadtime reduction opportunities exist, we obtained computer tapes from the Air Force and the Army that identified acquisition leadtimes for all spare parts managed by the two Air Force air logistics centers and the Army command as of March 31, 1993. From data extracted from the tapes, we selected 106 Air Force items and 26 Army items for review. These items represented a mix of items either planned to be bought in fiscal year 1995 or having long leadtimes of more than 50 months. We compared leadtime estimates used in requirement computations to leadtimes actually experienced and other leadtime information in item manager files. We selected Air Force and Army locations for detailed review because of their large acquisition leadtime requirements. We used the same computer programs, reports, records, and statistics DOD, the military services, and DLA use to manage inventories, make decisions, and determine requirements. We did not independently determine the reliability of all of these sources. However, as stated above, we did assess the accuracy of the leadtime information by comparing data contained in the requirements system with data contained in item manager files. 
We performed our review between October 1993 and August 1994 in accordance with generally accepted government auditing standards. As you know, the head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on our recommendations to the House Committee on Government Operations and the Senate Committee on Governmental Affairs not later than 60 days after the date of this report. A written statement must also be submitted to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Chairmen and Ranking Minority Members, Senate and House Committees on Appropriations and on Armed Services, Senate Committee on Governmental Affairs, and House Committee on Government Operations; the Secretaries of the Army, the Navy, and the Air Force; the Director, DLA; and the Director, Office of Management and Budget. Please contact me at (202) 512-5140 if you have any questions. The major contributors to this report are listed in appendix II. The following are GAO's comments on the Department of Defense's (DOD) letter dated November 22, 1994. 1. We revised page 2 in accordance with DOD's suggestions. 2. We revised page 2 as suggested by DOD. 3. We revised page 4 to address DOD's concern. 4. We added references to DOD's policy guidance on reducing leadtime, as set forth in DOD Regulation 4140.1-R, dated January 1993, on page 2. 5. We changed "inventory managers" to "inventory management activities" on pages 5 and 8, as suggested by DOD. Roger Tomlinson, Evaluator-in-Charge Bonnie Carter, Evaluator Rebecca Pierce, Evaluator The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO reviewed the Department of Defense's (DOD) efforts to reduce acquisition leadtimes. GAO found that: (1) DOD has made only limited progress in reducing its acquisition leadtimes because the military services and the Defense Logistics Agency (DLA) have unevenly implemented the leadtime reduction initiatives; (2) no DOD agency has fully implemented the 1990 initiatives or the 1993 policy guidance for reducing leadtimes; (3) the Navy has been the most successful and the Air Force the least successful in reducing acquisition leadtimes; (4) additional leadtime reductions can be achieved by prompt implementation of DOD initiatives, periodic validation and updating of leadtime data, and purchasing spare parts directly from the original manufacturer; and (5) DOD could reduce costs by $1 billion over a 4-year period by reducing acquisition leadtimes.
The United States and many of its trading partners have established laws to remedy the unfair trade practices of other countries and foreign companies that cause injury to domestic industries. U.S. law authorizes the imposition of AD/CV duties to remedy these unfair trade practices, namely dumping (i.e., sales at less than normal value) and foreign government subsidies. The U.S. AD/CV duty system is retrospective, in that importers pay estimated AD/CV duties at the time of importation, but the final amount of duties is not determined until later. By contrast, other major U.S. trading partners have AD/CV duty systems that, although different from one another, are fundamentally prospective in that AD/CV duties assessed at the time a product enters the country are essentially treated as final. Two key U.S. agencies are involved in assessing and collecting AD/CV duties owed. The Department of Commerce (Commerce) is responsible for calculating the appropriate AD/CV duty rate, which it issues in an AD/CV duty order. Commerce typically determines two types of AD/CV duty rates in the course of an initial AD/CV duty investigation on a product: a rate applicable to a product associated with several specific manufacturers and exporters, as well as an "all others" rate for all other manufacturers and exporters of the product who were not individually investigated. After the initial AD/CV duty investigation, Commerce can often conduct two subsequent types of review: administrative and new shipper. Administrative review: One year after the initial rate is established, Commerce can also conduct a review to determine the actual, rather than estimated, level of dumping or subsidization. At the conclusion of the administrative review, the final duty rate, also known as the liquidation rate, is established for the product. 
New shipper review: After an initial rate is established, a new shipper (i.e., a shipper who has not previously exported the product to the United States during the initial period of investigation and is not affiliated with any exporter who exported the subject merchandise) who is subject to the "all others" rate can request that Commerce conduct a review to establish the shipper's own individual AD/CV duty rate. U.S. Customs and Border Protection (CBP), part of the Department of Homeland Security, is responsible for collecting the AD/CV duties. The initial AD/CV duty order issued by Commerce instructs CBP to collect cash deposits at the time of importation on the products subject to the order. Once Commerce establishes a final duty rate, it communicates the rate to CBP through liquidation instructions, and CBP instructs staff at each port of entry to assess final duties on all relevant products (technically called liquidating). This may result in providing importers-- who are responsible for paying all duties, taxes, and fees on products brought into the United States--with a refund or sending an additional bill. CBP is also responsible for setting the formula for establishing the bond amounts that importers must pay. To ensure payment of unforeseen obligations to the government, all importers are required to post a security, usually a general obligation bond, when they import products into the United States. This bond is an insurance policy protecting the U.S. government against revenue loss if an importer defaults on its financial obligations. In general, the importer is required to obtain a bond equal to 10 percent of the amount the importer was assessed in duties, taxes, and fees over the preceding year (or $50,000, whichever is greater). In addition, importers purchasing from the new shipper can pay estimated AD/CV duties by providing a bond in lieu of paying cash to cover the duties--an option known as the new shipper bonding privilege. 
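The continuous-bond rule described above (10 percent of the prior year's duties, taxes, and fees, or $50,000, whichever is greater) reduces to a simple maximum. A minimal sketch, using whole-dollar amounts and a hypothetical `required_bond` helper:

```python
BOND_FLOOR = 50_000  # statutory minimum continuous-bond amount, in dollars

def required_bond(prior_year_duties: int) -> int:
    """Continuous-bond amount: 10 percent of the prior year's duties,
    taxes, and fees, subject to a $50,000 floor (whole dollars)."""
    return max(prior_year_duties // 10, BOND_FLOOR)

print(required_bond(1_000_000))  # 100000 -- the 10 percent calculation applies
print(required_bond(200_000))    # 50000  -- the $50,000 floor applies
```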
We previously reported that over $613 million in AD/CV duties from fiscal years 2001 through 2007 went uncollected, with the uncollected duties highly concentrated among a few industries, products, countries of origin, and importers. Recent CBP data indicate that uncollected duties from fiscal year 2001 to 2010 have grown to over $1 billion and are still highly concentrated. For example, according to CBP, five products from China account for 84 percent of uncollected duties. CBP, Congress, and Commerce have undertaken several initiatives to address the problem of uncollected AD/CV duties. However, these initiatives have not resolved the problems associated with collections. In response to the problems of collecting AD/CV duties, in July 2004, CBP announced a revision to bonds covering certain imports subject to these duties, significantly increasing the value of bonds required of importers. CBP's goal was to increase protection for securing AD/CV duty revenue for certain imports when the final amount of duties owed exceeds the amount paid at the time of importation, without imposing an "excessive burden" on importers. In February 2005, CBP applied this revision to imports of shrimp from six countries as a test case, which covered a potential increase in the final AD duty rate of up to 85 percent from the initial rate. However, shrimp importers reported that the costs were substantial because they had to pay up front higher premiums and larger collateral requirements to obtain the bonds for the initial duties. These increased up-front costs can deter malfeasance by illegitimate importers by increasing the cost of importing merchandise subject to AD/CV duties, but may also impose costs on legitimate importers that pose little risk of failing to pay retrospective AD/CV duties. The enhanced bonding requirement was subject to domestic and World Trade Organization (WTO) litigation, and CBP decided to terminate the requirement in April 2009. 
Congress partially addressed the risk that CBP would not be able to collect AD/CV duties from new shippers by suspending the new shipper bonding privilege from August 2006 to July 2009. As a result, importers purchasing from new shippers were required to post a cash deposit for estimated AD/CV duties, like all other importers. This requirement eliminated the risk of uncollected AD/CV revenues when the final duty amounts were assessed at the cash deposit rate or less because CBP did not have to issue a bill for the bonded amount. Upon the July 2009 expiration of the requirement, the new shipper bonding privilege was reinstated. The Treasury stated in a 2008 report to Congress that the added risk associated with the bond compared with the cash deposit is low. Commerce has taken steps to improve the transmission of liquidation instructions to CBP, which should improve CBP's ability to liquidate AD/CV duties in a timely manner. Once Commerce determines the final AD/CV duty, it publishes a notice in the Federal Register, and CBP has 6 months to complete the liquidation process. If CBP fails to complete the liquidation process within 6 months, an entry is "deemed liquidated" at the rate asserted by the importer at the time of entry. Once an entry has been deemed liquidated, CBP cannot attempt to collect any supplemental additional duties that might have been owed because of an increase in the AD/CV duty rate from initial to final. Commerce's liquidation instructions are necessary for CBP to assess and collect the appropriate amount of AD/CV duties in a timely manner. However, we reported in 2008 that there were frequent delays in Commerce's transmission of liquidation instructions to CBP, and that about 80 percent of the time, Commerce failed to send liquidation instructions within its self-imposed 15-day deadline. In addition, we found that Commerce's liquidation instructions were sometimes unclear, thereby causing CBP to take extra time to obtain clarification. 
In December 2007, after we made Commerce officials aware of the untimely liquidation instructions, Commerce announced a plan for tracking timeliness, including a quarterly reporting requirement. In April 2011 Commerce officials told us that Commerce had deployed a system for tracking Commerce's liquidation instructions. In addition, Commerce and CBP established a mechanism for CBP port personnel to submit questions to Commerce regarding liquidation issues. The House and Senate Appropriations Committees directed us to examine whether international agreements to which the United States is a party could be strengthened to improve the collection of AD/CV duties from importers with no attachable assets in the United States. We reported in 2008 that U.S. agency officials believed this would be both difficult and ineffective because of two key obstacles: Few countries are willing to enter into negotiations, and U.S. and foreign governments have a practice of not enforcing a revenue claim based upon the revenue laws of another country. In addition, agency officials stated that strengthening international agreements would not substantially improve the collection of AD/CV duties, given the retrospective nature of the AD/CV duty system and the high cost of litigation. There are two key components of the U.S. AD/CV duty system that have not been addressed but could improve the collection of AD/CV duties: the retrospective nature of the system and the new shipper review process. In addition, Commerce and CBP are contemplating changes to the bonding process. One key component of the U.S. AD/CV duty system is its unique retrospective nature, which creates risks of uncollected duties both because of time lags and rate changes. As discussed earlier, importers pay the estimated amount of AD/CV duties when products enter the United States, but the final amount of duties owed is not determined until later. 
In 2008, we found that the average time elapsed between entry of goods and liquidation was more than 3 years. The long time lag between the initial entry of a product and the final assessment of duties heightens the risk that the government will be unable to collect the full amount owed, as importers may disappear, cease business operations, or declare bankruptcy. The final amount owed under the retrospective system of the United States can also be substantially more than the original estimate, putting revenue at risk. We reported that, while final AD duty rates are lower than or the same as the estimated duty rates the vast majority of the time, in some cases final duty rates are significantly higher. On the basis of our analysis of more than 6 years of CBP data covering over 900,000 entries subject to AD duties, we found that duty rates went up 16 percent of the time, went down 24 percent of the time, and remained the same 60 percent of the time. When duty rates increased, the median increase was less than 4 percentage points. However, because of some large increases, the average rate increase was 62 percentage points, with some increases greater than 150 to 200 percentage points. The majority of uncollected duty bills over $500,000 are attributed to rate increases greater than 150 percentage points. In our 2008 report, we noted that the advantages and disadvantages of prospective and retrospective AD/CV duty systems differ and depend on specific design features. In prospective AD/CV duty systems, the amount of AD/CV duties paid by the importer at the time of importation is essentially treated as final. This eliminates the risk of being unable to collect AD/CV duties and creates certainty for importers. In a retrospective AD/CV duty system, however, the amount of AD/CV duties owed is not determined until well after the time of importation. 
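The refund-or-bill outcome at liquidation can be sketched as a signed adjustment against the cash deposit. The helper name and dollar figures below are illustrative, with rates expressed in whole percentage points to match the statistics above:

```python
def liquidation_adjustment(entered_value: int,
                           deposit_rate_pct: int,
                           final_rate_pct: int) -> int:
    """Difference between final AD/CV duties and the cash deposit.
    Positive: supplemental bill to the importer; negative: refund."""
    return entered_value * (final_rate_pct - deposit_rate_pct) // 100

# Hypothetical $100,000 entry with a 10-percent cash-deposit rate:
print(liquidation_adjustment(100_000, 10, 62))  # 52000  -- supplemental bill
print(liquidation_adjustment(100_000, 10, 4))   # -6000  -- refund owed
print(liquidation_adjustment(100_000, 10, 10))  # 0      -- deposit was exact
```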
This time lag can result in "bad actors," those importers who intentionally avoid paying required duties, not being identified until they have been importing for a long time. Only after its collection efforts are unsuccessful does the government clearly know that duties owed by such an importer are at serious risk of noncollection. Prospective AD/CV duty systems create a smaller burden for customs officials because the full and final amount of AD/CV duties is assessed at the time of importation, whereas, according to CBP, the retrospective AD/CV duty system of the United States places a unique and significant burden on CBP's resources. Depending on the design of a prospective AD/CV duty system, the amount of duties assessed is based on dumping or subsidization that occurred in a previous period, and therefore may not equal the amount of actual dumping or subsidization, whereas under a retrospective AD/CV duty system, the amount of duties assessed reflects the actual amount of dumping by the exporter for the period of review. However, in practice, a substantial number of retrospective AD/CV duty bills go uncollected. In response to a recommendation in our 2008 report, Commerce reported to Congress in 2010 on the advantages and disadvantages of retrospective and prospective systems. While the Commerce report cites a variety of strengths and weaknesses for both systems, it states that retroactive increases in AD/CV duties are particularly harmful for small businesses such as shrimp and seafood importers. Under a retrospective system, the Commerce report notes, such small U.S. importers potentially face years of uncertainty over duty liability that can hinder their ability to make informed business decisions, plan investments, and create jobs. Another component of the AD/CV duty collection system that has not been resolved is the new shipper review process. This process allows new manufacturers or exporters to petition for their own separate AD/CV duty rate. 
However, U.S. law does not specify a minimum amount of exports or number of transactions that a company must make to be eligible for a new shipper review, and according to Commerce officials, they do not have the legislative authority to create any such requirement. As a result, a shipper can be assigned an individual duty rate based on a minimal amount of exports--as little as one shipment, according to Commerce--and can intentionally set a high price for this small amount of initial exports. This creates the possibility that companies may be able to get a low (or 0 percent) initial duty rate, which will subsequently rise when the exporter lowers its price. This creates additional risk by putting the government in the position of having to collect additional duties in the future rather than at the time of importation. Importers that purchased goods from companies undergoing a new shipper review are responsible for approximately 40 percent of uncollected AD/CV duties. Commerce and CBP have proposed additional changes to the bonding process to try to reduce the risk of uncollected AD/CV duties. In April 2011, Commerce proposed a rule that would eliminate the bond that all shippers post when entering products under an AD/CV investigation and require a cash deposit instead. A key reason for the change is that importers bear full responsibility for future duties, according to Commerce. Separately, in May 2011, CBP's Commissioner of International Trade stated in a Senate hearing that CBP is developing internal guidance to require that importers at risk of evasion take out onetime bonds that cover at least the full value of the shipment (single-transaction bonds). Currently, shippers typically take out a "continuous bond" that covers all import transactions over the course of a year, and is calculated at 10 percent of the prior year's duties (or $50,000, whichever is greater). 
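The continuous bond formula described above is straightforward arithmetic; a minimal sketch follows (the function name and the example duty amounts are illustrative, not drawn from CBP data):

```python
def continuous_bond_amount(prior_year_duties: float) -> float:
    """CBP continuous bond as described in the testimony: 10 percent of
    the prior year's duties, or $50,000, whichever is greater."""
    return max(0.10 * prior_year_duties, 50_000)

# A hypothetical importer that paid $2 million in duties last year:
print(continuous_bond_amount(2_000_000))  # 200000.0
# A hypothetical small importer that paid $100,000 (the floor applies):
print(continuous_bond_amount(100_000))    # 50000
```

Because the bond is pegged to one-tenth of the prior year's duties, a retroactive rate increase of tens of percentage points can leave a final liability far in excess of the bond, which is the gap the single-transaction bond proposal described above is meant to address.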
GAO has not reviewed these proposals or assessed their potential effect on the collection of additional AD/CV duties. The existence of a substantial amount of uncollected AD/CV duties undermines the effectiveness of the U.S. government's efforts to remedy unfair foreign trade practices for U.S. industry. While Congress and federal agencies have taken actions to address the problem of uncollected duties, these initiatives have met with little success. Some additional options exist that Congress could pursue to further protect government revenue. In particular, Congress could eliminate the retrospective component of the U.S. AD/CV duty system and consider the variety of alternative prospective systems available. Congress could also make adjustments to specific aspects of the U.S. AD/CV duty system without altering its retrospective nature, such as by providing Commerce the discretion to require companies applying for a new shipper review to have a minimum amount or value of imports before establishing an individual AD/CV duty rate. However, any effort to improve the U.S. AD/CV duty system should consider the additional costs placed on legitimate importers while attempting to address the issue of illegitimate importers. We continue to respond to congressional interest in this issue, and have recently begun a review of the evasion of trade duty laws, in response to a request from the Subcommittee on International Trade, Customs, and Global Competitiveness, Senate Committee on Finance. Chairman Landrieu, Ranking Member Coats, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For further information about this statement, please contact Loren Yager at (202) 512-4347 or [email protected]. 
Individuals who made key contributions to this statement include Christine Broderick (Assistant Director), Jason Bair, Ken Bombara, Aniruddha Dasgupta, Grace Lui, Diahanna Post, and Julia Roberts. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since fiscal year 2001, the federal government has been unable to collect over $1 billion in antidumping (AD) and countervailing (CV) duties imposed to remedy injurious, unfair foreign trade practices. These include AD duties imposed on products exported to the United States at unfairly low prices (i.e., dumped) and CV duties on products exported to the United States that were subsidized by foreign governments. These uncollected duties show that the U.S. government has not fully remedied the unfair trade practices for U.S. industry and has lost out on a substantial amount of duty revenue to the U.S. Treasury. This statement summarizes key findings from prior GAO reports on (1) past initiatives to improve AD/CV duty collection and (2) additional options for improving AD/CV duty collection. U.S. Customs and Border Protection (CBP), Congress, and Commerce have undertaken several initiatives to address the problem of uncollected AD/CV duties, but these initiatives have not resolved the problems associated with collections. Some of these initiatives include the following: (1) Temporary adjustment of standard bond-setting formula. Importers generally provide a general bond to secure the payment of all types of duties, but CBP determined in 2004 that the amount of this bond inadequately protected AD/CV duty revenue. CBP took steps to address this by revising its standard bond-setting formula and tested it on one product (shrimp) to increase protection for AD/CV duty revenue when the final amount of duties owed exceeds the amount paid at the time of importation. The enhanced bonding requirement was subject to domestic and World Trade Organization litigation, and CBP decided to terminate the requirement in 2009. (2) Temporary suspension of new shipper bonding privilege. 
Importers purchasing from "new shippers"--shippers who have not previously exported products subject to AD/CV duties--are allowed to provide a bond in lieu of cash payment to cover the initial AD/CV duties assessed, which is known as the new shipper bonding privilege. Congress partially addressed the risk that CBP would not be able to collect initial AD/CV duties from such importers by suspending the new shipper bonding privilege for 3 years and requiring cash deposits for initial AD/CV duties, but the privilege was reinstated in July 2009. The Department of the Treasury stated, however, that the added risk associated with the bond compared with the cash deposit is low. Additional options exist for improving the collection of AD/CV duties. First, the retrospective nature of the U.S. system could be revised. Under the existing U.S. system, importers pay the estimated amount of AD/CV duties when products enter the United States, but the final amount of duties owed is not determined until later, a process that can take more than 3 years on average. This creates a risk that the importer may disappear, cease business operations, or declare bankruptcy before the government can collect the full amount owed. Other major U.S. trading partners have AD/CV duty systems that, while different from one another, treat as final the AD/CV duties assessed at the time a product enters the country. Second, Congress could revise the level of exports required for exporters applying for new shipper status. Under U.S. law, new shippers to the United States can petition for their own separate AD/CV duty rate. According to Commerce, a shipper can be assigned an individual duty rate based on as little as one shipment, intentionally set at a high price, resulting in a low or 0 percent duty rate. This creates additional risk by putting the government in the position of having to collect additional duties in the future rather than at the time of importation.
The NSLP is designed to provide school children with nutritionally balanced and affordable lunches to safeguard their health and well-being. The program, administered by the U.S. Department of Agriculture's Food and Consumer Service, is available in all 50 states, the District of Columbia, and the U.S. territories. The schools participating in the NSLP receive a cash reimbursement for each lunch served. In turn, the schools must serve lunches that meet federal nutritional requirements and offer lunches free or at a reduced price to children from families whose income falls at or below certain levels. For school year 1995-96, the schools were reimbursed $1.795 for each free lunch, $1.395 for each reduced-price lunch, and $0.1725 for each full-price lunch. Furthermore, for each lunch served, the schools receive commodity foods--14.25 cents' worth in school year 1995-96. The Department provides a billion pounds of commodity foods annually to states for use in the NSLP. States select commodity foods from a list of more than 60 different kinds of food, including fresh, canned, and frozen fruits and vegetables; meats; fruit juices; vegetable shortening and oil; and flour and other grain products. The variety of commodities depends on the quantities available and market prices. According to the Department, federal commodities account for about 20 percent of the food in the school lunch program. Through school year 1995-96, the schools were required to offer lunches that met a "meal pattern" established by the Department. The meal pattern specified that a lunch must include five items--a serving of meat or meat alternate; two or more servings of vegetables and/or fruits; a serving of bread or bread alternate; and a serving of milk. The meal pattern was designed to provide nutrients sufficient to approximate one-third of the National Academy of Sciences' Recommended Dietary Allowances. 
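Total federal support per lunch described above is the cash reimbursement plus the commodity entitlement; a minimal sketch of that arithmetic, using the school year 1995-96 rates cited in this report (the function and variable names are illustrative):

```python
# Per-lunch cash reimbursement rates for school year 1995-96,
# as cited in the report.
CASH_RATE = {"free": 1.795, "reduced": 1.395, "full": 0.1725}
COMMODITY_RATE = 0.1425  # 14.25 cents' worth of commodity foods per lunch

def federal_support_per_lunch(category: str) -> float:
    """Cash reimbursement plus commodity entitlement for one lunch."""
    return CASH_RATE[category] + COMMODITY_RATE

print(round(federal_support_per_lunch("free"), 4))  # 1.9375
print(round(federal_support_per_lunch("full"), 4))  # 0.315
```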
Effective school year 1996-97, the schools participating in the program will be required to offer lunches that meet the Dietary Guidelines for Americans. Among other things, these guidelines, which represent the official nutritional policy of the U.S. government, recommend diets that are low in fat, saturated fat, and cholesterol. In meeting these guidelines, the schools may use any reasonable approach, within guidelines established by the Secretary of Agriculture, including using the school meal pattern that was in effect for the 1994-95 school year. All students attending the schools that participate in the NSLP are eligible to receive an NSLP lunch. In fiscal year 1995, about 58 percent of the eligible students participated in the program. About 49 percent of the participating students received free lunches, 7 percent received reduced-price lunches, and 44 percent received full-price lunches. The students who do not participate in the program include those who bring lunch from home, eat off-campus, buy lunch a la carte at school or from a school canteen or vending machine, or do not eat at all. Concerns about plate waste prompted the introduction into the NSLP of the offer versus serve (OVS) option more than a decade ago. Under this option, a school must offer all five food items in the NSLP meal pattern, but a student may decline one or two of them. In a school that does not use this option, a student must take all five items. All high schools must use the OVS option, and middle and elementary schools may offer it at the discretion of local officials. According to a 1993 Department report, 71 percent of the elementary schools and 90 percent of the middle schools use the OVS option. Cafeteria managers varied in the extent to which they perceived plate waste as a problem in their school during the 1995-96 school year. Ninety percent of the managers provided an opinion on plate waste. The majority of those with an opinion did not perceive it as a problem. 
However, 23 percent of those with an opinion reported that it was at least a moderate problem. Figure 1 presents cafeteria managers' perceptions of the extent to which plate waste was a problem in their school. By school level, we found some variation in cafeteria managers' perceptions of plate waste. As figure 2 shows, managers at elementary schools were more likely than those at middle or high schools to report that plate waste from school lunches was at least a moderate problem during the 1995-96 school year. By school location and by schools serving different proportions of free and reduced-price lunches, we found no statistically significant differences in cafeteria managers' perceptions of plate waste. We also considered the extent to which cafeteria managers perceived plate waste as a problem by asking them to compare the amount of waste from school lunches with the amount of waste from packed lunches from home. Sixty-three percent of the managers were able to make this comparison. Of these, 79 percent believed that the amount from school lunches was less than or the same as the amount from packed lunches. (See fig. 3.) Cafeteria managers reported large variations in the amount of waste from eight different types of food that may be included as part of the school lunch. For each food type, managers reported how much of the portions served, on average, was wasted. On the basis of the managers' responses, we estimate that the average amount wasted ranged from a high of 42 percent for cooked vegetables to a low of 11 percent for milk. Figure 4 shows our estimate of the average percent of waste for each of the eight food types. By school level, the amount of waste varied for all food types except canned or processed fruits. In general, the waste reported for each food type was highest in the elementary schools and lowest in the high schools. (See fig. 5.) 
By school location, the amount of waste varied for three food types--cooked vegetables, raw vegetables/salads, and milk. For example, for each of these food types, the urban schools reported more waste than the rural schools. (See fig. 6.) By schools serving different proportions of free and reduced-price lunches, the average amount of waste varied for four food types--raw vegetables/salads, fresh fruits, canned or processed fruits, and milk. (See fig. 7.) When responding to a list of possible reasons for plate waste at their school, the cafeteria managers most frequently selected a nonfood reason--"student attention is more on recess, free time or socializing than eating." When responding to a list of possible ways to reduce plate waste, the managers most often viewed actions that would involve students, such as letting students select only what they want, as more likely to reduce plate waste than other actions. Seventy-eight percent of the cafeteria managers cited a nonfood reason--students' attention on recess, free time, or socializing--when asked why students at their school did not eat all of their school lunch. Figure 8 shows the percent of managers who identified each of the nine reasons listed in our survey as either a minor, moderate, or major reason for plate waste in their school. By school level, the percent of managers selecting a reason for plate waste varied for four of the reasons provided in our survey. (See fig. 9.) For example, elementary school managers were much more likely than middle or high school managers to report "amount served is too much for age or gender" as a reason for plate waste. By school location, the percent of cafeteria managers selecting a reason for plate waste varied for four of the reasons provided in our survey. (See fig. 10.) For example, managers at urban schools were more likely than those at suburban and rural schools to report that students "do not like that food" as a reason for plate waste. 
By schools serving different proportions of free and reduced-price lunches, cafeteria managers' perceptions differed somewhat for three reasons. For example, managers in schools serving under 30 percent free and reduced-price lunches were more likely than managers in schools serving over 70 percent free and reduced-price lunches to cite "take more than they can eat" as a reason for plate waste. (See fig. 11.) In addition to asking cafeteria managers to respond to a list of possible reasons for plate waste, we asked them to identify the effect on plate waste of the NSLP's requirements for types of food and serving sizes that were in effect at the time of our survey. The managers believed that, overall, the minimum federal serving sizes provided about the right amount of food for the students at their school. (See fig. 12.) Furthermore, for each of four minimum serving size requirements that were in effect at the time of our survey, most cafeteria managers reported that each requirement did not result in more plate waste at their school. However, two requirements--serving at least three-fourths of a cup of fruits/vegetables daily and serving at least eight servings of breads/grains weekly--were viewed as resulting in more plate waste by about one-third and one-quarter of the managers, respectively. Figure 13 shows the percent of cafeteria managers who reported that the minimum serving sizes for the four requirements resulted in more waste. In addition, we asked cafeteria managers about the potential effect on plate waste of increasing the minimum serving sizes for fruits/vegetables and breads/grains. For fruits/vegetables, 62 percent of the middle and high school managers said that increasing the amount from three-fourths of a cup to one cup daily would cause more waste. 
For breads/grains, 53 percent of the middle and high school managers said that increasing the number of weekly servings from 8 to 15 would increase plate waste; and 69 percent of the elementary school managers reported that increasing the number of servings of breads/grains from 8 to 12 weekly would cause more plate waste. Of 11 possible actions listed in the survey to reduce plate waste, cafeteria managers viewed actions involving students in the choice of food, such as letting students select only what they want and seeking students' opinions regularly about menus, as more likely to reduce plate waste than other actions. (See fig. 14.) By school level, there was some variation in the views of cafeteria managers for two of the actions to reduce plate waste listed in our survey. (See fig. 15.) For example, elementary school managers were more likely than high school managers to identify "reduce federally required portion sizes" as an action that would cause a little or a lot less plate waste. By school location, there was some variation in the views of cafeteria managers for four of the actions listed in our survey. For example, managers in urban schools were more likely than managers in rural schools to cite "seek student opinions regularly about menus" as an action that would cause less plate waste. (See fig. 16.) By schools serving different proportions of free and reduced-price lunches, there was no variation in cafeteria managers' views on ways to reduce plate waste. Managers in each group--schools serving under 30 percent free and reduced-price lunches, schools serving between 30 and 70 percent free and reduced-price lunches, and schools serving over 70 percent free and reduced-price lunches--had similar opinions about the general level of effectiveness for the 11 potential actions to reduce waste that were listed in the survey. In addition, most managers reported that two approaches already in place in most schools result in less plate waste. 
Eighty percent of the managers said that the OVS option results in less waste, and 55 percent said that offering more than one main dish or entree daily results in less waste. Most cafeteria managers reported satisfaction with various aspects of the federal commodities received at their school for use in school lunches. The managers' level of satisfaction was highest for the taste and packaging of the commodities and lowest for the variety of foods available and the quantity of individual commodities. Figure 17 shows the percent of cafeteria managers who were satisfied, and the percent who were dissatisfied, with the federal commodities provided for school lunches. Over 70 percent of the managers reported that they wanted all or almost all of the different commodities received. However, about 10 percent reported that they would prefer not to receive about half or more of the different commodities they were sent. (See fig. 18.) We provided copies of a draft of this report to the Department's Food and Consumer Service for its review and comment. We met with agency officials, including the Deputy Administrator, Special Nutrition Programs. Agency officials questioned why our survey results generalize to 80 percent, rather than 100 percent, of all the public schools that participated in the NSLP in the 1993-94 school year. Relatedly, agency officials asked if we had analyzed the characteristics of nonrespondents. We generalized our results to 80 percent of the public schools because we used a conservative statistical approach that required us to generalize our results only to the overall level reflected by our response rate, in this case 80 percent. We did not analyze the characteristics of nonrespondents because we believe that such an analysis alone would not allow us to generalize our survey results to 100 percent of the public schools that participated in the NSLP in the 1993-94 school year. 
To generalize to 100 percent of the public schools, we believe it would also be necessary to analyze information about perceptions of plate waste from a subsample of cafeteria managers who did not respond to our survey. This analysis would allow us to assess whether the opinions of these managers differed significantly from those of the managers who completed and returned a survey. Further, the Department commented that our survey's list of possible reasons for plate waste did not permit cafeteria managers to select other possible reasons, including meal quality and palatability. We agree that these reasons may affect plate waste. However, we included two related reasons for plate waste--"they do not like that food" and "they do not like the way the food looks or tastes." We believe these two reasons address, in part, meal quality and palatability. In addition, respondents had the opportunity to identify other reasons contributing to plate waste. Less than 5 percent of the respondents specified other reasons that they considered to be at least a minor reason for plate waste. The Department also commented that we did not solicit the views of children or their parents/caretakers. We agree that the views of cafeteria managers present only one perspective on the extent of, and reasons for, plate waste and that valuable information could be obtained from a comprehensive, nationwide study of the views of children and their parents/caretakers. The time and resources associated with such a study could be substantial. In addition, the Department commented that our study did not address whether there was more or less plate waste in the NSLP than in other lunch settings--such as at home or in restaurants. 
While identifying the amount of waste in different lunch settings was not an objective of our study, our survey asked cafeteria managers if they perceived the amount of waste from school lunches as more, less, or about the same as the amount of waste from lunches brought from home. Our survey results found that, of those cafeteria managers who were able to assess differences in the amount of plate waste, 79 percent believed that the amount from school lunches was less than or the same as the amount from lunches brought from home. Finally, agency officials provided some technical and clarifying comments that we incorporated into the report as appropriate. To develop the questions used in our survey of cafeteria managers, we reviewed the NSLP's regulations and research addressing the issue of waste in the program. Furthermore, we spoke with representatives from school food authorities, the American School Food Service Association, and the Department's Food and Consumer Service. We refined our questions by pretesting our survey with the cafeteria managers of 18 schools in Illinois, Pennsylvania, South Carolina, Texas, Virginia, West Virginia, and the District of Columbia. We mailed our survey to a random sample of 2,450 cafeteria managers in public schools in the 50 states and the District of Columbia. We selected our sample from the 87,100 schools listed in the National Center for Education Statistics' Common Core of Data Public School Universe, 1993-94, the latest year for which a comprehensive list of public schools was available. This document did not identify whether a school participated in the NSLP. Eighty percent (1,967) of those surveyed returned a survey. Of these, about 4 percent (80) reported that their school did not participate in the NSLP, while the remainder (1,887) reported that their school participated in the program. 
Our survey results generalize to 65,743 of the 81,911 public schools nationwide that participated in the NSLP in the 1993-94 school year. This number may vary for individual questions, depending on the response rate to the question. As with all sample surveys, our results contain sampling error--potential error that arises from not collecting data from the cafeteria managers at all schools. Unless otherwise indicated in appendix I, the sampling error for the survey results presented in this report is plus or minus no more than 5 percentage points. Sampling error must be considered when interpreting differences between subgroups, such as urban and rural schools. All differences we report are statistically significant unless otherwise noted. Statistical significance means that the difference we observed between subgroups is too large to be attributed to chance. We conducted our review from July 1995 through June 1996 in accordance with generally accepted government auditing standards. We did not, however, independently verify the accuracy of the cafeteria managers' responses to our survey. Appendix II contains a more detailed description of our survey methodology. Appendix III contains a copy of our survey and summarizes the responses. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this letter. At that time, we will send copies of this report to the appropriate congressional committees, interested Members of Congress, the Secretary of Agriculture, and other interested parties. We will also make copies available to others on request. If you have any questions, please call me at (202) 512-5138. Major contributors to this report are listed in appendix IV.

* Middle school cafeteria managers reporting students "do not like that food" as reason for plate waste at their school (fig. 9)
* High school cafeteria managers reporting students "do not like that food" as reason for plate waste at their school (fig. 9)
* Middle school cafeteria managers reporting students "take more than they can eat" as reason for plate waste at their school (fig. 9)
* High school cafeteria managers reporting students "take more than they can eat" as reason for plate waste at their school (fig. 9)
* Middle school cafeteria managers reporting "amount served is too much for age or gender" as reason for plate waste at their school (fig. 9)
* High school cafeteria managers reporting "amount served is too much for age or gender" as reason for plate waste at their school (fig. 9)
* Urban school cafeteria managers reporting "not hungry" as reason for plate waste at their school (fig. 10)
* Suburban school cafeteria managers reporting "not hungry" as reason for plate waste at their school (fig. 10)
* Urban school cafeteria managers reporting "take more than they can eat" as reason for plate waste at their school (fig. 10)
* Cafeteria managers at schools serving over 70 percent free and reduced-price lunches reporting students "take more than they can eat" as reason for plate waste at their school (fig. 11)
* Middle school cafeteria managers reporting "reduce federally required portion sizes" as a way to reduce plate waste (fig. 15)
* High school cafeteria managers reporting "reduce federally required portion sizes" as a way to reduce plate waste (fig. 15)
* Middle school cafeteria managers reporting "replace federal commodities with cash" as a way to reduce plate waste (fig. 15)
* High school cafeteria managers reporting "replace federal commodities with cash" as a way to reduce plate waste (fig. 15)
* Urban school cafeteria managers reporting "replace federal commodities with cash" as a way to reduce plate waste (fig. 16)
* Suburban school cafeteria managers reporting "replace federal commodities with cash" as a way to reduce plate waste (fig. 16)

The Chairman of the House Committee on Economic and Educational Opportunities asked us to study plate waste in the National School Lunch Program (NSLP). Specifically, we agreed to survey cafeteria managers in public schools nationwide that participate in the NSLP to obtain their perceptions on the (1) extent to which plate waste is a problem, (2) amount of plate waste by type of food, and (3) reasons for and ways to reduce plate waste. We agreed to determine whether the perceptions of managers differed by their school's level (elementary, middle, or high school), their school's location (urban, suburban, or rural), and the proportion of their school's lunches served free and at a reduced price (under 30 percent free and reduced price, 30 to 70 percent free and reduced price, or over 70 percent free and reduced price). In addition, we agreed to ask cafeteria managers about their level of satisfaction with federal commodities used in the program. To develop the questions used in our survey of cafeteria managers, we reviewed the NSLP's regulations and research addressing the issue of waste in the program. Furthermore, we spoke with representatives from school food authorities, the American School Food Service Association, and the U.S. Department of Agriculture's Food and Consumer Service. We refined our questions by pretesting our survey with the cafeteria managers of 18 schools in Illinois, Pennsylvania, South Carolina, Texas, Virginia, West Virginia, and the District of Columbia. Generally, the questions on our survey concerned the 1995-96 school year. We mailed our survey to a random sample of 2,450 cafeteria managers in public schools in the 50 states and the District of Columbia. 
We selected our sample from the 87,100 schools listed in the National Center for Education Statistics' Common Core of Data Public School Universe, 1993-94, the latest year for which a comprehensive list of public schools was available from the National Center for Education Statistics. This document did not identify whether a school participated in the NSLP. We sent as many as two followup mailings to each cafeteria manager to encourage response. Eighty percent (1,967) of those surveyed returned a survey. Of these, about 4 percent (80) reported that their school did not participate in the NSLP, while the remainder (1,887) reported that their school participated in the program. Our survey results generalize to 65,743 of the 81,911 public schools nationwide that participated in the NSLP in the 1993-94 school year. This number may be lower for individual questions, depending on the response rate for the question. The results of our survey of cafeteria managers cannot be generalized to schools that opened after school year 1993-94; to private schools; to most residential child care institutions; to schools in the U.S. territories; and to schools represented by the survey nonrespondents. We matched the 1,887 survey responses to information about each school in the Common Core of Data. We used the Common Core of Data to identify school location and to validate survey responses on student enrollment and school level. From this validation, we determined that a number of the surveys were completed for the surveyed school's district rather than for the individual school. In those cases, we used information from the Common Core of Data to determine the surveyed school's level (e.g., elementary) and student enrollment. We assumed that the school served the same proportion of free and reduced-price lunches as the district. Unless otherwise stated in the survey response, we also assumed that districtwide opinions about plate waste applied to the surveyed school. 
Table II.1 shows the number of cafeteria managers responding to our survey, by school level. Table II.2 shows the number of cafeteria managers responding, by school location. Table II.3 shows the number of cafeteria managers responding, by schools serving different proportions of free and reduced-price lunches. As with all sample surveys, our results contain sampling error--potential error that arises from not collecting data from cafeteria managers at all schools. We calculated the sampling error for each statistical estimate at the 95-percent confidence level. This means, for example, that if we repeatedly sampled schools from the same universe (i.e., Common Core of Data) and performed our analyses again, 95 percent of the samples would yield results within the ranges specified by our statistical estimates, plus or minus the sampling errors. In calculating the sampling errors, we used a conservative formula that did not correct for sampling from a finite population. The sampling error for most of the survey results presented in this report is plus or minus no more than 5 percentage points. Sampling error must be considered when interpreting differences between subgroups, such as urban and rural schools. For each comparison of subgroups that we report, we calculated the statistical significance of any observed differences. Statistical significance means that the difference we observed between two subgroups is larger than would be expected from the sampling error. When this occurs, some phenomenon other than chance is likely to have caused the difference. Statistical significance is absent when an observed difference between two subgroups, plus or minus the sampling error, results in an interval that contains zero. The absence of a statistically significant difference does not mean that a difference does not exist. The sample size or the number of respondents to a question may not have been sufficient to allow us to detect a difference. 
We used the chi-square test of association to test the significance of differences in percentages between two subgroups and the t-test for differences in means. We conducted our review from July 1995 through June 1996 in accordance with generally accepted government auditing standards. We did not, however, independently verify the accuracy of the cafeteria managers' responses to our survey.

Thomas Slomba, Assistant Director
Rosellen McCarthy, Project Leader
Sonja Bensen
Carolyn Boyce
Jay Scott
Carol Herrnstadt Shulman
Pursuant to a congressional request, GAO provided information on food waste from school lunches provided to school children under the National School Lunch Program. GAO found that: (1) school cafeteria managers had varying perceptions about the degree to which food waste was a problem; (2) elementary school cafeteria managers were more likely than those at middle and high schools to perceive food waste as a serious problem; (3) the amount of food wasted varied by the type of food, with cooked vegetables wasted more often than other foods; (4) many cafeteria managers believed that students' focus on recess or free time, rather than on lunch, contributed to waste; (5) many cafeteria managers believed that allowing students to select what they wanted to eat would reduce waste; and (6) most cafeteria managers were satisfied with the federal commodities they received for use in the School Lunch Program, but about 10 percent reported that they would rather not receive at least half of the different types of commodities they received under the program.
Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business, and it is especially important for government agencies, where maintaining the public's trust is essential. While the dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have enabled agencies such as SEC to better accomplish their missions and provide information to the public, agencies' reliance on this technology also exposes federal networks and systems, and the information stored on them, to various threats. Cyber threats can be unintentional or intentional. Unintentional or nonadversarial threat sources include failures in equipment, environmental controls, or software due to aging, resource depletion, or other circumstances that exceed expected operating parameters. They also include natural disasters and failures of critical infrastructure on which the organization depends but that are outside of its control. Intentional or adversarial threat sources include threats originating from foreign nation states, criminals, hackers, and disgruntled employees. Concerns about these threats are well-founded because of the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and advances in the sophistication and effectiveness of cyberattack technology, among other reasons. Without proper safeguards, systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain or manipulate sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. We and federal inspectors general have reported on persistent information security deficiencies that place federal agencies at risk of disruption, fraud, or inappropriate disclosure of sensitive information.
Accordingly, since 1997, we have designated federal information security as a government-wide high-risk area. This area was expanded to include the protection of critical cyber infrastructure in 2003 and the privacy of personally identifiable information in 2015. The Federal Information Security Modernization Act (FISMA) of 2014 is intended to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. FISMA requires each agency to develop, document, and implement an agency-wide security program. The program is to provide security for the information and systems that support the operations and assets of the agency, including information and information systems provided or managed by another agency, contractor, or other source. Additionally, FISMA assigns responsibility to the National Institute of Standards and Technology (NIST) to provide standards and guidelines to agencies on information security. Accordingly, NIST has issued related standards and guidelines, including Recommended Security Controls for Federal Information Systems and Organizations, NIST Special Publication (NIST SP) 800-53, and Contingency Planning Guide for Federal Information Systems, NIST SP 800-34. To support its financial operations and store the sensitive information it collects, SEC relies extensively on computerized systems interconnected by local- and wide-area networks. For example, to process and track financial transactions, such as filing fees paid by corporations or disgorgements and penalties paid from enforcement activities, and for financial reporting, SEC relies on numerous enterprise applications, including the following:

* Delphi-Prism, the financial accounting and reporting system operated by the Federal Aviation Administration's Enterprise Service Center (ESC). SEC uses various modules of this system for financial accounting, analyses, and reporting. Delphi-Prism also produces the SEC financial statements.

* Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system, which performs the automated collection, validation, indexing, acceptance, and forwarding of submissions by companies and others that are required to file certain information with SEC. Its purpose is to accelerate the receipt, acceptance, dissemination, and analysis of time-sensitive corporate information filed with the commission.

* EDGAR/Fee Momentum, a subsystem of EDGAR, which maintains accounting information pertaining to fees received from registrants.

* FedInvest, which invests funds related to disgorgements and penalties.

* Federal Personnel and Payroll System/Quicktime (FPPS/Quicktime), which processes personnel and payroll transactions.

* General Support System (GSS), which provides (1) business application services to internal and external customers and (2) security services necessary to support these applications. SEC's GSS is a combination of infrastructure that includes the Windows-based local area network that authorizes SEC employees and contractors to use the underlying network environment, and various perimeter security devices such as routers, firewalls, and switches.

Under FISMA, the SEC Chairman has responsibility for, among other things, (1) providing information security protections commensurate with the risk and magnitude of harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency's information systems and information; (2) ensuring that senior agency officials provide security for the information and systems that support the operations and assets under their control; and (3) delegating to the agency chief information officer (CIO) the authority to ensure compliance with the requirements imposed on the agency. FISMA also requires the CIO to designate a senior agency information security officer to carry out the information security-related responsibilities.
During GAO's fiscal year 2016 audit, SEC had demonstrated considerable progress in improving information security by implementing 47 of the 58 recommendations we had made in prior audits that had not been implemented by the conclusion of the fiscal year 2015 audit. Nevertheless, although SEC submitted evidence of taking action to resolve all 58 previously reported recommendations, its actions were not sufficient to fully resolve 11 recommendations. In addition, 15 deficiencies identified during the fiscal year 2016 audit limited the effectiveness of SEC's controls for protecting the confidentiality, integrity, and availability of its information systems. For example, the commission did not consistently control logical access to its financial and general support systems. It also used unsupported software to process financial data. Further, while SEC generally implemented separation of duties, it allowed incompatible duties for one person. These deficiencies existed, in part, because the commission did not fully implement key elements of its information security program. The newly identified deficiencies resulted in 2 recommendations to SEC to more fully implement aspects of its information security program and 13 recommendations to enhance access controls and other security controls over its financial systems. Table 1 summarizes SEC's progress toward addressing the prior and newly identified information security recommendations. Cumulatively, the deficiencies decreased assurance about the reliability of the data processed by key SEC financial systems. While not individually or collectively constituting a material weakness or significant deficiency, these deficiencies warrant SEC management's attention. Until SEC mitigates these deficiencies, its financial and support systems and the information they contain will continue to be at unnecessary risk of compromise. 
SEC resolved 47 of the 58 previously reported information system control deficiencies in the areas of security management, access controls, configuration management, and separation of duties. For example, the commission offered physical security awareness training to its employees; enforced password expiration on the key financial application server; set access permission for sensitive files; and operated a fully functioning contingency operations site that would be used in the event of a disaster. Nevertheless, SEC had not fully mitigated 11 of the 58 previously reported deficiencies affecting its financial and general support systems. For example, SEC had not maintained and monitored firewall configuration baseline rules for its firewalls and it had not documented a comprehensive physical inventory of the systems and applications in the production environment. As of September 2016, SEC was still at risk because it did not have baselines needed to define and monitor changes to its systems, applications, and inventory. A basic management objective for any organization is to protect the resources that support its critical operations and assets from unauthorized access. Organizations accomplish this by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computer resources (e.g., data, programs, equipment, and facilities), thereby protecting them from unauthorized disclosure, modification, and loss. Specific access controls include (1) boundary protection, (2) identification and authentication of users, (3) authorization restrictions, (4) cryptography, (5) audit and monitoring procedures, and (6) physical security. Without adequate access controls, unauthorized individuals, including intruders and former employees, can surreptitiously read and copy sensitive data and make undetected changes or deletions for malicious purposes or for personal gain. 
In addition, authorized users could intentionally or unintentionally modify or delete data or execute changes that are outside of their authority. Although SEC had issued policies and implemented controls based on those policies, it did not consistently (1) protect its network boundaries from possible intrusions; (2) identify and authenticate users; (3) authorize access to resources; (4) audit and monitor actions taken on the commission's systems and network; and (5) encrypt sensitive information while in transmission. Boundary protection controls logical connectivity into and out of networks as well as connectivity to and from network-connected devices. Implementing multiple layers of security to protect an information system's internal and external boundaries provides defense in depth. By using a defense-in-depth strategy, entities can reduce the risk of a successful cyberattack. For example, multiple firewalls can be deployed to prevent both outsiders and trusted insiders from gaining unauthorized access to systems. At the host or device level, logical boundaries can be controlled through inbound and outbound filtering provided by access control lists (ACL) and host-based firewalls. At the system level, any connections to the Internet, or to other external and internal networks or information systems, should occur through controlled interfaces. To be effective, remote access controls should be properly implemented in accordance with authorizations that have been granted. For one key financial system, SEC consolidated all internal firewalls in order to better manage its boundary protection controls; however, it configured the ACLs on the host-based firewalls supporting the key financial system's servers to allow excessive inbound and outbound traffic. As a result, SEC introduced a vulnerability that could allow unauthorized access to the system.
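The kind of ACL review described above can be automated. The following is a minimal illustrative sketch, not SEC's actual tooling; the rule format, CIDR blocks, and port numbers are assumptions for illustration. It flags host-based firewall rules that are blanket "any" rules or that permit traffic not on a documented allow-list:

```python
# Documented allow-list of authorized (source CIDR, destination port) pairs.
# These values are hypothetical examples, not real SEC configuration.
ALLOWED = {("10.1.2.0/24", 1433), ("10.1.3.0/24", 443)}

def overly_permissive(rules):
    """Return the (source, port) rules that are broader than the allow-list.

    rules: iterable of (source, port) tuples, where either field may be "any".
    """
    findings = []
    for src, port in rules:
        if src == "any" or port == "any":
            findings.append((src, port))   # blanket rule: always flag
        elif (src, port) not in ALLOWED:
            findings.append((src, port))   # traffic nobody authorized
    return findings
```

In practice such a check would run against exported firewall configurations during periodic control testing, so that blanket rules and unauthorized source/port pairs surface before they can be exploited.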
Information systems need to be managed to effectively control user accounts and identify and authenticate users. Users and devices should be appropriately identified and authenticated through the implementation of adequate logical access controls. Users can be authenticated using mechanisms such as a password and user identification combination. SEC policy requires default passwords in operating systems, databases, and web servers to be changed upon installation. Also, the policy states that information system owners should review user accounts and associated access privileges policy to ensure appropriate access and that terminated or transferred employees do not retain improper information system access. However, SEC did not fully implement controls for identifying and authenticating users. For example, it did not always enforce individual accountability as 13 of 42 user accounts reviewed had the same default password in the three key financial systems' servers that we reviewed. Also, SEC did not disable these 13 active user accounts although they had never been used. As a result, increased risk exists that the accounts could be compromised and used by unauthorized individuals to access sensitive financial data. Authorization encompasses access privileges granted to a user, program, or process. It involves allowing or preventing actions by that user based on predefined rules. Authorization includes the principles of legitimate use and "least privilege." Access rights and privileges are used to implement security policies that determine what a user can do after being allowed into the system. Maintaining access rights, permissions, and privileges is one of the most important aspects of administering system security. 
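The identification and authentication deficiencies noted above, default passwords left in place and active accounts that have never been used, lend themselves to automated checks. A minimal sketch, with illustrative field names and an assumed default-password list:

```python
import hashlib

# Hashes of known vendor default passwords; the plaintext list here is an
# illustrative assumption, not an actual vendor list.
DEFAULT_HASHES = {hashlib.sha256(p.encode()).hexdigest()
                  for p in ("changeme", "password", "admin")}

def audit_accounts(accounts):
    """Flag accounts with default passwords or that are active but never used.

    accounts: iterable of dicts with keys 'name', 'pw_hash', 'last_login', 'active'.
    """
    findings = []
    for acct in accounts:
        if acct["pw_hash"] in DEFAULT_HASHES:
            findings.append((acct["name"], "default password"))
        if acct["active"] and acct["last_login"] is None:
            findings.append((acct["name"], "active but never used"))
    return findings
```

Either finding would prompt the remediation steps SEC's own policy calls for: forcing a password change on installation and disabling accounts that serve no purpose.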
SEC policy states that system owners shall explicitly authorize access to file permissions and privileges, including approving, authorizing, and documenting system account actions (create, modify, disable, remove) for the specified resources in which the users have primary responsibility, as well as reviewing access authorizations and granting or denying access to SEC information and information systems. SEC policy also states that information systems must prevent nonprivileged users from executing privileged functions, including disabling, circumventing, or altering implemented security safeguards or countermeasures. However, SEC did not always adequately restrict access privileges to ensure that only authorized individuals were granted access to its systems. In addition, SEC did not consistently monitor the role-based access privileges assigned to user groups for an externally managed financial system. The Enterprise Service Center (ESC) assigned SEC users to user groups with access privileges in the ESC Prism application that were not always consistent with the privileges authorized by SEC policy or access request forms. For example, ESC assigned 16 of 24 ESC Prism users to groups that were not used by SEC. As a result, users had excessive levels of access that were not required to perform their jobs. This could allow insiders or attackers who penetrate SEC networks to inadvertently or deliberately modify financial data or other sensitive information. Cryptographic controls can be used to help protect the integrity and confidentiality of data and computer programs by rendering data unintelligible to unauthorized users and/or protecting the integrity of transmitted or stored data. NIST guidance states that the use of encryption by organizations can reduce the probability of unauthorized disclosure of information. NIST also recommends that organizations employ cryptographic mechanisms to prevent unauthorized disclosure of information stored on agency networks.
However, SEC did not fully encrypt sensitive information stored on servers supporting a key financial system. Without proper encryption, increased risk exists that unauthorized users could identify and use the information to gain inappropriate access to system resources. Audit and monitoring involves the regular collection, review, and analysis of auditable events for indications of inappropriate or unusual activity, and the appropriate investigation and reporting of such activity. These controls can help security professionals routinely assess computer security, perform investigations during and after an attack, and recognize an ongoing attack. Audit and monitoring technologies include network and host-based intrusion detection systems, audit logging, security event correlation tools, and computer forensics. Using automated mechanisms can help integrate audit monitoring, analysis, and reporting into an overall process for investigating and responding to suspicious activities. SEC policy states that intrusion detection parameters should be explicitly set. However, SEC did not fully implement an intrusion detection capability for key financial systems. As a result, SEC may not be able to detect or investigate some unauthorized system activity. Configuration management controls provide reasonable assurance that systems are configured securely and operating as intended. As part of its configuration management efforts, SEC policy requires protection from malicious code, including detection and eradication. In addition, patch management, a component of configuration management, is an important element in mitigating the risks associated with known vulnerabilities. When a vulnerability is discovered, the vendor may release a patch to mitigate the risk.
If a patch is not applied in a timely manner or if a vendor no longer supports the system and does not prepare a patch, an attacker can exploit a known vulnerability not yet mitigated, enabling unauthorized access to the system or enabling users to have access to greater privileges than authorized. SEC improved several configuration management controls for its financial information systems. For example, it conducted malicious code reviews and ensured only approved software changes were made. In addition, SEC enhanced its patch management process by scheduling and deploying patches for its two operating system platforms on its financial application servers. However, SEC also used software that was no longer supported by the software's vendor. Specifically, the commission continued to use an outdated version of an operating system on its key financial systems although the operating system's vendor stopped supporting this version of the software over a decade ago and no longer develops or releases patches for the software. As a result, increased risk exists that an attacker could exploit newly discovered vulnerabilities associated with the outdated operating system. To reduce the risk of error or fraud, duties and responsibilities for authorizing, processing, recording, and reviewing transactions should be separated to ensure that one individual does not control all critical stages of a process. Effective separation of duties starts with effective entity-wide policies and procedures that are implemented at the system and application levels. Often, separation of incompatible duties is achieved by dividing responsibilities among two or more organizational groups, which diminishes the likelihood that errors and wrongful acts will go undetected because the activities of one individual or group will serve as a check on the activities of the other. 
Inadequate separation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. SEC policy states that information system owners must separate duties of individuals as necessary to provide appropriate management and security oversight and define information system access authorizations to support the separation of duties. SEC was successful in employing separation of duties controls, with one exception. Of the 217 ESC Prism users, the commission assigned one user to two roles that violated the separation-of-duties principle. Although the violation involved only one person, it was significant because of the importance of the roles involved. The user was assigned to both the "contracting officer's security group" and the "requisitioner's security group with requisition approval." According to an SEC official, users assigned to the contracting officer's security group have the access permissions to approve and obligate awards, and users assigned to the requisitioner's security group can, with approval, commit funds. As a result of being in both security groups, this person had the ability to both approve and obligate awards and then commit funds. An information security program should establish a framework and continuous cycle of activity for assessing risk, developing and implementing effective security procedures, and monitoring the effectiveness of these procedures. An underlying reason for the information security control deficiencies in SEC's financial systems was that, although the agency developed and documented an information security program, it did not fully implement aspects of the program. In particular, SEC did not always update system security plans or fully implement its continuous monitoring capability.
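A conflict such as the dual group membership described above can be detected mechanically by screening each user's combined role assignments against a table of incompatible role pairs. A minimal sketch; the role names echo the Prism finding, but the data structures are illustrative assumptions:

```python
# Pairs of roles that one person must never hold simultaneously.
CONFLICTS = {frozenset({"contracting_officer", "requisitioner_with_approval"})}

def sod_violations(assignments):
    """Return (user, conflicting role pair) for each separation-of-duties breach.

    assignments: dict mapping user name -> set of assigned role names.
    """
    violations = []
    for user, roles in assignments.items():
        for pair in CONFLICTS:
            if pair <= roles:   # user holds every role in a conflicting pair
                violations.append((user, tuple(sorted(pair))))
    return violations
```

Running such a screen as part of periodic access reviews would have surfaced the one Prism user who could both approve and obligate awards and then commit funds.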
In addition, SEC made significant progress resolving previously reported deficiencies, but several deficiencies remained partially unresolved. FISMA requires each federal agency to have policies and procedures that ensure compliance with minimally acceptable system configuration requirements, including subordinate plans for providing adequate information security for networks, facilities, and systems or groups of systems, as appropriate. Consistent with this requirement, SEC policy states that information system owners of the GSS and major applications should be responsible for developing, documenting, and maintaining an inventory of information system components that accurately reflects the current system; includes all components within the authorization boundary of the system; and provides the level of granularity deemed necessary for tracking and reporting within the system. In addition, SEC policy requires that the system component inventory be reviewed and updated when components are installed or removed and when system security plans are updated. Further, SEC policy states that the system security plan should be updated throughout the system life cycle. However, SEC did not update its system security plans to reflect the current operational environment. For example, it did not update network diagrams and asset inventories in the system security plans for GSS and a key financial system. Each of the several iterations of network diagrams and supporting schedules SEC provided to us during the audit reflected incomplete or inaccurate representations of the operating environment. To illustrate, inconsistencies existed among the network diagrams, reports from SEC's automated asset tracking tool, and results from the automated scanning of the environment. Additionally, several previously decommissioned components remained installed, powered on, and accessible on its network.
The system security plans were not current because SEC personnel did not update the plans, asset inventory, or network diagrams during the current modernization of the key financial system's environment. The modernization effort, along with other routine maintenance, had increased the frequency of hardware added to or removed from the environment. The commission did not remove assets from the inventory or update the network diagram until the hardware had been physically removed from the data center even though the hardware was not operational. Without up-to-date, complete, and accurate system inventories and network diagrams in the system security plans, SEC lacks the baseline configuration settings to adequately secure its systems. An important element of risk management is ensuring that policies and controls intended to reduce risk are effective on an ongoing basis. To do this effectively, top management should understand the agency's security risks and actively support and monitor the effectiveness of its security policies. NIST guidance and SEC policy state that the agency should develop a continuous monitoring strategy. SEC policy requires implementation of a continuous monitoring program that is to include (1) establishment of system-dependent monthly automated scans, (2) ongoing security control assessments, and (3) correlation and analysis of security-related information generated by assessments. SEC did not fully implement and continuously monitor its secure configurations. While it made improvements to address prior-year GAO recommendations by developing and documenting approved secure configuration baselines based on NIST's National Checklist Program, SEC had not fully implemented those secure configurations across the infrastructure present in the GSS and key financial systems.
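Automated configuration compliance scanning of the kind SEC's policy requires amounts to comparing each host's effective settings against the approved secure baseline and reporting deviations. A minimal sketch; the setting names are illustrative, and a real checklist, such as one from NIST's National Checklist Program, would be far larger:

```python
# Hypothetical approved secure-configuration baseline for a class of hosts.
BASELINE = {"password_min_length": 12,
            "telnet_enabled": False,
            "audit_logging": True}

def baseline_deviations(effective):
    """Return (setting, expected, actual) for each setting drifting from baseline.

    effective: dict of a host's current settings; missing keys count as drift.
    """
    return [(key, expected, effective.get(key))
            for key, expected in sorted(BASELINE.items())
            if effective.get(key) != expected]
```

Repeating such a comparison on a monthly automated schedule, as SEC policy envisions, gives the continuous monitoring signal needed to catch misconfigurations before an attacker does.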
Further, although the commission employed a technology to facilitate automated configuration compliance scanning throughout the GSS and the key financial systems, it determined this technology to be too inefficient and cumbersome to facilitate automated scanning of technical configuration compliance and, during the fiscal year 2016 audit, was in the process of replacing it with a new capability. Thus, it did not consistently perform compliance scanning on multiple operating systems, databases, and network devices. However, such scanning is important for identifying vulnerabilities existing in a network. Our scans of SEC IT resources identified vulnerabilities affecting operating systems, databases, and network devices. Although additional analysis and coordination by responsible SEC organizations may have determined that some of the potential vulnerabilities may have been mitigated by compensating controls or other factors, the lack of processes noted above increases the risk that known vulnerabilities or misconfigurations will not be identified and remediated in a timely manner. Without implementing an effective process for monitoring, evaluating, and remedying identified deficiencies, SEC would not be aware of potential deficiencies that could affect the integrity and availability of its information systems. Information security control deficiencies in the SEC computing environment may jeopardize the confidentiality, integrity, and availability of information residing in and processed by its systems. Specifically, SEC configured its internal firewalls to allow too many internal users without legitimate business needs to access a key financial system environment. SEC also did not enable host-based firewalls on all key financial system servers and a major operating system server, which made them vulnerable to unauthorized changes. In addition, SEC operated a financial system server with an unsupported operating system, risking exposure of financial data.
Further, deficiencies exist in part because SEC did not maintain up-to- date network diagrams and asset inventories in the system security plans for GSS and a key financial system to accurately and completely reflect the current operating environment, and it also did not fully implement and continuously monitor GSS and the key financial system's secure configurations. Cumulatively, these deficiencies decreased assurance regarding the reliability of the data processed by key financial systems. Until SEC mitigates its control deficiencies, its financial and support systems and the information they contain will continue to be at unnecessary risk of compromise. We recommend that Chairman of the SEC take two actions to more effectively manage its information security program: Maintain up-to-date network diagrams and asset inventories in the system security plans for GSS and a key financial system to accurately and completely reflect the current operating environment. Perform continuous monitoring using automated configuration and vulnerability scanning on the operating systems, databases, and network devices. To address specific deficiencies in information security controls, we made 13 detailed recommendations in a separate limited official use only report. Those recommendations address access control, configuration management, and separation of duties. We received written comments on a draft of this report from SEC. In its comments, which are reprinted in appendix II, the commission concurred with the two recommendations addressing its information security program. If effectively implemented, these actions should enhance the effectiveness of SEC's controls over its financial systems. In addition, SEC's Chief Information Security Officer provided technical comments on the draft report via e-mail, which we considered and incorporated, as appropriate. We acknowledge and appreciate the cooperation and assistance provided by SEC management and staff during our audit. 
If you have any questions about this report or need assistance in addressing these issues, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected] or Nabajyoti Barkakati at (202) 512-4499 or [email protected]. GAO staff who made significant contributions to this report are listed in appendix III. Pursuant to statutory authority, GAO assesses the effectiveness of the Securities and Exchange Commission's (SEC) internal control structure and procedures for financial reporting. Our objective was to determine the effectiveness of SEC's information security controls for ensuring the confidentiality, integrity, and availability of its key financial systems and information. To assess information systems controls, we identified and reviewed SEC information systems control policies and procedures, conducted tests of controls, and held interviews with key security representatives and management officials concerning whether information security controls were in place, adequately designed, and operating effectively. This work was performed to support our opinion on SEC's internal control over financial reporting as of September 30, 2016. We concentrated our evaluation primarily on the controls for systems and applications associated with financial processing. These systems were the (1) Delphi-Prism; (2) Electronic Data Gathering, Analysis, and Retrieval (EDGAR); (3) EDGAR/Fee Momentum; (4) FedInvest; (5) Federal Personnel and Payroll System/Quicktime and (6) general support systems. Our selection of the systems to evaluate was based on consideration of financial systems and service providers integral to SEC's financial statements. 
We evaluated controls based on our Federal Information System Controls Audit Manual (FISCAM), which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; National Institute of Standards and Technology standards and special publications; and SEC's plans, policies, and standards. We assessed the effectiveness of both general and application controls by performing information system controls walkthroughs surrounding the initiation, authorization, processing, recording, and reporting of financial data (via interviews, inquiries, observations, and inspections); reviewing SEC policies and procedures; observing technical controls implemented on selected systems; testing specific controls; and scanning and manually assessing SEC systems and applications, including EDGAR/Fee Momentum, and related general support system network devices, and servers. We also evaluated the Statement on Standards for Attestation Engagements report and performed testing on key information technology controls on the following applications and systems: Delphi- Prism, FedInvest, and Federal Personnel and Payroll System. To determine the status of SEC's actions to correct or mitigate previously reported information security deficiencies, we identified and reviewed its information security policies, procedures, practices, and guidance. We reviewed prior GAO reports to identify previously reported deficiencies and examined the commission's corrective action plans to determine which deficiencies it had reported as corrected. For those instances where SEC reported that it had completed corrective actions, we assessed the effectiveness of those actions by reviewing appropriate documents, including SEC-documented corrective actions, and interviewing the appropriate staffs, including system administrators. 
To assess the reliability of the data we analyzed, such as information system control settings, specific control evaluations for each accounting cycle, and security policies and procedures, we corroborated them by interviewing SEC officials, including programmatic personnel, and system administrators to determine whether the data obtained were consistent with system configurations in place at the time of our review. In addition, we observed configuration of these settings in the network. Based on this assessment, we determined the data were reliable for the purposes of this report. We performed this work in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provided a reasonable basis for our findings and conclusions based on our audit objective. In addition to the contacts named above, GAO staff who made major contributions to this report are Michael Gilmore and Duc Ngo (Assistant Directors); Angela Bell; Monica Perez-Nelson; Priscilla Smith; Henry Sutanto (Analyst-in-Charge) and Adam Vodraska.
|
SEC enforces securities laws, issues rules and regulations that provide protection for investors, and helps to ensure that securities markets are fair and honest. SEC uses computerized information systems to collect, process, and store sensitive information, including financial data. Having effective information security controls in place is essential to protecting these systems and the information they contain. Pursuant to statutory authority, GAO assesses the effectiveness of SEC's internal control structure and procedures for financial reporting. As part of its audit of SEC's fiscal years 2016 and 2015 financial statements, GAO assessed whether controls were effective in protecting the confidentiality, integrity, and availability of key financial systems and information. To do this, GAO examined SEC's information security policies and procedures, tested controls, and interviewed key officials on whether controls were in place, adequately designed, and operating effectively. The Securities and Exchange Commission (SEC) improved the security controls over its key financial systems and information. In particular, as of September 2016, the commission had resolved 47 of the 58 recommendations we had previously made that had not been implemented by the conclusion of the FY 2015 audit. However, SEC had not fully implemented 11 recommendations that included consistently protecting its network boundaries from possible intrusions, identifying and authenticating users, authorizing access to resources, auditing and monitoring actions taken on its systems and network, or encrypting sensitive information while in transmission. In addition, 15 newly identified control deficiencies limited the effectiveness of SEC's controls for protecting the confidentiality, integrity, and availability of its information systems. For example, the commission did not consistently control logical access to its financial and general support systems. 
In addition, although the commission enhanced its configuration management controls, it used unsupported software to process financial data. Further, SEC did not adequately segregate incompatible duties for one of its personnel. These weaknesses existed, in part, because SEC did not fully implement key elements of its information security program. For example, SEC did not maintain up-to-date network diagrams and asset inventories in its system security plans for its general support system and its key financial system application to accurately and completely reflect the current operating environment. The commission also did not fully implement and continuously monitor those systems' security configurations. Twenty-six information security control recommendations related to 26 deficiencies found in SEC's financial and general support systems remained unresolved as of September 30, 2016. (See table.) Cumulatively, the deficiencies decreased assurance about the reliability of the data processed by key SEC financial systems. While not individually or collectively constituting a material weakness or significant deficiency, these deficiencies warrant SEC management's attention. Until SEC mitigates these deficiencies, its financial and support systems and the information they contain will continue to be at unnecessary risk of compromise. In addition to the 11 prior recommendations that have not been fully implemented, GAO recommends that SEC take 13 actions to address newly identified control deficiencies and 2 actions to more fully implement its information security program. In commenting on a draft of this report, SEC concurred with GAO's recommendations.
| 6,108 | 657 |
The missions of DOE's 23 laboratories have evolved over the last 55 years. Originally created to design and build atomic bombs under the Manhattan Project, these laboratories have since expanded to conduct research in many disciplines--from high-energy physics to advanced computing at facilities throughout the nation. DOE's goal is to use the laboratories for developing clean energy sources and pollution-prevention technologies, for ensuring enhanced security through reductions in the nuclear threat, and for continuing leadership in the acquisition of scientific knowledge. The Department considers the laboratories a key to a growing economy fueled by technological innovations that increase U.S. industrial competitiveness and create new high-skill jobs for American workers. Missions have expanded in the laboratories for many reasons, including changes in the world's political environment. Nine of DOE's 23 laboratories are multiprogram national laboratories; they account for about 70 percent of the total laboratory budget and about 80 percent of all laboratory personnel. Three of these multiprogram national laboratories (Lawrence Livermore, Los Alamos, and Sandia) conduct the majority of DOE's nuclear weapons defense activities. Facing reduced funding for nuclear weapons as a result of the Cold War's end and the signing of the comprehensive nuclear test ban treaty, these three laboratories have substantially diversified to maintain their preeminent talent and facilities. The remaining laboratories in DOE's system are program- and mission-dedicated facilities. (See app. I for a list of all DOE laboratories.) DOE owns the laboratories and contracts with universities and private-sector organizations for the management and operation of 19, while providing federal staff for the remaining 4. The Congress is taking a growing interest in how the national laboratories are being managed. 
Recently introduced legislation would restructure the missions of the laboratories or manage them in new ways. Some previously proposed organizational options include converting the laboratories that are working closely with the private sector into independent entities or transferring the responsibility for one or more laboratories to other federal agencies whose missions are closely aligned with those of particular DOE laboratories. We have reported to the Congress that progress in DOE's efforts to sharpen the focus and improve the management of its laboratories has been elusive and that the challenges facing the Department raise concerns about how effectively it can manage reform initiatives. Over the past several years, many government advisory groups have raised concerns about how DOE manages its national laboratory system. Major concerns centered on three issues:
* The laboratories' missions are unfocused.
* DOE micromanages the laboratories.
* The laboratories are not operating as an integrated system.
More recent advisory groups have reported similar weaknesses, prompting the Congress to take a close look at how the national laboratory system is meeting its objectives. We identified nearly 30 reports by a wide variety of advisory groups on various aspects of the national laboratories' management and missions. (See app. II for a list of past reports.) Most of these reports have been prepared since the early 1980s.
The reports include the following:
* In 1982, DOE's Energy Research Advisory Board reported that the national laboratories duplicate private-sector research and that while DOE could take better advantage of the national laboratories' capabilities, it needed to address its own management and organizational inefficiencies, which hamper the achievement of a more effective laboratory system.
* In 1983, a White House Science Council Panel found that while DOE's laboratories had well-defined missions for part of their work, most activities were fragmented and unrelated to the laboratories' main responsibilities.
* In 1992, DOE's Secretary of Energy Advisory Board found that the laboratories' broad missions, coupled with rapidly changing world events, had "caused a loss of coherence and focus at the laboratories, thereby reducing their overall effectiveness in responding to their traditional missions as well as new national initiatives. . . ."
* A 1993 report by an internal DOE task force reported that missions "must be updated to support DOE's new directions and to respond to new national imperatives. . . ."
The most recent extensive review of DOE's national laboratories was performed by a task force chaired by Robert Galvin, former Chairman of the Motorola Corporation. Consisting of distinguished leaders from government, academia, and industry, the Galvin Task Force was established to examine alternatives for directing the laboratories' scientific and engineering resources to meet the economic, environmental, defense, scientific, and energy needs of the nation. Its 1995 report identified many of the problems noted in earlier studies and called for a more disciplined focus for the national laboratories, also reporting that the laboratories may be oversized for their role. The Galvin Task Force reported that the traditional government ownership and contractor operation of the laboratories has not worked well.
According to its report, increasing DOE's administration and oversight transformed the laboratories from traditional contractor-operated systems into a virtual government-operated system. The report noted that many past studies of DOE's laboratories had resulted in efforts to fine-tune the system but led to little fundamental improvement. Regarding the management structure of DOE's non-weapons-oriented laboratories, the task force recommended a major change in the organization and governance of the laboratory system. The task force envisioned a not-for-profit corporation governed by a board of trustees, consisting primarily of distinguished scientists and engineers and experienced senior executives from U.S. industry. Such a change in governance, the task force reported, would improve the standards and quality of work and at the same time generate over 20 percent in cost savings. Other findings by the task force and subsequent reports by other advisory groups have focused on the need for DOE to integrate R&D programs across the Department and among the laboratories to increase management efficiencies, reduce administrative burdens, and better define the laboratories' missions. In June 1995, DOE's Task Force on Strategic Energy Research and Development, chaired by energy analyst Daniel Yergin, issued a report on DOE's energy R&D programs. The report assessed the rationale for the federal government's support of energy R&D, reviewed the priorities and management of the overall program, and recommended ways of making it more efficient and effective. The task force recommended that DOE streamline its R&D management, develop a strategic plan for energy R&D, eliminate duplicative laboratory programs and research projects, and reorganize and consolidate dispersed R&D programs at DOE laboratories. 
In August 1995, the National Science and Technology Council examined laboratories in DOE, the Department of Defense (DOD), and the National Aeronautics and Space Administration (NASA). The Council reported that DOE's existing system of laboratory governance needs fundamental repair, stating that DOE's laboratory system is bigger and more expensive than is needed to meet essential missions in energy, the environment, national security, and fundamental science. The Council recommended that DOE develop ways to eliminate apparent overlap and unnecessary redundancy between its laboratory system and DOD's and NASA's. DOE's Laboratory Operations Board was created in 1995 to focus the laboratories' missions and reduce DOE's micromanagement. Members serving on the Board from outside DOE have issued four different reports, which have noted the need to focus and define the laboratories' missions in relation to the Department's missions, integrate the laboratories' programmatic work, and streamline operations, including the elimination or reduction of administrative burdens. In March 1997, the Office of Science and Technology Policy reported on laboratories managed by DOE, DOD, and NASA. The Office cited efforts by the three agencies to improve their laboratory management but found that DOE was still micromanaging its laboratories and had made little progress toward reducing the administrative burdens it imposes on them. The Office recommended a variety of improvements in performance measures, incentives, and productivity and urged more streamlined management. In March 1997, a report by the Institute for Defense Analyses (IDA) found that DOE's processes for managing environment, safety, and health activities were impeding effective management. According to IDA, DOE's onerous review processes undermined accountability and prevented timely decisions from being made and implemented throughout the entire nuclear weapons complex, including the national laboratories.
IDA specifically noted that DOE's Defense Programs had confusing line and staff relationships, inadequately defined roles and responsibilities, and poorly integrated programs and functions. IDA concluded that DOE needed to strengthen its line accountability and reorganize its structure in several areas. At our request, DOE provided us with a listing of the actions it took in response to repeated calls for more focused laboratory missions and improved management. But while DOE has made progress--principally by reducing paperwork burdens on its laboratories--most of its actions are still in process or have unclear expectations and deadlines. Furthermore, the Department cannot demonstrate how its actions have resulted, or may result, in fundamental change. To analyze progress in laboratory management reform, we talked to DOE and laboratory officials and asked DOE to document the actions it has taken, is taking, or has planned to address the recommendations from several advisory groups. We used DOE's responses, which are reprinted in appendix III, as a basis for discussions with laboratory and DOE officials and with 18 experts familiar with national laboratory issues. We asked these experts to examine DOE's responses. Several of these experts had served on the Galvin Task Force and are currently serving on DOE's Laboratory Operations Board (app. IV lists the experts we interviewed). 
The actions DOE said it is taking include
* creating various internal working groups;
* strengthening the Energy R&D Council to facilitate more effective planning, budgeting, management, and evaluation of the Department's R&D programs and to improve the linkage between research and technology development;
* increasing the use of private-sector management practices;
* adopting performance-based contracting and continuous improvement concepts;
* improving the oversight of efforts to enhance productivity and reduce overhead costs at the laboratories;
* expanding the laboratories' work for other federal agencies;
* evaluating the proper balance between laboratories and universities for basic research;
* improving science and technology partnerships with industry;
* reducing unnecessary oversight burdens on laboratories;
* developing the Strategic Laboratory Missions Plan in July 1996, which identified laboratory activities in mission areas;
* creating the Laboratory Operations Board, which includes DOE officials and experts from industry and academia, to provide guidance and direction to the laboratories; and
* developing "technology roadmaps," a strategic planning technique to focus the laboratories' roles.
Most of the actions DOE reported to us are process-oriented, incomplete, or only marginally related to past recommendations for change. For example, creating new task forces and strengthening old ones may be good for defining problems, but these measures cannot force decisions or effect change. DOE's major effort to give more focus to laboratory missions was a Strategic Laboratory Missions Plan, published in July 1996. The plan describes the laboratories' capabilities in the context of DOE's missions and, according to the plan, will form the basis for defining the laboratories' missions in the future. However, the plan is essentially a descriptive document that does not direct change. Nor does the plan tie DOE's or the laboratories' missions to the annual budget process.
When we asked laboratory officials about strategic planning, most discussed their own planning capabilities, and some laboratories provided us with their own self-generated strategic planning documents. None of the officials at the six laboratories we visited mentioned DOE's Strategic Laboratory Missions Plan as an essential document for their strategic planning. A second action that DOE officials reported as a major step toward focusing the laboratories' missions is the introduction of its "technology roadmaps." These are described by DOE as planning tools that define the missions, goals, and requirements of research on a program-by-program basis. Officials told us that the roadmaps are used to connect larger departmental goals and are a way to institutionalize strategic planning within the Department. Roadmaps, according to DOE, will be an important instrument for melding the laboratories into a stronger and more integrated national system. DOE reports that roadmaps have already been developed in some areas, including nuclear science, high-energy physics, and the fusion program. Experts we interviewed agreed that creating roadmaps can be a way to gain consensus between DOE and the laboratories on a common set of objectives while also developing a process for reaching those objectives. However, some experts also stated that it is too soon to tell if this initiative will succeed. One expert indicated that the Department has not adequately analyzed its energy R&D problems on a national basis before beginning the roadmap effort. Another was uncertain about just how the roadmaps will work. According to a laboratory director who was recently asked to comment on the roadmap process, more emphasis needs to be placed on the results that are expected from the roadmaps, rather than on the process of creating them. Furthermore, roadmapping may be difficult in some areas, especially for activities involving heavy regulatory requirements. 
When we asked DOE officials about roadmapping, we were told that it is still a work in progress and will not be connected directly to the budget process for months or even years. Other DOE actions are also described as works in progress. For example, the use of performance-based contracts is relatively new, and the results from the strengthened R&D Council are still uncertain. The R&D Council includes the principal secretarial officers who oversee DOE's R&D programs and is chaired by the Under Secretary. According to DOE, the Council has a new charter that will promote the integration and management of the Department's R&D. One area in which DOE reports that it has made significant improvements is reducing the burden of its oversight on the national laboratories. Although some laboratory directors told DOE that their laboratories are still micromanaged, most officials and experts we interviewed credited DOE with reducing oversight as the major positive change since the Galvin Task Force issued its report in 1995. DOE's major organizational action in response to recent advisory groups' recommendations was to create the Laboratory Operations Board in April 1995. The purpose of the Board is to provide dedicated management attention to laboratory issues on a continuing basis. The Board includes 13 senior DOE officials and 9 external members drawn from the private sector, academia, and the public. The external members have staggered, 6-year terms and are required to assess DOE's and the laboratories' progress in meeting such goals as management initiatives, productivity improvement, mission focus, and programmatic accomplishments. The Board's external members have issued four reports, the results of which largely mirror past findings by the many previous advisory groups. 
These reports have also concluded that DOE has made some progress in addressing the problems noted by the Galvin Task Force but that progress has been slow and many of the recommendations need further actions. Several experts we interviewed generally viewed the Board positively. Some, however, recognized that the Board's limited advisory role is not a substitute for strong DOE leadership and organizational accountability. One expert commented that the effectiveness of the Board was diminished by the fact that it meets too infrequently (quarterly) and has had too many changes in membership to function as an effective adviser. Other experts agreed but indicated that the Board still has had a positive influence on reforming the laboratory system. One expert said that the Board's membership is not properly balanced between internal and external members (although originally specifying 8 of each, the Board's charter was recently changed to require 13 DOE members and only 9 external members). Another expert indicated that the Board could increase its effectiveness by more carefully setting an agenda for each year and then aggressively monitoring progress to improve its management of the laboratory system. Laboratory officials we interviewed also viewed the Board in generally positive terms; some commented that the Board's presence gives the laboratories a much needed voice in headquarters. Others noted that the Board could eventually play a role in integrating the laboratories' R&D work across program lines, thereby addressing a major concern about the laboratories' lack of integration noted by past advisory groups. Although the Board can be an effective source of direction and guidance for the laboratories, it has no authority to carry out reform operations. One expert said that even though the Board monitors the progress of reform and makes recommendations, it is still advisory and cannot coordinate or direct specific actions. " remains in the future. 
We have seen nothing yet."
"The response appears to sidestep the important need for lab-focused budgeting and strategic planning. The response discusses strategic planning in terms of DOE roadmaps for each program, not in terms of plans for each lab. Many labs continue to have a broad mission which crosses several . . . . While there may be an ongoing review by the , the labs have no evidence this is occurring and there have been no actions to address this."
"The wanted one clear lead lab in each mission or program, and DOE did not do that; there are 2 to 4 "principal" labs for each major business. Even for major program areas, 12 of the 15 programs listed in the department's laboratory mission plan have more than one laboratory listed as primary performer."
". . . it is not clear that DOE has made any significant progress as the response implies. . . ."
" tone of the response in [DOE's response] is a bit more optimistic than actual experience in the field justifies. . . . Only modest improvements have occurred to this point. . . ."
"No reorganization has occurred . . . no integration has occurred."
"the examples provided to substantiate the labs working together as a system are not all new, some were in place when wrote report. Also, there have been a number of meetings between the multi-program labs but that is the extent of any progress in this area (little change has been made)."
"The labs have largely been held at arm's length rather than included as part of the team. There have been recent efforts to correct this but there is no plan or action in place to correct it."
Additionally, when we asked several laboratory officials for examples of their progress in responding to past advisory groups, most spoke of actions they have taken on their own initiative. Few could cite an example of a step taken in direct response to a DOE action.
For example, several laboratory officials cited an increased level of cooperation and coordination among the laboratories involved with similar R&D activities. They also mentioned adopting "best business practices" to increase productivity, reduce overhead costs, and measure progress by improved metrics. However, many laboratory officials told us that many of their actions were taken to meet other demands, such as legislative and regulatory mandates, rather than as direct responses to the studies' recommendations or to DOE's policies. Despite its efforts to respond to the advisory groups' recommendations, DOE has not established either a comprehensive plan with goals, objectives, and performance measures or a system for tracking results and measuring accountability. As a result, DOE is unable to document its progress and cannot show how its actions address the major issues raised by the advisory groups. Experts we contacted noted that while DOE is establishing performance measures for gauging how well its contractors manage the laboratories, DOE itself lacks any such measurement system for ensuring that the objectives based on the advisory groups' recommendations are met. "lack of clarity, inconsistency, and variability in the relationship between headquarters management and field organizations has been a longstanding criticism of DOE operations. This is particularly true in situations when several headquarters programs fund activities at laboratories. . . ." DOE's Laboratory Operations Board also reported in 1997 on DOE's organizational problems, noting that there were inefficiencies due to DOE's complicated management structure. The Board recommended that DOE undertake a major effort to rationalize and simplify its headquarters and field management structure to clarify roles and responsibilities. Similarly, the 1997 IDA report cited serious flaws in DOE's organizational structure. 
Noting long-standing concerns in DOE about how best to define the relationships between field offices and the headquarters program offices that sponsor work, the Institute concluded that "the overall picture that emerges is one of considerable confusion over vertical relationships and the roles of line and staff officials." DOE's complex organization stems from the multiple levels of reporting that exist between the laboratories, field offices (called operations offices), and headquarters program offices. DOE's laboratories are funded and directed by program offices--the nine largest laboratories are funded by many different DOE program offices. The program office that usually provides the dominant funding serves as the laboratory's "landlord". The landlord program office is responsible for sitewide management at the laboratory and coordinates crosscutting issues, such as compliance with environment, safety, and health requirements at the laboratories. DOE's Energy Research is landlord to several laboratories, including the Brookhaven and Lawrence Berkeley laboratories. Defense Programs is the landlord for the Los Alamos and Lawrence Livermore national laboratories. The program offices, in turn, report to either the Deputy Secretary or the Under Secretary. Further complicating reporting, DOE assigns each laboratory to a field operations office, whose director serves as the contract manager and also prepares the laboratory's annual appraisal. The operations office, however, reports to a separate headquarters office under the Deputy Secretary, not to the program office that supplies the funding. Thus, while the Los Alamos National Laboratory is primarily funded by Defense Programs, it reports to a field manager who reports to another part of the agency. 
As a consequence of DOE's complex structure, IDA reported that unclear chains of command led to the weak integration of programs and functions across the Department, wide variations among field activities and relationships and processes, and confusion over the difference between line and staff roles. Weaknesses in DOE's ability to manage the laboratories as an integrated system of R&D facilities is one the most persistent findings from past advisory groups, as well as from our 1995 management review of laboratory issues. We concluded that DOE had not coordinated the laboratories' efforts as part of a diversified research system to solve national problems. Instead, DOE was managing the laboratories on a program-by-program basis. We recommended that DOE evaluate alternatives for managing the laboratories that would more fully support the achievement of clear and coordinated missions. To help achieve this goal, we said that DOE should strengthen the Office of Laboratory Management to facilitate the laboratories' cooperation and resolve management issues across all DOE program areas. DOE did not strengthen this office. DOE's primary response to our recommendations and those made by the Galvin Task Force was creating the Laboratory Operations Board. "DOE's organization is a mess. You cannot tell who is the boss. DOE would be much more effective if layers were removed." "DOE has not been responsive to recommendations for organizational changes and improvements in relationships." Experts we consulted noted that DOE's organizational weaknesses prevent reform. According to experts, DOE's establishment of working groups to implement recommendations can be helpful for guiding reform, but these groups often lack the authority to make critical decisions or to enforce needed reforms. One expert commented that "the current DOE organizational structure is outdated . . . there is no DOE leadership to implement changes." 
We believe these organizational weaknesses are a major reason why DOE has been unable to develop long-term solutions to the recurring problems reported by advisory groups. The absence of a senior official in the Department with program and administrative authority over the operations of all the laboratories prevents effective management of the laboratories on an ongoing basis. As far back as 1982, an advisory group recognized the need for a strong central focus to manage the laboratories' activities. In its 1982 report, DOE's Energy Research Advisory Board noted "layering and fractionation of managerial and research and development responsibilities in DOE on an excessive number of horizontal and vertical levels. . . ." The Board recommended that DOE designate a high level official, such as a Deputy Under Secretary, whose sole function would be to act as DOE's chief laboratory executive. Although DOE did not make this change, the Under Secretary has assumed responsibility for ensuring that laboratory reforms are accomplished. Despite many studies identifying similar deficiencies in the management of DOE's national laboratories, fundamental change remains an elusive goal. While the Department has many steps in process to improve its management of the laboratories--such as new strategic planning tools and the Laboratory Operations Board--the results of these efforts may be long in coming and may fall short of expectations. Other actions DOE is taking are focused more on process than on results, and most are still incomplete, making it difficult to show how DOE intends to direct the laboratories' missions and manage them more effectively as an integrated system--a major recommendation of past advisory groups. The Department has not developed a way to show how its actions will result in practical and permanent laboratory reform. 
We believe that without a strategy for ensuring that reforms actually take place, DOE will make only limited progress in achieving meaningful reforms. Establishing accountability for ensuring that its actions will take place in a timely manner is a challenge for DOE. The Department's complex organizational structure creates unclear lines of authority that dilute accountability and make reforms difficult to achieve. In our 1995 management review of DOE's laboratories, we reported that if DOE is unable to refocus the laboratories' missions and develop a management approach consistent with these new missions, the Congress may wish to consider alternatives to the present relationships between DOE and the laboratories. Such alternatives might include placing the laboratories under the control of different agencies or creating a separate structure for the sole purpose of developing a consensus on the laboratories' missions. Because of DOE's uncertain progress in reforming the laboratories' management, we continue to believe that the Congress may wish to consider such alternatives. To ensure the timely and effective implementation of recommendations from the many past laboratory advisory groups, we recommend that the Secretary of Energy develop a comprehensive strategy with objectives, milestones, DOE offices and laboratories responsible for implementation actions, performance measures that will be used to assess success in meeting implementation objectives, a tracking system to monitor progress, and regular progress reports on the status of implementation. We provided a draft of this report to DOE for review and comment. Although DOE did not comment directly on our conclusions and recommendation, the Department said that we did not take into account the full range of changes that it has undertaken. 
Changes discussed by DOE include a series of initiatives implemented to strengthen management, streamline the strategic planning processes, and enhance interactions between DOE and the laboratories. The Department also said that the cumulative effect of these changes reflects significant progress in implementing the recommendations of past advisory groups. While stating that much has been accomplished to improve the management of the national laboratories, DOE also acknowledges that more needs to be done to ensure a fully integrated management system, including better focusing the laboratories' missions and tying them to the annual budget process. DOE anticipates that these actions will take at least 2 more years to accomplish. In preparing our report, we considered the actions the Department reports it has taken to implement past recommendations from laboratory advisory groups. While the types of reported actions are positive, progress made toward the goals and objectives of reform cannot be determined without a plan for measuring progress. As we state in our report, some laboratory directors have reported to DOE that they have not seen the results of some of these actions at their level. We continue to believe that DOE needs to monitor, measure, and evaluate its progress in accomplishing reforms. If it does not do so, it will have difficulty holding its managers accountable for making the needed changes and determining if funds are being spent wisely on the reform process. Appendix VI includes DOE's comments and our response. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy and the Director, Office of Management and Budget. We will make copies available to other interested parties on request. 
Our review was performed from December 1997 through August 1998 in accordance with generally accepted government auditing standards. See appendix V for a description of our scope and methodology. If you or your staff have any questions about this report, please call me at (202) 512-3841. Major contributors to this report are listed in appendix VII. Lockheed Martin Idaho Technologies Co. Sandia Corp. (Lockheed Martin) University Research Assoc., Inc. Southeastern Univ. Research Assoc., Inc. Westinghouse Electric Corp. KAPL, Inc. (Lockheed Martin) Westinghouse Savannah River Co. Department of Energy: Clearer Missions and Better Management Are Needed at the National Laboratories (GAO/T-RCED-98-25, Oct. 9, 1997). External Members of the Laboratory Operations Board Analysis of Headquarter and Field Structure Issues, Secretary of Energy Advisory Board, DOE (Sept. 30, 1997). Third Report of the External Members of the Department of Energy Laboratory Operations Board, Secretary of Energy Advisory Board, DOE (Sept. 1997). DOE Action Plan for Improved Management of Brookhaven National Laboratory, DOE (July 1997). The Organization and Management of the Nuclear Weapons Program, Institute for Defense Analyses (Mar. 1997). Status of Federal Laboratory Reforms. The Report of the Executive Office of the President Working Group on the Implementation of Presidential Decision Directive PDD/NSTC-5, Office of Science and Technology Policy, Executive Office of the President (Mar. 1997). Roles and Responsibilities of the DOE Nuclear Weapons Laboratories in the Stockpile Stewardship and Management Program (DOE/DP-97000280, Dec. 1996). Second Report of the External Members of the Department of Energy Laboratory Operations Board, Secretary of Energy Advisory Board, DOE (Sept. 10, 1996). First Report of the External Members of the Department of Energy Laboratory Operations Board, Secretary of Energy Advisory Board, DOE (Oct. 26, 1995). 
Future of Major Federal Laboratories, National Science and Technology Council (Aug. 1995). Energy R&D: Shaping Our Nation's Future in a Competitive World, Final Report of the Task Force on Strategic Energy Research and Development, Secretary of Energy Advisory Board, DOE (June 1995). Interagency Federal Laboratory Review Final Report, Office of Science and Technology Policy, Executive Office of the President (May 15, 1995). Department of Energy: Alternatives for Clearer Missions and Better Management at the National Laboratories (GAO/T-RCED-95-128, Mar. 9, 1995). Report of the Department of Energy for the Interagency Federal Laboratory Review in Response to Presidential Review Directive/NSTC-1 (Mar. 1995). Alternative Futures for the Department of Energy National Laboratories, Secretary of Energy Advisory Board Task Force on Alternative Futures for the Department of Energy National Laboratories, DOE (Feb. 1995). Department of Energy: National Laboratories Need Clearer Missions and Better Management (GAO/RCED-95-10, Jan. 27, 1995). DOE's National Laboratories: Adopting New Missions and Managing Effectively Pose Significant Challenges (GAO/T-RCED-94-113, Feb. 3, 1994). Changes and Challenges at the Department of Energy Laboratories: Final Draft Report of the Missions of the Laboratories Priority Team, DOE (1993). Final Report, Secretary of Energy Advisory Board (1992). U.S. Economic Competitiveness: A New Mission for the DOE Defense Programs' Laboratories, Roger Werne, Associate Director for Engineering, Lawrence Livermore National Laboratory (Nov. 1992). A Report to the Secretary on the Department of Energy National Laboratories, Secretary of Energy Advisory Board Task Force on the Department of Energy National Laboratories, DOE (July 30, 1992). Progress Report on Implementing the Recommendations of the White House Science Council's Federal Laboratory Review Panel, Office of Science and Technology Policy, Executive Office of the President (July 1984). 
The Management of Research Institutions: A Look at Government Laboratories, Hans Mark and Arnold Levine, Scientific and Technical Information Branch, National Aeronautics and Space Administration (1984). Report of the White House Science Council Federal Laboratory Review Panel, Office of Science and Technology Policy, Executive Office of the President (May 20, 1983). President's Private Sector Survey on Cost Control Report on the Department of Energy, the Federal Energy Regulatory Commission, and the Nuclear Regulatory Commission (1983). The Department of Energy Multiprogram Laboratories: A Report of the Energy Research Advisory Board to the United States Department of Energy (Sept. 1982). Final Report of the Multiprogram Laboratory Panel, Volume II: Support Studies, Oak Ridge National Laboratory (Sept. 1982). The Multiprogram Laboratories: A National Resource for Nonnuclear Energy Research, Development and Demonstration (GAO/EMD-78-62, Mar. 22, 1978). Robert Galvin (Chairman) Chairman, Executive Committee Motorola, Inc.
|
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) progress in making needed management reforms in its national laboratories, focusing on: (1) the recommendations made by various advisory groups for addressing management weaknesses at DOE and the laboratories; and (2) how DOE and its laboratories have responded to these recommendations. GAO noted that: (1) for nearly 20 years, many advisory groups have found that while DOE's national laboratories do impressive research and development, they are unfocused, micromanaged by DOE, and do not function as an integrated national research and development system; (2) weaknesses in DOE's leadership and accountability are often cited as factors hindering fundamental reform of the laboratories' management; (3) as a result, advisory groups have made dozens of recommendations ranging from improving strategic planning to streamlining internal processes; (4) several past advisory groups have also suggested major organizational changes in the way the laboratories are directed; (5) to address past recommendations by advisory groups, DOE, at GAO's request, documented the actions it has taken, from creating new task forces to developing strategic laboratory plans; (6) while DOE has made some progress--principally by reducing paperwork burdens on its laboratories--most of its actions are still under way or have unclear outcomes; (7) furthermore, these actions lack the objectives, performance measures, and milestones needed to effectively track progress and account for results; (8) consequently, the Department cannot show how its actions have resulted, or may result, in fundamental change; (9) for example, its Strategic Laboratory Missions Plan, which was developed to give more focus and direction to the national laboratories, does not set priorities and is not tied to the annual budget process; (10) few experts and officials GAO consulted could show how the plan is used to focus missions or integrate the 
laboratory system; (11) DOE's latest technique for focusing the laboratories' missions is the technology roadmap; (12) roadmaps are plans that show how specific DOE activities relate to missions, goals, and performers; (13) roadmaps are a promising step but have been used in only a few mission areas and are not directly tied to DOE's budget process; (14) moreover, several laboratory directors questioned both the accuracy of the actions DOE has reported taking and their applicability at the laboratory level; (15) DOE's organizational weaknesses, which include unclear lines of authority, are a major reason why the Department has been unable to develop long-term solutions to the recurring problems reported by advisory groups; and (16) although DOE created the Laboratory Operations Board to help oversee laboratory management reform, it is only an advisory body within DOE's complex organizational structure and lacks the authority to direct change.
| 7,006 | 572 |
Under the Rehabilitation Act, a person is considered to have a disability if the individual has a physical or mental impairment that substantially limits one or more major life activities. Existing federal efforts are intended to promote the employment of individuals with disabilities in the federal workforce and help agencies carry out their responsibilities under the Rehabilitation Act. For example, federal statutes and regulations provide special hiring authorities for people with disabilities. These include Schedule A excepted service hiring authority--which permits the noncompetitive appointment of qualified individuals with intellectual, severe physical, or psychiatric disabilities without posting and publicizing the position--and appointments and noncompetitive conversion for veterans who are 30 percent or more disabled. To qualify for a Schedule A appointment, an applicant must generally provide proof of disability and a certification of job readiness. Proof of disability can come from a number of sources, including a licensed medical professional, or a state agency that issues or provides disability benefits. The proof of disability document does not need to detail the applicant's medical history or need for an accommodation. Executive Order 13548 committed the federal government to many of the goals of an executive order issued a decade earlier, but went further by requiring federal agencies to take certain actions. For example, Executive Order 13548 requires federal agencies to develop plans for hiring and retaining employees with disabilities and to designate a senior-level official to be accountable for meeting the goals of the order and to develop and implement the agency's plan. In addition, OPM and Labor have oversight responsibilities to ensure the successful implementation of the executive order (see table 1). 
For the purposes of determining agency progress in the employment of people with disabilities and setting targeted goals, the federal government tracks the number of individuals with disabilities in the workforce through OPM's Standard Form 256, Self-Disclosure of Disability (SF-256). Federal employees voluntarily submit this form to disclose that they have a disability, as defined by the Rehabilitation Act. For reporting purposes, disabilities are separated into two major categories: Targeted and Other Disabilities. Targeted disabilities, generally considered to be more severe, include such conditions as total deafness, complete paralysis, and psychiatric disabilities. Other disabilities include such conditions as partial hearing or vision loss, gastrointestinal disorders, and learning disabilities. Further, Labor is given responsibilities in the executive order to improve efforts to help employees who sustain work-related injuries and illnesses return to work. In July 2010, the Protecting Our Workers and Ensuring Reemployment (POWER) Initiative was established, led by Labor. This initiative aims to improve agency return-to-work outcomes by setting performance targets, collecting and analyzing injury and illness data, and prioritizing safety and health management programs that have proven effective in the past. Labor's Office of Workers' Compensation Programs (OWCP) reviews claims under the Federal Employees' Compensation Act (FECA), 5 U.S.C. § 8101, et seq., and makes decisions on eligibility and payments. We have completed a number of reviews that have identified steps that agencies could take to provide equal employment opportunity to qualified individuals with disabilities in the federal workforce. In July 2010, we held a forum that identified barriers to the federal employment of people with disabilities and leading practices to overcome these barriers.
Participants said that the most significant barrier keeping people with disabilities from the workplace is attitudinal and identified eight leading practices that agencies could implement to help the federal government become a model employer: (1) top leadership commitment; (2) accountability, including goals to help guide and sustain efforts; (3) regular surveying of the workforce on disability issues; (4) better coordination within and across agencies; (5) training for staff at all levels to disseminate leading practices; (6) career development opportunities inclusive of people with disabilities; (7) a flexible work environment; and (8) centralized funding at the agency level for reasonable accommodations. GAO, Highlights of a Forum: Participant-Identified Leading Practices that Could Increase the Employment of Individuals with Disabilities in the Federal Workforce, GAO-11-81SP (Washington, D.C.: Oct. 5, 2010). OPM, in consultation with EEOC, OMB, and Labor, issued a memorandum in November 2010 to heads of executive departments and agencies outlining the key requirements of the executive order and what elements must be included in agency disability hiring plans. These elements include listing the name of the senior-level official to be held accountable for meeting the goals of the executive order and describing how the agency will hire individuals with disabilities at all grade levels and in various job occupations. The memorandum also described strategies that agencies could take to become model employers of people with disabilities, such as reviewing all recruitment materials to ensure accessibility for people with disabilities. To help implement the strategies, OPM contracted in December 2010 with a private firm to recruit and to manage a list of Schedule A-certified individuals from which federal agencies can hire.
OPM received 66 agency plans for promoting the employment of individuals with disabilities, representing over 99 percent of the federal civilian executive branch workforce. OPM officials reviewed all the plans, recording whether they met criteria developed by OPM based on the executive order and its model strategies memorandum. OPM also identified and informed agencies about innovative ideas included in plans. In reviewing the plans, OPM found that many agency plans did not meet one or more of its review criteria (see fig. 1). For example, OPM's review found that 29 of the 66 agency plans did not include numerical goals for the hiring of people with disabilities. OPM also found that 9 of the 66 agency plans did not identify a senior-level official responsible for the development and implementation of the plan. Finally, only 7 of the 66 plans met all of the criteria; over half of the plans met 8 or fewer of the 13 criteria. However, OPM expected agencies to begin implementing their plans immediately, regardless of any unaddressed deficiencies. Agencies met some criteria more successfully than others. For example, OPM found that 40 of the 66 agency plans included a process for increasing the use of Schedule A hiring authority to hire more people with disabilities. In contrast, 29 of the 66 agency plans provided for the quarterly monitoring of the rate at which employees injured on the job successfully return to work. OPM provided agencies with written feedback on plan deficiencies and, beginning in June 2011, repeatedly and strongly encouraged agencies to address them. However, 32 out of the 59 agencies with deficiencies in their plans had not addressed them as of April 2012. Specifically, in June 2011, OPM provided agencies with access to reviews of their plans, which identified deficiencies, through OMB's Max Information System (MAX).
According to OPM, in July 2011, a White House official told agency senior executives that they were required to address deficiencies in their plans. In October and November 2011, OPM provided agencies with a list of the deficiencies identified in their plans, and asked agencies to determine how their plans could be improved. In December 2011, OPM again told agencies they were strongly encouraged to review and address plan deficiencies and provided agencies with several examples of plans that met all of the criteria. Though the executive order does not specifically authorize OPM to require agencies to address plan deficiencies, it calls for OPM to regularly report on agencies' progress in implementing their plans to the White House and others. In response to the executive order's reporting requirement, OPM officials told us that they had briefed White House officials on issues related to agencies' implementation of the executive order, but did not provide information on the deficiencies in all of the agency plans. In addition, OPM does not think that the federal government is on target to achieve the goals set in the executive order. While the executive order did not provide additional detail as to what information should be reported, providing information on the extent to which agencies' plans have met OPM's criteria would better enable the White House to hold agencies accountable for addressing plan deficiencies. In addition to reviewing agency plans, the executive order required OPM to develop mandatory training programs on the employment of people with disabilities for both human resources personnel and hiring managers, within 60 days of the executive order date. 
We have previously reported that training at all staff levels, in particular training on hiring, reasonable accommodations, and diversity awareness, can help disseminate leading practices throughout an agency and communicate expectations for implementation of policies and procedures related to improving employment of people with disabilities. Such policies and procedures could be communicated across the federal government with training on topics such as how to access and efficiently use the list of Schedule A-certified individuals, the availability of internships and fellowships, such as Labor's Workforce Recruitment Program, and online communities of practice established to help officials share best practices on hiring people with disabilities, such as eFedlink. In its November 2010 model strategies memorandum to heads of executive agencies, OPM stated that, in consultation with Labor, EEOC, and OMB, it was developing the mandatory training programs required by the executive order and that further information would be forthcoming. OPM officials told us in March 2012 that they are working with federal Chief Human Capital Officers (CHCO) to develop modules on topics such as using special hiring authority that will be available through HR University. Officials explained that they need to make the training uniform so that all personnel receive consistent information, and they expect the training modules to be ready by August 2012. Although it has yet to fully develop mandatory training programs, OPM has taken steps to train and inform federal officials about tools available to them. For example, OPM partnered with Labor, EEOC, and other agencies to provide elective training courses for federal officials involved in implementing the executive order on topics including: the executive order, model recruitment strategies, guidance on developing disability hiring plans, and return-to-work strategies.
OPM also conducted training on implementation of the executive order in July 2011 specifically for senior executives accountable for their agencies' plans. It also offers short online videos for hiring managers on topics such as Schedule A hiring authority. Further, other governmentwide training on employing people with disabilities exists. For example, Labor's Job Accommodation Network offers online training on relevant issues like applying the Americans with Disabilities Amendments Act and providing reasonable accommodations. Moreover, the Department of Defense's Computer/Electronic Accommodations Program offers online training modules to help federal employees understand the benefits of hiring people with disabilities. Nevertheless, agency officials we interviewed told us that they would like to have more comprehensive training on strategies for hiring and retaining individuals with disabilities, confirming the need for OPM to complete the development of the training programs required by the executive order. For example, officials from one agency said that more training on the relationship between return-to-work efforts and providing reasonable accommodations is needed, while officials from another agency identified a need for increased awareness of the Schedule A hiring process. Executive Order 13548 requires OPM to implement a system for reporting regularly to the president, heads of agencies, and the public on agencies' progress in implementing the objectives of the executive order. OPM is also to compile, and post on its website, governmentwide statistics on the hiring of individuals with disabilities. This is important because effectively measuring workforce demographics requires reliable data to inform decisions and to allow for individual and agencywide accountability. 
To measure and assess their progress towards achieving the goals of the executive order, agencies and OPM use data about disability status that employees voluntarily self-report on the SF-256. OPM's guidance to agencies for implementing the executive order explained that the data gathered from the SF-256 is crucial for agencies to determine whether they are achieving their disability hiring goals. Agencies also report these data to EEOC in an effort to identify and develop strategies to eliminate potential barriers to equal employment opportunities. According to the form, the data are used to develop reports to bring to light agency-specific or governmentwide deficiencies in the hiring, placement, and advancement of individuals with disabilities. The information is confidential and cannot be used to affect an employee in any way. Only staff who record the data in an agency's or OPM's personnel systems have access to the information. According to draft data from OPM, as stated earlier, the government hired approximately 20,000 employees with disabilities during fiscal years 2010 and 2011. However, according to officials at OPM, EEOC, VA, Education, and SSA, accurately measuring the number of current and newly hired employees with disabilities is an ongoing challenge. While the accuracy of the SF-256 data is unknown, agency officials and advocates for people with disabilities believe there is an undercount of employees with disabilities. For example, despite the safeguards in place explaining the confidentiality of the data, agency officials and advocates for people with disabilities told us that some individuals with disabilities may not disclose their disability status out of concern that they will be subjected to discrimination. Similarly, EEOC reported that some persons with disabilities are reluctant to self-identify because they are concerned that such disclosure will preclude them from advancement.
Additionally, some individuals may develop disabilities during federal employment and may not know how to or why they should update their disability status. We have reported that regularly encouraging employees to update their disability status allows agencies to be aware of any changes in their workforce. EEOC guidance recommends that agencies request that employees update their disability status every 2 to 4 years. As previously noted, disabled veterans with a compensable service-connected disability of 30 percent or more may be noncompetitively appointed and converted to a career appointment under 5 U.S.C. § 3112. Without accurate data, an agency's ability to establish appropriate policies and goals, and to assess progress towards those goals, is limited. Labor has taken several steps toward meeting the requirements of the executive order to improve return-to-work outcomes for employees injured on the job, including pursuing overall reform of the FECA system. Specifically, Labor developed new measures and targets to hold federal agencies accountable for improving their return-to-work outcomes within a 2-year period. Agencies were expected to improve return-to-work outcomes by 1 percent for fiscal year 2011 and an additional 2 percent in each of the following 3 years over the 2009 baseline. In fiscal year 2011, the federal government had a cumulative return-to-work rate of 91.6 percent, almost 5 percent better than the target rate of 86.7 percent. Goals such as these are useful tools to help agencies improve performance. Labor is also researching strategies that agencies can use to increase the successful return-to-work of employees who have sustained disabilities as a result of workplace injuries or illnesses. The results of this study are expected to be released in September 2012. Another Labor initiative is aimed at helping the federal government rehire injured federal workers who are not able to return to the job at which they were injured.
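The target schedule above (1 percentage point of improvement for fiscal year 2011, then 2 additional points in each of the following 3 years, all over the 2009 baseline) can be sketched as simple arithmetic. The 2009 baseline of 85.7 percent used below is an assumption back-computed from the stated fiscal year 2011 target of 86.7 percent; it is not a figure given in the report.

```python
# Sketch of Labor's return-to-work target schedule under the executive order.
# ASSUMPTION: the FY2009 baseline of 85.7% is inferred from the stated FY2011
# target of 86.7% (baseline + 1 point); the report does not state the baseline.
BASELINE_2009 = 85.7  # percent, assumed

def target_rate(fiscal_year: int) -> float:
    """Target return-to-work rate: +1 point in FY2011, then +2 additional
    points per year in FY2012-FY2014, all measured against the 2009 baseline."""
    if fiscal_year < 2011 or fiscal_year > 2014:
        raise ValueError("targets were set only for FY2011-FY2014")
    increment = 1 + 2 * (fiscal_year - 2011)
    return BASELINE_2009 + increment

for fy in range(2011, 2015):
    print(fy, round(target_rate(fy), 1))
```

Under these assumptions, the fiscal year 2011 government-wide rate of 91.6 percent exceeds the 86.7 percent target by 4.9 points, matching the report's "almost 5 percent better" characterization.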
OWCP initiated a 6-month pilot project in May 2011 to explore how Schedule A noncompetitive hiring authority might be used to rehire injured federal workers under FECA. As part of the project, OWCP provided guidance to claims staff, rehabilitation specialists, rehabilitation counselors, and employing agencies on the process of Schedule A certification and the steps it will take to facilitate Schedule A placements. According to Labor, the pilot identified obstacles to reemployment and provided input needed to determine whether such an effort can be expanded to other federal agencies. Identified obstacles included unanticipated questions from potential workers, such as whether acceptance of a Schedule A designation would require a "probationary" period, and what impact acceptance of a Schedule A position would have on their retirement benefits. Of the 48 individuals Labor screened for Schedule A certification, 45 obtained certification, of whom 5 have been placed into federal employment. Each of the four agencies we reviewed submitted a plan for implementing the executive order as required. Only VA's plan, as initially submitted, met all of OPM's criteria for satisfying the requirements of the executive order (see table 2). Education and SSA revised their plans based on feedback from OPM. Specifically, Education's revised plan states that Education will hire individuals with disabilities in all occupations and across all job series and grades. Education also clarified its commitment to coordinate with Labor to improve return-to-work outcomes through the POWER Initiative, and to engage and train managers on Schedule A hiring authority. Further, Education increased its goals for the percentage of job opportunity announcements that include information related to individuals with disabilities.
SSA revised its plan to include goals and planned activities under the POWER Initiative, including quarterly monitoring of return-to-work successes under the program and a strategy for identifying injured employees who would benefit from reasonable accommodations and reassignment. OMB submitted its plan in March 2012 but, according to OMB officials, the agency has not received feedback from OPM. Agencies had positive views about the executive order's requirement that they develop written plans to increase the number of federal employees with disabilities. In particular, Education, SSA, and VA said that the executive order provided an opportunity to further develop the written plans they already had in place for hiring and retaining employees with disabilities. Agencies were supportive of the goal of increasing the hiring and retention of federal employees with disabilities, and reported few challenges in implementing their plans to achieve this goal. Officials at all of the agencies we interviewed cited funding constraints as a potential obstacle to hiring more employees with disabilities. OMB officials also said that it was a challenge to identify individuals with the right skills and experience to fill their positions. For example, officials said that many of the candidates on OPM's list of Schedule A-certified individuals have entry level skills and not the more advanced skills and experience that are required for positions at OMB. Agency officials cited no special challenges with respect to retaining employees with disabilities at their agencies. In October 2010, we reported on eight leading practices that could help the federal government become a model employer for individuals with disabilities. These practices, which are consistent with the executive order's goal of increasing the number of individuals with disabilities in the federal government, have been implemented to varying degrees by the four agencies we contacted for this review. 
Top leadership commitment: Involvement of top agency leadership is necessary to overcome the resistance to change that agencies could face when trying to address attitudinal barriers to hiring individuals with disabilities. When leaders communicate their commitment throughout the organization, they send a clear message about the seriousness and business relevance of diversity management. Leaders at the agencies we talked with have, to varying degrees, communicated their commitment to hiring and retaining individuals with disabilities to their employees. Education has issued annual policy statements to its employees ensuring equal employment opportunity for all applicants and employees, including those with targeted disabilities, and officials told us that they routinely host events that address issues related to hiring and promoting equal employment opportunity. For example, in October 2008, Education hosted an event to encourage hiring individuals with disabilities and distributed a written guide about using Schedule A hiring authority to facilitate hiring individuals with targeted or severe disabilities, as well as disabled veterans. OMB officials said that the agency is briefing managers on the requirements of the executive order and planned to communicate its commitment to implementing the executive order to all staff in May 2012. SSA's Commissioner announced his support for employing individuals with disabilities and encouraged employees to continue efforts to hire and promote these individuals in a March 2009 broadcast to all employees. VA said that the Secretary regularly communicates his commitment to hiring and retaining employees with disabilities through memorandums to all employees. In a September 2010 memorandum, the Secretary announced the agency's goal of increasing the percentage of individuals with targeted disabilities that it hires and employs to 2 percent in fiscal year 2011.
Accountability: Accountability is critical to ensuring the success of an agency's efforts to implement leading practices and improve the employment of individuals with disabilities. To ensure accountability, agencies should set goals, determine measures to assess progress toward goals, evaluate staff and agency success in helping meet goals, and report results publicly. Education, SSA and VA's disability hiring plans all include goals that will allow them to measure their progress toward meeting the goals of the executive order. Prior to the executive order, Education issued a Disability Employment Program Strategic Plan for fiscal years 2011-2013 that established goals related to reasonable accommodations, and recruitment and retention, and offered strategies for meeting these goals, as well as ways to track and measure agency progress. At SSA, accountability for results related to the executive order is included in the performance plan of the senior-level official responsible for implementing it. VA specifically holds senior executives accountable for meeting agency numerical goals by including these goals in their contracts. Additionally, VA senior executives' contracts include a performance element for meeting hiring goals for individuals with targeted disabilities. OMB has not yet developed such goals. Regular surveying of the workforce on disability issues: Regularly surveying their workforces allows agencies to have more information about potential barriers to employment for people with disabilities, the effectiveness of their reasonable accommodation practices, and the extent to which employees with disabilities find the work environment friendly. To collect this information, agencies should survey their workforces at all stages of their employment, including asking employees to complete the SF-256 when they are hired, and asking relevant questions on employee feedback surveys and in exit interviews. 
VA officials said that they encourage new employees to complete the SF-256, and SSA reminds all employees to annually review their human resource records and update or correct information, including disability data. In addition, all of the agencies we contacted survey employees to solicit feedback on a range of topics. However, only SSA and VA include a question on disability status or reasonable accommodations on these surveys. In addition, Education and SSA said that they routinely conduct exit surveys to solicit information from employees who separate from service about their reasons for leaving. While VA has an exit survey, officials said it is not consistently administered to all employees who separate. Education officials said that they have additional means of obtaining information about barriers for employees with disabilities. For example, senior managers hold open forums with staff, and employees can submit feedback to management through the agency's Intranet. Education officials also reported that employees with disabilities have formed their own group to address access to assistive technology, which has helped Education to obtain improved technology, such as videophones. OMB officials said that their Diversity Council and Personnel Advisory Board provide forums for employees to discuss diversity issues, including those related to disabilities, and share them with senior leadership. Better coordination of roles and responsibilities: Often the responsibilities related to employment of people with disabilities are dispersed, which can create barriers to hiring if agency staff defer taking action, thinking that it is someone else's responsibility. Coordination across agencies can encourage agencies with special expertise in addressing employment obstacles for individuals with disabilities to share their knowledge with agencies that have not yet developed this expertise.
All of the agencies we interviewed had, to some extent, coordinated within and across agencies to improve their recruitment and retention efforts. Specifically, each agency has a designated section 508 coordinator who assists the agency in ensuring that, as required by section 508 of the Rehabilitation Act, employees with disabilities have access to information and data that are comparable to that provided to those without disabilities. In addition, each agency has a single office or primary point of contact that is responsible for overseeing activities related to hiring and retaining employees with disabilities. Officials at all of the agencies we talked to said their agencies engaged in one or more interagency efforts to address disability issues. All of these agencies participate in the CHCO Council, which facilitates sharing of best practices and challenges related to human capital issues, including those related to employees with disabilities. In addition, Education, OMB and SSA officials said that they work with state vocational rehabilitation agencies, which can help them identify accommodations that may be needed for new hires with disabilities. Education and SSA also participate in the Federal Disability Workforce Consortium, an interagency partnership working to improve recruitment, hiring, retention, and advancement of individuals with disabilities by sharing information on disability employment issues across government. SSA and VA have also participated in the Workforce Recruitment Program for College Students with Disabilities; VA and Education have also worked together to assist disabled veterans by providing unpaid work experience at Education, which may lead to permanent employment.
Managed by Labor's Office of Disability Employment Policy and the Department of Defense's Office of Diversity Management and Equal Opportunity, this program is a recruitment and referral effort that connects federal sector employers nationwide with highly motivated college students and recent graduates with disabilities. Agency officials said that the site is useful for seeing what other agencies are doing, and that they have also shared their own practices on the site. Training for staff at all levels: Agencies can leverage training to communicate expectations about implementation of policies and procedures related to improving employment of people with disabilities, and help disseminate leading practices that can help improve outcomes. All of the selected agencies provide some training for staff at all levels on the importance of workforce diversity. They also require managers and supervisors to take training on hiring procedures related to individuals with disabilities, and the use of Schedule A hiring authority. In addition, VA requires employees at all levels to take training specifically devoted to the legal rights of individuals with disabilities. At Education, this training is required for managers and supervisors, while at SSA it is available but optional for all employees. Career development opportunities: Opportunities for employees with disabilities to participate in work details, rotational assignments, and mentoring programs can lead to increased retention and improved employee satisfaction, and improve employment outcomes by helping managers identify employees with high potential. All of the agencies we interviewed provided special work details or rotational assignments for all employees; one reported having a program exclusively for those with disabilities. Specifically, Education uses Project SEARCH to provide internships for students with disabilities to help them become ready to work through on-the-job training.
Education officials reported that some of these internships have led to permanent employment at Education. A flexible work environment: Flexible work schedules, telework, and other types of reasonable accommodations are valuable tools for the recruitment and retention of employees, regardless of disability status. Such arrangements can make it easier for employees with health impairments to successfully function in the work environment or facilitate an injured employee's return to work. All of the agencies we interviewed provide flexible work arrangements, including flexible work schedules and teleworking. These agencies also make assistive technologies, such as screen reader software, available for employees with disabilities, which can facilitate their ability to take advantage of flexible work arrangements. Education, OMB, and SSA also offer all employees opportunities for job sharing. Centralized funding for reasonable accommodations: Having a central budget at the highest level of the agency can help ensure that employees with disabilities have access to reasonable accommodations by removing these expenses from local operational budgets and thus reducing managers' concerns about their costs. Education, SSA, and VA use centralized funding accounts to pay for reasonable accommodations for employees with disabilities. At Education, a centralized fund is usually used to cover expenses related to providing readers, interpreters, and personal attendants. However, in cases where these services are needed on a daily basis, Education may require the operating unit to hire someone full-time and pay for this from their unit budget. OMB provides funding from its own budget to pay for reasonable accommodations, rather than receiving funding from the Executive Office of the President. 
OMB officials also told us that they have been able to rely on the Department of Defense's Computer/Electronic Accommodations Program to help provide reasonable accommodations for some of their employees. This program facilitates access to assistive technology and services for people with disabilities, federal managers, supervisors, and information technology professionals by providing a single point of access for executive branch agencies. As the nation's largest employer, the federal government has the opportunity to be a model for the employment of people with disabilities. Consistent with the July 2010 executive order, OPM, Labor, and other agencies have helped provide the framework for federal agencies to take proactive steps to improve the hiring and retention of persons with disabilities. However, nearly 2 years after the executive order was signed, the federal government is not on track to achieve the executive order's goals. Although federal agencies have taken the first step by submitting action plans to OPM for review, many agency plans do not meet the criteria identified by OPM as essential to becoming a model employer of people with disabilities. Though the executive order does not specifically authorize OPM to require agencies to address deficiencies, regularly reporting to the president and others on agency progress in addressing these deficiencies may compel agencies to address them and better position the federal government to reach the goals of the executive order. Further, officials responsible for hiring at federal agencies need to acquire the necessary knowledge and skills to proactively recruit, hire, and retain individuals with disabilities. Agency officials we spoke with said more comprehensive training on the tools available to them, including the requirements of Schedule A hiring authority, is needed.
Until the mandatory training program is fully developed and communicated to agencies, opportunities to better inform relevant agency officials on how to increase the employment of individuals with disabilities may be missed. Finally, concerns have been raised by stakeholders, including EEOC, OPM, and advocates for people with disabilities, about the reliability of government statistics on the number of individuals with disabilities in the federal government. Most of the concerns focus on the likelihood of underreporting given the reliance on voluntary disclosure, but the extent of the underreporting is unknown. Unreliable data hinder OPM's ability to measure the population of federal workers with disabilities and may prevent the federal government from developing needed policies and procedures that support efforts to become a model employer of people with disabilities. Determining the accuracy of SF-256 data, for example, by examining the extent to which employees voluntarily disclose their disability status and reasons for nondisclosure, is an essential step for ensuring that OPM can measure progress towards the executive order's goals. To ensure that the federal government is well positioned to become a model employer of individuals with disabilities, we recommend that the Director of OPM take the following three actions: 1. Incorporate information about plan deficiencies into its regular reporting to the president on agencies' progress in implementing their plans, and inform agencies about this process to better ensure that the plan deficiencies are addressed. 2. Expedite the development of the mandatory training programs for hiring managers and human resource personnel on the employment of individuals with disabilities, as required by the executive order. 3. Assess the extent to which the SF-256 accurately measures progress toward the executive order's goal and explore options for improving the accuracy of SF-256 reporting, if needed, including strategies for encouraging employees to voluntarily disclose their disability status. Any such strategies must comply with legal standards governing disability-related inquiries, including ensuring that employee rights to voluntarily disclose a disability are not infringed upon. We provided a draft of this report to Education, EEOC, Labor, OMB, OPM, SSA, and VA for review and comment. In written comments, OPM agreed with the findings and recommendations identified in the report, and described actions being implemented in an effort to address them. To better ensure agencies address deficiencies identified in their disability hiring plans, OPM has begun notifying agencies that it plans to report remaining deficiencies to the president and on the OPM website by August 2012. With regard to the need to expedite the development of the mandatory training program, OPM, in coordination with partner agencies, has identified training for hiring managers and supervisors and for human resource personnel. Finally, OPM stated that it is engaged in discussions with the White House and stakeholder agencies to better define questions on the SF-256 to increase response rates. OPM also said it plans to work with EEOC and Labor to develop guidance for agencies to encourage voluntary self-disclosure through annual re-surveying of the workforce and providing employees with the option to complete the SF-256 when they request a reasonable accommodation. OPM expects to complete these efforts by January 2013. While these actions may help improve the accuracy of the SF-256 data, we think taking steps to assess the accuracy of the data will enhance OPM's efforts.
For example, understanding the extent to which employees do not voluntarily self-disclose their disability status and the reasons why may help target the messages agencies can use to encourage voluntary self-disclosure. Without such an understanding, OPM and agencies may miss opportunities to increase the accuracy of the data collected on the SF-256. Education, EEOC, OMB, OPM, and SSA provided technical comments, which have been incorporated into the report as appropriate. Labor and VA had no comments. We are sending copies of this report to Education, EEOC, Labor, OMB, OPM, SSA, and VA and to the appropriate congressional committees and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Yvonne Jones at (202) 512-2717 or [email protected], or Daniel Bertoni at (202) 512-7215 or [email protected]. Contact information for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Daniel Bertoni, (202) 512-7215, [email protected]. Yvonne D. Jones, (202) 512-2717, [email protected]. In addition to the contacts named above, Neil Pinney, Assistant Director; Debra Prescott, Assistant Director; Charlesetta Bailey; Benjamin Crawford; Catherine Croake; Karin Fangman; David Forgosh; Robert Gebhart; Michele Grgich; Amy Radovich; Terry Richardson; and Regina Santucci made key contributions to this report. Federal Employees' Compensation Act: Preliminary Observations on Fraud-Prevention Controls. GAO-12-402. Washington, D.C.: January 25, 2012. Coast Guard: Continued Improvements Needed to Address Potential Barriers to Equal Employment Opportunity. GAO-12-135. Washington, D.C.: December 6, 2011. Federal Workforce: Practices to Increase the Employment of Individuals with Disabilities. GAO-11-351T. Washington, D.C.: February 16, 2011.
Highlights of a Forum: Participant-Identified Leading Practices That Could Increase the Employment of Individuals with Disabilities in the Federal Workforce. GAO-11-81SP. Washington, D.C.: October 5, 2010. Highlights of a Forum: Actions that Could Increase Work Participation for Adults with Disabilities. GAO-10-812SP. Washington, D.C.: July 29, 2010. Federal Disability Programs: Coordination Could Facilitate Better Data Collection to Assess the Status of People with Disabilities. GAO-08-872T. Washington, D.C.: June 4, 2008. Federal Disability Programs: More Strategic Coordination Could Help Overcome Challenges to Needed Transformation. GAO-08-635. Washington, D.C.: May 20, 2008. Highlights of a Forum: Modernizing Federal Disability Policy. GAO-07-934SP. Washington, D.C.: August 3, 2007. Equal Employment Opportunity: Improved Coordination Needed between EEOC and OPM in Leading Federal Workplace EEO. GAO-06-214. Washington, D.C.: June 16, 2006. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
In July 2010, the president signed Executive Order 13548 committing the federal government to become a model employer of individuals with disabilities and assigned primary oversight responsibilities to OPM and Labor. According to OPM, the federal government is not on track to meet the goals of the executive order, which committed the federal government to hire 100,000 workers with disabilities over the next 5 years. GAO was asked to examine the efforts that (1) OPM and Labor have made in overseeing federal efforts to implement the executive order; and (2) selected agencies have taken to implement the executive order and to adopt leading practices for hiring and retaining employees with disabilities. To conduct this work, GAO reviewed relevant agency documents and interviewed appropriate agency officials. GAO conducted case studies at Education, SSA, VA, and OMB. The Office of Personnel Management (OPM) and the Department of Labor (Labor) have taken steps to implement the executive order and help agencies recruit, hire, and retain more employees with disabilities. OPM provided guidance to help agencies develop disability hiring plans and reviewed the 66 plans submitted. OPM identified deficiencies in most of the plans. For example, though 40 of 66 agencies included a process for increasing the use of a special hiring authority to increase the hiring of people with disabilities, 59 agencies did not meet all of OPM's review criteria, and 32 agencies had not addressed plan deficiencies as of April 2012. In response to executive order reporting requirements, OPM officials said they had briefed the White House on issues related to implementation, but they did not provide information on deficiencies in all plans. While the order does not specify what information these reports should include beyond addressing progress, providing information on deficiencies would enable the White House to hold agencies accountable. 
OPM is still developing the mandatory training programs for officials on the employment of individuals with disabilities, as required by the executive order. Several elective training efforts exist to help agencies hire and retain employees with disabilities, but agency officials said that more information would help them better use available tools. To track and measure progress towards meeting the executive order's goals, OPM relies on employees to voluntarily disclose a disability. Yet, agency officials, including OPM's, are concerned about the quality of the data. For example, agency officials noted that people may not disclose their disability due to concerns about how the information may be used. Without quality data, agencies may be challenged to effectively implement and assess the impact of their disability hiring plans. The Department of Education (Education), Social Security Administration (SSA), Office of Management and Budget (OMB), and Department of Veterans Affairs (VA) submitted disability hiring plans, and have taken steps to implement leading practices for increasing employment of individuals with disabilities, such as demonstrating top leadership commitment. The executive order provided SSA, VA, and Education an opportunity to further develop existing written plans. However, officials at these agencies cited funding constraints as a potential obstacle to hiring more employees with disabilities. In terms of leading practices, all four agencies have communicated their commitment to hiring and retaining individuals with disabilities and coordinated within or across other agencies to improve their recruitment and retention efforts. For example, each agency has a single point of contact to help ensure that employees with disabilities have access to information that is comparable to that provided to those without disabilities, and for overseeing activities related to hiring and retaining employees with disabilities. 
In addition, VA holds senior managers accountable for meeting hiring goals by including targets in their contracts. Each agency requires training for managers and supervisors on procedures for hiring individuals with disabilities, and VA further requires that all employees receive training on the legal rights of individuals with disabilities. Education, SSA, and VA rely on centralized funding accounts to pay for reasonable accommodations. GAO recommends that OPM: (1) incorporate information about plan deficiencies into its required regular reporting to the president on implementing the executive order and inform agencies about this process; (2) expedite the development of the mandatory training programs required by the executive order; and (3) assess the accuracy of the data used to measure progress toward the executive order's goals and, if needed, explore options for improving its ability to measure the population of federal employees with disabilities, including strategies for encouraging employees to voluntarily disclose disability status. OPM agreed with GAO's recommendations.
NCIC is a law enforcement database maintained by the FBI's Criminal Justice Information Services (CJIS) Division and was first established in 1967 to assist LEAs in apprehending fugitives and locating stolen property. In 1975, NCIC was expanded to add the missing persons file, which includes law enforcement records associated with missing children and certain at-risk adults. The missing persons file contains records for individuals reported missing who: (1) have a proven physical or mental disability; (2) are missing under circumstances indicating that they may be in physical danger; (3) are missing after a catastrophe; (4) are missing under circumstances indicating their disappearance may not have been voluntary; (5) are under the age of 21 and do not meet the above criteria; or (6) are 21 and older and do not meet any of the above criteria but for whom there is a reasonable concern for their safety. The unidentified persons file was implemented in 1983 to include law enforcement records associated with unidentified remains and living individuals who cannot be identified, such as those individuals who cannot identify themselves, including infants or individuals with amnesia. When a missing persons record is entered or modified, NCIC automatically compares the data in that record against all unidentified persons records in NCIC. These comparisons are performed daily on the records that were entered or modified on the previous day. If a potential match is identified through this process, the agency responsible for entering the record is notified. Management of NCIC is shared between CJIS and the authorized federal, state, and local agencies that access the system. CJIS Systems Agencies (CSA)--criminal justice agencies with overall responsibility for the administration and usage of NCIC within a district, state, territory, or federal agency--provide local governance of NCIC use.
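The daily cross-file comparison described above can be sketched as a batch job: missing-person records entered or modified the previous day are compared against all unidentified-person records, and candidate matches are routed back to the entering agency. The record fields and the match rule below are illustrative assumptions; the report does not describe NCIC's actual matching criteria.

```python
# Illustrative sketch of NCIC's daily missing/unidentified cross-comparison.
# ASSUMPTIONS: the record fields (sex, estimated_age) and the match rule are
# invented for illustration; NCIC's actual matching logic is not described here.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PersonRecord:
    ori: str            # Originating Agency Identifier of the entering agency
    sex: str
    estimated_age: int
    last_modified: date

def daily_comparison(missing, unidentified, run_date):
    """Yield (missing, unidentified) candidate pairs for missing-person
    records that were entered or modified on the day before run_date."""
    yesterday = run_date - timedelta(days=1)
    for m in missing:
        if m.last_modified != yesterday:
            continue  # only yesterday's entries/modifications are re-compared
        for u in unidentified:
            # ASSUMED match rule: same sex and estimated ages within 5 years
            if m.sex == u.sex and abs(m.estimated_age - u.estimated_age) <= 5:
                yield m, u  # the agency identified by m.ori would be notified
```

In this sketch the entering agency is identified by its ORI on the record, mirroring how NCIC notifies the agency responsible for entering the record when a potential match is found.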
A CSA generally operates its own computer systems, determines what agencies within its jurisdiction may access and enter information into NCIC, and is responsible for assuring LEA compliance with operating procedures within its jurisdiction. An Advisory Policy Board, with representatives from criminal justice and national security agencies throughout the United States, and working groups are responsible for establishing policy for NCIC use by federal, state, and local agencies and providing advice and guidance on all CJIS Division programs, including NCIC. NamUs became operational in 2009 and was designed to improve access to database information by people who can help solve long-term missing and unidentified persons cases--those cases that have been open for 30 days or more. NamUs comprises three internet-based data repositories that can be used by law enforcement, medical examiners, coroners, victim advocates or family members, and the general public to enter and search for information on missing and unidentified persons cases. These repositories include the missing person database (NamUs-MP), the unidentified person database (NamUs-UP), and the unclaimed persons database. NamUs-MP and NamUs-UP allow automated and manual comparison of the case records contained in each. The University of North Texas Health Science Center, Center for Human Identification (UNTCHI) has managed and administered the NamUs program under a cooperative agreement with NIJ since October 2011. Two Directors within UNTCHI's Forensic and Investigative Services Unit are responsible for daily management, oversight, and planning associated with NamUs. To gain access to NCIC, an agency must have authorization under federal law and obtain an Originating Agency Identifier (ORI). Additionally, eight regional system administrators (RSAs) and eight forensic specialists provide individualized case support for NamUs. 
In general, to be authorized under federal law for full access to NCIC, an agency must be a governmental agency that meets the definition of a CJA. Specifically, data stored in NCIC is "criminal justice agency information and access to that data is restricted to duly authorized users," namely CJAs as defined in regulation. The CJIS Security Policy allows data associated with the missing and unidentified persons files to be disclosed to and used by government agencies for official purposes or private entities granted access by law. For example, there is a specific provision that allows these files to be disclosed to the National Center for Missing and Exploited Children, a nongovernmental organization, to assist in its efforts to operate a nationwide missing children hotline, among other things. As of February 2016, there were almost 118,000 active ORI numbers that granted authorized agencies at least limited access to NCIC. Table 1 shows the different types of users granted ORI numbers to access NCIC and their associated access levels. Unlike NCIC, any member of the public may register to use NamUs and access published case information. When cases are entered, the RSA carries out a validation process by reviewing each case entered within his or her region to ensure the validity and accuracy of the information provided and determine whether the case may be published to the public website. Before any case may be published to the public NamUs site, the RSA must confirm the validity of that case with the LEA or other responsible official with jurisdiction by obtaining an LEA case number or an NCIC number. The RSA also vets registration applications for non-public users--professionals affiliated with agencies responsible for missing or unidentified persons cases. In addition to the published case information, these non-public registered users may also access unpublished case information. 
Table 2 shows the types of individuals that may register as NamUs users for the missing persons and unidentified persons files, and their access levels. NCIC data include criminal justice agency information, and access to such data is restricted by law to only authorized users. Because many users of NamUs are not authorized to access NCIC, there are no direct links or data transfers between the systems. In addition, NCIC and NamUs only contain information manually entered by their respective authorized users. As a result, while both NCIC and NamUs contain information on long-term missing and unidentified persons, they remain separate systems. DOJ could facilitate more efficient sharing of information on missing and unidentified persons cases contained in NCIC and NamUs. The two systems have overlapping purposes specifically with regard to data associated with long-term missing and unidentified persons cases--both systems collect and manage data that officials can use to solve these cases. Further, three key characteristics of NCIC and NamUs--the systems' records, registered users, and data validation efforts--are fragmented or overlapping, creating the risk of duplication. We found that, as CJIS and NIJ proceed with planned upgrades to both databases, opportunities may exist to more efficiently use data related to missing and unidentified persons cases, in part because no mechanism currently exists to share information between NCIC and NamUs. Figure 3 below describes the purpose of each system and explains how certain characteristics contribute to fragmentation, overlap, or both. Figure 3: Comparison of Fragmentation and Overlap in Key Characteristics of the National Crime Information Center (NCIC) and National Missing and Unidentified Persons System (NamUs) (interactive graphic; see appendix II for a non-interactive version). 
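As described earlier, NCIC automatically compares each newly entered or modified missing persons record against all unidentified persons records once a day, and NamUs-MP and NamUs-UP support similar automated comparisons. The sketch below illustrates the general shape of such a cross-match; the record fields, matching criteria, and notification output are illustrative assumptions, not either system's actual design.

```python
from dataclasses import dataclass

@dataclass
class CaseRecord:
    record_id: str
    sex: str
    height_in: int  # reported or estimated height, in inches (assumed field)
    agency: str     # originating agency to notify on a potential match

def potential_matches(changed_missing, all_unidentified, height_tolerance=2):
    """Cross-match yesterday's new or modified missing persons records
    against all unidentified persons records, using simple demographic
    criteria (illustrative only)."""
    notifications = []
    for m in changed_missing:
        for u in all_unidentified:
            if m.sex == u.sex and abs(m.height_in - u.height_in) <= height_tolerance:
                # The agency that entered the missing persons record
                # would be notified of the potential match.
                notifications.append((m.agency, m.record_id, u.record_id))
    return notifications
```

A real system would match on many more attributes (dental records, DNA indices, dates of disappearance) and run as a scheduled batch job over the previous day's changes, but the daily batch-and-notify structure is the same.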
Database Records: NCIC and NamUs contain fragmented information associated with long-term missing and unidentified persons. Specifically, information about long-term missing or unidentified persons may be captured in one system, but not the other. As a result, if users do not have access to or consult the missing and unidentified persons files in both data systems, they may miss vital evidence that could help to solve a given case. For example, in fiscal year 2015, 3,170 missing persons cases were reported to NamUs. During the same time period, 84,401 of the missing persons records reported to NCIC remained open after 30 days and became long-term cases. Conversely, in fiscal year 2015, 1,205 unidentified persons cases were reported to NamUs, while 830 records were reported to NCIC. NamUs also accepts and maintains records of missing and unidentified persons cases that are not published on its public website, in part because they may not meet criteria for entry into NCIC. According to NamUs officials, cases may remain unpublished for several reasons, including (1) they are undergoing the validation process, (2) they lack information required to complete the entry, (3) the responsible agency has requested the report go unpublished for investigative reasons, (4) a report has not been filed with law enforcement, or (5) law enforcement does not consider the person missing. For example, according to NamUs officials, a non-profit agency entered approximately 800 missing migrant cases that have remained unpublished on the NamUs public website because they do not have active law enforcement investigations associated with the cases. Because they do not have active law enforcement investigations on file and NCIC only accepts documented criminal justice information, it is highly unlikely that these approximately 800 cases are present in NCIC. 
Since access to unpublished cases is limited to authorized LEA and medicolegal investigators that have registered as NamUs users, investigators using only NCIC cannot use information from these NamUs cases to assist in solving unidentified persons cases. In addition, the number of NCIC cases that are also recorded in NamUs varies greatly among states, further contributing to fragmentation. For example, of the long-term missing persons cases officials in each state reported to NCIC in fiscal year 2015, the proportion of these NCIC cases that were also recorded in NamUs ranged from less than 1 to almost 40 percent. However, in our nongeneralizable review of laws in Arizona, California, and New York, the state laws specifically associated with reporting missing persons cases to NCIC or NamUs did not contribute to variation in reporting rates. Specifically, in fiscal year 2015, approximately 2 to 3.5 percent of the long-term cases reported by officials in each state to NCIC were ultimately reported to NamUs. These reporting rates are very similar despite the fact that, as discussed previously, we chose these three states because they had different requirements associated with reporting missing and unidentified persons. Registered Users: Fragmentation between the records reported to NCIC and NamUs also exists because different user groups with different responsibilities enter data on missing and unidentified persons. The fact that different user bases report information to each system means that certain types of cases may be found in one system but not the other. This creates inefficiencies for officials seeking to solve long-term missing and unidentified persons cases who have to enter information and search both systems to get all the available information. Further, the NCIC user base is significantly larger than the NamUs user base, which likely contributes to the discrepancies in the number of long-term missing persons cases reported to each system. 
As of February 2016, almost 118,000 agencies had at least limited access to NCIC, with approximately 113,000 granted full access to all 21 NCIC files, including the missing and unidentified persons files. As of November 2015, just over 3,000 individuals were registered as non-public users of NamUs-MP and approximately 2,000 individuals were registered as non-public users of NamUs-UP. These registered users represent at least 1,990 agencies, less than 2 percent of the number of agencies registered to use NCIC.

In 1996, a person was reported missing and the case was entered into National Crime Information Center (NCIC). Three days later, a decomposed body was found a few miles away; however, no police report was ever generated for the person's death nor was an entry made into NCIC. In 2013, the detective following up on the missing person case searched National Missing and Unidentified Persons System (NamUs) and found that a medical examiner had entered the unidentified remains case into NamUs. As a result, 16 years after the missing persons case was originally reported, DNA testing verified a match between the unidentified remains reported by a medical examiner to NamUs and the missing person case reported by law enforcement to NCIC in 1996.

In addition to the difference in the number of agencies registered to use NCIC or NamUs, there is variation in the types of agencies that are registered with each system, possibly contributing to differences in the type of case information reported. For instance, NamUs has a larger number of registered users in the medicolegal field (either as medical examiners, coroners, forensic odontologists, or other forensic personnel), which may explain why a greater number of unidentified persons cases are reported to NamUs. 
Specifically, while medical examiners and coroners represent less than 0.1 percent of NCIC's total active ORIs, approximately 18 percent of agencies registered with NamUs have at least one user registered in the medicolegal field. Similarly, virtually all LEAs use NCIC, with only a small fraction registered to use NamUs, likely contributing to the low proportion of long-term missing persons cases reported to both NCIC and NamUs by LEAs. Additionally, members of the public who do not have access to NCIC and are not affiliated with any type of agency can report missing persons cases to NamUs. The variation in the types of users registered with NCIC or NamUs ultimately limits the usefulness of either system, as important case information may be missed by individuals who do not access both systems. According to one LEA official we spoke with, his unit has had more than a dozen resolutions of cold cases as a result of information contained in NamUs since NamUs was established in 2009. Data Validation Efforts: NamUs uses a validation process to ensure that all missing and unidentified persons cases include either the local LEA case number or an NCIC number before they are published to the public website. NamUs also has some ad hoc processes in place, beyond routine RSA responsibilities, designed to help ensure that data in selected states on missing and unidentified persons contained in NCIC are captured by NamUs. However, while intended in part to minimize fragmentation, these processes introduce additional inefficiencies caused by overlapping and potentially duplicative activities. Specifically, as part of the NamUs validation process, at least once a year, the RSA requests records from NCIC and manually reviews the data in both systems to ensure consistency. 
For example, from January 2015 through September 2015, RSAs requested and manually reviewed statewide NCIC records for at least 22,000 missing persons and 4,532 unidentified persons cases to ensure that if cases entered into NamUs were present in NCIC, the two systems contained comparable information. According to NIJ officials, if RSAs identify errors or missing information in an NCIC record during the course of their work, they will alert the agency responsible for the case. It is then the responsibility of that agency to enter or update the NCIC record. The potential for duplication also exists when agencies want to utilize both NCIC and NamUs. For example, if agencies with access wanted their case data to exist in both systems, the system limitations would require them to enter the information in one system and then enter the same data in the second system, resulting in duplicative data entry. Officials from one state agency we interviewed noted that they have a full-time employee who is solely responsible for entering case data into NamUs after it has been entered into NCIC. Further, when attempting to use information from either NCIC or NamUs, users are required to access and search each system separately, and then manually compare results. Fragmentation and overlap between NCIC and NamUs result in inefficiencies primarily because there is no systematic mechanism for sharing information between the systems. According to CJIS officials, in lieu of such a mechanism, they created a standard search that state and local agencies can use to request an extract of all of their missing and unidentified persons data contained in NCIC. Upon receipt of the resulting data extract, the requesting agency would then be responsible for entering the provided data into NamUs. 
However, this solution to share information does not address the inefficiencies created by the lack of an automated mechanism, as it requires additional work on the part of responsible officials and results in the potential for duplication. We have previously reported that when fragmentation or overlap exists, there may be opportunities to increase efficiency. In particular, our prior work identified management approaches that may improve efficiency, including implementing process improvement methods and technology improvements while documenting such efforts to help ensure operations are carried out as intended. Additionally, we have reported that federal agencies have hundreds of incompatible information-technology networks and systems that hinder governmentwide sharing of information and, as a result, information technology solutions can be identified to help increase the efficiency and effectiveness of these systems. According to CJIS officials, the most significant factors limiting a systematic information-sharing mechanism between NCIC and NamUs are that (1) access to NCIC is restricted to authorized users, (2) NamUs has not been granted specific access to NCIC by law, and (3) NamUs has a public interface. Because NamUs lacks specific statutory authority to access NCIC and the public is prohibited from accessing NCIC data, CJIS officials stated that fully exchanging data with NamUs would constitute an unauthorized dissemination of NCIC information. As a result, these officials stated that the CJIS Advisory Policy Board determined that NCIC could not be fully connected to NamUs. While there are statutory limitations regarding direct access to NCIC, there may be options to better share information that are technically and legally feasible. Thus, opportunities may exist within the current statutory framework to address fragmentation and overlap between the two systems. 
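One such opportunity is the annual RSA consistency review described earlier, in which statewide NCIC extracts are compared by hand against NamUs records. For an agency that has already obtained its own NCIC extract, this review could in principle be automated as a simple cross-check. The sketch below is hypothetical; the key field, the compared fields, and the record layout are assumptions, not the systems' actual schemas.

```python
def find_discrepancies(ncic_extract, namus_records,
                       key="ncic_number", fields=("sex", "race", "date_last_seen")):
    """Compare NamUs records against an agency-requested NCIC data extract
    and flag records that are absent from the extract or that differ on
    core fields. All field names here are illustrative assumptions."""
    ncic_by_key = {rec[key]: rec for rec in ncic_extract}
    issues = []
    for rec in namus_records:
        counterpart = ncic_by_key.get(rec[key])
        if counterpart is None:
            # Case is in NamUs but has no matching NCIC record in the extract.
            issues.append((rec[key], "no matching NCIC record in extract"))
            continue
        for field in fields:
            if rec.get(field) != counterpart.get(field):
                # Same case number, but the two systems disagree on this field.
                issues.append((rec[key], field))
    return issues
```

The output would still need the manual follow-up the report describes: the agency responsible for the NCIC record, not NamUs staff, must make any correction.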
Our review of the data elements required by each system indicates a high degree of commonality between the data that can be collected by NCIC and NamUs, which could help facilitate the sharing of information. Specifically, 12 of the 15 data fields required by NamUs for a missing persons case and 12 of the 14 data fields required by NamUs for an unidentified persons case are also present in NCIC. Further, stakeholders we interviewed from three states offered a variety of solutions to address the fragmentation and overlap between NCIC and NamUs. For example, a law enforcement official in one state noted that a notification alert could be added to NCIC to inform users when related case data was also present in NamUs. Another official stated that a query process that allowed authorized users to search information from both systems simultaneously would be helpful in minimizing the need to regularly check both systems. According to CJIS officials, a joint search function would likely require the systems to be fully integrated; however, CJIS officials noted that they had not formally evaluated the option because they believe it is currently precluded by federal law. While full integration of the two systems may be precluded, a joint search function may not equate to full integration. Authorized users with access to both systems could benefit from the efficiencies of such a search function. However, DOJ will not know whether this type of function could be technically or legally feasible until it evaluates the option. Implementing mechanisms to share information without fully integrating the systems could help improve the efficiency of efforts to solve long-term missing and unidentified persons cases using NCIC and NamUs. Officials in another state suggested that a single data entry point could be used to populate both NCIC and NamUs to minimize duplicate data entry. 
This solution to share information has also been put forward as a requirement in several bills that have been introduced in Congress since 2009. In 2010, DOJ undertook an effort in response to the requirement in proposed legislation to determine whether it would be technically possible for a check box to be added to NCIC that would allow users to indicate that they would like the case information to be automatically entered into NamUs as well. According to CJIS officials, this type of check box is already in use for other NCIC files, which means it could be technically feasible for the missing and unidentified persons files. However, according to CJIS officials, this system change was not pursued for the missing and unidentified persons files because the proposed legislation did not pass, and consequently there was no legal requirement that CJIS implement this mechanism to share information. Nevertheless, without evaluating this mechanism, DOJ will not know whether it is technically and legally feasible. As a result, DOJ may be missing an opportunity to share information between NCIC and NamUs that would better help users close their missing or unidentified persons cases. Both NCIC and NamUs are in the early stages of upgrading their systems; however, neither effort includes plans to improve sharing information between these systems. These ongoing upgrade processes provide DOJ with an opportunity to evaluate and document the technical and legal feasibility of options to improve sharing NCIC and NamUs missing and unidentified persons information, and to integrate appropriate changes, if any, into the next versions of the systems. According to NIJ officials, the discovery phase of the NamUs upgrade to NamUs 2.0 has been completed, and officials have developed a prioritized list of 793 items that they would like to include in the upgrade. 
The feasibility of each item and timelines for implementation will be determined in an iterative process based on time and funding considerations. According to the officials, the highest priority items are related to enhancing the existing capabilities of NamUs to make them more efficient and user-friendly. Our review of the prioritization document does not indicate that efforts to improve sharing of information with NCIC are included in the ongoing upgrade. NIJ officials stated that their goal for the upgrade is to share data more easily with a variety of state and local systems. According to CJIS officials, the upgrade process for NCIC began in 2014, with a canvass of 500 state, local, tribal, and federal NCIC users to identify the type of functionality users would like to see included in an updated system. The officials said that this process yielded more than 5,500 recommendations related to all 21 files contained in NCIC. CJIS officials did not specify how many recommendations were related to the missing and unidentified persons files, but did note that they received some feedback related to improving the ability to share data with NamUs. Based on the user canvass, CJIS developed a high-level concept paper that will be discussed at the Advisory Policy Board's June 2016 meeting. Following Advisory Policy Board approval, CJIS will begin the development process, including identifying specific tasks. CJIS officials explained that because of the uncertainty regarding approval, and the way in which the upgrade development process will be structured, there are no specific timeframes available related to the update. The officials stated it will likely be several years before there are any deliverables associated with the effort. 
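The check-box mechanism discussed earlier, which would let an NCIC user flag a new missing persons entry for automatic entry into NamUs, amounts to a dual-write with a field translation step, made possible by the high commonality between the two systems' required data fields. The following is a minimal sketch under stated assumptions: the field names on both sides and the translation itself are invented for illustration, and any real implementation would depend on the legal authorization questions discussed above.

```python
def to_namus_format(ncic_record):
    """Translate an NCIC-style record into NamUs-style fields.
    The field names on both sides are hypothetical."""
    last, _, first = ncic_record["NAM"].partition(",")
    return {
        "last_name": last.strip(),
        "first_name": first.strip(),
        "sex": ncic_record["SEX"],
        "date_last_seen": ncic_record["DLC"],
    }

def submit_missing_person(record, also_send_to_namus, ncic_db, namus_queue):
    """Single data-entry point: the record is always written to NCIC;
    if the user ticked the (hypothetical) NamUs check box, a translated
    copy is queued for entry into NamUs as well."""
    ncic_db.append(record)
    if also_send_to_namus:
        namus_queue.append(to_namus_format(record))
```

A design like this would avoid the duplicative manual entry described above while keeping the two systems separate: NamUs never reads NCIC directly, and only receives records that an authorized user explicitly chose to share.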
While we understand there are statutory restrictions regarding access to NCIC that must be adhered to, and we recognize that stakeholders may use NCIC and NamUs in distinct ways, DOJ has opportunities to explore available options that could potentially allow for more efficient use of information on missing and unidentified persons by reducing fragmentation and overlap. Without evaluating the technical and legal feasibility of options for sharing information, documenting the results of the evaluation, and, as appropriate, implementing one or more of these options, potential inefficiencies will persist. As a result, users who do not have access to information from both systems may continue to miss vital case information. Every year, more than 600,000 people are reported missing, and hundreds of sets of human remains go unidentified. Solving thousands of long-term missing and unidentified persons cases requires the coordinated use of case data contained in national databases, such as NCIC and NamUs. However, because no mechanism exists to share information between these systems, the fragmented and overlapping nature of the systems leads to inefficiencies in solving cases. Although there are statutory differences between the systems, there are potential options for sharing information--such as a notification to inform NCIC users if related case data were present in NamUs--that could reduce inefficiencies between NCIC and NamUs within the existing legal framework. The ongoing upgrade processes for both systems provide DOJ with the opportunity to evaluate the technical and legal feasibility of various options, document the results, and incorporate feasible options, as appropriate. 
Without doing so, and without subsequently implementing options determined to be appropriate during the next cycle of system upgrades, potential inefficiencies will persist and users who do not have access to information from both systems may be missing vital information that could be used to solve cases. To allow for more efficient use of data on missing and unidentified persons contained in the NCIC's Missing Persons and Unidentified Persons files and NamUs, the Directors of the FBI and NIJ should evaluate the feasibility of sharing certain information among authorized users, document the results of this evaluation, and incorporate, as appropriate, legally and technically feasible options for sharing the information. We provided a draft of this product to DOJ for review and comment. On May 13, 2016, an official with DOJ's Justice Management Division sent us an email stating that DOJ disagreed with our recommendation, because DOJ believes it does not have the legal authority to fulfill the corrective action as described in the proposed recommendation. Specifically, DOJ stated that NamUs does not qualify, under federal law, for access to NCIC and is not an authorized user to receive NCIC data. Therefore, DOJ does not believe there is value in evaluating the technical feasibility of integrating NamUs and NCIC. As stated throughout this report, we understand the legal framework placed on NCIC and that it may be restricted from fully integrating with a public database. However, this statutory restriction does not preclude DOJ from exploring options to more efficiently share information within the confines of the current legal framework. Moreover, our recommendation is not about the technical feasibility of integrating NCIC and NamUs but about studying whether there are both technically and legally feasible options for better sharing long-term missing and unidentified persons information. 
We continue to believe that there may be mechanisms for better sharing this information--such as a notification alert in NCIC to inform users when related case data is also present in NamUs--that would comply with the legal restrictions. However, until DOJ studies whether such feasible mechanisms exist, it will be unable to make this determination. Without evaluating the technical and legal feasibility of options for sharing information, DOJ risks continued inefficiencies through fragmentation and overlap. Moreover, authorized users who do not have automated or timesaving access to information from both systems may continue to miss critical information that would help solve these cases. DOJ also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Attorney General of the United States, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In response to Senate Report 113-181 (accompanying the Consolidated and Further Continuing Appropriations Act of 2015), this report addresses the following objectives: 1. Describe access to and use of missing and unidentified persons information contained in the National Crime Information Center (NCIC) and the National Missing and Unidentified Persons System (NamUs). 2. Evaluate the extent to which opportunities exist to improve the use of missing and unidentified persons information contained in NCIC and NamUs. 
To describe the access to and use of missing and unidentified persons information contained in NCIC and NamUs, we reviewed and compared NCIC and NamUs operating and policy manuals and data entry guides. In addition, we observed access to and use of missing and unidentified persons information in NamUs. To corroborate this information, we conducted interviews with officials who access and use NCIC and NamUs, including state criminal justice agencies, state and local law enforcement agencies (LEA), medical examiners, and coroners. To determine the extent to which opportunities exist to improve the use of missing and unidentified persons information using NCIC and NamUs, we analyzed summary level case data by state for each system for fiscal year 2015. Because of statutory limitations on access to criminal justice information contained in NCIC, we did not assess record level case data from either NCIC or NamUs. However, we compared NCIC summary level data to NamUs summary level data, and found it sufficient for demonstrating the extent to which information contained in the two systems is similar or different. We assessed the reliability of the data contained in NCIC and NamUs by, among other things, reviewing database operating manuals and quality assurance protocols, and by interviewing officials responsible for managing the systems. We found the data to be reliable for our purposes. We also reviewed and compared NCIC and NamUs operating manuals and data entry guides to determine the comparability of minimum data requirements for record entry, individual data elements in each system, and their definitions. Our review of these documents allowed us to identify details about the purpose and design of each system that may support or preclude data sharing. In addition, we reviewed past and current CJIS and NIJ plans related to sharing information between NCIC and NamUs. 
We reviewed laws, policies, and information associated with reporting and sharing information on missing and unidentified persons, to include information about the types of users that can access or enter information into each system within three categories: (1) LEA; (2) non-LEA criminal justice agency (CJA)--such as a court; and (3) medicolegal investigator--such as a coroner. We assessed this information against Standards for Internal Control in the Federal Government and GAO's evaluation and management guide for fragmentation, overlap, and duplication. NCIC and NamUs assign user access differently, with NCIC assigning access at the agency level, while NamUs provides access directly to individuals. Because of this, for the purposes of comparing NCIC and NamUs users, we consolidated information from NamUs for non-public users into their relevant agencies so as not to overstate the number of NamUs users as compared to NCIC. However, there are some limitations associated with this effort. For example, for a city-wide LEA such as the New York City Police Department, NCIC assigns Originating Agency Identifier (ORI) numbers to each office within that particular agency, as the ORI number is used to indicate the LEA office directly responsible for a given NCIC record entry. When individuals register for NamUs, they may or may not provide the same level of detail regarding their specific office within a greater LEA, which means we may count an agency once for NamUs, even though that agency likely has multiple ORIs associated with it for NCIC. Further, because of the way user permissions are determined in NamUs, some LEAs with DNA or forensic specialists may also be included in the medicolegal investigator category, whereas they are likely to use only a single LEA ORI in NCIC. 
To address these limitations, this report presents information about both the number and type of individual users registered with NamUs, as well as the number and type of agencies that these users represent. To corroborate the information above, and to obtain more in-depth perspectives about the extent to which opportunities exist to improve the collection and use of missing and unidentified persons information, we conducted interviews. Specifically, we interviewed Department of Justice (DOJ) officials, relevant stakeholders from selected states, and officials from nongovernmental agencies, in part to learn about past and current efforts to share information between NCIC and NamUs. In addition, we selected Arizona, California, and New York to include in this review, based in part on their respective state laws and policies associated with missing and unidentified persons, as well as the number of cases reported to each database for fiscal year 2015. Specifically, after identifying the 10 states that reported the highest number of cases to both NCIC and NamUs, we then compared four characteristics of state laws and policies related to reporting missing and unidentified persons. These included whether the state law specified (1) required reporting to NCIC, NamUs, or other federal databases; (2) reporting requirements for specific populations; (3) a timeframe for reporting missing persons cases; and (4) a timeframe for reporting unidentified remains. We chose Arizona, California, and New York to provide illustrative examples of different types of state laws. Table 1 provides a high-level comparison of the reporting laws for each state we reviewed. We then selected a nongeneralizable sample of relevant stakeholders from each state to interview. Specifically, we interviewed relevant stakeholders in 3 state criminal justice agencies, 4 state and local LEAs, 2 medical examiner offices, and 1 coroner office. 
Although the views expressed from these interviews cannot be generalized to each state, they provide valuable insights about the experiences of different stakeholder groups in states with varied reporting requirements. We also reviewed state documents associated with the data systems used by each state to report missing and unidentified persons information to NCIC. We conducted this performance audit from September 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comparison of Fragmentation and Overlap in Key Characteristics of the National Crime Information Center (NCIC) and National Missing and Unidentified Persons System (NamUs)

Purpose: Both systems contain data designed to be used to solve long-term missing and unidentified persons cases.

Registered Users: Registered users of both systems must populate one system with missing and unidentified persons cases and then go through the process again to enter the same data in the second system. To utilize information from either system, registered users must go through an inefficient process of accessing and searching each system separately, and then manually comparing results.

Data Validation Efforts: NamUs Regional System Administrators (RSA) check NCIC as part of the NamUs validation process. In fiscal year 2015, RSAs requested and manually reviewed NCIC records for at least 22,000 missing persons and 4,532 unidentified persons cases. NCIC contains significantly more missing persons cases than NamUs, while NamUs contains more unidentified persons cases, limiting the usefulness of either system.
Specifically, in fiscal year 2015, 3,170 missing persons cases were reported to NamUs, while 84,401 long-term cases were reported to NCIC during the same time period. In contrast, 1,205 unidentified persons cases were reported to NamUs in fiscal year 2015, while 830 cases were reported to NCIC. Less than 0.1 percent of registered NCIC users are medical examiner or coroner offices, while approximately 18 percent of the agencies with at least one registered NamUs user are considered part of the medicolegal field. Additionally, many missing persons cases are initially reported in NamUs by members of the public who do not have access to NCIC. Consequently, potentially valuable information on missing persons cases may not be getting to all those who need it.

Diana C. Maurer, (202) 512-9627 or [email protected]. In addition to the contact named above, Dawn Locke (Assistant Director), Elizabeth Kowalewski, Susanna Kuebler, Amanda Miller, Jan Montgomery, Heidi Nielson, Janay Sam, Monica Savoy, and Michelle Serfass made key contributions to this report.
Every year, more than 600,000 people are reported missing, and hundreds of human remains go unidentified. Two primary federal databases supported by DOJ--NCIC and NamUs--contain data related to missing and unidentified persons to help solve these cases. NCIC contains criminal justice information accessed by authorized agencies to assist with daily investigations. NamUs information can be used by law enforcement, medical examiners, coroners, and the general public to help with long-term missing and unidentified persons cases. Senate Report 113-181 (accompanying the Consolidated and Further Continuing Appropriations Act of 2015) includes a provision for GAO to review NCIC and NamUs. This report describes the access to and use of missing and unidentified persons information contained in NCIC and NamUs, and the extent to which there are opportunities to improve the use of this information. GAO reviewed NCIC and NamUs data, and relevant state and federal statutes. GAO also conducted nongeneralizable interviews with stakeholders in three states, selected based in part on state laws. The Federal Bureau of Investigation's (FBI) National Crime Information Center (NCIC) database includes criminal justice agency information, and access to such data is restricted to authorized users. In contrast, the Department of Justice's (DOJ) National Institute of Justice (NIJ) funds and oversees the National Missing and Unidentified Persons System (NamUs), a database for which the public may register to access published case information. Because many users of NamUs are not authorized to access NCIC, there are no direct links between the systems. As a result, while both NCIC and NamUs contain information on long-term missing and unidentified persons, they remain separate systems. DOJ could facilitate more efficient sharing of information on missing persons and unidentified remains (referred to as missing and unidentified persons cases) contained in these systems.
GAO found, in part, that the following three key characteristics of NCIC and NamUs are fragmented or overlapping, creating the risk of duplication. Database Records: NCIC and NamUs contain fragmented information associated with long-term missing and unidentified persons (cases open for more than 30 days). For example, in fiscal year 2015, 3,170 long-term missing persons cases were reported to NamUs while 84,401 missing persons records reported to NCIC became long-term cases. NamUs also accepts and maintains records of missing and unidentified persons cases that may not be found in NCIC because, for example, they have not yet been filed with law enforcement. As a result, users relying on only one system may miss information that could be instrumental in solving these types of cases. Registered Users: The NCIC user base is significantly larger than the NamUs user base, and the types of users vary, which may contribute to the discrepancies in each system's data. For instance, almost all law enforcement agencies use NCIC, with only a small fraction registered to use NamUs. Additionally, members of the public do not have access to NCIC, but can report missing persons cases to NamUs. Data Validation Efforts: In part to minimize fragmentation, NamUs uses a case validation process and other ad hoc efforts to help ensure that data on missing and unidentified persons contained in NCIC is captured by NamUs. However, these processes introduce additional inefficiencies because they require officials to manually review and enter case data into both systems, resulting in duplicative data entry. Inefficiencies exist in the use of information on missing and unidentified persons primarily because there is no mechanism to share information between the systems, such as a notifier to inform NCIC users if related case data were present in NamUs. 
According to FBI officials, federal law precludes full integration of NCIC and NamUs; however, opportunities to share information may exist within the legal framework to address fragmentation and overlap without full system integration. By evaluating the technical and legal feasibility of options to share information, documenting the results, and implementing feasible options, DOJ could better inform those who are helping solve missing and unidentified persons cases and increase the efficiency of solving such cases. To allow for more efficient use of missing and unidentified persons information, GAO recommends that DOJ evaluate options to share information between NCIC and NamUs. DOJ disagreed because it believes it lacks the necessary legal authority. GAO believes DOJ can study options for sharing information within the confines of its legal framework, and therefore believes the recommendation remains valid.
The 13th Congressional District of Florida comprises DeSoto, Hardee, Sarasota, and parts of Charlotte and Manatee Counties. In the November 2006 general election, there were two candidates in the race to represent the 13th Congressional District: Vern Buchanan, the Republican candidate, and Christine Jennings, the Democratic candidate. The State of Florida certified Vern Buchanan the winner of the election. The margin of victory was 369 votes out of a total of 238,249 votes counted. Table 1 summarizes the results of the election and shows that the results from Sarasota County exhibited a significantly higher undervote rate than in the other counties in the congressional district. In Florida, the Division of Elections in the Secretary of State's office helps the Secretary carry out his or her responsibilities as the chief election officer. The Division of Elections is responsible for establishing rules governing the use of voting systems in Florida. Voting systems cannot be used in any county in Florida until the Florida Division of Elections has issued a certification of the voting system's compliance with the Florida Voting System Standards. The Florida Voting Systems Certification program is administered by the Bureau of Voting Systems Certification in the Division of Elections. An elected supervisor of elections is responsible for implementing elections in each county in Florida in accordance with Florida election laws and rules. The supervisor of elections is responsible for the purchase and maintenance of the voting systems as well as the preparation and use of the voting systems to conduct each election. In the 2006 general election, Sarasota County used voting systems manufactured by ES&S. The State of Florida has certified different versions of ES&S voting systems.
The version used in Sarasota County was designated ES&S Voting System Release 4.5, Version 2, Revision 2, and consisted of iVotronic DREs, a Model 650 central count optical scan tabulator for absentee ballots, and the Unity election management system. It was certified by the State of Florida on July 17, 2006. The certified system includes different configurations and optional elements, several of which were not used in Sarasota County. The election management part of the voting system is called Unity; the version that was used was 2.4.4.2. Figure 1 shows the overall election operation using the Unity election management system and the iVotronic DRE. Sarasota County used iVotronic DREs for early and election day voting. Specifically, Sarasota County used the 12-inch iVotronic DRE, hardware version 1.1 with firmware version 8.0.1.2. Some of the iVotronic DREs are configured with Americans with Disabilities Act (ADA) functionality, which includes the use of audio ballots. The iVotronic DRE uses a touch screen--a pressure-sensitive graphics display panel--to display and record votes (see fig. 2). The machine has a storage case that also serves as the voting booth. The operation of the iVotronic DRE requires using a personalized electronic ballot (PEB), which is a storage device with an infrared window used for transmission of ballot data to and from the iVotronic DRE. The iVotronic DRE has four independent flash memory modules, one of which contains the program code--firmware--that runs the machine and the remaining three flash memory modules store redundant copies of ballot definitions, machine configuration information, ballots cast by voters, and event logs. The iVotronic DRE includes a VOTE button that the voter has to press to cast a ballot and record the information in the flash memory. The iVotronic DRE also includes a compact flash card that can be used to load sound files onto iVotronic DREs with ADA functionality. 
The iVotronic DRE's firmware can be updated through the compact flash card. Additionally, at the end of polling, the ballots and audit information are to be copied from the internal flash memory module to the compact flash card. To use the iVotronic DRE for voting, a poll worker activates the iVotronic DRE by inserting a PEB into the PEB slot after the voter has signed in at the polling place. After the poll worker makes selections so that the appropriate ballot will appear, the PEB is removed and the voter is ready to begin using the system. The ballot is presented to the voter in a series of display screens, with candidate information on the left side of the screen and selection boxes on the right side (see fig. 3). The voter can make a selection by touching anywhere on the line, and the iVotronic DRE responds by highlighting the entire line and displaying an X in the box next to the candidate's name. The voter can also change his or her selection by touching the line corresponding to another candidate or by deselecting his or her choice. "Previous Page" and "Next Page" buttons are used to navigate the multipage ballot. After completing all selections, the voter is presented with a summary screen with all of his or her selections (see fig. 4). From the summary screen, the voter can change any selection by selecting the race. The race will be displayed to the voter on its own ballot page. When the voter is satisfied with the selections and has reached the final summary screen, the red VOTE button is illuminated, indicating the voter can now cast his or her ballot. When the VOTE button is pressed, the voting session is complete and the ballot is recorded on the iVotronic DRE. In Sarasota County's 2006 general election, there were nine different ballot styles with between 28 and 40 races, which required between 15 and 21 electronic ballot pages to display, and 3 to 4 summary pages for review purposes. 
Our analysis of the 2006 general election data from Sarasota County does not identify any particular voting machines or machine characteristics that could have caused the large undervote in Florida's 13th Congressional District race. The undervotes in Sarasota County for the congressional race were generally distributed across all machines and precincts. Using voting system data that we obtained from Sarasota County, we found that 1,499 iVotronic DREs recorded votes in the 2006 general election; 84 iVotronic DREs recorded votes during early voting, and 1,415 iVotronic DREs recorded votes on election day. Using these data, we verified that the vote counts for the contestant, contestee, and undervotes match the reported vote totals for Sarasota County in Florida's 13th Congressional District race. As can be seen in table 2, the undervote rate in early voting was significantly higher than in election day voting. The range of the undervote rate for all machines was between 0 and 49 percent, with an average undervote rate of 14.3 percent. When just the early voting machines are considered, the undervote rate ranged between 5 and 28 percent. The largest number of undervotes cast on any one machine on election day was 39. While the range of ballots cast on any one machine on election day was between 1 and 121, the median number of ballots cast on any one machine was 66. The range of undervote rate by precinct was between 0 and 41 percent, and the average undervote by precinct was about 14.8 percent. Prior to the elections, Sarasota County's voting systems were subjected to several different tests that included testing by the manufacturer, certification testing by the Florida Division of Elections, testing by independent testing authorities, and logic and accuracy testing by Sarasota County's Supervisor of Elections. 
After the 2006 general election, an audit of Sarasota County's election was conducted by the State of Florida that included a review of the iVotronic source code, parallel tests, and an examination of Sarasota County's election procedures. Although these tests and reviews provide some assurance, as do certain controls that were in place during the election, that the voting systems in Sarasota County functioned correctly, they do not provide reasonable assurance that the iVotronic DREs did not contribute to the undervote. According to ES&S officials, ES&S tested the version of the iVotronic DRE that was used in Sarasota County in 2001-2002, but they could not provide us documentation for those tests because the documentation had not been retained. The Florida Division of Elections conducted certification testing of the iVotronic DRE and the Unity election management system before Sarasota County acquired the system from the manufacturer. The certification process included tests of the election management system and the conduct of mock primary and general elections on the entire voting system. ES&S Voting System, Release 4.5, Version 2, Revision 2, was certified by the Florida Division of Elections on July 17, 2006. According to Florida Division of Elections officials, testing of each version focuses on the new components, and components that were included in prior versions are not as rigorously tested. The 8.0.1.2 version of the iVotronic firmware was first tested as a part of ES&S Release 4.5, Version 1, which was certified in 2005. Version 2 introduced version 2.4.4.2 of the Unity Election Management System, which was certified in August 2005. Certification testing was conducted on software that was received from an independent test authority, who witnessed the building of the firmware from the source code.
An independent test authority also conducted environmental testing of the iVotronic DRE in 2001 that was relied upon by the Florida Division of Elections for certification. A logic and accuracy test was conducted by Sarasota County on October 20, 2006, on 32 iVotronic DREs, and it successfully verified that all ballot positions on all nine ballot styles could be properly recorded. In addition, the use of a provisional ballot and audio ballot were tested, as well as machines configured for early voting with all nine ballot styles. After the 2006 general election, the Florida Division of Elections conducted an audit of Sarasota County's 2006 general election that included two parallel tests, an examination of the certified voting system and conduct of election by Sarasota County's elections office, and an independent review of the iVotronic DRE firmware's source code. After conducting this audit, the audit team concluded that there was no evidence that suggested the official election results were in error or that the voting systems contributed to the undervote in Sarasota County. The parallel tests were performed using 10 iVotronic DREs--5 used in the 2006 general election and 5 that were not used. Four of the machines in each test replicated the votes cast on four election day iVotronic DREs. The fifth machine in each test used an ad hoc test script that combined a random vote pattern with a specific vote selection pattern drawn from 10 predetermined vote patterns for the 13th Congressional District for each ballot cast. The audit report asserts that testing a total of 10 machines is more than adequate to identify any machine problems or irregularities that could have contributed to undervotes in the Florida-13 race. However, we concluded that the results from the testing of 10 machines cannot be applied to all 1,499 iVotronic DREs used during the 2006 general election because the sample was not random and the sample size was too small.
In examining whether voting systems that were used in Sarasota County matched the systems that were certified by the Florida Division of Elections, the Florida audit team examined the Unity election management system and the firmware installed on six iVotronic DREs. The audit team confirmed that the software running on the Unity election management system and the firmware in the six iVotronic DREs matched the certified versions held in escrow by the Florida Division of Elections. On the basis of its review, the audit team concluded that there is no evidence to indicate that the iVotronic DREs had been compromised or changed. We agree that the test verifies that those six machines were not changed, but any extrapolation beyond this cannot be statistically justified because the size of the sample is too small. Therefore, these tests cannot be used to obtain reasonable assurance that the 1,499 machines used in the general election used the certified firmware. A software review and security analysis of the iVotronic firmware version 8.0.1.2 was conducted by a team led by Florida State University's SAIT Laboratory. The eight experts in the software review team attempted to confirm or refute many different hypotheses that, if true, might explain the undervote in the race for the 13th Congressional District. In doing so, they made several observations about the code, which we were able to independently verify. The software review and our verification of the observations were helpful, but a key shortcoming was the lack of assurance whether the source code reviewed by the SAIT team or by us, if compiled, would correspond to the iVotronic firmware that was used in Sarasota County for the 2006 election. According to ES&S and Florida Division of Elections officials, in May 2005 an independent testing authority witnessed the process of compiling the source code and building the version of firmware that was eventually certified by the Florida Division of Elections. 
According to ES&S officials, if necessary, ES&S can recreate the firmware from the source code, but the firmware would not be exactly identical to the firmware certified by the Florida Division of Elections because the embedded date and time stamp in the firmware would be different. The software review team also looked for security vulnerabilities in software that could have been exploited to cause the undervote. Although the team found several software vulnerabilities, the team concluded that none of them were exploited in Sarasota in a way that would have contributed to the undervote. We did not independently verify the team's conclusion. The Unity election management system and the iVotronic DREs are the major voting system components that may require testing to determine whether they contributed to the large undervote in Sarasota County. Our review of tests already conducted and documentation from the election provide us reasonable assurance that the key functions of the Unity election management system--election definition and vote tabulation-- did not contribute to the undervote. The election definitions created using the Unity election management system are tested during logic and accuracy testing to demonstrate that they include all races, candidates, and issues and that each of the items can be selected by a voter. The votes tabulated on the iVotronic DRE at each precinct matched the data uploaded to the Unity election management system, and the totals from the precinct results tapes agree with that obtained by Unity. Further, the state audit confirmed that the Unity election management system software running in Sarasota County matched the escrowed version certified by the Florida Division of Elections. 
We have reasonable assurance that the number of ballots recorded by the iVotronic DREs is correct because this number is very close to the number of people recorded on the precinct registers as showing up at the polling places to vote either during early voting or on election day. This assurance also allows us to conclude that issues, such as votes cast by "fleeing voters"--votes that are cast by poll workers for voters who leave the polling place before pressing the button to cast the vote--and the potential loss of votes during a system shutdown, did not affect the undervote in this election. If these issues had occurred, they would have caused a discrepancy between the number of voters who signed in at the polling place to vote and the public counts recorded on the iVotronic DREs. We have reasonable assurance that provisional ballots were appropriately handled by the iVotronic DREs and the Unity election management system. We also verified that during the Florida certification test process, the Division of Elections relied on successful environmental and shock testing conducted by an independent test authority. We found that prior testing and activities do not provide reasonable assurance that all iVotronic DREs used in Sarasota County on election day were using the hardware and firmware certified for use by the Florida Division of Elections. Sarasota County has records indicating that only certified versions were procured from ES&S, and the firmware version is checked in an election on the zero and results tapes. However, because there was no independent validation of the system versions, we cannot conclude that no modifications were made to the systems that would have rendered them inconsistent with the certified version. As we previously mentioned, the firmware comparison of only 6 iVotronic DREs in the state audit is insufficient to support generalization to all 1,499 iVotronic DREs that recorded votes during the election.
Without reasonable assurance that all iVotronic DREs are running the same certified firmware, it is difficult for us to rely on the results of other testing that has been conducted, such as the parallel tests or the logic and accuracy tests. Prior testing of the iVotronic DREs verified only 13 of the 112 ways that we identified by which a voter may select a candidate in Florida's 13th Congressional District race. Specifically, on an iVotronic DRE, a voter could (1) initially select either candidate or neither candidate (i.e., undervote), (2) change the vote on the initial screen, and (3) use a combination of page back and review screen options to change or verify his or her selection before casting the ballot. By taking into account these variations, our analysis found at least 112 different ways a voter could make his or her selection in Florida's 13th Congressional District race, assuming that it was the only race on the ballot. Of the 112 different ways to select a candidate in the congressional race, the Florida certification tests and the Sarasota County logic and accuracy tests verified 3 ways to select a candidate, and the Florida parallel tests verified 10 ways--meaning that of the 112 ways, 13 have been tested. Because these other ways to select a candidate have not been verified, we do not have reasonable assurance that the system will properly handle expected forms of voter behavior. During the setup of the iVotronic DRE, sometimes referred to as the clear and test process, the touch screens are calibrated by using a stylus to touch the screen at 20 different locations. The calibration process is designed to align the display screen with the touch screen input. It has been reported that a miscalibrated machine could affect the selection process by highlighting a candidate that is not aligned with what the voter selected.
We identified two reported cases on election day where the miscalibration of the iVotronic DRE led to its closure and discontinued use for the rest of the day. While a miscalibrated machine could certainly make an iVotronic DRE harder to use, it is not clear that miscalibration would have contributed to the undervote. We did not identify any prior testing or activities that would help us understand the effect of a miscalibrated iVotronic DRE on the undervote. On the basis of our analysis of all prior test and audit activities, we propose that a firmware verification test, a ballot test, and a calibration test be conducted to try to obtain increased assurance that the iVotronic DREs used in Sarasota County during the 2006 general election did not cause the undervote. We propose that the firmware verification testing be started first, once the necessary arrangements have been made, such as access to the needed machines and the development of test protocols and detailed test procedures. Once we have reasonable assurance that the iVotronic DREs are running the same certified firmware, we could conduct the ballot test and calibration test on a small number of machines to determine whether it is likely the machines accurately recorded and counted the ballots. If the firmware verification tests are successfully conducted, we would have much more confidence that the iVotronic DREs will behave similarly when tested. If there are differences in the firmware running on the iVotronic DREs, we would need to reassess the number of machines that need to be tested for ballot testing and calibration testing in order for us to have confidence that the test results would be true for all 1,499 iVotronic DREs used during the election.
In other words, if we are reasonably confident that the same software is used in all 1,499 machines, then we are more confident that the results of the other tests on a small number of machines can be used to obtain increased assurance that the iVotronic DREs did not cause the undervote. Although the proposed tests would provide increased assurance, they would not conclusively eliminate the machines as a cause of the undervote. We propose to conduct a firmware verification test using a statistical sampling approach that can provide reasonable assurance that all 1,499 iVotronic DREs are running the certified version of firmware. The exact number of machines that would be tested depends on the confidence level desired and how much error can be tolerated. We propose drawing a representative sample from all the iVotronic DREs that recorded votes in the general election. With a sample size of 115 iVotronic DREs, which would be divided between sequestered and nonsequestered machines, and assuming that there are no test failures, we would be able to conclude with a 99 percent confidence level that no more than 4 percent of the 1,499 iVotronic DREs used in the election were using uncertified firmware. We suggest a test approach similar to what was used by the Florida Division of Elections when it verified the firmware for 6 iVotronic DREs. We estimate that the firmware testing for 115 machines could be conducted in about 5 to 7 days and would require about 5 or 6 people, once the necessary arrangements have been made. The machines would be transported to a test facility specified by Sarasota County election officials where we could perform the test. 
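The sampling arithmetic behind the 99 percent confidence claim can be checked directly. Under a hypergeometric model (drawing 115 of the 1,499 machines without replacement), if 4 percent of the machines (about 60) were running uncertified firmware, a sample of 115 with zero failures would occur less than 1 percent of the time. The short sketch below is our own illustration of that calculation, not part of any proposed test protocol; the function name is ours:

```python
from math import comb

def prob_zero_bad(population, bad, sample):
    """Probability that a random sample without replacement contains
    zero bad units: all `sample` draws come from the good units."""
    return comb(population - bad, sample) / comb(population, sample)

population = 1499               # iVotronic DREs that recorded votes
bad = round(0.04 * population)  # 60 machines, i.e., 4 percent uncertified
sample = 115                    # proposed firmware verification sample

miss_rate = prob_zero_bad(population, bad, sample)
print(f"chance a clean sample misses all {bad} bad machines: {miss_rate:.4f}")
print(f"implied confidence if the sample is clean: {1 - miss_rate:.1%}")
```

A clean sample therefore supports the statement that, with roughly 99 percent confidence, no more than 4 percent of the machines used uncertified firmware.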
The activities involved in conducting a firmware validation test would include locating and retrieving the selected iVotronic DRE from the storage facility, transporting it to the test facility, opening the DRE, extracting the chip with the firmware, reading the contents of the chip using a specialized chip reader, and conducting a comparison between the contents and the certified firmware to determine if any differences exist. To conduct this test, we would need commercially available specialized hardware and software similar to that used by the Florida Division of Elections in its firmware comparison test. We propose conducting ballot testing on 10 iVotronic DREs, each configured with one of the nine different ballot styles, with the 10th machine configured as an early voting machine with all nine ballot styles. We would test 112 ways to select a candidate on the early voting machine. On the election day machines, we would test the 112 different ways distributed across the 9 machines in a random manner, meaning each machine would on average record 12-13 ballots. Assuming that (1) reasonable assurance is obtained that all iVotronic DREs used during the election were using the same certified firmware, and (2) we found no failures during the ballot testing, this testing would provide increased assurance that the iVotronic DREs used during the election, both in early voting and in election day voting, were able to accurately record and count ballots when using any of the 112 ways to select a candidate in the Florida-13 race. We would plan to code each ballot by including an identifier in the write-in candidate field for either the U.S. senator or governor's race. Using this write-in coding, we could examine the ballot image and confirm that each ballot was accurately recorded and counted by the iVotronic DRE. 
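The proposed random distribution of the 112 selection paths across the 9 election day machines can be sketched as follows. This is a minimal illustration of the assignment step only; the numbering of the 112 ways is hypothetical:

```python
import random

NUM_WAYS = 112      # distinct ways to select a candidate in the Florida-13 race
NUM_MACHINES = 9    # election day test machines, one per ballot style

def assign_test_cases(seed=None):
    """Randomly distribute the 112 test cases across the 9 machines,
    dealing the shuffled cases round-robin so each machine records
    12 or 13 ballots."""
    rng = random.Random(seed)
    ways = list(range(1, NUM_WAYS + 1))
    rng.shuffle(ways)
    assignment = {m: [] for m in range(1, NUM_MACHINES + 1)}
    for i, way in enumerate(ways):
        assignment[i % NUM_MACHINES + 1].append(way)
    return assignment

plan = assign_test_cases(seed=2006)
for machine, cases in sorted(plan.items()):
    print(f"machine {machine}: {len(cases)} ballots")
```

Shuffling before the round-robin deal keeps the per-machine workloads balanced (the 12 to 13 ballots per machine noted above) while still assigning selection paths to machines at random.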
Any encountered failures would also be more rapidly attributed to a specific test case, and we would be able to more readily repeat the test case to determine if we have a repeatable condition. Testing 112 ways to select a candidate on a single machine would also provide us some additional assurance that the volume of ballots cast on election day did not cause a problem. We note that casting 112 ballots on a single machine is more than that cast on over 99 percent of the 1,415 machines used on election day. We estimate the ballot testing would take about 2 to 3 days and require the equivalent of 2 people, once the necessary arrangements have been made. Because little is known about the effect of a miscalibrated machine on the behavior of an iVotronic DRE, we propose to deliberately miscalibrate an iVotronic DRE and verify the functioning of the machine. We propose to identify different ways to miscalibrate a machine and to test ballots on the miscalibrated iVotronic DRE to verify that it still properly records votes. With this test we would confirm whether (1) the review screen displays the same selection in the Florida-13 race as was highlighted in the selection screen, and (2) the vote is recorded as it was displayed on the review screen. Again, we would plan to use the write-in candidate option to verify the proper recording of the ballot. This test would demonstrate whether the system correctly records a vote for the race and hence whether it contributed to the undervote. We estimate that the calibration test could be completed in about 1 day by 2 people, once the necessary arrangements have been made. Should the task force ask us to conduct the proposed testing, we want to make the task force aware of several other matters that would need to be addressed before we could begin testing. These activities would require some time and resources to complete before testing could commence.
First, we would need to gain access to iVotronic DREs that have been subject to a sequestration order in the state court system of Florida. If we do not have access to the needed machines, we would be unable to obtain reasonable assurance that the machines used on election day were using certified software, and without this assurance, the results from prior tests and any results of our ballot and calibration tests would be less meaningful because we would be unable to apply the results to all 1,499 iVotronic DREs used during the election. Second, we would need to agree upon an appropriate facility for the tests. Sarasota County Supervisor of Elections has indicated that we can use its warehouse space, but because of upcoming elections in November and January, the only time the election officials would be able to provide us this space and the necessary support is between November 26 and December 7, 2007. If testing cannot be completed during this time period, Sarasota County officials stated that they would not be able to assist us until February 2008. Third, some tests may require commercially available specialized software, hardware, or other tools to conduct the tests. We would need to make arrangements to either borrow or to purchase such testing tools before commencing testing. Fourth, in order to conduct any tests, we would need to develop test protocols and detailed test procedures and steps. We also anticipate that we would need to conduct a dry run, or dress rehearsal, of our test procedures to ensure that our test tools function properly and that our time estimates are reasonable. Finally, we would need to make arrangements for video recording of our testing. It would be our preference to have a visual record of the tests to document the actual test conduct and to facilitate certain types of test analysis. We recognize that human interaction with the ballot layout could be a potential cause of the undervote. 
Although we have not explored this issue in our review, we note that there is an ongoing academic study that is exploring this issue using voting machines obtained from ES&S. We believe that such experiments could be useful and could provide insight into the ballot layout issue. During our review, we noted that several suggestions have been offered as possible ways to establish that voters are intentionally undervoting and to provide some assurance that the voting systems did not cause the undervote. First, a voter-verified paper trail could provide an independent confirmation that the touch screen voting systems did not malfunction in recording and counting the votes from the election. The paper trail would reflect the voter's selections and, if necessary, could be used in the counting or recounting of votes. This issue is recognized in the Florida State University SAIT source code review as well as the 2005 and draft 2007 Voluntary Voting Systems Guidelines prepared for the Election Assistance Commission. We have previously reported on the need to implement such a function properly. Second, explicit feedback to voters that a race has been undervoted and a prompt for voters to affirm their intent to undervote might help prevent many voters from unintentionally undervoting a race. On the iVotronic DREs, such feedback and prompts are provided only when the voter attempts to cast a completely blank ballot, but not when a voter undervotes in individual races. Third, offering a "none of the above" option in a race would provide voters with the opportunity to indicate that they are intentionally undervoting. The State of Nevada provides this option in certain races in its elections. Decisions about these or other suggestions about ballot layout or voting system functions should be informed by human factors studies that assess their effectiveness in accurately recording voters' preferences, making voting systems easier to use, and preventing unintentional undervotes. 
The high undervote encountered in Sarasota County in the 2006 election for Florida's 13th Congressional District has raised questions about whether the voting systems accurately recorded and counted the votes cast by eligible voters. Other possible reasons for the undervote could be that voters intentionally undervoted or voters did not properly cast their ballots on the voting systems, potentially because of issues relating to the interaction between voters and the ballot. The focus of our review has been to determine whether the voting systems--the iVotronic DREs, in particular--contributed to the undervote. We found that the prior reviews of Sarasota County's 2006 general election have provided valuable information about the voting systems. Our review found that in some cases we were able to rely on this information to eliminate areas of concern. This allowed us to identify the areas where increased assurances were needed to answer the questions being raised. Accordingly, the primary focus of the tests we are proposing is to obtain increased assurance that the results of the prior reviews and our proposed testing can be applied to all the iVotronic DREs used in the election. Our proposed tests involving the firmware comparison, ballot testing, and calibration testing could help reduce the possibility that the undervote was caused by the iVotronic DREs. However, even after completing the tests, we would not have absolute assurance that the iVotronic DREs did not play any role in the large undervote. Absolute assurance is impossible to achieve because we are unable to recreate the conditions of the election in which the undervote occurred. By successfully conducting the proposed tests, we could reduce the possibility that the iVotronic DREs were the cause of the undervote and shift attention to the possibilities that the undervote was the result of intentional actions by voters or of voters who did not properly cast their votes on the voting system.
We provided draft copies of this statement to the Secretary of State of Florida, the Supervisor of Elections of Sarasota County, and ES&S for review and comment. The Florida Department of State provided technical comments, which we incorporated. The Sarasota County Supervisor of Elections appreciated the opportunity to review the draft, but provided us no comments. In its comments, ES&S stated that it believes that the collective results of testing already conducted on the Sarasota County voting systems have demonstrated that they performed properly and as they were designed to function and that all votes were accurately captured and counted as cast in Florida's 13th Congressional District race. Further, ES&S asserts that tests and analyses should be conducted to examine the effect of the ballot display on the undervote, which it believes is the most probable cause of the undervote. We disagree that the collective results of testing already conducted on the Sarasota County voting systems adequately demonstrate that the voting systems could not have contributed to the undervote in the Florida-13 race. First, as we have cited, we do not have adequate assurance that all the iVotronic DREs used in Sarasota County used the firmware certified by the Florida Division of Elections. Without this assurance, it is difficult for us to apply the results from the other tests to all 1,499 machines that recorded votes during the election because we are uncertain that all machines would have behaved in a similar manner. Further, we believe that expected forms of voter behavior to select a candidate in the Florida-13 race were not thoroughly tested. While ES&S asserts that such processes would have no effect on the iVotronic DRE's ability to capture and record a voter's selection, we did not identify testing that verified this. 
Further, while ES&S states that the testing of a deliberately miscalibrated iVotronic DRE would result in a clearly visible indication of which candidate was selected, we could not identify any testing that demonstrated this. We acknowledge that the large undervote in Florida's 13th Congressional District race could have been caused by voters who intentionally undervoted or voters who did not properly cast their ballots, potentially because of issues related to the human interaction with the ballot. However, the focus of our review, as agreed with the task force, was to review whether the voting systems could have contributed to the large undervote. ES&S also provided technical comments, which we incorporated as appropriate. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the task force may have at this time. For further information about this statement, please contact Keith Rhodes, Chief Technologist, at (202) 512-6412 or [email protected], or Naba Barkakati at (202) 512-4499 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other key contributors to this statement include James Ashley, James Fields, Jason Fong, Cynthia Grant, Geoffrey Hamilton, Richard Hung, John C. Martin, Jan Montgomery, Jennifer Popovic, Sidney Schwartz, and Daniel Wexler. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In November 2006, about 18,000 undervotes were reported in Sarasota County in the race for Florida's 13th Congressional District (FL-13). After the contesting of the election results in the House of Representatives, the task force unanimously voted to seek GAO's assistance in determining whether the voting systems contributed to the large undervote in Sarasota County. GAO agreed with the task force on an engagement plan, including the following review objectives: (1) What voting systems were used in Sarasota County and what processes governed their use? (2) What was the scope of the undervote in Sarasota County in the general election? (3) What tests were conducted on the voting systems in Sarasota County prior to the general election and what were the results of those tests? (4) Considering the voting systems tests conducted after the general election, are additional tests needed to determine whether the voting systems contributed to the undervote? To conduct its work, GAO met with officials from the State of Florida, Sarasota County, and Election Systems and Software (ES&S)--the voting systems manufacturer--and reviewed voting systems test documentation. GAO analyzed election data to characterize the undervote. On the basis of its assessments of prior testing and other activities, GAO identified potential additional tests for the Sarasota County voting systems. In the 2006 general election, Sarasota County used voting systems manufactured by ES&S, specifically iVotronic direct recording electronic (DRE) voting systems during early and election day voting and the Unity election management system, which handles the election administration functions, such as ballot design and election reporting. GAO's analysis of the 2006 general election data from Sarasota County did not identify any particular voting machines or machine characteristics that could have caused the large undervote in the FL-13 race. 
The undervotes in Sarasota County were generally distributed across all machines and precincts. GAO's analysis found that some of the prior tests and reviews conducted by the State of Florida and Sarasota County provide assurance that certain components of the voting systems in Sarasota County functioned correctly, but they are not enough to provide reasonable assurance that the iVotronic DREs did not contribute to the undervote. Specifically, GAO found that assurance is lacking in three areas, and proposes that tests be conducted to address those areas. First, because there is insufficient assurance that the firmware in all the iVotronic DREs used in the election matched the certified version held by the Florida Division of Elections, GAO proposes that a firmware verification test be conducted on a representative sample of 115 (of the 1,499) machines that were used in the general election. Second, because an insufficient number of ways to select a candidate in the FL-13 race were tested, GAO proposes that a test be conducted to verify all 112 ways that GAO identified to select a candidate. Third, because no prior tests were identified that address the effect of a miscalibrated iVotronic DRE on the undervote, GAO proposes that an iVotronic DRE be deliberately miscalibrated to verify the accurate recording of ballots under these conditions. GAO expects these three tests would take 2 weeks, once the necessary arrangements are made. Should the task force ask GAO to conduct the proposed tests, several matters would need to be addressed before testing could begin, including obtaining access to the iVotronic DREs that have been subject to a sequestration order, arranging for a test site, obtaining some commercially available test tools, developing test protocols and detailed test procedures, and arranging for the video recording of the tests. 
Sarasota County election officials have indicated that they can help GAO access the machines and provide a test site between November 26 and December 7, 2007. Although the proposed tests could help provide increased assurance, they would not provide absolute assurance that the iVotronic DREs did not cause the large undervote in Sarasota County. The successful conduct of the proposed tests could reduce the possibility that the voting systems caused the undervote and shift attention to the possibilities that the undervote was the result of intentional actions by voters or of voters who did not properly cast their votes on the voting system.
VA provides medical services to various veteran populations--including an aging veteran population and a growing number of younger veterans returning from the military operations in Afghanistan and Iraq. VA operates approximately 170 VA medical centers (VAMC), 130 nursing homes, and 1,000 outpatient sites of care. In general, veterans must enroll in VA health care to receive VA's medical benefits package--a set of services that includes a full range of hospital and outpatient services, prescription drugs, and long-term care services provided in veterans' own homes and in other locations in the community. The majority of veterans enrolled in the VA health care system typically receive care in VAMCs and community-based outpatient clinics, but VA may also authorize care through community providers to meet the needs of the veterans it serves. For example, VA may provide care through its Care in the Community (CIC) programs, such as when a VA facility is unable to provide certain specialty care services, like cardiology or orthopedics. CIC services must generally be authorized by a VAMC provider prior to a veteran receiving care. In addition to its longstanding CIC programs, VA may also authorize veterans to receive care from community providers through the Veterans Choice Program, a new CIC program which was established through the Veterans Access, Choice, and Accountability Act of 2014 (Choice Act), enacted on August 7, 2014. Implemented in fiscal year 2015, the program generally provides veterans with access to care by non-VA providers when a VA facility cannot provide an appointment within 30 days or when veterans reside more than 40 miles from the nearest VA facility. The Veterans Choice Program is primarily administered using contractors, who, among other things, are responsible for establishing nationwide provider networks, scheduling appointments for veterans, and paying providers for their services.
The Choice Act also created a separate account, known as the Veterans Choice Fund, which can only be used to pay for VA obligations incurred for the Veterans Choice Program. The use of Choice funds for any other program requires legislative action. The Choice Act appropriated $10 billion to be deposited in the Veterans Choice Fund. Amounts deposited in the Veterans Choice Fund are available until expended for activities authorized under the Veterans Choice Program. However, Veterans Choice Program activities are only authorized through August 7, 2017, or until the funds in the Veterans Choice Fund are exhausted, whichever occurs first. As part of the President's request for funding to provide medical services to veterans, VA develops an annual estimate detailing the amount of services the agency expects to provide as well as the estimated cost of providing those services. VA uses the Enrollee Health Care Projection Model (EHCPM) to develop most elements of the department's budget estimate to meet the expected demand for VA medical services. Like many other agencies, VA begins to develop these estimates approximately 18 months before the start of the fiscal year for which the funds are provided. Unlike many agencies, VA's Veterans Health Administration receives advance appropriations for health care in addition to annual appropriations. VA's EHCPM makes these projections 3 or 4 years into the future for budget purposes based on data from the most recent fiscal year. In 2012, for example, VA used actual fiscal year 2011 data to develop the budget estimate for fiscal year 2014 and for the advance appropriations estimate for fiscal year 2015. Similarly, in 2013, VA used actual fiscal year 2012 data to update the budget estimate for fiscal year 2015 and develop the advance appropriations estimate for fiscal year 2016.
Given this process, VA's budget estimates are prepared in the context of uncertainties about the future--not only about program needs, but also about future economic conditions, presidential policies, and congressional actions that may affect the funding needs in the year for which the estimate is made--which is similar to the budgeting practices of other federal agencies. Further, VA's budget estimates are typically revised during the budget formulation process to incorporate legislative and department priorities as well as to respond to successively higher levels of review in VA and OMB. Each year, Congress provides funding for VA health care primarily through the following appropriation accounts:
* Medical Support and Compliance, which funds, among other things, the administration of the medical, hospital, nursing home, domiciliary, construction, supply, and research activities authorized under VA's health care system.
* Medical Facilities, which funds, among other things, the operation and maintenance of the Veterans Health Administration's capital infrastructure, such as the costs associated with nonrecurring maintenance, utilities, facility repair, laundry services, and groundskeeping.
* Medical Services, which funds, among other things, health care services provided to eligible veterans and beneficiaries in VA's medical centers, outpatient clinic facilities, contract hospitals, state homes, and CIC services. With the exception of the Veterans Choice Program, which is funded through the Veterans Choice Fund, medical services furnished by community providers have been, and will continue to be, funded through this appropriation account through fiscal year 2016.
Starting in fiscal year 2017 and thereafter, with the exception of the Veterans Choice Program, it is anticipated that Congress will fund medical services that VA authorizes veterans to receive from community providers through a new appropriations account--Medical Community Care--which the VA Budget and Choice Improvement Act requires VA to include in its annual budget submission. Higher-than-expected obligations identified by VA in April 2015 for VA's CIC programs accounted for $2.34 billion (or 85 percent) of VA's projected funding gap of $2.75 billion in fiscal year 2015. These higher-than-expected obligations for VA's CIC programs were driven by an increase in utilization of VA medical services across VA, reflecting, in part, VA's efforts to improve access to care after public disclosure of long wait times at VAMCs. VA officials expected that the Veterans Choice Program would absorb much of the increased demand from veterans for health care services delivered by non-VA providers. However, veterans' utilization of Veterans Choice Program services was much lower than expected in fiscal year 2015. VA had estimated that obligations for the Veterans Choice Program in fiscal year 2015 would be $3.2 billion, but actual obligations totaled only $413 million. According to VA officials, the lower-than-expected utilization of the Veterans Choice Program in fiscal year 2015 was due, in part, to administrative weaknesses in the program, such as provider networks that had not been fully established and VAMC staff who lacked guidance on when to refer veterans to the program, both of which slowed enrollment in the program. Instead of relying on its Choice Program, VA provided a greater amount of services through its CIC programs, resulting in total obligations of $10.2 billion in fiscal year 2015, which VA officials stated were much higher than expected.
The unexpected increase in CIC obligations in fiscal year 2015 exposed weaknesses in VA's ability to estimate costs for CIC services and track associated obligations. While VA officials first became concerned that CIC obligations might be significantly higher than projected in January 2015, they did not determine that VA faced a projected funding gap until April 2015--6 months into the fiscal year. VA officials made this determination after they compared authorizations in the Fee Basis Claims System (FBCS)--VA's system for recording CIC authorizations and estimating costs for this care--with obligations in the Financial Management System (FMS)--the centralized financial management system VA uses to track all of its obligations, including those for medical services. In its 2015 Agency Financial Report (AFR), VA's independent public auditor identified the following issues as contributing to a material weakness in estimating costs for CIC services and tracking CIC obligations:
* VAMCs individually estimate costs for each CIC authorization and record these estimates in FBCS. This approach leads to inconsistencies because each VAMC may use different methodologies to estimate the costs they record. Having more accurate cost estimates for CIC authorizations is important to help ensure that VA is aware of the amount of money it must obligate for CIC services.
* VAMCs do not consistently adjust the estimated costs associated with authorizations for CIC services in FBCS in a timely manner to ensure greater accuracy, and they do not perform a "look-back" analysis of historical obligations to validate the reasonableness of estimated costs. Furthermore, VA does not perform centralized, consolidated, and consistent monitoring of CIC authorizations.
* FBCS is not fully integrated with FMS, VA's system for recording and tracking the department's obligations. As a result, the obligations for CIC services recorded in the former system may not match the obligations recorded in the latter.
Notably, the estimated costs of CIC authorizations recorded in FBCS are not automatically transmitted to VA's Integrated Funds Distribution, Control Point Activity, Accounting, and Procurement (IFCAP) system, a procurement and accounting system used to send budgetary information, such as information on obligations, to FMS. According to VA officials, because FBCS and IFCAP are not integrated, at the beginning of each month, VAMC staff typically record in IFCAP estimated obligations for outpatient CIC services, and they typically use historical obligations to make these estimates. Depending on the VAMC, these estimated obligations may be entered as a single lump sum covering all outpatient care or as separate estimated obligations for each category of outpatient care, such as radiology. Regardless of how they are recorded, the estimated obligations recorded in IFCAP are often inconsistent with the estimated costs of CIC authorizations recorded in FBCS. In fiscal year 2015, the estimated obligations that VAMCs recorded in IFCAP were significantly lower than the estimated costs of outpatient CIC authorizations recorded in FBCS. VA officials told us that they did not determine a projected funding gap until April 2015 because they did not complete their analysis comparing estimated obligations with estimated costs until then. A key factor contributing to the weaknesses identified in VA's AFR was the absence of standard policies across VA for estimating and monitoring the amount of obligations associated with authorized CIC services. Specifically, in fiscal year 2015, the Chief Business Office within the Veterans Health Administration had not developed and implemented standardized and comprehensive policies for VAMCs, VISNs, and the office itself to follow when estimating costs for CIC authorizations and for monitoring these obligations.
The AFR and VA officials we interviewed explained that because oversight of the CIC programs was consolidated under the Chief Business Office in fiscal year 2015 pursuant to the Choice Act, this office did not have adequate time to implement efficient and effective procedures for monitoring CIC obligations. To address the fiscal year 2015 projected funding gap, on July 31, 2015, VA obtained temporary authority to use up to $3.3 billion in Veterans Choice Program appropriations for amounts obligated for medical services from non-VA providers--regardless of whether the obligations were authorized under the Veterans Choice Program or CIC--for the period from May 1, 2015 until October 1, 2015. Table 1 shows the sequence of events that led to VA's request for and approval of additional budget authority for fiscal year 2015. Unexpected obligations for new hepatitis C drugs accounted for $0.41 billion of VA's projected funding gap of $2.75 billion in fiscal year 2015. Although VA estimated that obligations in this category would be $0.7 billion that year, actual obligations totaled about $1.2 billion. VA officials told us that VA did not anticipate in its budget the obligations for new hepatitis C drugs--which help cure the disease--because the drugs were not approved by the Food and Drug Administration until fiscal year 2014, after VA had already developed its budget estimate for fiscal year 2015. According to VA, the new drugs cost between $25,000 and $124,000 per treatment regimen, and demand for the treatment was high. Officials told us that about 30,000 veterans received these drugs in fiscal year 2015. In October 2014, VA reprogrammed $0.7 billion within its medical services appropriation account to cover projected obligations for the new hepatitis C drugs, after VA became aware of the drugs' approval. 
However, in January 2015, VA officials recognized that obligations for the new hepatitis C drugs would be significantly higher than expected by year's end, due to higher-than-expected demand for the drugs. VA officials told us that they assessed next steps and then limited access to the drugs to those veterans with the most severe cases of hepatitis C. In June 2015, VA requested statutory authority to use amounts from the Veterans Choice Fund to address the projected funding gap. To help prevent future funding gaps, VA has made efforts to improve its cost estimates for CIC services and the department's tracking of associated obligations. VA has also taken steps to more accurately estimate future utilization of VA health care services, though uncertainties about utilization of VA health care services and emerging treatments remain. Faced with a projected funding gap in fiscal year 2015, VA made efforts to improve its cost estimates for CIC services as well as the department's tracking of associated obligations. First, in August 2015, VA issued a policy to VAMCs for recording estimated costs for inpatient and outpatient CIC authorizations in FBCS. This policy, among other things, stipulates that VAMCs are to base estimated costs on historical cost data provided by VA. These data, which represent average historical costs for a range of procedures, are intended to help improve the accuracy of VAMCs' cost estimates. To help implement this policy, in December 2015 VA updated its FBCS software so that the system automatically generates estimated costs for CIC authorizations based on historical CIC claims data. As a result, in many cases, VAMC staff will no longer need to individually estimate costs using various methods and manually record these estimates in FBCS. Officials we interviewed at six selected VISNs shortly after the implementation of the software update told us that the update sometimes produces inaccurate cost estimates or no cost estimates at all. 
VA officials told us that the problems affecting the software update were largely due to VA's adoption of a revised medical classification system in October 2015. The change in the classification system meant that there were relatively few paid claims with the new codes to inform FBCS's automated cost estimates for CIC services. VA officials told us they anticipate this problem diminishing throughout fiscal year 2016 as more CIC claims using the new codes are paid and as the amount of data used to inform the cost estimates increases. Second, in November 2015, VA issued a policy requiring VAMCs to systematically review and correct potentially inaccurate estimated costs for CIC authorizations recorded in FBCS, a step which was previously not required. VA officials told us this policy was created to detect and correct obvious errors in the cost estimates, such as data entry errors that fall outside of the range of reasonable cost estimates. Additionally, this policy requires VISNs to certify monthly to VA's Chief Business Office that the appropriate review and corrective actions have been completed. We found that all six VISNs certified that they had implemented this policy. Third, in November 2015, VA issued a policy requiring VAMCs to identify any discrepancies between the estimated costs for CIC authorizations recorded in FBCS and the amount of estimated obligations recorded in FMS. VA's policy also requires VAMCs to correct discrepancies they identify--such as increasing unreasonably low estimated obligations to make the estimates more accurate--and document the corrections they make. This policy also requires VISNs to certify monthly to VA's Chief Business Office that the appropriate review and corrective actions have been taken and appropriately documented. As we previously stated, in part because FBCS is not fully integrated with FMS, VA officials concluded this policy was necessary to detect and address discrepancies between the two systems.
According to VA officials, if estimated costs for CIC authorizations recorded in FBCS are higher than estimated obligations recorded in FMS, it may leave VA at risk of potentially being unable to pay for authorized care. Alternatively, if estimated costs for CIC authorizations recorded in FBCS are lower than estimated obligations recorded in FMS, VA may be dedicating more resources than needed for this care. While we found that all six selected VISNs and the VAMCs they manage certified that they had implemented this new policy, the methods used to identify and correct discrepancies between estimated costs for CIC authorizations in FBCS and the amount of estimated obligations in FMS varied. Moreover, in some cases, we found that discrepancies VAMCs identified and associated corrections were not documented or that documentation lacked specificity, making it difficult to determine whether appropriate corrections were made. To achieve greater consistency in how VAMCs implement this new policy, VA officials reviewed VAMCs' reports and in February 2016, provided VISNs and VAMCs with additional guidance and best practices for identifying discrepancies and documenting corrections. For example, VA instructed VAMCs to be as specific as possible in documenting corrections they make to the estimated obligations. VA officials also told us that they are developing additional guidance that would define an acceptable level of variation between estimated costs for CIC authorizations and the amount of estimated obligations in FMS. This guidance, once implemented, would require that VAMCs ensure that estimated costs and estimated obligations were no more than $50,000 or 10 percent apart, whichever is less. Finally, to better track that VAMCs' obligations for CIC do not exceed available budgetary resources for fiscal year 2016, VA allocated funds specifically for CIC to each VAMC. 
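The planned variance rule described above--treating estimated costs and estimated obligations as acceptably close only when they are no more than $50,000 or 10 percent apart, whichever is less--can be expressed as a simple reconciliation check. This is a minimal sketch; the function name and the choice of the FMS estimated obligation as the base for the 10 percent calculation are illustrative assumptions, since the guidance was still being developed and does not specify these details:

```python
def exceeds_variance(fbcs_estimate: float, fms_obligation: float) -> bool:
    """Flag a discrepancy larger than the allowed variance.

    Allowed variance: $50,000 or 10 percent, whichever is less.
    The 10 percent is computed against the FMS estimated obligation
    (an assumption; the guidance does not specify the base).
    """
    allowed = min(50_000.0, 0.10 * fms_obligation)
    return abs(fbcs_estimate - fms_obligation) > allowed
```

Under this rule, the dollar cap binds for large obligations (10 percent of $1 million is $100,000, so the $50,000 cap applies), while the percentage binds for small ones.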
VA officials, including some VISN officials we interviewed, told us that they identify VAMCs that may be at risk for exhausting their funds before the end of the fiscal year by reviewing monthly reports comparing each VAMC's obligations for CIC to the amount of funds allocated for that purpose to the VAMC. Officials from the Office of Finance within the Veterans Health Administration told us that once a VAMC had obligated all of its CIC funds, it would have to request realignment of funds from other VA programs, assuming additional funds could be made available. VA would, in turn, evaluate the validity of a VAMC's request. VA is employing a similar process to track VAMCs' use of funds for hepatitis C drugs. Officials told us that these steps are intended to reduce the risk of VAMCs obligating more funds than VA's budgetary resources allow. Despite these efforts, VA still faces challenges accurately estimating CIC costs and tracking associated obligations, in large part because of the uncertainty inherent in predicting the CIC services veterans will actually receive. According to VA Chief Business Office and VISN officials, a single authorization may allow for multiple episodes of care, such as up to 10 visits to a physical therapist. Alternatively, a veteran may choose not to seek the care that was authorized. Furthermore, system deficiencies complicate both the development of accurate CIC cost estimates and the tracking of related obligations. Chief Business Office and VISN officials told us that due to systems limitations, costs for inpatient CIC authorizations are estimated in FBCS based on a veteran's diagnosis at the time the care is authorized and cannot be adjusted if a veteran's diagnosis--and associated treatment plan--changes.
For example, a veteran may be authorized to obtain inpatient care to treat fatigue and nausea, but may be subsequently diagnosed as having a heart attack and receive costly surgery that was not included in the cost estimate. Chief Business Office officials told us that while the cost estimate cannot be adjusted in FBCS, VAMC officials should adjust the estimated obligation that corresponds to the authorization in IFCAP to reflect the cost difference; they should also document why they made the adjustment. To better align cost estimates for CIC authorizations with associated obligations, in the long term, VA officials told us that VA is exploring options for replacing IFCAP and FMS, which officials describe as antiquated systems based on outdated technology. The department has developed a rough timeline and estimate of budgetary needs to make these changes. Officials told us that the timeline and cost estimate would be refined once concrete plans for replacing IFCAP and FMS are developed. Officials told us that replacing IFCAP and FMS is challenging due to the scope of the project and the requirement that the replacement system interface with various VA legacy systems, such as the Veterans Health Information Systems and Technology Architecture, VA's system containing veterans' electronic health records. Moreover, as we have previously reported, VA has made previous attempts to update IFCAP and FMS that were unsuccessful. In October 2009, we reported that these failures could be attributed to the lack of a reliable implementation schedule and cost estimates, among other factors. To more accurately project future health care utilization of VA services given the implementation of the Veterans Choice Program, in November 2015 VA took steps to update its EHCPM projection to better inform future budget estimates. 
Officials told us that the updated EHCPM projection in November 2015 included available data from fiscal year 2015 to inform the department's budget estimate for fiscal years 2017 and 2018. Without the updated projection, VA would have relied on the EHCPM projection from April 2015 using actual data from fiscal year 2014. The updated EHCPM projection using fiscal year 2015 data showed increased utilization of CIC services in that year. According to VA officials, this increase was an unexpected result of implementing the Veterans Choice Program. Specifically, because of administrative weaknesses affecting the Veterans Choice Program, veterans seeking services through this program were generally provided care through other VA CIC programs instead. Additionally, according to VA, analysis of fiscal year 2015 data showed that the implementation of the Veterans Choice Program resulted in veterans relying on VA services rather than on services provided by other health care benefit programs for a greater share of their health care needs. VA officials told us that they plan to continue relying on the EHCPM projection from April of each year using data from the most recently completed fiscal year and updating the EHCPM later in the year using more current data. As we have previously reported, while the EHCPM projection informs most of VA's budget estimate, the amount of the estimate is determined by several factors, including VA policy decisions and the President's priorities, and will not necessarily match the EHCPM projection in any given year. Historically, the final budget estimate for VA has consistently been lower than the amount projected by the EHCPM. For example, in December 2015, to develop the budget estimates for fiscal year 2017 and advance appropriations for fiscal year 2018, VA officials made a policy decision to use a previous EHCPM projection that does not take into account the increased utilization of CIC services by veterans in fiscal year 2015. 
VA officials told us that if demand for VA services exceeds the amount requested for VA's Medical Services Account in the President's budget request for fiscal year 2017, the difference can be made up by greater utilization of the Veterans Choice Program. VA officials also told us that VA will likely request an increase in funding for health care services in the President's budget request for fiscal year 2018, which is expected to be submitted to Congress in February 2017. To help increase utilization of the Veterans Choice Program, VA issued policy memoranda to VAMCs in May and October 2015, requiring them to refer veterans to the Veterans Choice Program if timely care cannot be delivered by a VAMC, rather than authorizing care through VA's other CIC programs. In addition, on July 31, 2015, the VA Budget and Choice Improvement Act eliminated the requirement that veterans must be enrolled in the VA health care system by August 2014 in order to receive care through the program. While data from January 2016 indicate that utilization of care under the Veterans Choice Program has begun to increase, VA officials, including at the VISNs we interviewed, expressed concerns about whether existing contracts were sufficient to address veterans' needs in a timely manner. For example, officials we interviewed from five of the six selected VISNs cited inadequate provider networks, delays in scheduling appointments, and delays in providers receiving payment for services delivered as factors limiting program utilization. To address these concerns, VA is granting VAMCs the authority to establish agreements directly with providers to deliver services through the Veterans Choice Program and schedule appointments for veterans if VA's contractors are unable to schedule them in a timely manner.
These efforts have the potential to increase Veterans Choice Program utilization beyond the levels VA estimated for fiscal year 2016, which, according to VA officials, may limit the funds available to the program in fiscal year 2017. Conversely, some of these officials told us that if VA does not succeed in increasing Veterans Choice Program utilization in fiscal years 2016 and 2017, veterans may have to seek care through other CIC programs, which may not have the funds available to meet the demand for services. In either case, according to VA officials, veterans may face delays in accessing VA health care services. In addition to the challenges associated with the Veterans Choice Program, VA, like other health care payers, faces uncertainties estimating the utilization--and associated costs--of emerging health care treatments, such as costly drugs to treat chronic diseases affecting veterans. VA, like other federal agencies, prepares its budget estimate 18 months in advance of the start of the fiscal year for which funds are provided. At the time VA develops its budget estimate, it may not have enough information to estimate the likely utilization and costs of health care services or these treatments with reasonable accuracy. Moreover, even with improvements to its projection, VA, like other federal agencies, must make tradeoffs in formulating its budget estimate that require it to balance the expected demand for health care services against other competing priorities. Close scrutiny and careful monitoring in all these areas should assist VA in managing its available resources and better protect against a recurrence of budgetary circumstances similar to those that existed in fiscal year 2015. VA provided written comments on a draft of this report, which we have reprinted in appendix I.
While we are not making any recommendations in this report, in its comments, VA agreed with our findings and reiterated the uncertainty the department faces in estimating the cost of emerging health care treatments. VA also provided technical comments on the draft report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Veterans Affairs. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Rashmi Agarwal, Assistant Director; Luke Baron; Krister Friday; Jacquelyn Hamilton; and Michael Zose made key contributions to this report. VA's Health Care Budget: Preliminary Observations on Efforts to Improve Tracking of Obligations and Projected Utilization. GAO-16-374T. Washington, D.C.: February 10, 2016. Veterans' Health Care Budget: Improvements Made, but Additional Actions Needed to Address Problems Related to Estimates Supporting President's Request. GAO-13-715. Washington, D.C.: August 8, 2013. Veterans' Health Care: Improvements Needed to Ensure That Budget Estimates Are Reliable and That Spending for Facility Maintenance Is Consistent with Priorities. GAO-13-220. Washington, D.C.: February 22, 2013. Veterans' Health Care Budget: Better Labeling of Services and More Detailed Information Could Improve the Congressional Budget Justification. GAO-12-908. Washington, D.C.: September 18, 2012. Veterans' Health Care Budget: Transparency and Reliability of Some Estimates Supporting President's Request Could Be Improved. GAO-12- 689. Washington, D.C.: June 11, 2012. 
VA Health Care: Estimates of Available Budget Resources Compared with Actual Amounts. GAO-12-383R. Washington, D.C.: March 30, 2012. VA Health Care: Methodology for Estimating and Process for Tracking Savings Need Improvement. GAO-12-305. Washington, D.C.: February 27, 2012. Veterans Affairs: Issues Related to Real Property Realignment and Future Health Care Costs. GAO-11-877T. Washington, D.C.: July 27, 2011. Veterans' Health Care Budget Estimate: Changes Were Made in Developing the President's Budget Request for Fiscal Years 2012 and 2013. GAO-11-622. Washington, D.C.: June 14, 2011. Veterans' Health Care: VA Uses a Projection Model to Develop Most of Its Health Care Budget Estimate to Inform the President's Budget Request. GAO-11-205. Washington, D.C.: January 31, 2011.
VA projected a funding gap of about $3 billion in its fiscal year 2015 medical services appropriation account, which funds VA health care services except for those authorized under the Veterans Choice Program. To close this gap, VA obtained temporary authority to use up to $3.3 billion from the $10 billion appropriated to the Veterans Choice Fund in August 2014. GAO was asked to examine VA's fiscal year 2015 projected funding gap and any changes VA has made to prevent potential funding gaps in future years. This report examines (1) the activities or programs that accounted for VA's fiscal year 2015 projected funding gap in its medical services appropriation account and (2) changes VA has made to prevent potential funding gaps in future years. GAO reviewed VA obligations data and related documents to determine what activities accounted for the projected funding gap in its fiscal year 2015 medical services appropriation account, as well as the factors that contributed to the projected funding gap. GAO interviewed VA officials to identify the steps taken to address the projected funding gap. GAO also examined changes VA made to prevent future funding gaps and reviewed the implementation of these changes at the VAMCs within six VISNs, selected based on geographic diversity. GAO found that two areas accounted for the Department of Veterans Affairs' (VA) fiscal year 2015 projected funding gap of $2.75 billion. Higher-than-expected obligations for VA's longstanding care in the community (CIC) programs--which allow veterans to obtain care from non-VA providers--accounted for $2.34 billion or 85 percent of VA's projected funding gap. VA officials expected that the Veterans Choice Program--which is a relatively new CIC program implemented in fiscal year 2015 that allows veterans to access care from non-VA providers under certain conditions--would absorb veterans' increased demand for more timely care after public disclosure of long wait times. 
However, administrative weaknesses slowed enrollment into this program, and use of the Veterans' Choice Fund was far less than expected. Moreover, as utilization of CIC programs overall increased, VA's weaknesses in estimating costs and tracking obligations for CIC services resulted in VA facing a projected funding gap. Unanticipated obligations for hepatitis C drugs accounted for the remaining $408 million of VA's projected funding gap. VA did not anticipate in its budget the obligations for these costly, new drugs because the drugs did not gain approval from the Food and Drug Administration until fiscal year 2014--after VA had already developed its budget estimate for fiscal year 2015. To help prevent future funding gaps, VA has made efforts to better estimate costs and track obligations for CIC services and better project future utilization of VA's health care services. Specifically, VA implemented new policies directing VA medical centers (VAMC) and Veterans Integrated Service Networks (VISN) to better estimate costs for CIC authorizations--by using historical data and correcting for obvious errors--and to better track CIC obligations by comparing estimated costs with estimated obligations, correcting discrepancies, and certifying each month that these steps were completed. These policies are necessary, in part, because deficiencies in VA's financial systems make tracking obligations challenging. The VISNs and associated VAMCs GAO reviewed have implemented these policies. VA also allocated funds to each VAMC for CIC and hepatitis C drugs and began comparing VAMCs' obligations in these areas to the amount of funds allocated to help ensure that obligations do not exceed budgetary resources. VA updated the projection it uses to inform budget estimates 3 to 4 years in the future, adding fiscal year 2015 data reflecting increased CIC utilization. 
While VA has made these efforts to better manage its budget, uncertainties remain regarding utilization of VA's health care services. For example, utilization of the Veterans Choice Program in fiscal years 2016 and 2017 is uncertain because of continued enrollment delays affecting the program. Moreover, even with improvements to its projection, VA, like other federal agencies, must make tradeoffs in formulating its budget estimate that require it to balance the expected demand for health care services against other competing priorities. GAO is not making any recommendations. After reviewing a draft of this report, VA agreed with GAO's findings.
As part of its mission to enforce the law and defend the interests of the United States, DOJ undertakes a number of law enforcement activities through its component agencies. The following six reports--which we issued in 2015 and 2016--contain key findings and recommendations in this area, and highlight potential areas for continued oversight. Collectively, the reports resulted in 28 recommendations to DOJ; the Drug Enforcement Administration (DEA); the Federal Bureau of Investigation (FBI); the National Institute of Justice (NIJ); the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF); and other DOJ components. As of March 2017, DOJ and its component agencies have implemented 5 of the 28 recommendations. DOJ and its components have also begun taking actions to address 11 of the remaining recommendations, which remain open. DOJ or its components have not taken actions for 8 of our recommendations and disagreed with the remaining 4 recommendations. DOJ and ATF have not always complied with federal law and ATF firearm-related policies. In June 2016, we reported that ATF did not always comply with the appropriations act restriction prohibiting consolidation or centralization of federal firearm licensee records, and did not consistently adhere to ATF policies. To carry out its enforcement responsibilities, ATF maintains 25 firearm-related databases, 16 of which contain firearms purchaser information from a federal firearm licensee. While ATF has the statutory authority and responsibility to obtain firearms transactions records from federal firearms licensees under certain circumstances, ATF is restricted from using appropriated funds to consolidate or centralize federal firearm licensee records. 
We examined four federal firearm licensee databases--selected based on factors such as the inclusion of retail purchaser information and original data--and found that two of the four did not always comply with ATF appropriations act restrictions and two of the four did not adhere to ATF policies. ATF addressed the violations of the appropriations act restrictions during the course of our review. To address identified policy deficiencies, we made three recommendations that ATF provide guidance to federal firearm licensees, align system capabilities with ATF policies, and align timing and ATF policy for deleting records related to multiple firearm sales. ATF concurred with the recommendations. As of March 2017, ATF has implemented one recommendation and reported progress towards implementing the other two recommendations by improving practices and modifying data systems to better align with ATF policy. DOJ should study options for reducing overlap and fragmentation on missing persons databases. In June 2016, we reported that DOJ could facilitate more efficient sharing of information on missing persons and unidentified remains. The FBI's National Crime Information Center database includes criminal justice agency information and is restricted to authorized users. DOJ's NIJ oversees the National Missing and Unidentified Persons System, a database open to the public for access to published case information. We found that data contained in these systems were overlapping and fragmented, creating the risk of duplication. Because there is no mechanism to share information between the systems, users relying on only one system may miss information that could be instrumental in solving these types of cases. Although federal law precludes full integration, there may still be opportunities to share information between the systems, which could reduce overlap and fragmentation of data on missing and unidentified persons. 
To allow for more efficient use of missing and unidentified persons information, we recommended that the FBI and NIJ evaluate options to share information between the two systems. DOJ disagreed with our recommendation, citing that it lacks legal authority. In March 2017, DOJ reiterated its position that any such sharing was prohibited by the law. Specifically, DOJ stated that the FBI's system can only share information with authorized users, dissemination is limited to those individuals performing law enforcement, and that additional efforts to examine other options would waste taxpayer funds. We continue to believe that our recommendation is valid and that DOJ should further study options for sharing information within the confines of its legal framework. For example, our work identified a variety of solutions to address the fragmentation and overlap between the two systems such as developing a notification alert for the FBI's system when related case data was also present in the other system. DOJ and the FBI have not addressed privacy and accuracy concerns related to the FBI's use of face recognition technology. Whenever agencies develop or change technologies that collect personal information, federal law requires them to publish certain privacy impact statements. In May 2016, we reported that the FBI did not publish updated privacy impact assessments (PIA) and a System of Records Notice (SORN) for a face recognition service that allows law enforcement agencies to search a database of over 30 million photos to support criminal investigations. Users of this service include the FBI and selected state and local law enforcement agencies, which can submit search requests to help identify an unknown person using, for example, a photo from a surveillance camera. DOJ issued an initial PIA in 2008, before the FBI and state and local law enforcement agencies began using this service on a pilot basis. 
However, the FBI did not update the PIA until September 2015, during the course of our review and after the system underwent significant changes. Further, although the FBI and state and local law enforcement agencies had been using the system since 2011, DOJ did not publish a SORN until May 2016, after completion of our review. Similarly, DOJ did not publish a PIA for the FBI's internal use of additional face recognition technologies until May 2015, during the course of our review and almost 4 years after the FBI began its new use of face recognition searches. In addition, we found that the FBI had not audited the actual use of face recognition technology and, as a result, could not demonstrate compliance with applicable privacy protection requirements. We also reported that the FBI had conducted limited testing to evaluate the detection rate of the face recognition searches, but had not (1) assessed how often errors occurred or (2) taken steps to determine whether systems used by external partners are sufficiently accurate for the FBI's use. By taking steps to evaluate the detection rates of the various systems, the FBI could better ensure the data received were sufficiently accurate and do not include photos of innocent people as investigative leads. We made three recommendations to DOJ and the FBI to determine why privacy-related documents were not published as required and to audit the use of the face recognition technology to better ensure face image searches are conducted in accordance with policy requirements. We made three additional recommendations to the FBI to verify that the systems are sufficiently accurate and are meeting users' needs. DOJ and the FBI partially agreed with two recommendations and disagreed with one recommendation concerning privacy. The FBI agreed with one recommendation and disagreed with two recommendations concerning accuracy.
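The detection-rate testing described above can be sketched in miniature: given a set of trial searches for which the correct identity is known, the detection rate is the share of searches whose returned candidate list contains the true match. This is a hypothetical illustration of the concept, not the FBI's testing methodology; the data structure and function name are assumptions.

```python
# Hypothetical sketch of detection-rate evaluation for a face recognition
# search system: each trial records the known true identity and the list of
# candidate IDs the system returned. Data and names are illustrative only.

def detection_rate(trials: list[dict]) -> float:
    """Fraction of trial searches whose candidate list contains the true match."""
    hits = sum(1 for t in trials if t["true_id"] in t["candidates"])
    return hits / len(trials)

# Example: two trial searches, one of which surfaces the correct identity.
trials = [
    {"true_id": "A", "candidates": ["A", "B", "C"]},  # hit
    {"true_id": "D", "candidates": ["E", "F"]},       # miss
]
```

A fuller evaluation would also assess how often errors occur, such as the rate at which innocent people's photos appear as candidates, which is the gap GAO identified.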
In response, we clarified one recommendation regarding accuracy testing and updated another regarding the SORN development process, based on information DOJ provided after reviewing our draft report. As of March 2017, DOJ has begun taking actions to address three of our six recommendations, such as initiating audits to oversee the FBI's use of its face recognition technology. DEA should better administer the controlled substance quota setting process. In February 2015, we found that DEA had not effectively administered the quota setting process that limits the amount of certain controlled substances available for use in the United States. Each year, manufacturers apply to DEA for quotas needed to make drugs. We found that DEA did not respond within the time frames required by its regulations for any year from 2001 through 2014, which, according to some manufacturers, caused or exacerbated shortages of drugs. We recommended that DEA take seven actions to improve its management of the quota setting process and address drug shortages. DEA concurred, and as of March 2017 has implemented four of the seven recommendations, one related to establishing an agreement to facilitate information sharing with the Food and Drug Administration regarding drug shortages and the three others related to strengthening internal controls in the quota setting process. DEA has also taken some actions towards addressing the remaining three recommendations--including working with the Food and Drug Administration to establish a work plan to specifically outline the information the agencies will share and the time frames for doing so--but needs to take additional actions to fully implement them. DEA needs to provide additional guidance to entities that handle controlled substances. 
In June 2015, based on four nationally representative surveys of DEA registrants--distributors of controlled substances, individual pharmacies, chain pharmacy corporate offices, and practitioners--we reported that many registrants were not aware of various DEA resources, such as manuals for pharmacists and practitioners. In addition, some distributors, individual pharmacies, and chain pharmacy corporate offices wanted improved guidance from, and additional communication with, DEA about their roles and responsibilities under the Controlled Substances Act. We recommended that DEA take three actions to increase registrants' awareness of DEA resources and improve the information DEA provides to registrants. DEA concurred and, as of March 2017, has taken some actions towards addressing our three recommendations, such as conducting and participating in conferences and other industry outreach events. However, DEA needs to take additional actions to fully implement the recommendations, including establishing a means of regular communication with registrants, such as through listservs, which would reach a larger proportion of registrants than conferences and other events. DOJ should improve handling of FBI whistleblower retaliation complaints. In January 2015, we reported that unlike employees in other executive branch agencies, FBI employees did not have a process to seek corrective action if they experienced retaliation in certain circumstances. Specifically, FBI employees could not seek corrective action if they experienced retaliation based on a disclosure of wrongdoing to their supervisors or others in their chain of command who were not designated DOJ or FBI officials. We suggested that Congress consider whether FBI employees should have a means to obtain corrective action for retaliation for disclosures of wrongdoing made to supervisors and others in the employee's chains of command. 
In response to our report, in December 2016, Congress passed and the President signed the FBI Whistleblower Protection Enhancement Act of 2016, which, among other things, provides a means for FBI employees to obtain corrective action in these cases and brings FBI whistleblower protection in line with the protection in place for employees of other executive branch agencies who report wrongdoing to their chain of command. This change will help ensure that whistleblowers have access to recourse, that retaliatory action does not go unpunished, and that other potential whistleblowers are encouraged to come forward. We also reported that (1) DOJ and FBI guidance for making a protected disclosure was not always clear; (2) DOJ did not provide whistleblower retaliation complainants with estimates of when to expect DOJ decisions throughout the complaint process; (3) DOJ offices responsible for investigating complaints have not consistently complied with certain regulatory requirements, such as obtaining complainants' approvals for extensions of time; and (4) although DOJ officials have ongoing and planned efforts to reduce the duration of retaliation complaints, they have limited plans to assess the impacts of these actions. To address these deficiencies, we made eight recommendations that DOJ clarify guidance, provide complainants with estimated complaint decision time frames, develop an oversight mechanism to monitor regulatory compliance, and assess the impact of efforts to reduce the duration of FBI whistleblower complaints. DOJ concurred with these recommendations, but as of March 2017 has not provided documentation of actions taken to address them. As part of their mission to enforce the law and control crime, DOJ and its components--including the Bureau of Prisons (BOP) and the U.S. Marshals Service (USMS)--are responsible for the custody and care of federal prisoners and inmates.
To fund these responsibilities, the President's budget requested $8.8 billion for fiscal year 2017. Our recent reports on DOJ's programs for incarceration and offender management highlight areas for oversight, including better estimating costs and measuring outcomes. Since August 2014, we have made 17 recommendations to DOJ, BOP, and USMS to improve the custody and care of federal prisoners and inmates. As of March 2017, DOJ or its component agencies have implemented 7 of the 17 recommendations, have begun taking actions on 8 recommendations that remain open, and have not taken actions on the remaining 2 recommendations. DOJ could better assess federal incarceration initiatives. In June 2015, we reported that DOJ could better measure the efficacy of three key new initiatives designed to address federal incarceration challenges, such as overcrowding and rising costs. We found that the Smart on Crime Initiative indicators were well-linked to overall goals, which include prioritizing prosecution of the most serious offenses, but many lacked clarity and context. The Clemency Initiative, which encourages certain inmates to petition to have sentences reduced by the President, does not track how long it takes for the average petition to clear each step in the review process. In addition, BOP created the Reentry Services Division in 2014 to improve inmate reentry into society, but we found that it lacked a plan to prioritize evaluations among all 18 of the programs it lists in its national reentry directory. To address these deficiencies, we made three recommendations to improve measurement of the initiatives. DOJ concurred with two of the recommendations and partially concurred with the third. In May 2016, BOP finalized an updated evaluation plan for the Reentry Services Division that was consistent with our recommendation, and we consider that recommendation to be implemented.
As of March 2017, DOJ has not provided documentation of actions on the remaining two recommendations. DOJ and BOP could better measure the outcomes of alternatives to incarceration. In June 2016, we reported that in part to help reduce the size and costs of the federal prison population, DOJ has used a variety of alternatives to incarceration before sentencing, but it does not reliably track the use of some of these alternatives. For instance, we reported that DOJ has used two types of pretrial diversion as alternatives to incarceration--one at the discretion of the U.S. Attorney's Office and the other involving additional stakeholders, such as judges and defense counsel. However, DOJ data on the use of pretrial diversion are unreliable because DOJ's database does not distinguish between these different types of pretrial diversions and DOJ does not have guidance in place to ensure that its attorneys consistently enter the use of pretrial diversion into the database. In addition, over the past 7 years, BOP has increased its use of incarceration alternatives, such as the placement of inmates in residential reentry centers (also known as halfway houses) and home confinement. However, we found that while BOP has tracked data on the cost implications of using these alternatives, it does not track the information needed to help measure the outcomes of incarceration alternatives. Similarly, we found that DOJ has not measured the outcomes or identified the cost implications of pretrial diversion programs. To address these deficiencies, we made six recommendations that DOJ enhance its tracking of data on the use of pretrial diversions and that DOJ and BOP obtain outcome data and develop measures for alternatives used. 
DOJ concurred and, as of March 2017, has fully implemented the two recommendations on tracking data by revising its system to separately track the different types of pretrial diversion programs and providing guidance to its attorneys on the appropriate way to enter data. DOJ and BOP have partially addressed the remaining four recommendations. BOP faces challenges in activating new prisons. In August 2014, we found that BOP was behind schedule in activating six new prison institutions designed to handle the projected growth of the federal inmate population, and that BOP did not have a policy or best practices to guide the activations or activation schedules. Activation of the prisons--the process by which BOP prepares institutions for inmates--was delayed, in part, because of schedule challenges, such as staffing, posed by locations of the new institutions. We also found that BOP did not effectively communicate to Congress on how the locations of the new institutions may affect activation schedules. To address these deficiencies, we recommended that (1) DOJ use its annual budget justification to communicate to Congress factors that might delay prison activation; (2) BOP analyze institution-level staffing data and develop effective, tailored strategies to mitigate staffing challenges; (3) BOP develop and implement a comprehensive activation policy; and (4) BOP develop and implement an activation schedule that reflects best practices. DOJ and BOP concurred, and as of March 2017 have implemented two of the four recommendations by enhancing recruitment approaches to address staffing challenges and developing a policy to guide future activations. Additional actions are needed to address the remaining two recommendations. U.S. Marshals Service could better estimate cost savings and monitor ways to achieve efficiencies. In May 2016, we found that the U.S. Marshals Service's largest prisoner costs were housing payments to state, local, and private prisons. 
For example, in fiscal year 2015, USMS spent approximately $1.2 billion on these costs. USMS has implemented actions that it reports have saved prisoner-related costs from fiscal years 2010 through 2015, which include automating detention management services, developing cost-saving housing options, investing in alternatives to pre-trial detention to reduce housing and medical expenditures, and improving medical claim management. For actions with identified savings over this time period, however, we found that about $654 million of the USMS's estimated $858 million in total savings was not reliable because the estimates were not sufficiently comprehensive, accurate, consistent, or well-documented. For example, USMS identified $375 million in savings from the alternatives to pre-trial detention program for fiscal years 2010 through 2015, but did not verify the data or methodology used to develop the estimate or provide documentation supporting its reported savings for fiscal years 2012 onward. We also found that USMS has designed systems to identify opportunities for cost efficiencies, including savings. For example, the agency requires districts to conduct annual self-assessments of their procedures to identify any deficiencies that could lead to cost savings. However, USMS cannot aggregate and analyze the results of the assessments across districts. To address these deficiencies, we recommended that USMS (1) develop reliable methods for estimating cost savings and validating reported savings achieved, and (2) establish a mechanism to aggregate and analyze the results of annual district self-assessments. USMS concurred, and as of March 2017 has provided us with information on how it plans to move forward in addressing the recommendations, but needs to take additional actions to fully implement them. DOJ has improved outreach to states to notify tribes about registered sex offenders who plan to live, work, or attend school on tribal land. 
In November 2014, we found that most eligible tribes have retained their implementation authority, and have either substantially implemented or were in the process of implementing the Sex Offender Registration and Notification Act (SORNA). In our survey of tribes that retained their authority to implement the act, the four most frequently reported implementation challenges were the inability to submit convicted sex offender information to federal databases, lack of notification from state prisons upon the release of sex offenders, lack of staff, and inability to cover the costs of SORNA implementation. SORNA established the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking (SMART Office) within DOJ to administer and assist jurisdictions with implementing the law. However, we found that some states had not notified tribes when sex offenders who will be or have been released from state prison register with the state and indicate that they intend to live, work, or attend school on tribal land, as SORNA requires. We found that the SMART Office has taken some actions, but could do more to encourage states to provide notification to tribes. To address this deficiency, we made two recommendations to DOJ related to the SMART Office encouraging states to notify tribes about offenders who plan to live, work, or attend school on tribal land upon release from prison. DOJ concurred with these recommendations and has fully implemented them. DOJ supports a range of activities--including policing and victims' assistance--through grants provided to federal, state, local, and tribal agencies, as well as national, community-based, and non-profit organizations. Congress appropriated $2.4 billion for DOJ discretionary grant programs in fiscal year 2016. The Office of Justice Programs (OJP) is the largest of DOJ's three granting components and operated with an enacted discretionary budget of approximately $1.8 billion in fiscal year 2016. 
The four reports discussed below highlight DOJ's overall grant administration practices, management of specific programs, and efforts to reduce duplication in grant programs across the federal government. The four reports included 17 recommendations to DOJ. The department concurred with these recommendations and, as of March 2017, has fully implemented 15 of the 17 recommendations. DOJ has also begun taking actions on the remaining 2 recommendations, which are still open. DOJ has addressed recommendations to reduce the risk of grant program overlap and unnecessary duplication. In July 2012, we found that DOJ had not assessed its grant programs department-wide to identify overlap, which occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. We reported that DOJ published 253 fiscal year 2010 grant solicitations to support crime prevention, law enforcement, and crime victim services. We also found that DOJ did not routinely coordinate grant awards to avoid unnecessary duplication, which occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries without being knowledgeable about each other's efforts. Further, we reported that DOJ could take steps to better assess the results of all the grant programs it administers. As a result, we made eight recommendations to DOJ. The department concurred with these recommendations and has fully implemented them. DOJ has addressed recommendations for improving management of the bulletproof vest partnership. In February 2012, we found that DOJ's Bureau of Justice Assistance--within the Office of Justice Programs--could enhance grant management controls and better ensure consistency in management of the bulletproof vest partnership grant program.
For example, we found that DOJ could better manage the grant program by improving grantee accountability in the use of funds for body armor purchases, reducing the risk of grantee noncompliance with program requirements, and ensuring consistency across its efforts to promote law enforcement officer safety. We made five recommendations to DOJ's Bureau of Justice Assistance. The department concurred with these recommendations and has fully implemented them. DOJ could better manage the Victims of Child Abuse Act grant program. In April 2015, we found that OJP's Office of Juvenile Justice and Delinquency Prevention (OJJDP) had several administrative review and approval processes in place that contributed to delays in grantees' ability to begin spending their award funds. For example, for the 28 Victims of Child Abuse Act (VOCA) program grants awarded from fiscal years 2010 through 2013, grantees had expended less than 20 percent, on average, of each grant they received during the original 12-month project period. In particular, we found that OJJDP's processes for reviewing grantees' budgets and conference planning requests were contributing to delays in grantees' ability to begin spending their funds. Further, we found that OJJDP's guidance on grant extensions was unclear and irregularly enforced. For example, OJJDP approved 72 of 73 extension requests from fiscal years 2010 through 2013 without the required narrative justification. We also found that OJJDP did not have complete data to assess VOCA grantees' performance against the measures it had established because the tools it used to collect this information did not align to the measures themselves. As a result, we made four recommendations to OJP and the office concurred. As of March 2017, OJP has implemented two recommendations by establishing and enforcing clear guidance related to grant extensions and enhancing its performance management capacity. 
DOJ has partially taken action to address the remaining two recommendations. DOJ and other federal agencies have taken steps to avoid duplication among human trafficking grants. In June 2016, we identified 42 grant programs with awards made in 2014 and 2015 that may be used to combat human trafficking or assist victims of human trafficking, 15 of which are intended solely for these purposes. Although some overlap exists among these human trafficking grant programs, federal agencies have established processes to help prevent unnecessary duplication. For instance, in response to recommendations in a prior GAO report, DOJ requires grant applicants to identify any federal grants they are currently operating under as well as federal grants for which they have applied. In addition, agencies that participate in the grantmaking committee of the Senior Policy Operating Group--an entity through which federal agencies coordinate their efforts to combat human trafficking--are to share grant solicitations and information on proposed grant awards, allowing other agencies to comment on proposed awards and determine whether they plan to award funding to the same organization. DOJ has the ability to fund programs using money it collects through alternative sources, such as fines, fees, and penalties, in addition to the budget authority Congress provides DOJ through annual appropriations. For example, the Crime Victims Fund, which is financed by collections of fines, penalties, and bond forfeitures from defendants convicted of federal crimes, obligated almost $2.4 billion for a variety of grants and programs to assist victims of crimes in fiscal year 2015. The following three reports highlight DOJ's collection, use, and management of these funds. One of the three reports contains three recommendations, which have been partially implemented. DOJ could better manage alternative sources of funding. 
In February 2015, we reported that DOJ could better manage its alternative sources of funding--collections by DOJ from sources such as fines, fees, and penalties--which, in fiscal year 2013, made up about 15 percent of DOJ's total budgetary resources. Specifically, DOJ collected about $4.3 billion from seven major alternative sources of funding--including the Assets Forfeiture Fund, the Crime Victims Fund, and non-criminal fingerprint check fees, among others. We found that two of these funding sources could be better managed. For example, DOJ has the authority to deposit up to 3 percent of amounts collected from DOJ's civil debt collection litigation activities, such as Medicare fraud cases and referred student loan collections, into the Three Percent Fund. Collections are used to defray DOJ's costs for conducting these activities. However, the department had not conducted analyses of the fund that include elements such as projected collections or the impact of previous obligation rates on unobligated balances. In addition, the FBI's Criminal Justice Information Services collects fees for providing non-criminal justice fingerprint-based background checks. We found that the FBI was not transparent in how it sets its fees, and did not evaluate the appropriate range of carryover amounts for a portion of those fees, even though unobligated balances had been growing. As a result, we recommended that (1) DOJ develop a policy to analyze unobligated balances and develop collection estimates for the Three Percent Fund; (2) the FBI publish a breakdown of how it assesses its fingerprint check fees to better communicate the cost of the service to users; and (3) the FBI develop a policy to analyze and determine an appropriate range for unobligated balances from a portion of those fees. DOJ partially concurred with the first recommendation and generally concurred with the other two recommendations.
As of March 2017, DOJ is working to improve how it analyzes unobligated funds needed for future fiscal years for the Three Percent Fund; however, it provided various reasons why it does not calculate revenue estimates. Our report recognized DOJ's concerns, and we continue to believe that DOJ could develop an estimated range of potential collections based on historical trends and current collection activities. The FBI has partially implemented our recommendations to be more transparent with its fees and improve how it analyzes unobligated balances from a portion of the fingerprint check fees. DOJ distributes fines, penalties, and forfeitures from financial institutions to support program expenses and victims of related crimes. In March 2016, we reported that since 2009, the federal government had assessed financial institutions about $12 billion in fines, penalties, and forfeitures for violations of the Bank Secrecy Act's anti-money-laundering regulations, the Foreign Corrupt Practices Act of 1977, and U.S. sanctions program requirements. Of this amount, about $3.2 billion was deposited into DOJ's Assets Forfeiture Fund (AFF). Funds from the AFF are primarily used for program expenses, payments to victims of the related crimes, and payments to law enforcement agencies that participated in the efforts resulting in forfeitures. For example, as of December 2015, approximately $2 billion of forfeited funds deposited in the AFF was planned for distribution to victims of fraud. DOJ retained a portion of selected mortgage-related financial institution settlement payments for its Three Percent Fund. In November 2016, we reported that federal agencies have collected billions of dollars in settlement payments and penalties from financial institutions for violations alleged to have been committed during the mortgage origination process, servicing of mortgages, and in the packaging and sale of residential mortgage-backed securities.
Several federal agencies have responsibility for regulating financial institutions in relation to these activities, and these agencies may engage DOJ to pursue investigations of financial institutions and individuals for civil or criminal violations of various laws and regulations. We reviewed a sample of nine cases where federal agencies, in some instances including DOJ, either reached settlements with or assessed penalties against financial institutions in connection with alleged mortgage-related violations. Financial institutions in these nine cases were assessed a total of about $25 billion, generally in penalties, settlement amounts, and consumer relief. In the cases involving DOJ, the department generally retained 3 percent of the settlement and penalty amounts paid and deposited this amount in its Three Percent Fund. For example, in 2016, one financial institution agreed to pay $1.2 billion to settle DOJ's claims brought on behalf of the Federal Housing Administration. DOJ collected the entire $1.2 billion settlement amount from this case, retained $36 million (3 percent of the total collection) for its Three Percent Fund, distributed $622.7 million to the Federal Housing Administration, and deposited the remaining $541.3 million in an account in the Treasury General Fund. Chairman Goodlatte, Ranking Member Conyers, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For further information about this statement, please contact Diana Maurer at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Individuals who made key contributions to this statement include Eric Erdman (Assistant Director), Claudia Becker, Joy Booth, Willie Commons III, Tonnye' Connor-White, Karen Doran, Chris Hatscher, Rebecca Hendrickson, Paul Hobart, Valerie Kasindi, Beth Kowalewski, Susanna Kuebler, Dawn Locke, Kristy Love, Tarek Mahmassani, Jeremy Manion, Mara McMillen, Adrian Pavia, Geraldine Redican-Bigott, Christina Ritchie, Michelle Serfass, Jack Sheehan, Janet Temko-Blinder, and Jill Verret. Key contributors for the previous work on which this testimony is based are listed in each product.

Financial Institutions: Penalty and Settlement Payments for Mortgage-Related Violations in Selected Cases. GAO-17-11R. Washington, D.C.: Nov. 10, 2016.

Firearms Data: ATF Did Not Always Comply with the Appropriations Act Restriction and Should Better Adhere to Its Policies. GAO-16-552. Washington, D.C.: June 30, 2016.

Human Trafficking: Agencies Have Taken Steps to Assess Prevalence, Address Victim Issues, and Avoid Grant Duplication. GAO-16-555. Washington, D.C.: June 28, 2016.

Federal Prison System: Justice Has Used Alternatives to Incarceration, But Could Better Measure Program Outcomes. GAO-16-516. Washington, D.C.: June 23, 2016.

Missing Persons and Unidentified Remains: Opportunities May Exist to Share Information More Efficiently. GAO-16-515. Washington, D.C.: June 7, 2016.

Prisoner Operations: United States Marshals Service Could Better Estimate Cost Savings and Monitor Efforts to Increase Efficiencies. GAO-16-472. Washington, D.C.: May 23, 2016.

Face Recognition Technology: FBI Should Better Ensure Privacy and Accuracy. GAO-16-267. Washington, D.C.: May 16, 2016.

Financial Institutions: Fines, Penalties, and Forfeitures for Violations of Financial Crimes and Sanctions Requirements. GAO-16-297. Washington, D.C.: Mar. 22, 2016.

Prescription Drugs: More DEA Information about Registrants' Controlled Substances Roles Could Improve Their Understanding and Help Ensure Access. GAO-15-471. Washington, D.C.: June 25, 2015.

Federal Prison System: Justice Could Better Measure Progress Addressing Incarceration Challenges. GAO-15-454. Washington, D.C.: June 19, 2015.

Victims of Child Abuse Act: Further Actions Needed to Ensure Timely Use of Grant Funds and Assess Grantee Performance. GAO-15-351. Washington, D.C.: Apr. 29, 2015.

Department of Justice: Alternative Sources of Funding Are a Key Source of Budgetary Resources and Could Be Better Managed. GAO-15-48. Washington, D.C.: Feb. 19, 2015.

Drug Shortages: Better Management of the Quota Process for Controlled Substances Needed; Coordination between DEA and FDA Should Be Improved. GAO-15-202. Washington, D.C.: Feb. 2, 2015.

Whistleblower Protection: Additional Actions Needed to Improve DOJ's Handling of FBI Retaliation Complaints. GAO-15-112. Washington, D.C.: Jan. 23, 2015.

Sex Offender Registration and Notification Act: Additional Outreach and Notification of Tribes about Offenders Who Are Released from Prison Needed. GAO-15-23. Washington, D.C.: Nov. 18, 2014.

Bureau of Prisons: Management of New Prison Activations Can Be Improved. GAO-14-709. Washington, D.C.: Aug. 22, 2014.

Justice Grant Programs: DOJ Should Do More to Reduce the Risk of Unnecessary Duplication and Enhance Program Assessment. GAO-12-517. Washington, D.C.: July 12, 2012.

Law Enforcement Body Armor: DOJ Could Enhance Grant Management Controls and Better Ensure Consistency in Grant Program Requirements. GAO-12-353. Washington, D.C.: Feb. 15, 2012.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2016, DOJ's $29 billion budget funded a broad array of national security, law enforcement, and criminal justice system activities. GAO has examined a number of key programs where DOJ has sole responsibility or works with other departments, and recommended actions to improve program efficiency and resource management. This statement summarizes findings and recommendations from recent GAO reports that address DOJ's (1) law enforcement activities, (2) custody and care of federal prisoners and inmates, (3) grant management and administration, and (4) use of alternative sources of funding. This statement is based on prior GAO products issued from February 2012 to November 2016, along with selected updates obtained as of March 2017. For the selected updates on DOJ's progress in implementing GAO recommendations, GAO analyzed information provided by DOJ officials on actions taken and planned. DOJ has not fully addressed most GAO recommendations related to its law enforcement activities. The Department of Justice (DOJ) undertakes a number of activities to enforce the law and defend the interests of the United States. Key findings and recommendations from six recent GAO reports include, among other things, that DOJ should: better adhere to policies on collecting firearms data, assess opportunities to more efficiently share information on missing persons, better ensure the privacy and accuracy of face recognition technology, provide more information to entities that handle controlled substances, and improve the handling of whistleblower complaints. Collectively, these reports resulted in 28 recommendations. As of March 2017, DOJ has fully implemented 5 of these recommendations, begun actions to address 11, has not taken actions for 8, and disagreed with 4 recommendations. DOJ has not fully addressed most GAO recommendations related to the custody and care of federal prisoners and inmates. 
DOJ is responsible for the custody and care of federal prisoners and inmates, for which the President's Budget requested $8.8 billion for fiscal year 2017. GAO's recent reports highlight areas for continued improvements in DOJ incarceration and offender management, including better assessing key initiatives to address overcrowding and other federal incarceration challenges, better measuring the outcomes of alternatives to incarceration, improving the management of new prison activations, better estimating cost savings for prisoner operations, and improving notification to tribes about registered sex offenders upon release. Since August 2014, GAO has made 17 recommendations to DOJ in five reports related to these issues, and DOJ generally concurred with them. As of March 2017, DOJ has fully implemented 7 of the recommendations, partially implemented 8, and has not taken actions for 2 recommendations. DOJ has implemented most GAO recommendations to improve grant administration and management. DOJ supports a range of activities--including policing and victims' assistance--through grants provided to federal, state, local, and tribal agencies, as well as national, community-based, and non-profit organizations. Congress appropriated $2.4 billion for DOJ grant programs in fiscal year 2016. Four recent GAO reports highlight DOJ's overall grant administration practices, management of specific programs, and efforts to reduce overlap and duplication amongst its grant programs. The four reports include 17 recommendations to DOJ, and the department generally concurred with all of them. As of March 2017, DOJ has fully implemented 15 of the 17 recommendations and partially implemented the remaining two. DOJ has partially implemented GAO recommendations designed to improve management of funds collected through alternative sources. 
DOJ has the ability to fund programs using money it collects through alternative sources, such as fines, fees, and penalties, in addition to its annual appropriations. For example, in 2015, GAO reported that DOJ collected $4.3 billion from seven alternative sources of funding in fiscal year 2013. This statement highlights three reports that address DOJ's collection, use, and management of these funds. One of the three reports includes three recommendations, which DOJ has partially implemented. GAO has made several recommendations to DOJ in prior reports to help improve program efficiency and resource management. DOJ generally concurred with most of these recommendations and has implemented or begun taking action to address them.